[Binary artifact, not recoverable as text: a POSIX tar archive (ustar format, owner core:core) containing the directory var/home/core/zuul-output/logs/ and the gzip-compressed file var/home/core/zuul-output/logs/kubelet.log.gz. The remainder of the dump is the compressed kubelet log payload, which cannot be reconstructed from this rendering.]
=q&@ϭ8?^_5$Ƨ *JsE LpyYEDVH/K6)bzJ`8,ڠN%:1@12rBP<ijH̉>=j4-3e SL'TXIw&a>t*Px€("h/)H*&v*TM':Y#=->S'ߗy\~}H;Ig;]m`6F.<_ବaDN_aQY1Xf/_',GL% Нy@F\a]JڠQ\ h%}+o;/Ь,rrGj&ls=;ƻm$ E0'%ZPpۋyN{:cri(v ~L Rpfk ?KZKkHuK ;|,E!G^a0v=?7R"LđVo_b=1j WΞfI`-`Ĺ`Hk,"'/)j2(&D`Zr{,cj7xU@сvo(G$ )(:W.Cv>Hk>ayA疑р[;r\oomL?{׶FrdKKM/ 퇁]c`Qogk~sb\7xOl}|̧YTڻWWׯf.LxqXb?ROr-E+xg}..?5,Klz] Z.\z\-^PeN#yV77?mjdL-;j+ȨR:+K*kDJZB5m/<\imb!"1HFDI}n*%?Y't?=܊4e?g|Z=Rڳ( <(zU:EHUHV*6tg<9tb,'Q~lU=;ޤ=do=]9k#4mًM cBϬ}`1go$W'<3G[Q{mj{wx7ƕ~Uz5<77s뇵psw+qL\?5?/>2?Z+q@o}N9-揳/s*q xz{G,*IY-RI[iGi?yjQ4l}`^oE_RE&'ҪGq^QM%H!}}I.I_(!9&1k-bL)bJEUӸHj]c}'XcK4^HQ6k4} 98#5Q&tI%#AԨ+OvJki-4}#>-|5crIJ[׌R8* HTGTI Bki{,_gBؔ LmSV:2eQ}h08?{ {fxӖt!dHXl{1jm8m:@C>+ ]htVsc (Hv9kMj,(t_XK(x$2H&rZ!d^1@& .K}4m=GDC%YZ{[{#f@܆`m]G V&ESY_ ~}u3rp 6Yr& N";>n `v]jH7]vC8cmEEb]NBW @4,"s5}7%TzҲS~OXvc2sخm}rDF5*N RV,97@Up^IiyFk'#|:_ku5eI *Z]5Rt-5J#?IcG4.j>\/܇?m]{2 =Sa~2@\ qa . ą0@\ qa . ą0@\ qa . ą0@\ qa . ą0@\ qa . ą0@\ qa . ą^na N0 'SHd Zu`@/0I䅁IR ) MU+!{B?8^C PQ7֜9n-ur;Jrb5D7f/Ex>_\.{VzL<Njg)Le{L]zza[u\~#[+U:Oge?R\_uDsR-fۿF#~{W;u~qeqo(UB趟퐏nK_̟o_/kF]Ƞ5<ט F|Uaʗ[ߋ{zs_on]qQ$ɈatG/ʱ%'eA\ kA@Qݧ \hb@bO`6 P9VTꝎtXO^f^n;ϳ_L~&͵qf5kYs͚k\5׬f5kYs͚k\5׬f5kYs͚k\5׬f5kYs͚k\5׬f5kYs͚k\5׬f5kYs͚k\\͵%qZZS:^haG}5/QsmknpGúQY޾@m芉rx@UJ}߱N^z=+:VK_6ն Q  8*c5iS҂t+Fu}~ug 9oi=/`pczo Ϣg};CֱE,f7Y͢o}E,f7Y͢o}E,f7Y͢o}E,f7Y͢o}E,f7Y͢o}E,f7YrD[{;.VR7εzluD~VF::'Q'p.xQufk'ר^y}gEfOe\7*kM$Ilbi+aHڌ1)j q]+mO7$QU_dF|i2*n2V\y~-Xl%o}]}!(=@uUe3k@wө $u) $L0iz8b ,q(ƎG9&ɗY^"{rmeYNՍ7]SGPRF_ǮT_j}=|e˒=%|+o4]F?]'ra|iԹ~0Zf̧tm6y4I% f%蘌RAA}bLi_H;w"V lpoZ@qo06(op3cGRk,<"pjHJeosc67)%:?WCi} Ixs q͕/a}hCa刄a=f欉Vsxe:C:!F0<=h1OnL ^F;X`QGꭳ'kmN'VpHDsJu{MZPgL4N+48U)yTAc"iF-=lΌMa؟|=0]11&iB@D6R6!JQ 2w|U<>S}7Rǭ#;N,u܀;72_?wޡ[SƸ[[-lmq s{󸑱n3aLJBA<[V?@Bz]7 q3I|I'%s)2$ģiB*j}V9.Δ#[[Zú|P+!eXgR,b wE)IQ\(2Q.$YQWT +&n7|~ xdM}&0bLtV;D& ܏`q%(B}\.w5,E'1h SFIpYOҔ(mD_| OOSSJy]5Xu1M$'<#ǚ1r!YהD:k=sgy[;~¥A[3Av # KC }5?/m+2e&%h2~835AJ4R(h2wf1$^n7r Yv8fݯ7 KZ̴uQѥխKi(#ĜR] 3Zp53(*0b9ĭؐPfSs~Yߺ\XU:ա(kukto@b!3!-ƈaGCZ`hg(l>B q<݄;ySuy" q 
rhԩ;PkźxU.kQ0:z&2E1Ȝ \VyMZq !~/q:Zx7}lnzgz\bC0jB:` weQ4!Y '"G4GT V_Ж__Bt`.%)A,%2OuZ#c C01PP"4Т%.& 8hC991lKAk2O+F}'M;@|tLShiqy8W K9(ܣqC^:/Qi9C<lym/m׽lN܇4$8@)Z3J2l\AR3֖rRL&oE᥊fD5"ㄹsp M8a)na'so8{X4#d\gm(dm]8^fLVCQ5{kg]DŽV/k,zz 쎩S&na?yN-VѪu zm:z/ ߧ6hϣQ3z8ՎIcv OF"ں龎_|jFnׇhe{/ΏiNLWwLloS' wfk۟ ݷ6l31χO[|rubsEBSҙ>K/ji[=&Y$i,52HL-Ƚy"'&}\EH%\SCLPQ`T KmŽ(GK~gwx >qqH}{S#R?;S+`)R3ܰ5*0ݻxA@+Oڇo mđE7 G^acĆ*"HۚE!#)C-5,) Pe&H냠82keq84V))9ӞSE11]6 5rf!&l #bE:v#|1,}0?Mx}l7ۥX|@qrیY?)x R $@,e.y7fpP&:JɈ7" 2S'3E-~Y$eSedhښ"qI{ r[KfǶtCR8dDҴ5~baܖø>h7]dyȒycGOÅiynBcX"X۞fmu=-v~z)g*NtW6:즕vs<7M^;Y5j*avbMng6-|DbwvY2Xfga0W"B7ՙ 3־ڟ&j]^^6We'/zӒZ6ngB Yh-MO.Ly:/&/yøVGa nc hudZ?]>w-.w;)ִ,tE2h6A4'LubIP^bű CPeұXd\(عCǐ?(fQFdRyɝ)&Qn"72W4Sn)qv{l75DP {mypUIHeӂd'\Ϣ7"F |`]H"`nT֖أz|[Ru6lcFeEFeH) C($i!l\1y^6tYZgiǴ#ŎlvQ;c8&rVd3%+1I )T8p7 ѫx܏cQ =Idp%ʙQP< !KO:Rznyzj%nQk ݊`Ǭ V&αj†z #vąR5RV; Dx㉎F&(`r+8R-bCT6Q8Y',նSc1urLoet^1^=^ՔJvAA`v)Tu'e5zA* l.]q-B[,-cmwWYJ:w J rW &bUKqWYZ\t+/!]<`B2UeT o{_:mQ_{X\ vY0|7M&DqA0/pJ(ۇ߷.Mg 7 2xE⦳Mg))'tJ?N/*; n|ϸTqYѳ h_8϶UC Gդ9#!|P=۫%)/T op<`Ͻ~ϞtkuTu_9ɕI1Hdkm$G"i нl13bKvJrIvT 7XeJ\v&EE2'ȈCrS(|NQ(ePX8#JgU ūi]痖IJYUC NsyNA@ة1W$gi=^nwiXX.:q, _‹"9`%NFgSD L4/}C0`t* Y% K-:={#HQ@(]_)`x|~Valdc~oL\KsYiWlg p.\t)eߌw׫G$zsWQ^\uK_uD gͬDZ~V͝ RB%瑩1{e@s0H2Rۜ&>@xv&"5.S,'˺/oW0.%`8.[0Ԯl$|!%@q0X4 rk7ٻчhJ_OZlvϫx3ݸQj4NU+_+ G;+l-=:9HKXXHa!.P֜Q\ TJ2}0PΨ b\FE暠kЅV@У7O\uGR5!\(b(K*!t\,(Ǹ92kZgY o8jnOZwcDzArYx7Uhމr%K錭xHԘi[s@cjJb` =w=;g8gggg :4!- . k= G .S9&KM_.>J15\VdR: yb!X'LڧVg7g YfBwD~fؘn6e GUO8RpDm'i}2>zy"I܈#F j Q1Dظd'xi9DCt^ِOAKB( Fk$sU2BD0xXIH#hKXgjeu瓏.qsgs7޿zBnmZot͍/@h{ץ0Ġ11huC/Yp;:] l; \.ۆQww>O2ݣ畖d<5gguؚz;qqO}|~:6rcWApf כY8g8\"7!(7njlY#[ֽ١VVlO6l3& '2,X<[F?x)B-M"rD0G]Ȭ"|2:i]PYNLH $x`8{FADnDV*]r< eRP8s1Md0LRrȅ`I*KoM.r"6)5B;K!<'ǒ+sa15;1+N%!EJd)QΓ͖*ptSH,O3M!I_2CB0*d1cA3ym!DG-@Xe&l>;g -)֫vb4«ױ.XFa2>@3:#WX)2 Lh FeB⽎uH&]E~MwvlB[eobPx@:5#3yZFgNK\ i9c@a)eICfaz\uz=F9( s= ,a,P4J8U>m)RY4= ӿ%;K̳AT/3{J4 'C]b Hal5wSFOW_*)P9T(jD<"2'db(@r^8-Hϵ.jO8Ǥ\PA6΃ꞹP"dJgYr }:=gW=H䐘 U3&ΞcC?v{ L?υx6lMFh=;^ڬ$vfd0}FȘxec˒-$MJɣf6 )Of93bt<(t:z. 
2DAr{n:IrdvX-qvk8 fzipyl/%VJtL1-W"Ke T,ǟG+CV#wtFH^$=4*8c2BC8tsKEs2Op|yY ۪٣CPX* F!X(wȴA>ݞт}=ooۛ ӷ.L,:l`&7Rwsh=Dp݆Il`-SO6=Nt@ ?hUYezI))%qM .Y+&Uh/HKk)J>zS| >n3Dy˟$Zb7f q <}nJ:>GF߽f>by:|YAޝXzxeZ(D;ӆ)v+%Գ\jm|v `_J޶+:=o 1\܃%F(gi6шN,Sy-3Yxudcu{/z4G~ؚW7ZI!B/ o gotS_‡B#PQg~ͦ'_\|3s BbF;42邌`c3Q2k{V[F6q2QEd Ǽ4z2oT|_D9/e:;`JٵL`(Gkl}e" !9SUwނR26{CIJgkc;ɪP]oNXj"B8t.*ku3]1=^= ^-꜔e0>gZxEy2p.ڟ)ŋYҕ?_2 5]o8|}__Ǥ~+yw|oe*y\i/R oI/Z $YA DTk,>aBTN# [#Hyuูll"ɞ-!߀^k~:~}5uo /jqw0[7MFә`h &< P֩pg/88:,E-1d'9\;Tz g3/ [hzd*ccFLw;=HFC}g9a堠th7-H-'P) 2L)+F MgSО^5m-\$}/#EUE)$MB%-%U9(&ΞEHKk]]o+BvX 0O3;tq৭#y,'d}-ݒZ1ĖVux<<Pm^4Kơ!F603o^.m am'>g%h[oo_hй>̣@í \)1$7 "9er횈|&\}3+bm}l{`ekpJMu9Dє^xFߎ64R^|v2!:cD&kBF&-8TRe=cgpdAQS:,,HuHdI#z$ b"]޹2uCe61x୍(bVz 8o|2P}spU#8ăswV":70k:JǻD;NJNSL`$^//uZf1ŻW#Xtv;{ɜT1`s{s}q_>/QȠ)_+ .ѕM%Pf[sYRy=cOdNXNDi``&,bu2e ^lR,\*fCJ*AH >tE f&]s=tU8tUĪ&ӕE OFptz=ǯsnR0:i~m)-"ξO(>KIRj\F`BN 1' osZV:z'(Hqe U!*Vܪ[rj9UȭjT!*Vݿ U!*Vܪ[rBnUȭ U!*Vܪ[R%R%Xˀ<թ7>\}bvȳDig%4 vwNVɪ;Yu'd՝Uws:B%Ft{sˣm0iJraˡ<}O DקּzLK2:'YAHļLYј$ {ÝY+ ,萳U.* hy嘒1L噡tHr«^oz8|(9Hts+TScMu4@+o,QHձvGu} Aj X2&g1)S&yrYVX] ^Ʉ=h B]THIm|&Pm-u^b>ٹl!Eey+{ EP{o=r9LMp1hydFrRD$g9 xM ӯ[ ϱ:w&uOX?TO4(-{@ӫ'QvkY3<;"^>}ren;|sLŅ,.=S]A/nTȢ\utI2- b闫&tFWӒ2MFԮwcjGǟ98}dz7@M-ߺÝBOP؟|(ba㦛X|g5ĻLWڽC-yGXZ]FnJ+:Ҕ EmX\^5Bߠ..GuO/*ʪ(k'vt͙"!-8iDCըk~(ԱpsK瀵\*IZ!:2LBxF,3E4hWRXkȱb~S`mǤG9.:kpCwc[v{Yd *:jք)6,ؠ `^ f \;Bhiv:` %!4LٛS Pm$TZ(G93+{[Зa],Kf:Y!,pϭ1&d^J3gvofQne!j}ölh5EG;F(Mvȭ)4s`*cA$4D-'xjt9go! 
@r{{axGsK5xe4V])ɤv>aT p&SrE=aSeRȅyo9bYVg.rT:+^igxٛ9{nXNno̾ &Ə'8rm;,٢_{\^ lffs{l5 F/=mWiFn5m6 Rۤvmt'3{OfП ۬Cji[o<=lE8d?ԲM_v}rƻ/މq奖f:rŻyN~q%y|q޲g5hް]L=9f}>\nϻPJʭ'A9Տ:W7rOYH7)`r\8M?%+cܫi%=CYi)H%mR^ĠuL$֐pBWkBQ'*x,[Ѐ6j-@jt2\{.1@V:.ћ9Sʉ7Ӂ-zTsu|m=PQ-NO+qyc|.m>1s㝕&nmDe4:(%ē.; 8D|o~nBz_nԪXP죍 kmoNTn>#V֜+de\X).`1#V+ore?be|}[Y(&I`UƈwoXٹQǚQǞQG1'1B,Zq>8IUهB :%r2:R Yg%0_]j%!6̞X>ԝ M2&>ѬЇ/9tT>m=OF>Tm>Wcf6_0Y^"_%oŵ{msYS򏯟-ԀYe5nj 2n~Fܔjg>lZzpXc3\ng -j{Kfʎ qylw5Ys!>8jP9(cwc)C.5 ~ i#Ǖ=Z,7,2^3˴V[Nj =فm׿>w=nݳ݁t[iyŠMr^:~{B_banҩ}S+pqgNPii.FN*z)Z>lx 3CQpi⤇7Ҏ'۟_:{F\dJ@ S!ɒ`&Y#r71"ϊ뤥pJZ"iU<0dW7u[7<=%sá6 >W |R)SuڟjoG8|̚;)Z+ƅmTiLʉ!H*EOB7H=.ehO -ُdHtD̆~{S?vT-zJs2y25Tr! 1G'J9p:qw3ZU4LĠ0Έ,VB9!ӖQ幢6$%/? ShR\ȳ LChY=skEso~rEsGwd_; fJ,$xڋr\ bc8wD|#-ѹ"@'1[ѥhrFST\ ǹ#I_vqIyD^k7̱TsM$r{0}#xIQRQ*mbVdf_d6*sX3G)/ qƱPօׅ'{YE,̿ ;N]/[6ߛ}E5LB*%- = uMKJ%GQu! lZq9k~cW4b18V#׈;yi|xnFc$R$xЏrZqb1@ՈFSBB!N(ʃn%D%EXGOVQD硰F,FNNww pu鬳9)zMNY_]@($4>=xt.[_7tGy}/gTxx_P^ghG/r>L@+r,w*2HTʁ}' S+;ԌEw 53!U@` ZüqSHK;ued R *TG#u` `A #Z?$`$Sig9mzK<ٟY܅_O 4})ύ!?Nsj?z `^ ǫ/W!uG5\#wTrwߙ6OO>3Pa CHЀ'ӋSOlO~! Y]ˠ[5qj4r0% j 5a<]ds^y] koT')9|Z 5Qqo\Mڟj7GHaZmm hyZXm@-σ/?Wݿu~GD-o,a n<3Nц^ɫq?g#GhKQ!a!Sq-["*7֋P0/׶f{<{J]pl+\k\ Z8]7P5"Dm6&jQCe qhҵM8zȚo1r` AgG6s3b`)er>(5P P B1.r?cTob=XB]8Fc]UEV'2J@fVDj;`ky?F2X !Aj> 5ں|Q1]z'1'ڻ܃dWMϛ]!UBS9 .ȗ̩Co[$Tᓼ6ū.ۿIJ&aFȝe8묑Cr(gˆD"~ k#%pO%`;limH>ۛR1+ fW5O b'_FaڹhA`Ⲭ vO%@ KgISW3NAQƤ= I1?lsד<vKtۼv;ö\ $ZL'wh(8z奵{ )$ė$&r y?ǯM aְ遬3ɛ@ix:+o/~!N/KNT0YzoN| ߙ85s#y|ޕCxKPBAx!u7'@~%i3D!+"R͊͠:ƣĊ3\ܥhLVx3GGJ828;{_)vRsmQK l0@:mϮ jWdAd9LT9z`@>0 *+$iy6 l2v]]e*A + 6B'2WJL: L+R>M+%5Y3Ff .x urb3lq2*""|}`:7\n7z@7 0JJ}FP%L 碦3OWәJMz5մ2jLm8oqI3jҖj: LV? #kG~ǂ)'gܔ̾}5|P\E+^)OlBeifئF1/5q,hLh(ogQxKԧ8!4EbYs;%0A(@W"0e2Y-B#Ebwc-~jkU!n·~sK^n ReA:T0N/jS^++).=ye<'D1ްeOY{n^=^e ^-gpO;|T9XR 4\Z;lQ\ bJDZ-s$-K܅RH#3ǀ;m|.!d\D"FsA!toblҦ KX|HeJB+p pݯjA0\_ؖ;#/htVC#ZS#Yb!V"( 9H'DTڄRhT~%޳U6=+ sXzȻs'1&YB@`̠ z)*:a, ;!-G!<,ц S:߃eq˽I,F,G6byJF`S, *]m׍>\YReUN\Զrho'JQwSǼI=OYy!(*hJ%qбN4D+h Ï,BbAXėm6SOmZw3?  
F8X]aT=).5σvw>X[g)HjdosYg7 S|;[2=:z!tRm+?)Q#' #!æE F_ƵZ2W?.c O6VjQǡN]Pi1ϝն3Uc܈2_]&ش-B`p׸ȵ?ٵ|On;{2ރԸ><{y[w{bQReaT@hk֊6};6 G^i*dHOG]5u̔k٘ =>XD"he:%* *7F+B` `CN{F =/~[\[,&k"z/$1Bx6G/Mj)9u)]ԏv{D65볩zE(첶9vu`nM.jCEho+G5TʓT VSӻe=*ⴙCPf"D 3V򊲘2L4FO4>B |F%h+!En̅HiIψO9 G#pjꎆR8>D$U>wyA9eqG!=s6-11]x=D2 Ex鼬)%gXKqxٽ1CRv^:Q.$&0VkbQIN$8贶 Kq\ E᥊4`.x뜡DB0*QEnpR9& bL@ltkk#v/TRJ@ WLQ!,U*j>gxlI.Nx<[ޗDl%L20+:'SXsA mJBh;<-:Rζ' #9 ҪlP(I!F-l K<.Fag{Yf2-(x,OQ<,$ &:BhPA3p)1@ eTك7^>!%Cch Y)pIBxffTi#>\4o =XzDHPyhcN3e ںT-VdX 6@uF(4T-W]-e,E&e6!@8lHBF"D9b9 qO4ɯq/ZUۂVeǁ/tH{0 BDC5ތJ3AQsJ, 8 Q*5hX4{579D/}JEv 3Ô F@g?,C֕}!Ox[Lh#hS9gVaS}tt4),3<)-F/,,`$Q2H F'i6X0`]Ks#9+;bNsqbVJvK]~T%zdJ,vTےNL#>JgYr }:EJgW=HYd}f&T5q޸7 >vwۿO)Bu4s!^wUu#Qpwh0#w7_Nホzʽ,+9kQ~^8Ok]Y f˚49IKSR3KvS&f / S)?7o8{]n!v1-j;UfI tj?DNGΛ]nn2a68|n/lX(l[O\M|bڠojL;e*%/MQ-25ۅ4bC׷ Crr/eåt[d|LmY!SwO#:;L[fFZxs`cu{G+u|r`9 zi$\r}fO'w:*fuxj}OdrA"KK LzȵfB0 L dHEJP1s&r,:ךW < d7VH-Ȩ :kʄ8MO^,a4}iS '@9O9*.2>ĕtrއ ǟrѨN}pl__?F\wPle 6DrT^KDXdv(QR#MlN)@{ eAFm.1A[L9gXM7|Vӌ}VB۰gN˚ ~vw),5Alg&J yi aV18#RT&l%JAc4A  ra('dBhT0H6cP39bfB͋ZFp;I+dҎ}Q*64n6Il* s&TN, $1*rT$@YcҊaqE,:J5 Y4Y%9Q)kH5Z_nP5qvaO![OǾ*#kq{o4! ++!26琩y8cƒaP+3\&D1@J 'b$&-r2UeD&~ou \-3Zg5-%.6\ܠ"`Aފ(48 S[2n}d{+s(1 %Sjڱ/x@؎tv}s5,Of"};*C~L8`Ɖ50ɂ&ь/ ?~LuHkc)Yfr BtbRyH8 X%), 3nW<9]pd4 2h#"t& Nޅ20^ӂa22vk2A=.ҁ=a^JL%w> 4϶%G>y3!O()r۽W&W̟z"93~.䊴Bz""b-&ƂgWE`gWE#k%"i;y*RWW2UX"Q NHJ6 n^v:/+KR;/N$e~^N^8.&S:9e0^ΫYi6$vVC{UmWM>"e$i~En\ڞ \i"%W Wvuuegc5Ej{r*wUYp46=D'ӿi1WMpdWJ!`_o]y|4;zuI =wEPL~A !Zuo'1+=+!Qi]oJ5v,v6Smxk7#ؑgR*n>_׿^KF_d:~wq8F 񖯣;z7R-mhا,BE},?3{Cr"s]uYt,ExSV)xʿnO%y *NI:g"P^,ҶA-v+JAdP+JAdP+J୵9-ss6[E\q6!EZy!EJv,/q V gmU8kZVU8k.9X ֪VЋϓ'6&?oWpzd(BSruBeR772K:L2}$:ᅓYw`+&8NP"dJgYr }:\8#衔Hx!VU˦ǎX_A,+ 4s!^wl7u؞c=߷:?YYY?7#eL<ː#܌RIJ)yt:$7ơJ:=x! skHOF#8-xuJvu~nof>,D5ł>Mf@~tqEF:2K0ņɲv|uSAi}1mz.{*A.enݜ,W2. 
䥁lئeY`0#?,d߭7-j6Cj7|qǛ].{Yl n|a58Q\j6JOLTWiL)< eAJm +6=4z} JїyCSBK pnKHYH*pFtv`N >VH4L'+wߝMf;zTayEEj/ nnQaf&x;BM`QƽLc勖-~YwJ+uwҼ}(/'3fmp[iCiG{:F\ a&@sR0H' hRZUIiU40fb8{gxF؛h펧WQEl p&ݼowD 1ƌNE꤃_0s:PttW"ۏ{3nk]#00.]g" :o}/Qsb/3L3ժ./D"-G4: `;!EJ!'g39qw; T :g(P9\U%ZA$K LzȵfB0 L dV*UEJag7dE$ּJEU Bj :kT8MqcbqZ{<ZMOΝNǻ&(VIy3l: N=-U{r**ur@sA&1!Qy-aqBrJFϑuݢlN)Jm"䂠#D6eD g-&ÍDR8q!XXM3B[ mƒG< KR 랋2Namg83Z?oG'Gl(]6)YLHOR (yo)#Q ,. eEPO6Ѩ`Tɘl&e*gr̢hk$˓x6TFmh]w$b;T!2QixDxU3 BъR/ Z%`J@P3:cPx16cëGtP ՝3LgvB' I(#+WrK }I 0ǘᲪ[D^j]JEAezc,F¸ j2|>r| Ն剙;k: 8m TeȥЛ^շATx͒cŴ,>ӹr ݬO 1*P "S HВ<&:\L%Ȏ\1]8-'=EճJ@ZAjZLec%YFsiɑ"$%hjsA TAi\FD}}^׫ۈ F뤐( ZY&pR\K3΀C\cӈ84i?kڊnK+^gBL 9LEͼ xlж)"]Ǟ22WL֡5SBL !;TYs+]QQ<3Dfy/»2UW%HҞCL?#ɮLTXJYH(#cHRthl̷,CJWM&v^=\Ȭb;g `[;Ǖ[;l<]o;'1\O7`>_5!Tp:?5ZNpi8w{?ynY }'S"+-!]{NCMkV#߳|Z~X$(kwuDȱ skw\F>cGͯ?7XLh ~3n0?f?*Gވ>goG$/ frd4w_{LoG/|"9{Aa m?-$e3aqWF|f~[;mrv~A.cky Zgz͢1hȒ #G<2HΕ.Xfg@ ڨ"QѢ皂*獯bp`>2HC}kÔtڥfeEmyc2'\2v{"v=1osli9yg+>Ds3wGɤȝNPDN{OI=1 h5* 9g!&-y1 {Kve ,$gi`]TN!rcJ0$3CA턽^l3rWvs}u |7{n ScM: u$$7()% KVǶU4M(eH2Q# Ȉ6] ^dO,z>,UCm|&P]{gHatv.[He\;o)☣YJ{:eh K,HD Kf ,"B sJfi@McJ/~]{Շtbr<(/-rsI|L2LKg"*( \3QSːj]0\Z =}LWP[zqٵG]jp{ԏ BmήTKdQ.stѻEdz4݄W7>i z)sh^W@],%Ged`mK}:+TyU瑾EIhM Ct2L1${LeH˸Ba"dž` VDDӽ|[f-] n ߠމZduY! 
*y4ф)v,ISTs#;X:Ggm8@0;wtQ#Y&]C;"1{xJj¸$A  KeD_,2+)1 sk Ldޮ6 8ݒҩ=8@+y}b\|ݠղط}e V:Ĝ;۳h|xidRg#rBˁPjc4V]hl)aJ0듌 de8BOu\I0)2M)zɅyoCf1ZȑStpTexYm8[Eez/B~ܚ}솮^xCڡMKIw Zuaߐ [%ZwԺҺ^=tCwm68f!e7]z|xs$[zy\_oqu3o#L}n8aV$_aS9u'_4l.Lݷ9f|>$oI*[O)Rkͤ .5[IA\FkS(%g-wXl9-&:eer}}3_¹3pW=iw/doH޽Z+ƅTi;B@h*6| rcٽQiLC^YI fiy`Voɳ<ɳ<ɳ<dQxa'Al$϶ HHHH bփh5B#y6xHHHHHHH'͵V&mI[iV&mZiViJ+M&Ҥ4i+MJҤ4i+M8 XˆO|ep3/`83gE;[FJʍ*)Jʭ_*)J-#LK[%冣Z%VIVIURn:>~8:wY"5Ru2Б YJdb˥Zɪ!`y1+tG9*p]]ĻLҫٓ˗H&dLn/ Gc79,;BrOAQ))L%f䲈 '!9#7sU4G2F2}weWиK4L.onNn_9 m:'a?0d}z_ ']+[ *(u,XVVmZ* P耴s9$wjMqMA+}l!2˟gKw+rբ3Q9H>Z4_wHJ:a*Yi)Bh&+%= J[%DYEoUJ{]+0Ջ8?*l<|ñź1k1sl7ZȘmh:QēT"p`ɰMy(}Kiz a-ϼjc爗PIgmoN o.onL;wE%M:y5Ƌ QMdcD'w:Wyi[TOTըcG5hQ>Y/TRC8"ZΓ$##8@Rk8]E&Sn "!a],#zYreVRbYBt!Vp6VhHiǥܕ{tA5だNAA%}>pA^` #GK'_|Xr%LvZymB4s)%fA俯5 .PFUD~1\{mY‡h lT.D9zbF;i=Kњ3xUx'΋)|,kl7Ͻ:!v<+6@q;#r!;s0䘍fQs+ bԙ%3*lU ;ce%ٻ6n$WXtW@w*umRݏW)4lUd+J:[߯ARXLC0+fdϠ@1C)H)Ht%[[(r'=+IFw4%n (Ϛ?ֶד+DbTW̞#y=ű~gơIēM[FD )\)Pe)/I8aR١|%J8{2ޡĒLM=HLhCH. bF\5u6 b3:^$kLFb%-XhVmj1Egtt8t( qӭaO4ONFŠbqՄwAl3`f )`CɪLlY-"wU[tYήfM( u8 U^rֳӺ+wSI^'rr~n&˛$w:{zѯܙubRoC<ʻWL˂U&2V(9$Q$־ $6^ Z[&@ -(H %8_[̠wUEKe˘uhd&ndUa0 `,3>x!K9^NVr׮ޭvs2=>Z;ccei O5AGAPf -)Dd &Jc^`0g@谉 Sf27L ( EnE)>s%nΧ'9Qڱ/jfԞ]%^5Ԫ)PpٕBJT4?cJΪuò^tA5ULaPTRͩTONevOƩ 0`D3"Έxwhᤎ(A.Y_ĂD141Jxԏ-$Q UkPD^#L bVUZJQOcV 1ĹG+ѡΊ˪SZ%{"g\Ep0j䡀 q5RI^Z*&D&Ռ6%Pz?CTpv싇v0 lG)Mys5 9 rFz7G?>Rc!NJtxyRN|=6%-o[4e􆗽F2R $3;>Pcx\]P]D#E1Ŋ@xZC3֓%2\(%6( Kh4@RVl֕Xlf@WS̕9&m4%}8qyJ 7{Ҝ~sٮMrxisw5yA]m+4|u9I<#ث$0XIWfM?f>)i)gڅ}&gRyg??_.>#= ̓ Z/?؋3Lq|?gI}Xi<=_ϓޫzjkؠJeubR[vos皙={Ͼڒ8=}JYO%AP}B$_YHyWg'(B56l}X97E_l^ˁ<جg/NV}?ʋhzU.2wgWZg-]gw/yASnhRdc#ePcpq@J;R"!"2`b1@5E'Ĕrj~l/k!Jbkb>/ E_B< x<|H΀J+I A]O$T{\+TwD6,VOX;cO:J519L5,>I&S w?ޝYۿtAJX[k5|6`J!bPYNQ@&gCJ/K9눽)v0סM.^SJʩRbmTmeljǜvלi1S1Qm3 Nu7X9GzKj^"zr55K>]RpĞ2GŢ}&mnD~ގ~hNwd҅cY: AeL!,[f i'-Asm9k'E}M媎uSAH3ɀJJaSƦ$)c[ bl 6軒clBZ5lC]m8e}R ׳Β[ b7۾.gGe5o_GԴsr#huc<`=uiץD`F&w@p[.,Ҫ+ ElT`UUKIq'W.& \uiaKy Tgz:p޻~ڂ=t[U^nk_,V^jKq/07-oZo.4upqK Kk/ȿ 5B!Lw9 Lwq?҆8uR ~0]p^A/Ώ׸lþXOqTgrWɋ>>ZER.NOpz^Ώ*"cF_R15B=w7H[]hMMjkr;A[gU~g|lfyI)2ږ`bL #T0L 
4άE8YK&֥NSU JVu=ti:\u)q9DpIG/ IէIiӤ^xy pEW4C>T>@_Թſ__ll_OC\GcV_o'gjW/M.ejKj,.2FOMB :~ztX)L$@cP)jU%m!4Ds!ycixͦP{ԫocgc1.qQ q|KNXu@9恊K!LɔKHXXrlO&!$\p1#U_.CzFZLtؔm; l,ZRf}تM-)>NE&=K JN^)ГV3 ֋֢ VzDKҳb\Z1 %C3HG0eP;UIPlj$ad97%nCgTNxS0LOmʃpndxƛm>c5M%dPrHH( $Zsz"#jm9N6#B 5IUEKe˘uhd&ndUa0 `,3>(x~љ󏋴9˶VVe񇿹`O}<_%\0ޅBapF=T8c(ތ!8(l"F*:l"”X}S΄؀PV`&nΧ'9Vܣc_Q;̨=3; VK+p1rak7*C՗̮R6xTrpV-% Wb ZG(=rmA|Jp8pԟ*ggGKtqӏ}1F8#⌈wr褎(A.Y_ĂD141Jxԏ-ѤPvl1i8B[ n% [%<&os]Iv8+ѡΊָ9LKE4cqq}3.θx[5dM.DH%y5njlVĖV3ڔ@=J;C;~xvXwK_wsy# lSn{.6 XWռG\W󮺚u5?!QaǛ=ϟw> sV𒼴e_fx}o"-sÄZI;R4S$54c}BiP \7JmPji,n b\+ ̀+%rJ_M`m9:0Rh=J{^*Ks%gv?vdވ69f 2{wכfS:#U lm+J %K9,~XO1xGStכsM%R:An~:u;LYc tIQdIWڨٲTE 昽G`jLs#!o7~_=D [#,wTdh`0pZbJ`95UJ:siLWNdiNgcaQTmagu#9&B%T&5L:,H8k >޵+bi{:!U$+<͞m<-x5qd$3oQ[Ȓղ{0qb5.vU__Q_Rkrpb:\TR.V{EAe+x\H攘e)0C:);_LN^m QS!S|bH,4c:[t<1gbZ~\L/tqCL?[?+לxbfFfYN-v>ڹ5ڹ?waLNOu?m-CǸ@vuE3,{BZ;G>b OM=F <{dhTJ58뒜\Sfq8 b" 2P|bUO1u˜=+.J]M{+El&'L}3%8('p7gtpn_ɲӾ_:4떬s|lCw&jAԩϏ-}zwPC,DRgt(2Nπq4Mɝ)wIعjt5yq/Tc044r R3LsRDi΂P18^A~+ p(YUrEm+VH TQ>Up)Sٞ't9-1|c-ֲؒ&'M1B=ALV\sQhӲO=KP%*ge:d-2&e P JsK;x){v1א#y7P gX0[08 c@kc!~RBG♖#js 1F,ʙ]զf-m`ul7qv<`,e_u+l xift»w_k!mOtn~"Iaͤ^ҺݢζtzfKI|i;얳unzwzn%=|x>\={ceΟMn磮h_q@IBamٺᯚz6nWxm^?~?F|/7ex)~>28U=DXpp4 TP-؁(B&`I_'rFWӹA@9;gKBUQ KN"e@RяN8X {x!vGDu}:FWiq pv}Th s&* kuI2p9x{YKFX×;RMZ2x-(A{ɿі; q9q HN".RU#;$ou^_UEe3v./N餇M(`bہàbCNUmRgmڄ c_ms,ŘW :db W8s:iq^Z*9[=YSP{IA=iqx7yCpL G _SY*?oGOmO*e>_C:+ܻF@>_ڧ*[)v]t>uɩPJQ fJH,ҡfF >$v(p!eU5E*rT% B4Z4>!8c.-)g%ޖ*!&ΎY1N?z,t~1^sKqPr],ڬ*;M+j:/j0&)ʮPچhdR,Wý`3iš,/L)5O r%k,T軠R_l_7I&kƇr)IѳܓgiPϾO2wtg?}4@,eW֘:: ۫瑴vd'89ڏJ.hg@q@O? T}* R(\ԚɋIrŧF__znӀj}NÙi˪^K|1In3_\CsoC D͵)$hl V40YM x 9Fjr@E4Bi<$G^Cp1RV5Dwlvv&T#p"1>T|3~ O';?;nmxmCZl',Q;8Xc]-)K9 aNAFs&j_}UwA 討svqyGQiG-Rx_R,& b3=TtQᏚzHmߢwF(O{+5 (Jz*߹KF\)q~W#@zg}\Mha? 
mQon 6]ȞRJ-e]ȅ%Wdg.bHh`Id=ۜ-6mퟙ=+[,8nXoI}e̐N[4/~4wߦ[iǬr=h =SHW&в $G]1znx@;Q@{ՙɻ-e-!p&KN_;Ҵx &DkėOŗ?O]N|&bƬ\Zc^ f㓎}/6)C1.ծ82Zf`ݬ\~;N~q>:;y_%7TL@hp}q~S^9e@!1=6>&ׇؤe:ӣHI FwVFKjiEJR#1{TWAc< u%{F]5qE]5ijR:;TWG@Jfu8誉 t(InIiͨޣr_p[)2dytsOw9ѯk{u}'^tG>(E,OT^;i|gKO_~j͟n{{0Z{ "փ߲kR;*f]NNjY_,Ls<*r6MoQ"2(?Φec< ӳ$ͮ榓t3_*EQØ8?3.m%tvvZVc%[̅SD ?b`30X+$s^QZFyn45u"mv429?ߖ~ v-g]o|H :4*wF0`gd$K%?aq^e=äY^:ALt *o*ǡo:u(PIfPQG"k񐢻M`>pK0pIҌ+GyVv_zp!br,ߓFi{ef#͇haѿoW7LO`nNY8D|L{X #.+ek'ޓzkQAX,zŢb=Փf=xEeK{8XUJ'-Y.>[hSA(;6-vh<q2J9dN 멼dEî6WTzZ;çͩ vt4dIA MBI5˂dЁbF>1PmUڛ8;R_iܩg\oMIĤȓ_D6[^^l/y-AgCW7=obt܅l"aX %Am1[@fxNJ  2Ttز|ХoVC^[Mypٻ6vWmMnoz63 0#;\jWlJn]Dzn@E0ѕX#hBr$ZG-(Dv'abp 3E$|}eݫDRb^əd}!&;EhªQ$O\m8'oIڏ4HCFV8Zn8hHFVVE(YVL*&E:"-N./fcW(},{`R*"j!i jH]fh2[fFmB{~qe9ڋ:&آ{G&l/%rF VFOTOd♼O72{>)%[gRZw^Y|ы=/Y )ڢjrSc]i%2rASFB^8uFl[ǭr&H*ҨF^dR!PYGA)!wI݆GGiL\y 5|#l \x|,f_k7ˍ`58sq+#Omγ#񙯇S2=X@1U5Xk0HTFCq.N%n,#χM˃{a]O?ϮO?}-8Ok?wuf(]PChҦm;0u{0w?>{鵲*gt~g][1ʊ@؞ӧ ٗن6ۂ׋ݟ՗Dѧq)V-/FoWI6Vsہ-vBZRs_ɒP׻B]O>Sz X Y$@UN<&6'rĢӵj%p-s۔ JꃵJ;Y76I>s3WK0v}xp},nyC=V * H⁴`-_p`U  YbQ5J:M80rfrϊT6UΏ(UwJ{YqR %jÕh1 X3DDJ@2]CVX؄Lz['UCu58QEgTsCT.Mm8CbzL1Wkn ·Ax&~>yt̕woEO1_ blį!rx )YxHxKc5'Se9C "eq` Jcprr8+2+زaYwݝ/^puq{缏_~9n?cBzGQ>!po[oDs5U}|w:1v|i~s|WͭQ7ΰd8QkCҔQp1u ʕvWg8#l1VчPFEeOKcCGކRx(z.&BUQFCIHc.BZUbP^Y C1nl=./H)=LuR%9[E#۹6S9'B1žS)4eDg\crd!K>kMFղl/&E0:BPn$)^r519x +rP4l2uURĊXaIGw`n:(Wce:Vل VhAR-5$*VתA< :0V'=؈tk_A>v^tSk͠i;P-6L-6ܘJ-6cE%ZO>`-*@+va'UIC+QH &hW9Pʳ}ԴU6"*Ԫc90I'5^Ӛ1k5қp(s-|d3H{?3[w՛I|6q,`ˤf?6gch9Wgt|:#+L: EբDC ACU%J4 9.:>&ӳLY)W0 aFQ`J3vP(Oʘ@^S dyXױ7 1pxZ_n+n `mv*!w?F`5q>֏2ay4JPևb?zv @Qo,WtAvK=xSe]ȅ%;7)ZYm`mEt)׬Uj/)ARʹi#rk<~p4ﳳGWԦEO<yH䇭;~x+z hCZU6ܠÿL~ʻ챝ߊɒsٓ"}N_0xA_|.\޹RZ>l_xI6'%96.4ZJCFkJQBP);l#e6!Cv:D(>9|r< 5+QRS5ɷ^x +DnN ĀH9z0Y.VI`9_|Fu!GD$j 3OQ\plVdGRz3vRj 8dÏ_ze08F;dkpZ(8yR^¿vlɳrE/xg[%0\|ʦlb \ 1ʪ &^5_fLk |g0\-nHknG`탯5D먦Ee8=/mц1 Eq[Q%z{+nr6D &ۼ_;k߼7wlgMz}iGWR F[ПzQdYv$GiEnCHF̃〖L)^ |L.kFeGTQ@RY;O $  G^Y#yqg  ȌW 2Li`I;TEbնƚ1YL>M gOʚ&{DPq#-}>ơ#YփT -,+`CAZ]9(K% Z"&wUTv*&Rȭ޺`1 ,Jtęʞr 
JgE6݊ʟ(^`|6Em˛@&'ofQhCTo'*2yRX&HrgĢJ7?V\ܮVydŚJ1[RsrZzX@D5geV1=\TS6LFE*Fe6ݖVf Ismal d %yv! ۋZyY:Yo x~~l~-v Ŧ9(AEi%u!-[#M)tQ^L㬍ʹzHmhQ6gcM36\USXŷSq] pU%BhDbF\%ϵjd'M=XR& ƧJUR4CXr8*V[ώkjKEӚbi'^t Jb Tٻ޸$W?ۙGh 3Xlb0#R[.UbDQ% d2HEfΛCwrwW1XM1E#Ո4FI#Nq{ ovQQ%e12ETAe%G\Q¢uT#RJ9/HñExmGhhRٱKBIzALֈG )˧N cZKyOzGы]P J! dBJ:zS1YZ1%I/B/EՇz`}vZm a^#E?~ďޔx{6L69vK;@wT)#\2h::آ )@aJڕ;>4$Lb$K UiQ1"Ȕ 2$CE\uk[1$:m3e\ -RV N|[Gw_PAGVmynۋՕV]\삘ύ0O/!h9f귣fb&z$j48^] ZJۿvBsd:Z@q b:F)D`3h=$SvܐQ  f8< edyEђ9y,RdM\B޲ ǘ.e6ZYm뽎@#U⸦@[̣`\cK^_*gwe3Zl'T\k(9˝y󋏗7N^|nXJYl7|>ѹrq(1 A &:1pmFDcbkPVPjP<;1r5^]>8oy w*Ʊ2d{kjFc-T gjR(& [gty(>ΞeֽZ6;؂@V(Z 1dUGI Lۉ/K #cY|g%@`BNAp!T :7^X@KMoy .쎩VxK!r+i`tbBۧn kx}n1z:g^y~BUz|3x˥=f^./.̷:=;&./0H˼[NYsFl踹o<:o2R&H.$ďw _o8>Pj]Z#]C3sPr#֍ܳ}Ta=u3msQ?QV[:R˱V'h!5.$g °T"j7X\m(!\9euיTu%nߓ٣2Y2G_g6 R2puIBP)Df)|"QKCA&IQWHX@))&^kk3} [#Z]kU&==z7' X2h}HYs Ox"u-"c%-Dvr5ĺ]*blnRƆD``9kuVÊ'=؈dseJ>}{eY<-^ Cu@ru,"_x):td.?LŋQKСQ#o!Icv yJ<ձ>Y˔{=Yoܖ}sjU^1dOQNoTEo%rjJ^5OnRb+'Mᗓϧ<;q>~_{='޵>lCh-,pt{53t?Guظa,'V#G~5CŇfFoE>o5: !XH%r? 
&9TUgIFrR;PD ~QzzjPքvl7;+~~~@׺qQ E`BY}вUlB-uo;ve+cQwvu&*%0 ugy"d)p66hsA9U(f` jՔ(xS˄Z9;YS7ۤ[.wd&cO-ae|wXC=,wç-|ѤEn:DjBM N\: jĠiq*+0F0 ; ;VL ZA k欂 Q:TD6rs0l(rQ PE*Vh3BWۃ;N4twKC?- 8|p~q)+Y;䄭oK<h1Gy߇iբa\萣钁9OLLs*yܒRU@Gu؁J TP P!f)*(ȹ K=Vr)蹪K s${֣5lXU>i\PQ<4KtȀcKy>m(62ԵgUgO]+;eYnn7sc9jnӉֳ!EXoT]#F|cLPeAw޾2˧װ_?CvROB%s$ƀ'}g̻x.EZ<mLLQ.orY%ԟոjs{5ټ@)!Ôm>LԱXҚQS-tւpmrz*8n|W !=2=z#m}n.a:[?}8 ךԛa#8A#Ak_}X3p_u]~y8bHavHD]!w٫̊LSÂaX0}TZmb.T6ONB:P ЄVjtHc5œV7DEbFgms/@lKhb_<$cUԪPCov\t1fmaYyߞqrE|\I5d3i1[:]LFL TζX|LG$%":ՐA{"T58tԡ- >,J]]{5V.abQ?_1y,EԷܠW2-o"ڵ#[ިtn[-7x=|O^>׻Iy[_gC4hXRuXSr0:#Ʀr(SbƜPI+)zgH;8@rIR֐J# \E9ɆA#1ɶTMY-f=- 1ǭ,mB[lm__@\5ߜ,~oRvӻ[6Ԇ0ɸrk0cv6֓k+֕hM4A4B%I1I1(!lQ{*:8]Kq`yF{ lM$÷b*GORIxX577m\|>ݣ6݆1MwȈݴ7SԚ/o</V8_9GDWmKDXGPCU)!D"SrF\\^IlT=4 UɄQ@sarI?"h-)W.ɷzD4yw*M ##k/w' ,XXRٹ09{_s0>9)8e}/ E7=@79^B7^?_}j\qFiczS3&["bJqش R8W ~[rcD3-ñ9q/+~_?LrLqvvS4 ti P4kw.fss']1\FrIʳ.V]7}жc2S-"n`0h<~rXްygX0GC _>_Yknxq'SL‚E'Zv)YE,&dR=bzȕ_Q$43X>\`g?mm#$[lIdR;{Il5Ůn!,:0.I}'ታvkJ5k`o w?{Pzk9TXU⻐P@49ǔ=ƄE57(F@x`,ߞ~[SmJC|c-۔io(ϷuB-ߩ /l=VuxCT>o?=S_?._\o,rZBͪJ5qX=7F3.m8ph+g i'#u}n쒢mlyFGF'rb9FmÜ@"f#yT~`"f3YoRr)i2mJ> њeetVym 9p[Nn886C ťBbNIIF {PfI$  $z@!~V:RIDګai}4-)nZͺژt]n'mPX[m+wJVwVгko}~+EZOh,'o|@1pJB"be!@C遫f5\Lz 1K.zRl.EYLk8W2Tؘ996bi MIqw_iz5ywٷYXڹ Tn4| gwoV4͚X`ӑ3RP.J^f#SQ<fWY ePĞI66AP&chRf`i2g?b8H֬4Ǣ66أv`wykbqBY rdAĤhh* "9&R6aVڸQqicq d251E+8FǬҲи$n̜xXeEƶ bcq,"چ# A; |Hd6H.Y,k#C:6Dƣikyh-rƴ!ɉ.Q:d%$4'Z!`3:0"6fJo5'\-4٘`p .ysrF*L*ZP9zɝD{\| \l 6!oqx gS|W=F 2xpa3E?ZAxc=pq='&>U ;-W0]@ U9$>ٛd-߈w˼t6JIzfĈquDì[ɑ$z0B]:;QN-K=OG ՙ{!UtGp2ԤmU)}:nʜCMq xvVg^޽1hh\!|kf7S}w-^nN`jFbzz JGIW̕+⊵mOWTO)Ft)e1XU1yWXkmኬD.zzp6%ʰmg\t9pX~E`HϽ,\`?ZΟDS?7L'w&852pһKR3O>B#wQv̍06tq$a&l g9Oz< lyzdnRTQV_=CVY\2NOk=26bO٧-XNZbbh9_>^HfVI)}ʺr1sqZ4Clh (gp6A n&Vl €\QuWd-+9pj;W'U1Wuf%X뢧USoM/@{m eMphz7^ӥP{7*/bu۝.7H25g{/vEP|v~~?mzn!3 b.ȮU֘Ure9WH!"M%>"" +Yɥ•4j|˙ \kj;\+Ap S\5 \s3pEs-UJѳW bCpJ֊Ls3bXkZϮzpE-+eE n +sZu`b $kkǿdwFYwBҠ @u7ϞKoy 1ۚaoF}EH+kTQ8.NqUK|6Xd)-aPy@:~'y~;W#{D= J*qf+%oV]Mt)xl3IE_+V^ѿGc v`k:Wde*ֶyX)XWĠv]VU1:3WU-V]26d=j4,ҕ 
ntVܒ._So>Iѵ'_Ig!ׇ^;YJˠ+>TVP #^-Wl:].NxXX=:# C=w#K럿W,[j^$!Op1X9FKHSK\ˠdr,Q1j [~-I Iӽ9e>^úA_៓KѶdm}P{o?]6ǐһw)E rlWQOqSCɗ`upՄ e 9a4"LiD698ۄܚmɏO/K<| @pR:A-sZ 7q?HʑwN"rHfgA?om BF$G5)Ƕ2%alJikvV@܏b}"&TBM\YJ<-:C`mz5wR}i<:ޒr< 9B=K"ϮQYa~ ˒/|._G g{{ȈP+/ %_ņat␡Of+ya2P*2{J~/10q^fWz{ePXbX2s! ;U.TlEKmU?:^6ea:ۤeqzu%fQOՄ!$@prT;A`c)TqIVp]Ŋb](%\H\֥ȲR" \g@"b{C dUٜ'R|%z ]=OD({&B>ѠGe=`],+2ސmx Ucxy~ey,q+.K/9ռT,E+ɫ8z27(iwxE~5z.^Gs.B._8o^b"!U62YD/}UT.zzJR^K"z[A/hEF!!JJFKf}$™,$0#`Sl^\JZe =)Ӕ.} {hM2GNYx^ hoX@|/[&vj:-[msahs}o*: ^ {^‡W=+_O_F4m6 ]u gdrx[EEE+*]?)]Kﮞ^x҇2hwͺB*YQjᭇW>ԼRr?L'-^2||C|~2~{.*Sqtd\'BSa5,G Z*S|y\ t(}ULڜj>0ɦjQՖϫKC3WG[A2DȕI (;Ӿe-W} 'EV6lfgu()ABt"y֬Y/bRRIy:eb;l3uQ;c,SBdz!\7,3g{Rv$l&a-5b0Гzv'(oKU|W_ ,)=G l;^*`" .rf1 gƚf'A*RB(PhQd*阄h';xEh4f΁S)HSSTʐ(2)y4ٔ\ A Z] G H)&SlW<>R驻L鐘S}]$/< SL1"C&;Heqު]VݯjǨQ}kSI̅D4sP"3 $.dTZqV(Ӑ}E}lK7tmAtK"ۧ(W?LD O dB"n(+/4gOɪ=yeyW>i_'k>Z%d%-㌐)ղI ä2ɂgudƈ>NqY#YDr Be@IQNt\At]h:1s+e1plʗ-+/Uy)A ˣpZzʳŸ2L|Zݯ1qYmh$l)>44ROTkBȇ];o#wB%9xZ\mv"{M tݥb׎ao{}s&}DxϬr$ldUr1_yE2u`ݯ'Tww~wvp qI zE[KɤE"jJ43险:}lV H!2dm(1 %&L%3;I2H,  @fHl⹎BĐ o|Gd^P]<0?d1d)[%\ф(((. 
%2DeT9!-3am`E$d 3f1`1uӋX]hr^Nv|n [)wx}u6f1T<ߞ](̶\*cIm f"'èY(Lf_+_uh-zB`A`IIQzMN#+UI[~v ~g}5Ql9א#X6GdYx]RiMrZ$RjeƋ4e pDck1,JfA!KpB dXeK >qVXrn=90ߜnW?3tCŋ7u *vۋVnQ |ƚh8 !gs dY$ Qf!x[i<ӫ<@FKCo*Q2!BV&R`O#?AVky OΫܿY:;8F #g%j"Ă(DrD' HZf ~b'iQ9.ؠġm$Rڲ\}R  |b qAk!)9 I:uAuD4LEPp f0G]wۋo\^|B<+*xvB;11Rט'$σad~I F|6 ׸ysB3?natF׻nž]FJ/׿iGQo4rjwTt;4Md$]bZWn^6E}̆'0n;Ď_vclW'+7kou` jXu+cMbը6|_WyZ`RWҼZ^lfd/-^ݸځt)& -.<ߖ5l7uf0&寮*6ɢ^;㎝C|AXu-[c׫bk=ܴ9~3,oP˳4t2+=SO4* {#vՍKf tzznKh5t<\8Ϭ;{om&B{'K,!t1vR Z[}D=ӵ>::Z:;dQ@)^ZliɲQEI+kDHlQ D9cNcƐTl)%K,`Qy(dRg7NL&_#MzY2easɨAG>?IbB.99AMXŦJFDpvj%$4le)$J#]9z/ 5VfٮtSiZ*{XjswM//ƛd{aꍃŞCnG~hoؾs)Sw.,GEc$Vb $(kaH NJFkzm_r.1 ȋ䢥b %D2,N`ajͦqvޭ$c_,Xxʌ7O/ۻ>Ң/~Adz/kt(Y7e)؂¬xMZ1{8e21fT jaVBǨODgj vBQ& $*"yht#vh[IǾ Qco)\@\rRER*e)&*Zi`ˎo:)iUa+:i5Drk%G(>133ׂ*9QBUR9L"J$!JZ"*K,.IǾx(a[r8yYl0y3ApCk+U lnkxb2oV }eGUp#269h?'?Vx5ԭίzEhDezٙlslK&ϫ/[BS#~2?|Wg/A~=ylל#Ο;} |O{7^U|u"~?^N/jAʻ} vb%Uz@ok=f<~7z{򓵧j!bbRB ы첲@LD%& ENA̅5ac  VS!d' L*L#gR!N k5 g3W$[tC5ZhQ39{V76|]:maIҌ nAĮgBǁ;rʸМ-e)vr5O%CFQtHh]HZoBH2tHk~ux#rxF'Sh^#,U%S )(+d0^yKDs1i,4%vjAY%ID|_)<brcuWXM ׂ֝֙SK 4mzۃ6ߢwmĢw%7|;#5/:7CY H*JjF2*eK ,OJ:"Ғ楘o"|K׳ZFbm^[v9F{K2iH@06Rz-!+9mx㶗=}>U4YlegȗW]OJ^xl6*h[ h@d"BIEAz;ˈ$ \KNIB.ѠyQ(29 AAR IU5ɊQàHϮHg~h c*&WL?-W'Tˆ)Dt/,c{H($ud6(s+x2ͺA1LG^Oyie!$EZUCRu)ڠJ:e4=V-ݿ.|= x:έ[[ xxRgcd!@T<0n)B;>G^|݃Qs *u* :(4m@c Kqm!R k#mk&QBF#E^{] IM)KQ22.TPdcl:;0]W}k#'ۯ)עT徍)"1X0>bu,,7aaaaK@j;z rDh*0}UΌd/j IJz?{W6A3} ܇tc0A}FS;H`")YRDKH,|Sgq; pp]22|$Nj5Ʌs6L%U{F|,蒌T;oe=cuu=KT%$hJ昽)P,/Q*IIxN$ ?a0, `}^o[R ]4Hh%hpҝO/|c VJ%||5?\>w }36|L?O:N3sMq׆,+vbVic7%O Q*tm9VY3P{\V5﹝[@Rw| of>~.ZIWuQL-om* ^#֚o]7 :%'x%x!?n?,mE9Ywf_h:y-?,vqwnQcCvfT?FMo;,jqwԤHgofyoD ٞzlMW! ceáxC5d}q!hفR!1N {Zyơ[WF-fv!RL 'j*Xc_W?D MWRWyvK)yHz1N/:!: XcrKc#c8DTS 8("y(H?1C7N>罥BehuP@!T|xge@"&[07w!p@AL9)EƘu8]o헰vQm|_oEzص[ >@OY~}yugϼrcUb<+ሯ" zуXRzlF=Tc%|2gmᎺ(- 0l 5J-M9kxl)1cw~83z8c_< /? Zl5پ 牨斈vr.\.  
hz{8}\p@lG'*a6 8 !h`G7ѝ0é2,O ?V ʅ(ITz"F;a=hMR[8:&LGpGmAYŜ` j"Ai8-5̻P]ͦ޽Uhs]޶돭X?˒m=nRbn/2+GzxDwH#r!9S0^Kqc6D8)9 VF;3r[teYeLe- 3`epdv;5c2(pQ@\ԶoY/&6ҥe{v} V\YW\1HO$YZ*lyӧzˤu%$T%e!T)!*GSͱ[7LN(+O;D! ai/ ʆ'J}9 t3 7MH>t/6hSvKW.k|/f{ZxǓ)W%ϕ)2JJ9R[Ir)':RE'y(r;O  08$EFjM1*i 9KI$:XN؎nc5ݢھqm-( 8CtB5gQ(gmRYI&DGc#X2 Sx8@O' ҐGK$IŤXHhD6%1%^>AL]y hP|9O.V2H,2NE^N#h ? |h˭{;:X}G оF})*-] \rB| F{0N jN(%\p@<9yPDXl5Zc8bW3oKHZܖQfz6m4n$\u=ĨWjVTхqc C-m+Ltv3rYQa}`dk|޺1'[^Mƍyڅ=vէvCΈmK~bWk4RZ#Qq@vVzU^':_uT>JU7s SHM5t>_51j~m7ox!=G3eAҮzZL.\Ogwu7KWW)fmh& *W$m+F{C;2oG~~Jz^M7K/g7 m:GV.[_ttGsT- 1ڴQiGLJ'Dfù"GzDR߱OI#'i2>$S"P\"c fJMz!<q'Ib] ()Odd8.i-cf\rcU^F51B-֜ jfIJOAJ-6OZ$) A96D7""("ZC Q)8Ef)Z(V P(JLh4Ņ(Ś#P{~tfɁCC+\^peg1,0i4 i\"@[a}Xk;wa}:k\uز:X]5giߵeٛƾ/5B!?¡WC"۶Bۢ놇ee,c;U -כQ~mMbFMo!+j]`1Pdހ|w??͛_4J{#,LQ:/ׯF8GwӛٲWٻ6r$tdQ 0s (8GI_%C- j]ݬ*֏zy6Z_zPvZl~½ktl"gx'lSv44th-#gc=޵{ ^gk/"Ai(霱ZؤڌNx| /FHXACxU*ȡQYDȨcAFmze{~WSZAmŧ+< عi}|[g3к L o1~!x\=JW30ik/EE!MnWMR*=! - 3Ѱw:EnyZ/5d%'_a6~{ hfuxRFTqMnot'ݝϝ}<[uRb9qnE:g/\Cf?@~ ѽJƪ<2kt=ْ ƒ j'G'6üc a}}=W=:Wwf0pp!D7"XKT,%RrE^1A(Vw`uVl9R"W( }5;#!IXEF>d/r'֦=3' \;(4N3rhC:qto#6Q]}Bj9 rkpɛ&|$R(%`,b2%1%m]Ґ㭘o5mYXLg7>59)gI9g$P9\q(u6ɣ1XȂ]r=V[N2 ^^Xa FcVLqFQkLRXGW"DBIE^:3˰$ \[^iI!h}.QEDd%z;Ib'6VT'AzvA:InTOW.L6w'2@H*R*!mqedQY 1"h0=0'z[OXzqgyMIaՔ->ADP1FgU)28ƃ&Bw{380p Ba/V{@V'}>w|Xw߹Z }j(8|CZFkaHhhȁTC\.=k>|SI'?YYR !|+63. x]AN 6PdE B=:I OBY  ̐Nm&JVJ9{daQH}J]-}Pk[U 3kojӍQqW1p3_>΢|6CخMrص9 YGQ.jhG0qB'M㗓v7ZVGa4/:J8d%OVJ?Z Z }PġVk`?Qb'*ƜԺޗvT"5+BKj`쯣ۘe $f! 
].bw${]?frhɐn#*{y ־}s;i;u2jT4SB]MW*% jig_q04,wx^GOƈc=5qwUx̟F mzzϫW|ĵ__V;_DƇ\hJNJkwB%`wwU&X̮Z'sVSujͰ]+j*Ϡ_RW\tZ#z\pϥǷBcŭǑe㨵et=;ZG+K7K>P]ᙫXE+[Ŗ75쇛dƭwb~=hE>o$ EFt!x7ff[|>6:IN/F_}G֚y+97D޿籙]|a1Vǜ@ؔ0' Z@NB0D1=HEm$H~1Jk}s,TkLny!ڻ?Qp^{?:p>ʚt5Y?t}: gtA{ DTf jϟ PY,K6zmVqG 9Oϳo" $cb)@AJD!{~T| N%c'!%|u,&wN)e4h%_ܬg~98r‡Wr:Ox=Wl$%Ma[ \:uxց$ZwtiRDxW!Iĭ uPׇ<_W vgaܲbob<`Iے0I7ά-Eu|3o95|b:@5ٙBFjDΰM"j?\>7φjGQ:&c2Al`*#$[%_5`ѵM;LHeQ5-mjm6]**F+g mB}>x3gμXn%u)ҷ_(Mp0}^aFaϧxX 9Ч餙~?ҁ$( nSD$j$`yU=8o{?tyCO&lCF"MD-k}=+ E۬S3Zmu1^qۇ]*:7tHM\-՟yR|BYfr%a4Kޘщw( )A&o+lY;`)[big@I7)`%bt"qXJc '9>l:]M%z|f{ [pŀ,_YJa0' 7W#mpyBq'٨AY [hk "bAf3UU.dl(RP 59AIқ77Rf\8nXSmGЏkdn'ݿ:~|8\f^["ւRs:YiL Rq%ZW2hh+6bA1\&k}pmfwd@:HFQDϺ%De0Mt$8h5gCk-+^-t_WTZ#O r@_mV7xxۓU<J6:e' kXoĂYI],Y|A;SӍgyP=UϠy7LcDD l&gŠkxPDUD l62GDTs=֝NC kwk#E Qd-VhH 39GĀ$CmwRVXodv_ áCzQ5AdGRJ j)B3S jjF!)YR i,@2@ 59[] KV:?܈W݈fVxU٭4jt&|"u MkYCQ}=jA/|2O?n׍~;>/K 0geW+~=tF8k qrmIS SZmc>r7+YsM&|޹0$:[)-p[se7=Ʀy~ҼrӼFKHk[6e1ms˛&ZS+۷Nd_MTCϰ)Ħm#VrӋGE ЫcW򶤻q'k }ʫ&i5#ԃ<[B 5nM;=xOz[Sv*AG&)MMO}-JKf*f*N3L,dEG&!ZYIđ0>!eE vۮLݚ86gQǶZFD#bF"@[y`Yٱ16ɼv$c,gLbHNp 2eHdVjԜh,iFzi"-#bklШҫYx:[Ӓ-q l  q࢈ڲ RFoD:eQ%ܐ}d{#sRS{At}bW5yxȷ=@ؚiWF < 717c(Wя~Έyc im66޼>y L>GӦYuuXph O-,rXP JU`4I&l2Dv?DV.'+:I>gaDOy1`>.o&%餙G f;V7 |Z'x>yREJU3d~wng޽]u<1crJ嵪`“f2io#|ਬ s+dy\Fɠ(3ICN6o "|ofCZMڧķ%Vimeed&N,Oi1m7V !:-!}gN-ٹ3:7%JjO фd`Q(-":D鳆V=#\UY"{D.K`]Vl uB! dKmGB&ΆM; a;T7\Xߊ$ddt3$wvx}[Ԏ%ъX&r%7>]@ZȂKtLB Hh&z/auHZ Nm{64ף3P(Q2G0V&CTtpnHZ4PKUfVj'{|xgpHn6<a|_0Yf]*F L^WZPI(>נ|I=ijH Did OLI{.a34{tqex1La"LBg)]!fi* Q8@V4 jКNlF#SWv Rgս.Ϊc@);2[_R6뺋R׹4^!@` h}4( H|ĥcǰojO+qc4&\0,Du&HZbh %VLf2:ȃeG!Y2y-_184I1[ K%#Ky$Іk~:jh]7 k >gx8}[ OZJ]nqсؑ)3e|a< U7\ bQ9Pxפb+ӡIB0Kju&ZNSaY0) }Κ~E%u!X^jnȲK8H#'MaeЁY LZg."~]pId<܊XOK!H,dM@ZAzȎRdIq{c)H?S: $!EJH0Od.b#+Zu*{)D߉ɰzߑgUi̼26DncD”!뺇Xd|VjRn= X:mg"hZ9r(LHYf$.dRZqV(א IXczY`J-Fa[sAvajY."Ni_3aҠP].dhQ EU4t}L*ӉH.[q?ksZ gƵ\J{Ju J%0wJE?m8We`\f2TO*r|NQG}]e .Pl2_dy9sl4%W=#``Tc<\?O+|_U XΎ+g\B[cS1b,sXlns,H20CFBq2V'k! 
"th3(o˴'nmd<$ʒĩ*yxP)Df'uJe2\5qcipKXvX?\nëRdZ~i1ލE}96ʁ3%XU=.հ_vomW"~\OdCdIK;MdeV#'",]7[%DJXTv\ǾJ`{B { ڞx$3SᅠRZn]lx4Ϲ 7Wcr !`Qps3i06U3_:_󧓏~5\L)bg^F#l<΍ǟNsk&P#V\(jH!91hlcrgYSx]_ھ-_XaӵUt26`KS(z)MYvJ6,};au3"nՕ&}eR AA.{ƹ씧!kS1PI`2\`/{ǣ?ӟA'i6-<_[eG.+X SΡq5'Y'0䡆* \2mP howooU-Y$mȾ :l8 Uq\aR~>7~2Aɽ[Qɣ2|^WXPfPc\)/uCpiYm5fWMsdP$p+`3[5Xz@t@T{P]kK*KU[둛:ўqp{qXe/63q//|Q_xB嗎7DfܱbͿj0ٷӓ'cP\jAQR & eP\Zt@ћ1l1^]J`r}ކNq96]%`X#r;C&ZfJѥ}G ggzbkzmi }ncI />P5ت)P%0JBƒ/8KH868y ˢ$bJJεyj̃lkRS\~bfɞ~љ~/.~qÌdM8BOɓpTdѵ\@,3(\yŗsfǾv?.l`7 K~. n~|Gُ7H?s٤G)a ]I7)4&bJ%ǗL 4~){e`Fb 0(%p3Ԇtؽ8Ky ^,%e Yg0Buu Cazeud:^2}s^B)5&j2u46o2MTݝs}$'e%EUI 7ooG{b-'Jƻ.둿s.T ,Ȧ rpt)3'Oj!lӯ^6M]u}׃e[wIjbf0*[$(ZujpY Gq=FEE\kUH[U~hs#ՅW|!wʥ5AeͲ)I lߥ~i$%T[F.10J@WD*,s+I%ŐnHT-9<1SGN❐@̥4fZ pX\tY!bL6Ev02DuT͹T\1"+tWHj,%JE5`?_ϳ5'eG* ~:qE ewDwn ^S?l6|\d/mJL4!?e!r@%: [Z;|ckoj/힟R1}QZ)1|PX7% D*%VH H0elΤCǶ̕Ib,~7o%Tj(䳍R8S}B|pv~椦#fIWsfi7קs^ц%lCZz⸠WBG>bp3. Lo$WR`%A(]XJÒ6bdJW]o&uW5 Xp<|:ӥ<5Й:q*Dl/_8pwkߟ][wt ܗDσ j_޹-q!?&)}/:l]|*>|TiϱܩIᴮ׏,N:>åhz*o>䵟> nzW߫Oo >{JgH#v{$vK߸i9 NN;qYFﱿg҅!E JH.j!\S3s.O)Lkz]nG8ɉt _ׂe=?t?OO躻.H땐V!-5؋SAYת 4OhM+NbMXLij໔(_ u=XopX%DR/} LP\ʘ*VQnzl&cw)Qt.Xk z<~9at:}j= Oޟ mw+^pWW:- 9iNH%b1-^æǽP앫b)~Fb)~PzEvD Ҫ+oXbH:4) H*/<XiаD_m" ^mtaٞtzuϺY$^L=5t,5}Vn 'W1X +ALuF_joZUQ ]dJո UxKZN.}i0 x}sm\TxkSJ&b@[ K].+r?.~Xu_/[^wzA>;Iև܁]^?o: WZxOiI[Z?߽ŖNol4=ͻ_.s~-W? v\3LSo*t7`9$?_}ţ?~*RkpWe4KN~ y`yJ %*.Ц(d_Ie~o͈ٺEe~CCh{P<819ĶtE/Z?lD".y/hM1Eȡ$coɘlִ*Bd-z;!uPɳԇw?M gBגNO3հ\l%(6@ClTdfMBcQ'K) L !b)3ҿ +8U=lm-~)H7Rs?B}(Kބ"}\,5.]7C&)Z֥ Sl-Tn9ڽv5WncM 4 \[ؑHtLFC)6$p>fdcRz%`k:ܷhVy`|p R<1X|wxB5zƒDnB6yʜa2MW ےS¡táut)eQZ/0;wld en5bMEg-Y}GO {\R@?BsJT3mg/|Йyg/~. 
[Binary content removed: gzip-compressed data from `var/home/core/zuul-output/logs/kubelet.log.gz` (Zuul CI tar archive); not renderable as text.]
$2>wRΔT: u u綼?YN3_T]o)aԿQӼ fm)n'4j";f"gO}ӵ,,Zx)e._vܠYNDi`|N>yW#ּL¼+Bk2`?ijmFկh1ws땳\~:߽R7@^g&,#JJS2|͘B:$*\|trI}<@k>Vΐ?_Pg ^sj}1sժ TbQ \`s(K62P)dY䭆0&AZ`*9-no^Rnv7"*oR~KdǪw\=(բZ鷪v֩w$yvR:W":[#.y^gi(y%ϕ 1%?XCΟ} z(h1 !jByͣKk)'VN6\AIJSFrF*%)c{E5 <沶gueHYHp{i#ͽ_%B )-d%ʠUXk$yiizp@;v" {\YH:4%䪶R@΢);kAbq%U1Bt6(ܥM@F>\޵P|+fAB-Lj>Yl*GYڢ~u]k 4uMQ?:u7opW-R}+>|$LpW.sWk[$I:v.3bUJV~~AZ V=ªS!ilBJA.`֨_i$TN"Z]WODD #QF#5QCR1KPhy| _uM@º3d `o{t%~L>FQtKj%).-&"9 ـ6uT2Gf~c(1-AV֟2J2Av!f^QMg.eRx9YM6fv 4|^n %+%Ƿ[\mrcrL?R_<:_]4*߮I˒=, Ѕ XmHT:D $m|ɹ `6!J5Qe0ˀ 5P8Hͦ?2*ba386ƱЏ"rw_ۊ.kɱοj/Oj^9bG.X4`YxMZ1z8emF ÷=JhDU :v"ĢL$]ITDqn6A1#1:ڡqc.}OA]rRER("}BgS2 T 3;*i<,JX0N<ǚL"9E#(03-<@xl:aOoJk`JDlfF8"1"qwprD(LT!yly ^ p10YʶObB8HÑ5Z22RgoTu@i!GN2, )1Vki]j:T_?j8_K'iW:ⱺc\(qQ%\JU9H%x]ΐL%OYDZP6xxAF |߱2O< n~|ȑTُg.{@Ij6kX0DA]P:v GJ2=z^/yqȅz[s1+wߟvG%Vm_@xzP4.q-lZ巣xA{m?̣*V4,᛿;<լDy5P&G%*T6I^uza4Ԋ@&e BnaOGwn'Rf2Haވ(iiNavl3{Rx^Ag ]0ĘN`N vUQ;jTlj =~u* j%z"Y+oI 9L:is/6("ȚGc4؀Z[r2%G[7΁.JՆTU%i;?m7_!yN^[ny,5oyѹe@D"CLDZ;9JYd#{" _6*䥘/aZţeOb:Wu.').4:4`JƦ|E'I@N޸s7*GZ|ݾ:!)@J^xl6*j[`BcBQݖP$ %Ehoco4Zk-M&(Q(29╃>$U !(bqtWw J;>+jHpghDVS !θe4ev̇dQY&i!LLyV=߇2 \Ayiv e1$&ڮHE,ڨJ T'f6qdFfd=aW9v `j~K7=?wV.5{]/?l~]Ӣ4x=X2tBC С3K^K5f^߶۫4B(%%K,6e&օI/ F+mHe`܇;$c{w 0@ .ɗFTZKJ&)JCk Ȑfzfj~yĚ 4, y𾄎yd*$ qϳ lA sIbv&JIgAJ*G.-SSfV~&!te~LD4w+}OW)rvˉ/k') s6OL*?H4Qd]] YMV4YdU@({ HpcCXr<^u1[.{fgtf4O)=矪IIf\UeEX}&#?j4@vyTM^]4 ~2e!ZJjSKi3SJ:ܾ4x߫w {^p>1btd>d;r KҢ;^ eMˡ)WRtP*G>KY KgT(E\ZEu,tacQWT\1`-%ipkkcp" TB9sLNFKgeN=jk}q_Z~NzR%2g}`ֈY׷wM?2bށ֡^O)ہ.Jb+HNbYV.:pjY=8YʣrUK;D;QZ!0 hlQRJX:d?ȊQJG"s1,kJ krmkܟtKeuu[j7$܎벎%e6{v8ZOࣇ`z{ʹڠvQ1d Is8_3jRl&9B:cytɁ )$xDHU2BFPBFLh[ ǢY8ވ)(.C>J!6s'݃AtklZ1wC7[n!acEo;ez=)Sj~i s&@gxuN3y&f<:l/gwxI(srsu6 :wՖQǩS5\r ,rᖜ;I=5!0jY*v6NDŽSfQ~ή^78 X2& z' ')2aw^p$L~Ma"됳zaP/9i%,"AKLLN3 %||s}דu i^vݔc̓j{xeSO[-o^Ei$4!/[wsx SZ7_Zד֛oOs06_!嬘y7oq祖a4nU;od߶W[Xɥx?~Υ@6WϚnK-6O|Pg#Ċhae-TX79R] 9SFxZ%4\[tLBZt<ƺT+;T+#jcIkAL" V籰j9[jib qR6N,CȬ">o cTPc,'D>g`LN{Lwy6w~&,9sGY3> %Yb|BS69A%LZ@1VhvJlg^Kҏ1cVA%!EJd)QΓ͖}}ZG n1PX$&j&+]Bnȼ.Dy;.!; 0>n-)|jU>26z}&fp#WX)2 9) iќ 
IX>0֞uo+hku:ԇO8p<;N֝h) ȘBEʡVR BڐE~FNžqڱu\SfQ 6L)ab"ޤ{EktB}hܚVw-ቇt3UUv)pCEE(ƘD/H :.p\C,G_KPMA ]v>Cm 49|ѐÓc!/r:9|Rk-sGdH"bu\)Mo2 :"sUvcp4护ysH z2]-y˩R7SUO!9VLjyd }m5^O&]sdF+ 0dJ,~{k+׮fV\. y,fH+mtRL?G35 >h2kA OlDŽ7>$pQyc5`(PimyTd^fzuj-1,)WE-:;Wb'uVFx=1#x1TKի.jM2 |UZd 蘿YGDEO aNSj۔z:C1Uw.Tw&q<ߥHoOkwߥH){9.`7d\c1WEZn1Wzˮ/9QزCaä0aR.ů;a̕cSx *:{,˘*RrӛghS`׋Yv0ALw_TW~4.jI'_wTó?>Y~vjO'K>NKK3E>AnJ e8u:t  qU=+&cy?VC:tƃI!Is&BF1pV"Ɔ]p{b W%5eF<%b%0: 6MqS(k1"8A?AVoAMMख[%ccd$#jIʦIwG 2ٶ A5r& +QHx\J#7cZi }gR8̎i.U"diR֠Vuªhu\y` "r4H]FUF@=t,uN%:чR!{oɖ'CϢETV/x?z0ƥ3SitWa&K QFVIt= 1K4Ht{uw1^\u 5']B_\=&)@Hh}}g) g,:Z‚^R\u@`Ta|@+ ^R7(d ЃdX#ٸh*–aKZ[|,1qHSkjWнcK{BG[mAC49O R|%O.KTϯU7fԌz|FO߬?|D) tw%]T*f]HLjb^ N"/+ő93qŧ^3Ѷ]Bs\וӉYGA̩.mFpWS_/?%l =NPJ+ @#Sr$hOni%Icߥ6x%9rugtYL-?f]0)ZwaҀ!e8ܖbh"N51E_Tm#mُ":1QqnLPRV )h^p!d.rDԄqHbH )<&%YqU6J(X䁃s.fae2ܭs?`# ]߲]pP! q]#>]NplR |@0'žw /ٓ=Kqxٱ}ͨ|,]7÷ /-/{pLSP\==+\deN[҃$h^pk܁ :9!evP< JFHr J}gb!'!Kyr?rɗ#})c888AwF Mëst 77xgT 8O@U놻:o;jGSHIknAYrᖜ;I=50jY+N O  ̥}4(͆Uwh:z@7>`aG/8᝿Х$̞Īؓg;@IԹ$3)!4"3 gà^' KB6://17W&/wi^v\C]mo#7+|=l 8vo UH9b%˶lS`)v5,>Uz1wL}h&O3Ǘӛ6+jC>x4myb9-t^^ߣYfv^ִٝqJn8[=^YMhaE(fK玖xH\,oԺͼi8dz.`vH-jtun oLڣ畖t2|ggΓB/MVϿkNu{O%uq95honm^6V_zo[ opw{^楲oIlq$ZuؙD'>&D-<3:TT4 [jiR!'z,1+&&ؒ 8ˀ J9"; W M:r6[VR̵N['u^ bNLݭAyf&! ں!«({2}"0gs 5pH^R'c%X6.dhG*HJYTIIXNKFsV ^0*+'$O_cGƦ&8i_a}ӧ{1A[KΠN/&0YwlP ȘɞȱQCㅈ! 
Nۨ*(yK !CfIm4™S$ @d譔Կ^d蒏Yz߻66ڴRpB$eeFOMlrR6Ƥ::Ǔ!8T%e qL{^:BMt,upK$:jGW sqϚT8RvJ9YJ:y CRer޵9ux MZTrlJY=.\(WXsI+9* MQ|1Yp}"&ȱx'[嶹gz+9hc+p:![yn(J^j`5O16YhdxUA*>9nMrzSL`k o`E~7\u[mzm}E|Zq0g8nz˾?`>wM9}ӫ'_E}]ʁ[H<' :_v Ӗj_Ԏ+ͭ[i^RIah]exQSFϬRFo 7y2$qQk'[oVe(k'>zut/􍁮ͶGZ^ iтMF@YmG<q:*Ҿ9ssZu*7*1D}1)-u+pbyU HV 8fW)8A;3*Kp3nYZu`r;9՘gtArsX]7lnz5R y)'k?mACY+I&zCkD)Bc8~CGx}׭U;6ڣIeRdJR6f:d 2(LP[BN&(VNxZ)UZ En#A-DRC]ʥ8cH'N'GVԼ%My)= %'։T7>hi+gw]C] ^g ٷ!ҮV9ZM* BxY|~VYz{k4q~گ6aBʠ=йbx޾y%FRo'Ë2{fMѸQ>`ՀA%olT`!eMbQ;e՘jK[5Ŭ#PօM@{Q`@'Nh Ѭu<02릀}*kcBHOLH[̼أXف=~@blM>}s~h茁[tvxk  PRcR5{DR, ȼ}I_cXȰa1In"/u>ZutlP>2y"DD`.)a .,zZEoDP0\8{LcHt;ŅF;tx ۇm~FnMFø=c~h݂89a5ˎnk#q}rv8i Lr!Up ǹJ[,!e -s\1)+oE1&j-jɭL9-tt-mZ3ѩuSbkΆWZ>!t{& qEwPtA$Mw}،)x (XV@=69JpF++&(sx( zճQiCYbaNO4kϭNk,qσU&d9x?N`]ht"Ӫf=^ ,a4u➑8"^謮qOS< \uOg=>PY{C72F .oꈭs/n~n կ^/iDà_}7%Dp3-WHwaPI. nCi'j ;NşY ? Twׂ&fB fP 鼙<l6TFt](t0Mq'av_ſ9G/p[R#b.3tk\jԗ;sl3FWZg;pO[@]&Nj"k{zS2aGg[]%~6&nvQQr1ݰse/BmnV٨y '7}bKQ. &8WΏA`AtR5[o*a.?@JW̐d/fYo@ws#Zx%v1iQ ,8,wt!φ[Z+*JO^^]Kjs㉥Wj>BE\x gO3ڒI}+J[z 2  0M~K/_tUCZ|}0*PϓͳL\OZ?'./8?ϓv~p8iϓR0%V2$IV2Tt-84M<=ßdZM P+ZMH֭Юݼm'] /O =]DT:mw\ >?j} 4οִY`W7˫x6W Sq5 ހ.:7[wq@qp3.E- H/OW$>_&|rQϬ9uB0ZøI43Tp 6F DJx8<'R@\i( l :h2ĤR/te̚NI1Rs er ^Qy5qpުe)f7N/Hp.c2B>rmRY 8&!#]p2K4EPQ1&rC+ :;Ik^~ "*"Bj&j2*/j^пfJV4NoNʋcqU'oݩ6\LTIwzМl8org<@S-㛇 CU\划gQmL. Z*$ }%LGuS JtdL.9m!( R"nd\II֌٭ajx.uu![Wo8_l_abRG~ 0Mw +IgC(MdUgdADD)1 OLR`1=& C1&cR)MFE 6Ls> Qyf]¨85vaxlG>j@-X;vC ίnV{XHGS+"vV٢[5Wֻ,c#L+ݙV)sʫ̻Wg'H~׉M %i0+gޝwF;Ǔ7VMۧ @c=xJ!޽b*7:uA:Oһ~?QcvŏLO4o'ߵ/nSruwP˪nJ~+keBEgapTNxW1r͏'WoA۹v')vrV1վk; ;PF;}PO^5@ b*N^ܰzz." \ϣ.f?#*ywKfGZ|8S-"Wg7ܞqc9x@\ѽ.eE0'LS y l8nB:3PϽ ݄ˍct8H"9D3Н0;Ν0U;e2WGH• JZ3̲HU{M.tfw+~d!fp4?`C7I?1 WTmO^+^pkW*80RlF4w\\,1W"W*ׅQpj;TVq/z\96di \`apry\+Qs~$\`'T1+Q+Uiカ#ĕ`h+ /ɾpr0R8{\J\!s~ \aprFuaR,:B\ |ϓ2qxt00x~C'X1Z&1~4C421L0'zL^3>yfRs9GW*M_o*#>13xy`C/>&xOij݁i*qf vmzkW"Y? 
T.Qpj +Uqʙ`HJcW*7(Rl掫WLj+4[@RJrW6;TܶZp"p{O8ɠ >\xZܕY ^Wc7D0:W*׻QpjiR-:B\!ag ę@8tg{>+3V(2ҜH `Z0dfiU$lji//ìϓ|E a '0e؁FUը\?\{UvUpT6p%i SaI% npeޚS]M(\NS 0jʙbqus킫]^+W"ZW*uNTn.UYE_j:ۮ%^ԿSh Q>|V[_>]K )WoEY4}yJR9c˧G|2;3گ0_k/Zg/w/ړS}rS]Kӛ~ ؈WF3#AQ~! Cl nQ#oܡnΟW9R>ИwBwh?o|'o^ҡKQڪ_z_>~S?_Nև[ȝKҝ-7jPm#Jvd'C3':{?o%Y:}R{^_~ztok%Őz:Wwr1c"P#6D71yg+ C5Gqvv|)#psL U8gR5.뭧s:۝\X;RϤ퀷U6x뤐+{BuS% L6b%xP;dS7(V9Tݪ#ajԚDeÙ!V\E#b+Ũ:^!K@نjjpj o~|'j[,}ٺ@CXmM$wYo7Ԍ1JrGCjҙ$cv_BqkGi{)@ D4H0cD;oNhuKIك`wrI4*als)(0ҤWyjLN )H/Fn]ÃAP@LO@&~.ߴϋl8;)i/H`DyJƜ|1МMl'D)㹹Uh*@TR=L$ڦȦ r j]y<,|M1 5[%q>.ú V$um|Az1i(>ŚPMjkC!`-f}$!WPU':.Wc4C:Dk Pq )9!fLAȄT5b]Z`[BF=5Sm2 ](i$2OPk61DRS,Rň3{bGM(ZOY< ]^k{8k8QL[$V}ȡjUʇ@FʳQcMҷq1Mtĵe+ qIբAAں^>sͅ&\vN =bXIlѰ֞S3UqѶH.vkjU&jUJ咈ԓ-eXȴO]I%Zٗ3IZT@!Vj`ehuP# M!nqbV#f seꛁeqV+Y\f0\]Yr,:M2l zѻ8!]y#~ d(cX5qowq+Q`K" a:zJbHXA}&icXS 8\͒lI'tX‘&$}plb\  5S!uLJCܙG(I lWqL.v_JbXH*ՙ1(e\H7AYƸb%P k͢ѳMD 2|IZ]m(S̒u~],C1bJĔ>RCzIˍs4dŌ)l^GR(g)6-Q%DF0lFǕZ%?6ܹ5Mnd$}[l˵[%.M5ٻ6,WcmZ@0`'2`PФW{{.IQ-ʢD[ƀcQSU=N7&"9ih>(ؼ($0D!N&}< ئ3AW&WEhYK.Ȼk*赫3[ݗe&}6}`-z&DA,[oT^БV%SdIW}HV*U2P (yCs ~7XQ|E ) >@E&rZ5ȼ`|ڄLktq7X1FZi(^Bd,:P% 9ՠhKP-"V1hFy¶e@TDn*BIvy/g ~^?Xy'K }*4ʘ} ]{ #BK|u)}nuԆHT—Pw}C0 {ۣDTePEx(%%l-ѧ@hg]+y r.!z-bB bjwXC-D{Phňe{}65> PТ td!.h]`mB$1SZ(ͮɀ!JP CEDoZU+%L0A7lmRjh\ cs^y63.} OӤ[ަ_̦mù*dU ԭGnJ''f= ]Z`- Nd4QGmC^k Q$I"eCkBm1&' ==6AeF쓊UZAIECrh[SP.O(7"fh8(>D+YLw]zʠUbF =>A(Az{ZT}zT [bcW,XB|]-D4cA$\s; 9,RAy/aèIP(EeQAR܌EEH,zށ U l?8}[GsBA qt 1ީc,./0zE{qJR6؍f\]BGM%M&fVM:~n]6]\,>mB'M>xl,s9t?Ѷ{)td}qrڶ~ǶO qbxHNC[)c%F CpHRTGJœtA8N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b':LĜBO 捋VwR:E'SV;v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b';N FvHN v8N n0N ʰ@; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@'B1N Ԃj@N 5y':qN @2~[b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N vq@mGEEcыՔZ^߶7ׯ7 V}2.`ApnH%j8%B Ƹq (g)>zC|Ho\#6ϒІxtQN+< uFtE(U`:A9b0"y-FuO [Oa:% R?כz( unт:;q~ZZ\.+Zx1_/Ncǐ.~nt9[ gII J7U7~|QOl)]-.&ˆeiZr:?^MdκQ6M1Aq/nگEz=e~~YyʯZZgiC*mM6ݤvYj)i۶DӺݯ-#8MuSuY鋱{]uM 
\@OiriDcx?WY`^?(nFљ~~_,9wj3Z.zkJE>iUE'gjTe8gMc_4vk6zֹ3).wz1PAtζHk-:uYPzgtY5һ܀h7Y! (LWRzu&tP=rڮ/~p3om|R#+])4"H= " sC+B\CNkeCWWǡ};BtJ']CW-,}+B$:2] ]Co]`CWCJ5;Byl@oC6lck^ 3 -4XAe8uV n{EۚB3ly8=keZ)r-xaYջJUkn|]AgC6>_GKл(oſ^zUH|\]0zS1veg,y-w7g7hL[^bP7wu?G\Y]OQpU>)N&C'˛+{p U&l \ooN6ceNx셶Ɖ .9HMo~%ݩ߽L򢛍ϥB䤌U}{UekriG+7Bᣤ<jNystsJi-f^$J|Vu͙0:$eݸɜA"LH& d3cNk1FMm0[}f844.jOV|JH^^xP[4_5R7:2o;W)r/ mov~L7|6 7HB[mtY}__(#Zw~y>UFe]3:CM 6REQ-dREܩnЙt2sPys'BXGIu,ݯ(Ͽ<|PY>DrFN-5L9]E"렣M(#N]\ͱPw3#hŒTqsLpz&Rn1{>r,vV4+1{Kr <%Lw:i8cگ)$5:]iE M%޳5彰PP? WUì>ݵ<.vZNE,U9h]KUj%f1!I\ha G.g6-6(LIDո.!SH!kmMA|^5j6}086lzD+be\ Ur]}ۥȭӯ|jRr Igs"9WL5E=PJjN1U*D'ĔtGU#m2}a![2Cj|7!{jQ5}1> M2}Ɩh[DhN[8xˀBnӶ=@Ʉ'O%8ckJ$[dƄ"k,#R/(#.T-Jo9'iGJMHJv$_GY(Qjs2DѤ>E, VT"y"}֦޵>#EOwGR{u\Φ4`kbKh;qIlQeʖ'!An虙I̤:i)JMdyBr)V8P1y/#D驙i4.Y5|?ŝ 칣hiXz4/-Ssx 1&nXR `5]E6zr-$8N5+2^]O|[U/Gc7otk"~  kywV N.OEjΜrŲ#  U#ΠI}{L;&|lL˳aۍNzQ K!_@J^FeHy:'=Cx <3qxVsMYɼNFOuwҳ QO f9H6 sIɘIFo\HJ;e鍡jLyp F,7*oKg tɨ3r 4%%Jr4ՆaFNa1QW@Do XaoTs2am< Lsڶ>q: D9:$lp מE)&`O"@ +b:˹@2g6u@NJqK!㶙gz39ff21 6F9x9sG@v0xLmp UmM[lzky\I &P t,a<W._QC`iJwO_O[Eno-qGƕ ^ti#6¶u+!3A*U/U ,⸓{%MB8 X&qQ⩗ Wzem_v=— ]m iIW~4'w^꺿^!PٞV8]6!Y˕2'o!]}VAٜќj# BY()&7bElu^^Fe 3NC(w}ګރYVӷug$QD_nePveYs{tX7+o˫x%Ruj!z[7[I+'ȺU|x1uYuAvOHYck犭 l]?.ۻ;:]ّf!,eѬ6z^6ysG+-d1Q `)Gwtoae[,,|fs"SZ7Kom&.Vj-mci}ˤ7EzGI,BI_X Q:X*t4D\[gJbyv*rzǠAyrIhTRdr+=x-!'ѥ?T c* u2SY ɂf^@9m^X1C$ 8\tO3rvkL'p~|DhX",].., Cv٩+Bbm5KzrBbY3QzJA'MrYmbk>$6 ԻuV$*D6>P\U-ﲫc:1&Rj2HL|F#.ҨNeװy;aEuM 7/&;^tӱ3]K37L1ʏMC*pfc҉`UpyV? ċ>QZz*F=VL1GEJ e>:(0%@tDE4pr2i&>`Rҁ6!mxQʜ<>?n h't hVg&Cg>ܒ#)twFX";_.ʕGId>\BXk M(&.H,}>T|Y p1˓66HgAD7*b֓$ev`R 285Kr93P8~ I~T=RrjW/eK %R #Dd|" 'B SL{iP7$8QJ;hw]{ڔN t@?.F~[G}2o]~U9ijQ(/yǯM_ʑ|~:=(q2EcBSƵAQ)]dR΄Vh\ʵ3T$OD8DkYƧ$L/~H\5&e3L:JN5de0)ީx=^l̰=X7%"pG6YT[bwNvrʳOob)gu'I>#RYIh1,WNhzӍ̮Ρ= Y.eRo~9dvg0~]Pb\nG t%VUV,ɏpJ? 
~Lhqk7Udy4_gHAM.gk6L.YjlxVƔjkTzq6YiJ#'ZuG,8wc}%HDWUٗM4%.'7MR:;>Q{[[%Z9o5:"SO63^Nf!"N ٺ|=#'ߏ?WGXR3EE")oލ"j^#gF77?5x (%UmЦzp{z^ObZ~[ pdC#z7P_~_ANjxLl\`/~Ԗ#${l{NѲz 5W2@<4pOo~Z}|SA8Ee,οJE!F[vb?}8v`ER4r+Fȱ4Ҙz6{<ݚYʳg J!N'rbsvx@;7QkYhbnȕ. dBkކD Q 'Vc6dvzymPY~|wO^͚g?rg=~k׾eEa(*^W5M*B+^VK¡eJj->MBxU n%DLpIPU9|iQlI(͢ka`LRSٍ0,b~G/Xu3[Ve_vw^Atrrxu#vW4 'fP `X*{3q QT(ކb X6}f.]|΄,J[dF0enĖӲbfYxwa0j{ǻ|t93jd[*'!ga3)QAfvx%8HYtkJ5mJ$9˂4Z9p_u` "}1Fĸ ₈wwΜ4%+O01cTc"ضjh $*cưW2X<(EJ\51y@Lq9v9u=qqS= _pj6[ ՕH9y nj)eD AH*]R_p1pq.x8;C>y/u=F FcyO+?VaF,) Z(W*%4poQ"BPshKK[yD_OAp9f5rJR(SjknYS ^ (so@@kζu=77`.F᫻B6|{aE#ث$0k{,|u$_{U0D_FLStZ'Oxf<}<6ۄU+%NŵR}7ZY3߸3;"{;qD ҲMt-~sa g+Dž[ `+qiM K'F7n#߾;+IYH*:5\ -Ňb`kK>`UjoGqźNޯ  g[e](wAݓ <`XaF848¼q8聽gW`W\χWhWk+W,J _n.Cn-[%+8J &WܭepխE;\u+/p ኼCWj08Rs9d[vpV_殞#\97Wjpx0pŃn-U2!\q;|0sW0ne!\y0akzbfAƕuaWGon6x)̰#^9|eߍ9?թE> `{d+2"][k`/Oj~JE~Xk:U%}0N럶"\ؔluyWMw~ho^L[Y& U~;b6kSgo4dY_ L(SLM24ۋ/pak.g ~=btKX h2 *D[[Lj$pku'@OJ!ؖjOD)ݖlQ/Ë>#I:?^H^Ҧ7SG8xI/Ϧ>k3_uKpRn.D~Ϟu+2[!Mu\us]pխܚmXU %U7U77 Rkfحvo螏3>|kd.=uìO >JY*=h}=DW'ߑ0s# \TaVZ\•E ~'gok.yXuԃk_tU;|]?^_pSVX!91NO$Oc̀{3Xm{gK(` K͏^ޣҀt |+T=)$)â^1\ONOg'*+`}25oCrg@qt›c6t`ךjel0AZD"U͔}lR1)a/"1yf=;!{Nvo26|5]00;*TpmE%+4lehV2>ZYsu-v_ive3JA҅0W{@f@o"t^|]y(?3f%#2?Ks9]oTV`1CpH ,}b'dI!A1=áR . JSsKbKEi%_ cphdfndUa0 `, >*}3[g\\ϳnm/OVtrryʈ4䌱4 'fP `X*{3q QT C[ R# =Kh;l"$LX|΄,J[d#0sv#s.=;E0 ]q>G_`] b-k⳰˙ A3P.yǾxG{>YVs6 g\iݎuźEpKU?zG0~z(J9ip=hJ)zSoQ>%dzGw w^G^we|#E#bAKJftL)p DD2%4@dc PA 3Xt%E))}55謩&УM7<}Wi.:l[Π÷N3Y~WqI6|{aEr^o5AbD|58%\8"갑T%S,(O/? 
ͶLLT\,snBdbƝٽcm܉ Xmj%sa 1P["E.ec.iN_qK_rqWSoY`9ɯ_o9WTt:k 0[D/lJ醷/́r.Okw`#(hZvIF jh["Ha⏬oٰo {tlqv|[=͓v$"rt]?:})*HQo,~y˒>Q1&#Gkݨ} =?g9(a:ݝ38H1+Eoz~h9.z~Ijt4ޜ9)Jf6dgh[jא#,ScIMTMZjkMA^c %L}^DWTZha3ߜnɱfgEUs[6]]YsWTn)ߧh9Xn|bzǏO)Eb6 &VuV=E'Ĕ,pj~\]:o1dؾ cQdZL>XO7IZ[Bet=EgRdkn#WE %JT%u+7y^W @%ey׿>)2IP"Zopg I޸2>ÿ4 %'G\D2*sq:h|ȦYTeO##e}!uR'w&/ENm b5U "kv‡t5)A&h33]L-g{t](c Wl*` "!KBB`c{8pş}?x^;*6 EVYkԡ*6+G؟RVj+0br/3Ci?m}P;!t TB[m<5;;o>}VK.WR {b]*RRD/]ֹA( GeSyv:EN RT$6򷜈ة3&TÇ2g'!Z{,)}&akԺ1yh)3rUKYݾv}QsHއYZ}B-rs*7>]fCAT̠#kIcrIYpKpzuetue Fɗϓ~^/'w'A7._7q;2}Хp=lf'Khnz&_,|\nuXJfW erW{qR%;j_K,SSJ&Zr,a 6`'1xLrGw@[m#y' c^L˶Ж6{B̝;يjM;MH[^oPF[B ))u4MhhRc!JҠva3\X1[UEL-FPfNB15ޡ 3y&3t2CR_n@gKp{޵=շ&ck{=$*!;ALa_QEhp"*2(YJGl^:XkMd ^V胁/exxqb9Pdb$drIZXE>D+tn:IC0R8siMU䫶H6&1iӨWy/_5r@l )wVQ*X!*LzFwLUj78JɯwJM>變;rj){rIM-Nh_~>E. 'L 9sdvk=%WlЫ튝b}lO~0dX;NLB|.`|Iyп_{Qmi w7po2((fgR п} E{H2[q7_ByHIC- AWW/Y$6&PeoC6m6N]R+[>~M{VP4_9mul6 tS֎Nȭ[jl?u; 6?6ƶ9MO/s2<Ӽ"m N\~m3? ;ёl:rˏy1J{.N̉]M䮷TZ/ݵ{CwE1omc]8i{HC2~?#z&-{n|vQ8;8Li:M| rK|vR jhԎ$ׇdGcDI٢G:f*XbN=PкS :&.ACgYUZe7eoO Hg֪r)!(YgResXi{W3eE@Oq(QXTr&bKD 9q$B R =y z[2z5+J$`c ̔Wf1hk,Sʦ7[P \.*)Pcn Idj]Z߰kLXp&]oiPI>VK_4'#rT@kˡetUxA`OzYMϱM;O۩ubl}0h O85 q/ **:5u`;2&uNx *s2x@cpCf檲|r&L2 wPػOM"m2jptd[ö +Yʘ*mST%cb6Ƅ >Drρcιe11 Ѻr|(G#cϕKH@hCTj\sN2¿_Kw}B; 6@Md =p^yKxf ̽W`G$ Gԇ.5p,"Z.ЬD?"@aEa`gY.C;(Gt Cw3B أ)uӁUܑGr&b!2h}u&Ot_t8!n1eݛV[<}}WXI%G*'ӆOy KO^>Zr?oґMde 5U>/tx UGE=#h]鶹u)0@FyH%~yVn?69TUgMDrEQDr6~Q?Z;ujQֈEYۇ@vDZL iΧez_0 Hrg.F KM`/aD]?v`=斎k", )l1sD\PNpc&bhU¡VP- Zmk q?,yMZzPMZt}c{ vXGNPð7RN~ξ>DRPN։AKieF*ϕ`tƖ8㕥dՔ[r+PUޛ6B,YA+5v7oPQ{7V87m̻x)ϥ:=F: ԥL?ڝb5X{kVތJp2bhuL dAVŊ ee؊՟W,tHoiws;%li?}ך-_GNЇ)sDs}u X1ţkzus`ƊbY?|Ud!:XT1IBD%rEƎr$Hq![L'O&H2Fb3QUc >%UBu=(5`e'(d0BV%V˞,ə`\V!ƪh`B;9;`;Yjmyqz=]m7a5VZV2w1oL/ jaAL6fҲ07]LVՙ}UűFi$Wƚơ+#: 1CN5d#vG1T582Ps^=ёs:b9xV,\uxxsS8h3؜U!( (L&%`I5'ڞ3 3O1_qf8}轾] MERٹ 9BTJ/}rPcp^mB >+840QfvyȁxsAߞBCp9U>W. 
- ""1':9MYOrKqi%Z{Kڞ):tժ\5X|- Nkj^wCyᱞ>I9>ZN?Vt~zOeO/&u:*i.Hyˀ8Ax32Z4('0ͮ ơ'R/KMn-қeIMN.n?yz;Sv{A_>~Fʗ;> n wτÙ f _:iRbD]?ZzY 0tY|~VwW]woѳu/}M'RkQ5wm$I~=S̈́vp44)+ _I8oݸ.{ڊr3+1/$yɌ$KsԤMLƹH:iɝIC CQՁ|]ɍƽ@2:\_`UuJ.C,h.l=WuxeW.H~._|uPO":!3!o`C!Z *ߐnȧ)#TƱ8NƗ\TpF5/j40; r(7/o f Q -(5|V\+ݧѸ(}PI7k<$,ƪ:`:I{:_BTHBQgW|rHߨ^{}=w~w.yrBTޠQx͠8&tthi1,5;KǏBIEfXX-EL 'dAtMt>0zys(^{/`v{fwUlSUA/{67,%`.d2hZφp5!]-Um%' }6 ٩ݡ+:CWV]/O Awt`U;8t\KKW;TvC[FWr=]m+lTg r"BWٶWHW2mBB\vm+DzztũTu++;CWXҐ^!] F]`IMg vǺBRHWq!L [ڮ#J&{zt.UStp ]ZEZ "Ŏh=]2ҸmZ9 dHŐSc Ӛ)qfF !w Ftami@iI*iZZїA?GrAeWD{}^t½fRp}!Uu1C10! e8U7&UhmGU@) GW8Z.+,% ]!\MBWnZZFԖ]N זnAt\{dj#YW-YJtoSA]`%hg ;-r7ڶez&tCtń$t.]+Dct(kl=](:DW[̸7EWWvDZ5ҕ.;++IW Ѫ;HWR3H;DWR3(:CWuƺBZ#Jm{ztKtew *B+DiYOW4ئE'T 2zPra`ݴdzKc8-ի|ezU1[>S;(}Gw"z13;2{@;Pe;`Fxg|6۝|D[bB1F(Ƅ]`!Bum+D)lHWV өs|vkIg -mu(YЕ޲)Q;8a'yo7HR;Еj߮q;DWXpB>s7=]FbBғv;MFE+7+Q,f/~89q3xv5LfRW*P\}kNzRP>P"'sgxfg^wr@e6,;QrH(Tӷo ?>͕mQMl658+?3F!Er'y /y)Qzg^`OhAs!~OWFX0r>>{+^va]_FoN.ܒ_i?V?t>@STTW&Q%QFgPE,AR#mT9Ld),C˚C{{6}Ϊ.O~a]JOR 4_ \y.|)!yAߞрbw<*b8xL ]W%,?:c?09@Ie4R$vX4\,#A͖)ҷi>jXt2TQb})!CAyӅxf\OA &qA/}"3.ȿqyE3.ֈޤň7? Tl<,.>IQfi (,IпKЧfЪgvlPy2±eaſjwſ8oOk/, a&rBFMxQiGLΚ'4pGm ,Ct&KI 1PRrR!ehu&UAQ"r$ŤS&^* xv)F0K]FR4T,*$.Y'HV,'! ќ8em""ReDk(!Je)Z0T@#"(%&i h<#~Hx,զXgcR'/r,/.y ς2ij,0i4IF+Ӡ"@4?H)oTg kXuzVw̨5263GuraߨMʟ*;Eϸ}Eo}#B) + DmMMH6(gV[LaD AamXo?Gkt}&u>Y 7ɼ+,f.Dk wYFmR*.%K pˤbbj >ђtQcp+kn+Wܩ $@*e<܇{S!s DjHY8Nq9"A< yu_KrfM_]ғ+g~n!An5_lSH6,R n:&A:)R4Ϝ ~(c:)^ rTge(S:ĭ_ Yٶ燂BN%rr#2d o1M!iw{'ѹc#P唦pg4wʚ%.X]ne'n<*JBe若QJ +N9 Q?NڟjkjHMkKg3uO9 1ifΑ^;HVN\.SLR?6T勚7LD=NIdλsNK "!y;eFxh2=2]G%[Ϡ|?ŝīqʮL,hJYXR2r%aN0YWqOL|ꃽC/ae3xwL; .LeF3Q7:ԧ\GHI:TҪd5GZO>:E<M@d)!+Fѱ4 (?rIP>Eb޹:fJ-PH9ws6dVGPD)[_oH#Y1HO"VHw욫9+ Oyhjw6bdi;tL(X<+ӵJ}tf'+'-i"@]&3Ep㩐Zq CO*=rƄif31&ov:\>WP{$yBtnI,yW47-Oɽ D#Ӕ1Vcrz[c&'("fp+VrFJ;%ECp6֐䅋K|\5΅y. fa-_ m1/ٶ̩3b/-dOKMrG,l !iKf%1fo w.rn'G1q Q(떀J`O9*gm!@G(I 2 ˹.Ed:ˬT<km{хv"}/B% }Jrjm9>u^ތ_%Zm'y!eyhzЏ! ND*Mr9g<ڱ>qxybiꂝ ܳ/}*B"D($ 1UF$P ܥCm꒯B R3M)zɄ똔Lcȑw@8PZ4`xY;+r^E W"Y+xJ*-($2ykȅ~Uu:|̻^P0f(<{|i]Fmyp{NA&e4< t*erq^6_n? 
]L2xf%"!DQH 3L0^'Q~b86z(}Mer@>$v \,AFkֵ,vC>"PO%6~Kw7ŔvkhC@8C*n|H@W^4taPMfi2~z{8юFg(^rf >I)_fMn|rբ'p01V,K9i|GÓy#^YLhN?}mF꒓ g2ֿ~maH3EڸRj~2>ap֧ۘw?CE@z6(jǛE@he(GM3ЅRoEa_PM/zc t]5Z~5,+zVmiGث4/a"6kmKD3izF?뛮wryZV>T1FV'P4< CIꙪ{xؐީ8SiU\!FiuNq"SX(mTr9kfI 9j$3 IFEȽvz ,DFsLZXԖhKqj}P>-s; h@ @c%\YLC 1B)+MţT܀TJ éW !4,CT-eRh%_XΌ] їRkjY2:efJ1ϽУR͚\*2< 5Ӟ{թ|:S24,H XNb@Qt@6}4)ܕY<ɺ~fXéEȚw 6`nPg J*zN]#dnfE_̊-mGzwO?2 g)S$L[Hc7rYuٯ <ڟBu{%} $*&=`{EAg׻U/yы+Ծ^Խ]n-c\/*UP7fNE3KFGׅN^MxM_T S3P C$q9u`h i R'TVpL0칕\h^68P;T|]^pfMD/dEsTpF-0d=.p8Y<#&ޠWZ8g4h/6Qbƽ[63r6F 賭}Zw i*K{xLa5)b B_%WrZह|;XP*8T^I{34l[3]!+&=Ir8Wa^{7RK)$wT'b{yD  %I;KPc~t }5I0Puh*Mw_Dfݖ? Iu0rSzP ~}L/~ESz_$7voZڼ7uQ(Fǯu0m&P lP;Qѥb[R= ,<BȷqZȔ!<43QGٯ^.?eu(k'.vZ] ]m iYFzQaG4EY =#2.OR OTuUd&2P ^y +$ۡW ֋\uK%×P!:eT#1$ᛧ;⟿\۔Zށʐ5B M0kpבJIX2LG98a;孢g5.¾hxz_|dGģ37;2K(vm 1Q!:3--u~u{WgD5KyWh4 "; O&>ȫOjy5|/JUTᘋ KSX`Jed*en]BO8'#\0[LF8ߗG>{O`oy&n\]` j˯5JZ%hPJ"xc .\BT0 qa蓠M f8l,Cr Qqր8 6kp8^DAr#U) f}L.h@Ȧ2h$OΊlmY2ZwZ4zJuoXBhs'x8fkWd -Hܗqo'zׁ"˅zl L;/P( 4;3uU%$ TpAl~;f *hPr6uL8xRww3o54WGqO1Wtsy 3!w\A96)dכt;clS +&iA'XGUm&ޱk3f+*|ۊLx?dc.߸ۖZ[($~oҾt/+I2EcBSƵAQ)]bR΄Vh]ƨB$##,7b%b01 18$Gtm#5jNS̓$F2иw)% !&nbœ5e)z[_xr٭ )Φ -<-^'MoG霂:D'Q<r(Ul"4hv4%ZxvC<4ԳzޘGK#$FNJI`OF{'P7,%(i/g'̃ c[ #w荙Z-.V$Hq*'$(ek}JӘN & >!rza}P5zӴX0z VFѮgv#8@HK qT&,A9/iW9g >[C) \%NDK%t0D!B{e1oOz+ϴ Y_{ C"Qb8;_=hW?V04<y69+NF &gOYnY W$ԎUcĦ<0!/5̬`8}%vkOsflrk`mT:ͧw IB= A^XmTrjˈ挘OBlfٮm5*#?_`DʷA1_imӞ~MU͞Zih;(\ذwF`$&R?U:'asms>NAvs}#,''H2M'퍀2M7 +# +.+BPjhk)B\ CH;: 4y&h]xAWCϸOsǔxq\5}[״p_,~`E RʛXnJ%M 0uAvc%ǔ3rnp>/?5ߞ鯨aT=nh|;׊y_tX7S^A_լ#V"{ryettrZK&e~C7#t)I]A6Wo㯎X&s fiu柊_7s T-բX rԔB+_\iSH. 
@V: oP2v4Fb~;ksubib&n?=_I_+rMu0Z%h;;=G͕9_|4h1aL߬,' jV[fRՊ/ 2f4Pq^]Hf&nMhk11* \9 Z6VYl-;]Eo"],\ O]o_UG<8#pGEra@=&fmI3>'w!!tG^;yңIm/˞J<c H&Y&eob2EIK@z;!=aڟgew&7e;u;OϦ0MqTC\&Jn`!7.6k֯>cc~ffS)Lfjԛ»{Nmt6]= 0e<5/fyp~VʗEM<%-@/*dp@0vE7b6U_H E#@C X'W1!)U9GwC(rΉ6Tfkt_ZySQGnH'S+B7theMIDr[jJRn]֖} 2BBtT]!])dK֟:U_ T]#]i*i sPxc*p*e}z3"z5 F?`+_M#ZNӈR kv{Ip^D`uPcyit@]l=E:#4 R %\̦z ZOiY*iUYKsi"!mymv~S1 砞sjZ~>ycI*-qQJ=BIFl~VwÇ|cme6jIefXZ-E;,N>-m N~׮zYƺeUwAs=X=+,YZ.}+D+:o( ]o Vu7_}c-'ҕ #B=ZBY` Qv$@WBWRpE4D?դ/th:]JM@W{HWJ(۫ V7tpuoR9ҕ7ѾgXci9Zhr#4/4hehz/iZZ ˳P0=@Y̪>Eɛ5}U]Uaè>!`?#ˉ ]!ZfNWR 8٬99$rgmX=[eMjKorK;#tU)]O:]!`#zCWBt%]!]1)%=+,Uo *BFu%똟@WCW\Y^?¥+DM QʁiJ藿CdtpUo& tt%T~S+ yc/rκNWRJ3>ҕՄɺGA+tovte|}ovJ?GFRi:Ah<.A*_[&a׏?6Bdq: v:ѯU*OQޒPowSIg]ǒT]=tx< n8]W"L/V/Pj9OU4=ܸ/&3UBUJH* g0ReTërSF5 J6Yn`$U}sGWYܧf5 JHmq 2_wQצp ˞ޯyzC7Z?**[=L,;F@E ss~Bn Ӥ^|B?o4HHg.>o3h#zjƟpWxx y*cd#}h,S@W ՀyL/ Xv4biVъd׀[h⭓E+ᕗozXT^[$NBR8'.Lj}d!r9$TdMrCQd*))&TUQI Yj ,Z@{+S) l-Za %K:%H(3xCmJz3,"QՈVgs.{g+ZpkVLu!zƕΒPJ,,)2Q!ZI-b@eň) 6fAYh9zA;GZwπDSfa۾)2:↕gP+ 5xq&A:sz0h&U~VA$w1gsY;Y@1)xC(̨qX?#NF%V:Rvs\ VIY(]p)k))I0G#6˖z Fg@xA%2C:dQp&Yq0`w֌sqZZJ%X|V!EH jF"KbxeR11i5eG F@$`A0NCFP0JD;˸Q4u\80^+"\\  ӖfCёDa uL0DO3& ʈ0^zx `ZYBYLKB{[xDRXQ6EGZ+ _eO-#띊KT8S`UW3{)0$uӐсhnekmʒM] . ɗФV< ߷IQt8Q4鰐 2(~FE;u)bMDs|(ؼ($0'D̿h1`V9L0rnZ͔n?NtjV]`!008-A7 /ፍPm8V%SdIW=$ B+\e ,Täc*,Ophv=k=̗9)XX茺p4M"LjyU,}ڄLk(t0X1FZi0/! O/d:@B@vtm^z ,\?jPJ4TU;+QYHև\d*c+tm m,zԥ)G{x'90hT—%@_>1&#TϺ_I@;SaʠvPJK q[O @hg]+} 9@5^!B {!=f bro鎢gcyBP @$B2Б5;5vEQ(LMF"h4$C VAgUkUGypRB]$QIY6 (jS,ԚϽot{ғYF& h8Hxnam@VӥSoU4^p^`!-A;mU($`>%L0A'lmRkh\> csm/,C,NKӮA^ Qђ?joZ 05^%pgϡ@N#|tAI'X©; Z6Zش =+Bb}4CS3AG9T8jzPrm8#i9ix!XwҚM5Uri J.YxЬ*dpN K)Ւ0[ ]\ mJ:Q!w9ЏL0H5巗zхZܼX6 KmipX=0]1.9{ ssW `8T1b{kolLcIn(zHٌ{˯_&]11LkW1.ozb–^V7ϥWtYnKjܛv;op=Պx3%oj&ͦW5HN8BDQk%맧*`M= MкYJ9 tI I Nq@$'8 I Nq@$'8 I Nq@$'8 I Nq@$'8 I Nq@$'8 I Nq@$П7 =@0!' 
= h@?:t$g]jȢ38 I Nq@$'8 I Nq@$'8 I Nq@$'8 I Nq@$'8 I Nq@$'8 I Nq@BfLI vf  '.1 @$'8 I Nq@$'8 I Nq@$'8 I Nq@$'8 I Nq@$'8 I Nq@$'8 IMRSf$$vIeL$P잓@$'8 I Nq@$'8 I Nq@$'8 I Nq@$'8 I Nq@$'8 I Nq@$'8 IIi}pRzb9]jJ{_\7ߥ\w?)C A)DMpZ7܃K2X.]BpC/a t%mKe{1yGb[dA^O&NCt̀)􃉰V_M~b(ZEmߑ)6񊖅S>|n/Qx;*>Yy͋_sfA} ? i?IOsLP|gc̗Y{aS+1٘^Xޱw3^qNr h>2g4CbnwʫV~r NӅfXgx(9ܖir\jW/ӛ0,h[ gm ng7CX M؈"khF>kQ-<ʟ3C/EH,퐤p{V"~ޤ{3&\X%gLҦ`Ef(!d[gtxb+hbq3˷>l~,L_U:N?{ɠRIh$ 1@;w#I(`#yF2J81{Ez<^ ,ӝ NW {3xt奏ƟeR iZ`6R14k9S W4gFPg E6l%8U9 {C1H(東%aV^>v1h`RtCLkQOg`c:FcMW\oGc~mU %Ȯ #㘖 +GCW@/-J&CW#^] Y=NWi8,CiL]#J1]}K#+h pc+BϞLWHWZo ppB"O;m4NW@rLWCWy7 >p'+ ?"Z;]V-.~4tEpvtǡ8)]]"]Y#+CW׎f:tE(#]]"]9Gv:c+Bl/BSc;RP)L8:9bm0Ǝ$#/^24_=U><9s:GSluTZ:GQ.19 Sl׏= 2 Ϟ=^|3Xv-x ݃OJ>$yW#d!Lѵ9l*oҪO'ZzxݴmT+G=&t C0M<9rM) ևL< JO2l;h/ ){ow)aqy-Zg1 ݃%ot^VbjA 1tY%Nݙknw2}܂Ӳ y(7s%L z?ZVejM*% e4]19ֶڔė(áqnkj]6}W(K ٭C}.~sջZzFݺF`~ݽnC_`n;o~dZr:ݛrG-SC-]ZwMAFh7wICmKcxgY;v> FZv1d}g~L5g%FEmw1Y1PIJ\OSRZL$[Th%X{6Y(Qjs2DѤ>E, VT"y"}>=f1L&ch!;5[hJ)%g\2TeuC*J9ΐh|k~1_OmkGP=1 PlvE}{ c%_IqdKbٲs $"yf8帗 V ieuzwV sAfH롼h'؎19ip+7B{(YϧL2O>(8N7|z9zgN8'4B*=}b!yY.4d)V7BK!ȹSD2XM2 02ʢNԘM.!$=B'u8tF_. lڞ2>6-^Mc]b[P5 Xٞo-Xeb}{Ķ 鴙́0f ]C{c^a>pģ!q=:M s{ |-"I ^._F^/Yfbnn&ow/Ny tJ3:dP2$X4zF"84)"XlE?jg~8 0vikdZZˑς Zڍֽ)//;ߺZM#XdG E (4PI,YFV#5&1`t&y|tz݅٪ *+0J{f1Ŗ; 5%ԀDh y** LZ<ߜ]H>~݇oV o+U*C$4-Dk@e(]dMd9 cZEF,* ]X ȌR$ AX8ЁF!G[;[~ݙ ۟<+ Q&TNc(ŊknA1JL7`00.걏.)9匂`Pi7%$%EuYxF>-2lNo@c.ևe(1נäRPe]k"KbQv(|6_k@cZf, %KSתdDКhBL1AiO3 -MTWӛztuԛ_ȠۨӘ bpDJ’P*5.3oc|Z~ 3`XD־ONW F;Lwkk V! 
Л63mߦu>ږI^I@WCPKJŬU*%!>8rAl Af12FdtBH²""ԥP,ƀ0KTA:[Ώ2H_k<+ϖJP5xV`)dM;&\$41kS޵n0j s1bs"$ 5LdT"bA;װ ޴Y<74WN@-tje sdҬ:%6ҘO@:&[*MzNFIw6RcN lCk_jIߛM?Ϳ^J <2BO4녏4x\_Gj`|=ǟ},߷=oxQǦv$RH1Uum"L'P6P{ҶWZNQCWT΂/?Ӭpw䌇"ujL@( feqQ SoVue(k/Mv=z䍁Ua#WBZ^NԎ4aJS;^-(8v+7i1 ZL1|N7ܣCJR֛[P #9-f;c5Иt45K8")!낽j؍jށVv~mC[Yo?mtqlL_I3g\PGӋ(aa߻(qTM؈1!&`EI` zw-u[8Q*{T)C edW Z+JbI$Q Ȩ% B4VV+m"(֦ZW⌱Y,B.*%?;/tQv|C7k79?"[?mu릆gͫD2YϑSCׁWj0.lK"jn H)m,$PE^^yPufGy6k^FpP[ rPJ0!$Pd6kf) whbm^T!fJH0G:i IU""uAm&^:wh:Y1bDL6*ebZYN Hd)͊Au Q *TZVZÿQ/ytFy|si_ZtW[XO"kx8Mb"mZzHt:Z}l[0I)},epAzQ0MM bǴ w4C[.lKE*uSe@*C]UH77Pd\r"}qUX vԬՂ k«{}}lvۜHNߺTY5QPM>pM+ؒzB>õ>螉>>/G\* r6Ol0%m[օf[^ҦsKs3VǎSfoFʍ4E$+,_>s`_r9898e_r5%%”ф"(mSp("0In{ |v{c$edwƕiM($ H5D4cZ~kCϩdXd˗B[+徎vFAIc>l9YW:Ps:+u Q2 x~|A@&΢'!Dd barKdB{_ْ 5Hc,NHhA՞P1Z~"Qtg Dʐzʳa'zl"iBQEw,!zm$I:S{P2:GW8:84rs.;s+|ޏ*1/RvRXQJ((16ƜI蹭;˄ڟ uF"e>wj4Q0r5i`mbmXBBeFLJxrBm9={t`>3 y.wF^\KH rkhD 8Y$&lbjMD=6wEM25A%Ul,I_ԆZxVI+UiκldJS9兆Ҫ(Qɵ4YEVGh[<]F4JŶMP ʲQš.jk32s׉-g?;q2>|g'BneqWQ{9?t^υx2ﶫ !!EH F5ʳ1֚H<TL{y=.ݴC=Jt1D]+ٽ+7[-N+5zqu33mυ&eZ zMרtcHMI5HvV^Jq/ոj$d8TQF %L, d cmCa=Pr`osBHluK;7L2oHMݑP98>"QFטr*D YHw}~7x8ib@?. %%"R40 +)l`Ph9`j{ȑ_iݭK@pw`73e.F<bw˖Ka$~d="U%n1IKJ}1^hf èțPyƯ{e؉||v8QqaH@Qƕ/62M)g EM!< Fc{ИP*D )@W&P-i A&{A!} rZ3!Kgv'.gd}lܥGʙ$܁%NPQ8ya'c=XD8Rj N@s_bԜThG}&Ñfa}5z[X#?P2-{2gA/AYBG~9~Uj-K*B d<9!AytjI;h|{%2^:T=ETETTkgLDTf'8,˒ZNj8jMq9LN'N,ij|L=??Mt1'KJ\0'8Uh@701X #,q[8Ӌt}O ƏY=Hl8t~q7KM뽛w)fQ8*Ϧ+V!ȵ5RaI׾N'񮛾]s֫)Z&?';^F٣Y|~an?> gpzc+m+\ϕVsy]E6gDuMmZj׶z'+\蠹^aLӞE~M =WzPdEaC [_gʐC[˾.g_ַ&<76c4fLq˓ѧ5YW&pp}G-TZQG)5y(r2SȪ 1Jkvu> :,33oy:d/NjӪ/ \ž~EqTJ JҔp] xXD+Z Ҵ+/8A?ůYb~2/?5~e<겷֍w]OV&~|q).}~VUV#e(k{qyi?n?$q㸌:}fk۱_ۻKաrm\ Gg|wuz3j{e? Lkw8ۮ}کv͟hf|kMh4%(Jі:YYJ _ߥs|3=%fNza繪ǡ1J9"Ʒ@m 1ijX{imZ1C <']T%DKc(pskp>b3jݎA:}t( i+``1&bNzLҜވŐN%Q"T*ʥ*)ki rcܙS;dFz^2JT(4s1 2Z!$4:J͔b&.:@sHݷ٭rl<AƪjrkPޗpa$i @Rty3}^!/h,pϨ҈s(RBrhaq!qʐz^pX1V R6!I͌yUZFƮ\ZB5pޏ/\;/.fƷX-W;&g_(x<}猝'JM( 2P Ijo(2\m %fƞ pg6T^Bж#$&Dm'o Ìbbvƹѱ+kY[=hFByBQ3aHxLj ;)TT8ՑkbRaRYﴠ(iY(yT5!/Yd(8"Gօ$i\lZ0VFҵ=cW+# 80F%ڢ! 
(jb<`@d)TxЎrZqb1@ԷʈFSBBMCFf( X@e=Z  :-3bkpf!/Ηv(ّh'^<^d^jXH*F(o 7$뉣V9Oqj-Cu ŮakcW>-!ݍ@aDkTs䃫sʍιBޏ@{?VS Ng9DU3тɐhie>}\: "^;vYu.NXǭ A䎨!D @eЊ3j ^'Z:ۮ, VՎFx:E P}%Zm+`$y7<&WANjζ>wN\F&| 8/>5s?]^fSiA_}!}UֹN,p4U$ LPRtG lb)#9mt2>"E"S"+eA 3BN(KA ah:.9B3P&.4jtJGqa'^gW_1kΡ"8 qAi+ ?D3w 1vOYk]u&g;IܪΡ6-B&rvmbZbD?U^Mڥ::\_.UN 7;2aSt'VtNQ?':/gip}}_ܸoyJ7-qrQ!#ESB%g} ε ke:MWG݆_]/#-"3h+vkIhSlnf9Z3k bb ݼiu0Rԕ6WU,;jys>d^)>x/g6Q ,{'ڍ q^^+<+e_sף*]eB2\iBW%tQR5ӒJʀ{DW}Vw^Pꁮ6 D s"zCW.U}UF f+0T(#BGUD_*5+D(JPҮJt2htQʁ\i  UF+;vQ*9ҕF]XB/n;5C&gśǍi~Ѽq`%SOQ; _.59=x*?0Fin~h\({1_u<>eQc KŎ\L'K0g|1w<%Vp%淐y(~hdrB~W) 9_M# ߩ<ׯL"%ܕ`, ̾6̬ ڧ XhNt](M6apczDWd3\ݛ0D+h2J6}?tz8&D=t`/LW[5j; ]mRtli+j׮j{DWѽ+k ]e\u2J] ]1 J{CW}VɮUF:DT=+,h hAw2JI:@F=+ ]e2ZɺNW2] ] &,To*ÕѮ2ZͻNWј$SRi+*}C'ۡUF)@WHW*KzqSYy)Za8e뽸xoN(f+Mg3ZьR IBk/~dc) b?|^fe]g8'( JP\Xd%pJ^kfE4 xibϧl#˯\ y< S^̐#py:ou.C Ť0SPkCvy(R-,-#T'tG!3~Z/ĕzy^~^wgriCbitTY2)1iwhZs+@)PYҕfҾ!)Cf^ס8ۙd}@OJ)nE('u6Z5U955h A՜yR!\&J+D++D́Uo.ڞ~ N:uSuBkɉFº;ӁLOW/z BZCWטR 2"r+DIUOWgHWK%K+i9tpO}T7JNWҰΐ8Z *+ˊq^]!J٫s+! \b JV ]!ZMs+D٫+)%ڥ#R^ ]!Z}Qz·VJ E(-.缞=]!]i-󀖦0>DB){hPnleA4%CWRhj;M#J۫ʳiix@s?w[ -堬N9'e\Ԕ@j/ *lNtS:U6O ;o1]xr,bfs"Zs]e{9jﻜbDWX3b"V Q "V Rk}Qnp͉'wBkJtҪWP R]!`+.-䲧3+5EȂ R ]5pm)th)ɞ%+? BCWBD)thϝ%=]!] 
eʢ+N\NJ+D++DTOWgHWR3EAt9-'vp)D՛Е BUBNWR LOI>> vQv%pw.PniYbF<~vt0Se ̅)FM \KQd ֑85a5DWr^ ]!\J+D+ut( ak*+ˋōhe Qp䰪!ɧEvȉC]vZdG'g%k 2mOW/zj4=1V ]!\AJ+DHt( Y•- B'ޓ#32Zzzsc +,*T=޴Q|JXE.peĮYҕVBXqS ]!\YL *;]!Jݫs+M BB,{gQr9GNNdE/Av'ե;c D)'t+mm?9 hTNiI+$Wg1'tn_ԩsMR]f]YȜsƨktc"9É</y_6Qzg^`+lSڊ36kt}u)wM/nx4Z\ M7/R{g5 s.mޟgۘ2`'ܬω:IJDCeYp[-dl.rkz]:KmmjM%DT˄k|qD?ߥY~T7n֙}x>p6WnG80ZSHb%LpFT:::C-Os6ot w3`ˍLS = Ocl?L[||; x{4}d{xM,gSЅmOn?owq7ox!?_0M{߱{NV)B$j5u:˝0udyy48eSiv=|jk462&1$VXDTcHud\1|K%6VPhsMF}̕ouͿ2$dSh7?m] :T&y;]gaAasvΩP&Woo~nD0Si:R+FZQM9֫Q(8_r'3~:Oϲ9l=gDAE>q*?5#!ez Wn)T;5hg/o΍Dxl cFрsR(5%e:Q3x N)wUNvh v+͵76#w3trtV bf:AU`nYlj|OzntR<2ɧRցYOw0ȽSh DW8tkgbMF7 TM1uL?{3n=G~l2Vzx>|5%瀜[)l:ݓXfl]W2pa_xG88'TWl0!Dam|yb ϗ[NgeOK[J,#|/,(UQz84ᣱ&B>1 ڨ<50SDu]~e9OuaPw5O ?>]IO@15jb04|ه_N~L?Mv jvۊGJV[׎|7);{Y__6g˶2mcP-Q"nt!]i/N$nzI紨~c{(I9pVUźbZWн:tԓy\Yur*WYϪf׋+]v7@ >3>M/gxtZ"դsQ$J$iH9mJ&U@<͌x♁vGZi9fŐCx4i;P5u$V5rm9%q |E)a3؄QUrmZk=;Y P5rTg_7̜??|ﮗ&Gߡ7z"I08|`ng3k;Ny'Qlrfiڢz9ҁ1Ȧc`ds'某'66J{ѰHjȜEv'p\tJgWbk[Zs5IF-64JcҥsB\ Yu9weI97Ⱥ!'Կ~j^~Yub ߔ" nG~\J=6\{.gs?xNƙ܃#>KHs1_\viybP(9mis!pi9مAAQsV7R`!'Cz{tqq-3qv߾0;:?osT *hb!}t,qcM^%N I*Ԛ[R+C׉yyzGf#9O@ a1vԕn檪9ܣeZˠV}jI;Oz&C$c2zu9,HO4/z!mRZXı$q\?y*3|?ߍS0'KhS6?B{rll5cnonuwD`]>[" Fȕ,6) 0ex ! 
)/o{x `éO!:yC#/g :MKǗ|8H {oWOMԋSevO60_s13o/~Wkɂك&k[7;6wplmͥ7?սd߼] /ߞ_, 19?/0m G_~짩*}=>Yb("Wo{%q샷w>>I_ƧyCWW󃽃+ߩS;D  6  qÛV޾0>`Y6i:üW@?xi϶;6SDW#׹#'sɐj1сb3-Rt_H]|"yHQsĽS ):?>% ~K:{8Y8?„;Vͧ_ſ7O/I>TQHZYRqRHkf '9WSMeJl7GfsT FTq#%$XpJGpת#9x5M]ɤ+WC=aC@GÆlZ䐕WB?'|\Zk5 M|Rpåis","c\rdmXXt>@ [#Ҋۊ g@4Z]u(2V2Jc)cG @QڧBr@>99`-ĩs4lbFż dmTzc<*\J| Z"D_86k#z})cGȺ4Lą%DCg^G^La4gi ԑc)\ V#T4qFQZ հQ#*޼:x' SzD~8w!,f<}(ϣɻ0RP8vpXז,dw%MRɀtRb *0N8HBK yX _XPH@z+(|"Q*H&jZ#TPe9@afmc)cy!lW6ֹ(w2$`ybǷn3./ESU?~^%t"u yn&[^+I|ӷ `|;NMy d8K^`%D߼ \k=P P.}} >f<$j%.p E6 9"@8ۂAʀvo1_Z%VvXMbh;VF/) 0ȼ)Jy=mcQm\gii6ܼQZRUd5$bs1.$0"J8\ob,G0Ĺ>~V 2vU[@kVY'tX?.]Y,g\kaҢ 02RQh;Yz ~&q%67u3f8cEGHm>dpU@wm *>̤go&v0UkIJJOHױX+rKAh#yT޸1˜#>%bU:85i#^k6Pd]`tib`;bԋa zȕs!w|~Z)_or|d83`b&`9PJX↝t>pH`N,iZR#rUt]*:MDX\F@SuE`$9$vŊUC$Ti\r,ຉӕ&)##MMuFul8؋[ޘٰE:Xǁփh;<"cٯ A)?G폽sX䁍hg|\vVdg5 k򳑝UQ6I|겳2'eg:;1sO~Ҵ}%9Px.C([Wn9(9TtA}=씐&xPw rrbi-k;V D5e7G{W QmR׊3̹{9YjJ(gޝ!/j/';;>_SVxA!P~]wq$GjG:L}Ϟia>|Km~ w}5GY I/C`ON 8OP'=7Cܴ/o7Q`.@V2UM dpp_K1&}-V6I_Ո:%*IѼ5V;0tZ|T1@٥R`{!0Zf,RzBڼ[/p |sVjnp~E\.o|="Co$D)<"O.<"O.<"O.<"O.<"O.<"O.<"O.<"O.<"O.<"O.<"O.<"O^I?9~}[ةoeY|v[82ȃʦ3ޏdMN*#~+(Ye=+~|DvQλ gvHqIy`fy`wޗWmTl>ŢDIJJE+ ÖXɪȈ̈s&ÛNǷlötS't|Blb>f "E/ar4x2ٕx69j;yJi2L033۪\Wgdr?;dz<]wǎ־gwmݖPeN n?wS|^mNw 7]&^8np;mubm-ĚؐW +aL(A Ę@ Ę@ Ę@ Ę@ Ę@ Ę@ Ę@ Ę@ Ę@ Ę@ Ę@ Ę@ Ę@ Ę@ Ę@ Ę@ Ę@ Ę@ Ę@ Ę@ Ę@ Ę@ ĘxJ=`L Hkl1&@%$ Ę1&vp@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ v;`:&:}f,?¯:FQF:fRjL7>4+S_doQQP<|.ףt [HV~Ϡ>9 Pg!&{B n((D~D)HY}9DrCSR~v sSǫbfʫÖf0LԬ=|z_9Yv/H7_$AyU\yo,@EP1DɊRƖK 6"̗#TE;5c0ފcp{I,p+cT V' YC6ɲ0)LƪFNEPMaGwNW "pT/t4KGnjt~F74 c~G霂:D'QB*;h1;*Qgt1EQdKml1x!K:m`XtOwe|{J_07hQӋM\ۗDggBn0\l\k\TTYJOK9c;=\9] Ej[a˓4ͫT)Y̯ \wo04sg6^d`A*_F mᦡ4 AHX3ΘO?_hx9ʳU2hJ?|X9vk(\^t_o&w&XºbpwfsH/+H :rn8< qAlOZO娋YϏn!<>?w)eN7AܣE;'y80[%W-"uc/7=L7<)|KBu7H%K[ytWM VgοQ ]oQf ݌;c s51%ɾt_7\hwlOdݩಔ!jqξk8ݓ[̜lnxp#mky^D)WFZ8ĕ^@=&fm LS%VgI}گ!NڡJhހ>?qxXv eye^m~4k zboɞl 4i6H,tX0ֳ71"͊%w^W40ffW~kS.zWӍ{dEh:>]A ({+W]]ROݤ9?:Y;~};uݶBuZ? n[Oگk.uЦi83mA`~D GÞJ{DRqdO| 쉏^5JWx m2, QGu. 
9Y3Z NN$P 8ݛCHħV{4v#Shň Z۲Vjmk&ldeJ*cxՕk3Ȗ@9w!6 Cc~^<*kcȠI8L )yÀ yI.A6qPL#M-g6ĜL$:eI"WY`ƋUFWpYZӉܘarF9HQ:5ļ$B1JgZiʀI>jGDgX) 3K'kͼI;ss:2y_ `,Rב,EYN_Jz1DI(0 $wւlhI?b)&sra|xY~?/l\$ 3,l&ӓ܏c,e0&% BZ|oI/ 3H~6o?!+#0޼B\z?HgM^߁o_'@>#>fr}w=zs85[,|D|C &Ьmd&J2n1od>u1 d7gg H81u0tAb2Ӥ801j 8*" lȺl8 ɛ`͗06f/] \y̲=v!3cTWٳ@5pOG,P# 7ʢ.@R. nC 'cH;8}̨YYA* O-G|o?=^ ZBmӇH'[UoeJ̌)ahNA )ZE okV65 I n>)Ql`@&X;@hz0D7W}O=PQ}pP{q$BK` ~OcI.*ҦE\.OGܙ%B6}yꪝJ5*0p >}ᓽ?I@:%M)$OCKUjc@aikcH8>dh߮ ߦeoU.:ru|X[xȋ'SZ!ysIۘCmbjԄT..lmD' $--߮n1 |xY%-}29yA/uIQ|6ίye;JmFvgNҁmf_D阁w{ɭ+2$|a :qd3:Yg;ӦJO:sR,px&DG_;a7th%'Ǻ*1os'"tĒ5(6gb#!D9grR.'47LFHjZ\n|6w^¾@MO43DCϝ| `_Y"L 8 >Ml TT9P@)m%m.(훨+q:4R,v!xL!2pW֒sDN:% $ۊd]ٽi&/hƇ8+W~iB_,bUa ΝX.&M:<:cWZgt@er?Od><0GJwUe+Tw2^[DrdjJd@KW)bd{'ռKHk֫0&Gl$ffrG69!RL@vf7==0#y2@]xi5lbA\ug.o5 #FJVȎNM2? 6^U#(,m=|qwjnˏN?~ q?1Կٖ7pOWCX߳v+ܭe*zg+%Yˋ p?}}׾>k_굯k_ 닫&X&n/FmH^cB謅9}'}=k}?;jEkFs͙yy(UoZr<:by~vFw"FᦊA -^z!e -DԾL n[[ʬGa7l3 :zMa ,iR,^xJH1Q ״z*4-@*r%ekptxǘO|{7 lL!Ct1G56܎ϣ61ݑfЀ<\8>ˑ3W,ȷ^=_.??_Pq+QKSlO 4hWk.g˨CF~^+}Nr3pg>`wĸI*5~m׷y:T)9cFw>|Vnߔ]I o}{!|#%Q|U٣χuAգ] dФN\.%v')wIL0e6z8Ek>aƬ_qCΖ>NR6f? v=IJ}91[Ϧ}R;{Ċ_l "?,M0RVOR=xd父-I ЊM8Rrs9e+2m$eFٜ)8TC/}3$9Du1CΪ>Ȭ~&ndʭ0BP!&9#%1O13' &Mi<)8o_mvO /`P'jb9 =4CQB<t|s)wn>"C/Ja)1 lSv>9|5Bq|7moY鉹|mrDN4/l1^8/NxX#3bxf4OmA*U9}L;^fn04I1 P؝DpRR MCС : fSʭO ,(Kj%T 2JC9ذg-vh) wx4Ea' -i:%Z8K֬؄tWL=R1 e" z*P,:?'Z2O |(c$%b#3 e$[ )g)'%%c;`}՝þFWS+p5WS+p5 \}M"]0R@Zju4&N P!ɂʳ|EeΞ>c~B\s:Ek4OS4O.ɱqF Y.#]~)vK_W( *UQtZqTIV﫜bz噀.xBYggGh7ԂM70 Rv.iɏҩr8S]Ѭ[iSv|窿F65 *kt&R:TctIbJ7km}Xq!F{TOo(&!Z&H6b *k٭1YDB8,IZ9JQ:Xyq^"ǂTw{o_IƦ\0{v^ 0IUIxR&KeLϣF%֚_,eF@Qy6Q!rksU&!ZjM&9ɮ` B"Rqǣc#j4~;6!,Ek>b4IDgJg *زce#$ H%6BeڂS#DzT‘.$c/QR:g6j+b^@0 rIH(ɼ(73%Jd%k5P+ խvY$,S*(})2' A YXGF/kKۏUS<[Ơ: XFA޾ nMR J2EE Hu/S`e.P\E#GBi_Ѳk]Lq  iy]1^ydAP@˚})8(vQ_At&oޜ~s}⧹yS]7#bi/B{ na@IFb̬ BHQҥtcXSHʃ'gQ~˳Aʒ6^Rmy8OrfnkIO(ւJ< ČS| }ruR<? 
x )2Q Y7y!B$v"Lf$~  _\,K1vxjbGV++Xi11| ^# B4lyU<{(xP&ؔbwSf$VU/'tQx,6ZK< <39LNe5k8dGYwYőwqϮc "ρBԀ ٣]FjJ봐}}ЬDZkΊJKAQy~t'Lv ֜\hṉLΏdmg~h՗lJH>SURvdl1̀{̥$hsI-ͩxV(2ݦ5E?t*=@&kP@t+rdsҽ*djg ֳ9zrd` oFk؅C6|uq 2ڢAM s%M}[Y`|M%Qu4=u,xnWg؜qVv Oc>h VTAI 2ёѫ8@EYޱb8ElF;>yB2 >g,(D0S5_u~ܑ,y'Dܰ&< O<JQ( fk:x>I $Eh빻,NOS\Lx)tXY[,&IL+ᓢ" T9Q#Qy&Jib6gg(h %(T0) ЎϜtТʶ: R5>&1 3Doki3@G!ŜFТl`H8i@1- .ِLp>tUSO^ o撽Gc c#a@ڄ 5wu?ypkO$-TB,boϋy]=Q[#w0Am(& n}C:)8՝FAզb_l\2=m ;pI ;->FHӻcƏL9g.bd=F}(z/Vu7䷋?,q7=2{5D,T6QIaΦ&{Eq$ZJjo[]o/wo%u62z"3>boӄl0o &4)|shZ={aZ/m˫w¨>od-_o-:o7}|[XpoF<ԭ4݆[yK6nGEa_>ңj<#f(}JߜmLgA8F`2|պ 6{M;(!>uR:4 .]㝨:yuϕ~fqOOlͿ5Ey˟%H;ؑ*ĵ]k?𜶓clfDlkmo6 EzD套5^-s1 v/\nWEj{޵qc"ld~ @2 Ia4XؚQz$3}ɒUTI#v*schT'=bꫨ&񍋈W^ՃEڰt|lxގbuVj#PNEg+o-;o `(Jf}V&7 * 3JoP[ז\{\^rGDrOj/KFR/:& 7ږ/Ry(.GԴ  M6Vqq9UPd}P:ST%bs|6>Lnϊ>+},^r{:`,˿洲䳘ھq\ -i(=@0o$`#b<\f#\+{Ϙ塩E +.j'ML ݏ>^7=y;E cL(|j=ST~=GO^oW{}Wx=1.ӧIϏGcoyxjh?FT ɫ_Gȯ ĄihneWP[ZPW7߽M?)~Qćۑ dz:JrFҳ;QPo kvFna %Τg&JQ/w3߆$+уebKы.D)^Ձ^$/@GW Tpi.Y^孛< :aV R~w3;G]ztTOC;7* zvN:J߽:x8x٧nnt *v1 KxXlΨj='sՕ wZ خY֑7#if/bY-v t͛Gp>{h&g]]'|qN5Owt9ő*QSxVxZ_aݽ_mwaaԓ}XN-fhat 5(()Ox+LgWm\yN ǐ,'Ȥb8s8$펐m+StH=5ZWbgsrTN:G%?|)!;JjO'1;| {vcE%9X|9Rn $+ll)iXņҖw9Q2im╮c{"51eꇪqJgEu0<ˮ0Bn 9Vyp9W-k!!Ce^)sӟ7h]j7eƒa.ۥKA5eT.hi+(M/#Ӂl(ʡiq95$F2GHi8w_)ڽ宲X'J%#K@PqR#s7+DOacX8ǁ|~e.]Ż(weY? F;e M6yN X_3Df:5=] |"6a9xq>JH?!j ~yt|,a"E` rffJ*A"75Ji Nx-$38l-҇Q aLjf5`] #Pd?*N[*N.)v+JOQVXdwR+u!}Dpg UH^1 HZcHuV?qb법uFq<iTB UUҺUVJ^e/`+a2eJY4֎Jb%s౰FKH9#2@$\<p%ho5ɫB/]FZD0m?&q\Iiq`gX3cc s`2y*U~MnzU#3-wH]LKH=du)b[gŽ*nxbyد۰= vu@M :R~GN(=v%7ADbb*RQ25_I]VrPќfEs*9cZeVR*.Dc8&( tjj&V@Rpt+ZAc%u#*. Ts=Ykptu&h$OEo0KT`[qFrr`WU6:or.0:BR=֧汷MI\P2 G凞Xh]Ԕ83eE"&f*^KO2,{F#]%#Xe"keZ\,yRe"5 Jwd1EXP,\).q{y&$GO1g̍Q]%4.Kh(&2}?<pp^@<ݨAH\uY^1d695Q3&3*sQ!ȱ,l'҃Jw"WNT𖍉JmH4xR!o [3Qf>m20/9"H= Gю]bI^l&R.XF =УpUh. 
S#H )HYJ{{%80gXr 'i}0H8C-2L9C\[me"w3;raɨ#S9n٪B jN+5RwVϵvMҨ,"`DSdeBxR{P@ An  ְ%ؒD,m+5[+ O1@P+{^;^ _)LA!ƈuC ;V;i;yΝ;r^HNJ̩X`Fu pON5k9H*" ª6Yn9}˯gFtp* Hi ƾtR/z 4[%S.,c?M&Ɋ  =cv~zsgt|MAc~2?oz[ ?AQ}:᫋~77njJ9 ;4񲳝H(˰#%!{=>c 99"->0PFi4P ||l=!!(Gja_uYp$gy eK^m{N5FJ)杋KDF1\6wmu$ն2*u{i0-m~`|30s2Ur)q& @cbM5,R^$fZ rܨv k #h[uPg}.8kO񢝻eRF347 yŚ)cX \X4Amh)Y %V!t|WlA2ڀ Х<˱c(-'( k`DE1BlBkhdSvXUz"-#8|m )G;Lv ]ч0N#AT!3e9, 3s W7\VT6j|~mÁ5 ³?^r%}Ae8mh+5BVVAU'(R[.]mM&Hc/_{w);gIZvp7QO2b 7$AîL`ʵDÿ }x$h$ X;$,|r:v&V"R7/kC \`yЎVPsIt(2AL ""ςNx c͝#~ C *a;J\k>(fFw L:;/RTp%`naxFڌxfPn EU gVPR I%AڌT@f3RaC4>GJO %Υ0ND'yؕ95a2q2;O H;]تA@'(l1_64ANUsm0^Fug4Wu[SUY#8 Ajsnh3x&mIj-rؼ t_j$Q \IQAos%r}s}ZwNn4+yQ-n:t'jyj<5T[_ijH#[U{t'IxIS*v%T~ݲU JaSuDžGBPKJOw1уt7xq=s|:h޺gfP _)@yyQNQ[|+Qt mpB<27Y{?UP%tOOGB/ݿO S@WLk 񲀰)ԗBa}a~q<D~w*Ø8ۯ+mزEЧ Y/t`F:ɗɃQmMlI#Jt-j1emHNvT[wuLۜCI `HL_+ 1&;&n?|*^8 3׻׹ Gsh1h{>^G45hϙp1M&Qkʁ[*'7aTu=M1JB$(6 +[&]h"*5'aL(f"G)fb$Rz5Aa) 1-Eexو;p6Z.^J1L{uel*Z, }8K.笵5.5K01?l&$H;h<'0&ۃi2`pi*_lx ( ka9  1O% J! 55Thre4>HԱƙ1ihcDy@R G $T|icqtG*! 
4,AC@* g/_xh\%]RIwN2|L1`s0OÖxT`c<@o)pE;kg c@ "Hp") F o N1J(+!^s2[0"qQC=y QLnߌ&qeɇmo*p5H5DKzGt$߾NY[r~ l\~Y|EY Alշaߚu4L;è3CׯÛJ&6M׿5u0Tdd+D_6A J-erP>8< +[QUUW{q+y_*+yP"Y}{2"pm%{z5y/Qn }v[1[&tZXjr##rg364b{fp?I~؏q:K_zz蕽S!B]ihilz:$ AʲKhĸ@.jpq I)ʍ 㨓Kֆv%(QFY@֒"Ɗ\)Ủ*RvJcg$ҩgJh߼\CNPD;[1"b4 ,4959-XYF5IL8e.ksUxMݦ +b9}ZVip[G꣒#UhkG@)՜>ҝ>RzcgQAj 5A"TcRP>(kLO9<AqSXA-G)ʧV@*4B.Ɵ+ퟔO*qvnJĥW3*fJUe_ ]8~}riyvJ_i_t$N\5F)ltI4J,x{'(Ĥ}tsWX:]:s( .)}?.,;*~YpݓV?h5?Mck ,#cwd Jc?)X~*]ߤJ)C)6'qQl!FƉٖu7Aő ٸbKOiB+-3v`7NJ䛳D7`%hn)bbW]rKt6.I99u[ͫũR.)fMl ]+ @k\VD &?d)ȯn3\ק/4]4$wgY瀢y++J+e7$.o$GY}vŢ}K4L/4n.Z=Ӆl%KF|;~ڻo:g;3oy%#^2^mdsguᏓM;yՑfxeenټgN4%fJqfi98w#k,@3dY3Ep_o7“m+Smym>"W`TMƴen g'm&!Lt?vp-kl-<#DYxqXsaM<)t{ϝh?e4Zn֞:?wkoBmR)xfZbN֟:;sodIh.f0r-c5[|{3QFI"i[> m˷6B%aSg,}pBh؉04 ㄷ>B_!, YSkC& "rt:_|49wk: ]T:Γ/Y# qRD?N(0j7gtQ'  gIQ+ U{6U=NhVSjChY䴻B@2Fu]N @.p>NIƙN%kKD5.toMlƾ9$.&-r&-i$m/R=;yލ}PQ: 2P/a2H|*LӹLDz2pP= e!0-Ů2 r'4;Y6Dcv>NNY(DȶFV %2?#zυ8UY!p.*ΨN`.ƎR^ؐv3"땒.y]V䚻E-ɾH:WA9e_aVy=Ua d uRY6ʩѳ|Eљyt8qGD&ɡ`U2zK8m;.#J]17Q1`K6 K䇟sI$b9^"/BQR. )hU/%7gNIz;jfU3NXt>Y.pD5yg\'*e6dcXBu7gfN9浬U^ 롋sQ2ucJ1T=C3G /ba-@}7;6l :vMٸw;>8}'3>~4.'?ioNhQj*[q&0sI$ fpkAN$ /;1@Av>⥿GKOw1D =1MZ'z$I& eLhܸr0N݇pMd&^ԝ߾aO_$ ?4ߖ߼7S1C sVP4U9%E XH<+p` ) a1+7H,*[ɬ\Zw}Hi8L@1:>*9O⚶vnU򥂁jSH/+TRM~MVh)y"$ɱT+e^A&@X#f@-R A2 ݰ:"qF?sXqo} 25 >*0FD%")$F oTԹ a]#6cYL~ԉK'5CoG/8x tڙ;ϝ0 '|d++vzjgS "+,,8aÖxh0U`@<.kNve&l5O&2$?O(Wz8?_y;⾔i2]4o5x94i6ZQR-X x/4TP!kSHOX+RsV7X )i2 ha9/q5j=ZqJI%&hp.4i0f! 
*%ssnQ̥f%&\o8-kB x!aAHDY sy0I"-W &RgEBpom~`bkaG?NJJsŃ?P|t5ūQVink%Vϖ.2(A}UAc6,޿`1g쥧TP&Kkk=(qQ- C߀r@@skzXCbi$,Gr kF=x{* c= UP.1=E`TB Ԃ{{1p&!dejp_:CP8ڒY{ri]u!MavKzbw0~y?NI}_@Eԍ;O[7$$4,鍒/@Vx`Q >j^  ظz  !gY{:+\綤;V{>b4IO$$8-z%)鐢%1 [ɳޙٙG'٫ɚ%^@!/Qmm JXFQRhdA3`0r@@"<ks$dVt=)D.8'dA+\ i1(< >iB'B XܦaS-K !0ty @ZG> 9[CQ'A.PSHAbDA)uIp$T$SH@a0Y ##0ꂷ'Q C=yD#d(& UHT ՑzlLdD =ųs?{Pl|]tOS߿)TAwS=:%r$)mF&]5+ c>:ÛK{wS~wg`hRpFh8oEdEd~B^B @(1\1F_4N˛: L+rtU^b #Ic?"IW}5X J6BFbjl^PC!%Ǜ9&vQcbbx,)M }bZBGdo+kX^;cl:ѮXh7b-s\2]|:s*l,Y~K6qCVkWԜVZg XBZ d訓GU[[#wnvwZl F5]k NdC9PGڝ$|IVopKpk"Bj"Ella^"ߕ8mvQИ\kmyjjIbu"+cz&coJTs=!ۆJdK3uvw8n+ 5(F }|kwmlQҶ,DΏq` ]Z-Rw>5Ҡ炬d{D4zGs?맛 XazW ~{~Cr?Ⱦ89C!vtzu'ıǗu?q] V|O^i歗蝈ԧ2yOL2 DJc-M&}s=םۙɜ6 ;q+zp?yv 5yDA1 \I3̏%k{4Fz,+{m?QM)+`n[Q9CHJv2I!.2;c$wCV*V*aWf7; \DZ p~"[%ڝ}8ۅߝ-T)Е0T\$ɎW|?OIcQK$_ 琺JPX҂h9_!.u% [_$%Ƨ JU7v"QZr.9iLrBs, yiiIX-p&T[)M sÊ s_2땬'ѕG W%°y(~Ut^"P\#XzӐu$/Qad B~Z˹YX`  Njxca ;{Bx {F^a+=^D^3Wĝ{3U Kɯ>fXE@^)t%P{U;%|UBU V[sԻ8 ˼ӫ_7zVĹ'x~NWZg,LQw>U}p%/^Wk]ܑaZcON oxBBWtmt$ J/f;ס:y^ /}B`DH+Dͥ*_2uIkC0D!5f5_0?Ӓˆ&)[Y8,T5 k_vG=·U/ תCX-JzG4Jj}CE@t@Q y+Az""zp MNRa?a,\ I[Oc܈JpU5t#ʅ*JۖͯJؼ =CrU:ABMmM?7Y'#M:"onOw`m|oFOA/ŻO0[Cnޜ1Tv?{`4%no]ڝzK;qp=*pN. My! hݹz;CXOabӀOߔ?sNGԄ*#Ͽ/ P,U![OLf ị=6hvi2o\}?aDDC$BƷ_!I}N1(;xm>z#l4Uhl i9`xH9QU  l8<IѦ`)H/s@{ #ڐWp~ĉy, \ hea,WHڞuU.MV+]Q+ )y IY,dBrVs˒1 &Vġ_i-z^4u6R*`my*ֳa1>jAҳhEm 7P\hzE)U %Q3 )zHD)b::~(hQҘЇS^Ы zeT;zuJԀWZ#Aqє䏵r?7]Wͼ:0Þ:ʨӛ .L+FY0o 8GqFFH]Kc.56Dv.˗ - b/_r\k/_/OVy~\ـȪg;NAq)< (5BJ#MTrXKv:F7*£e;۩-0vjg;8Ell']xĨ{cmnJϻ<ܓ5,ͥ|'[!Pw"ؿ_ b >J+O\ѝv_.F `]ΆB`M@CwR2Ο fv>.3n%+}@m"u4.H4x~nyq0~~4_[m1Fs"aQ)M` A<=9 qM$ERTtp67lPHr)%}f y9jT,A%,պ, n0yt+Ͻ^JթCZC4in֒x=1=df}fd^po2O`#lC v.;+A$ %loo1)[MZz׮.뼿8 ~ߙ=O{yܞ[8.DIέ/q*;樤ZO#s@QS3%!?v~O.z{;Fڢq?s_<8qCg/ذuDWZF9Bsn+Bn^wrW <..i>#a6q8L5}৩I;Ri iެqnq R?G;yyCj@ݵ-9DY[D}q Mqk?hP:M:íz$%Ck~"*dU̪<<# TGt\{wB5%e3O8)C!JT4nD|X `0ieH"(AOh2)B %bKaLA8MŸ9$'ՑX=l#cp\:Pk+)A ,=/2;:.gSW&VWN]6o/4D)T<e8g2:_YϜ왥xj1<#:?|1mN_x1ʩ.~0,5MnQŒ8M&=z qRa,# y"ZK4xl@&X4(klbݎ!O =uhuۅp!SuN8n5iP$:5Ⱥ/ DV:ݺEغ@B^n-SMIlvA݅q#`A /? 
l \tKUY^F Mg849?VS e*B妧Jbt%:Sn6QbLkҙzD-(#_rnGCw|/\~rW+K?Jq;1c 93J $(.()(0NSi燌ͷSn&hsz'.;ҋ%.3JK!K;( ^|I: 3zK"o|!_t!`^p]z.g7h՜^r+mʳ4ҐVwf5czs~a= Uhp`!XF $)Wh&1PzJfŦBL_ŷPcK]{~n/x^*gO %>2DiD[(@<KG݅jIGCZgcܺ^Yԏ`Kr#"˪>w]iQ5j5hӔ7B$h,"!^29t :n!sz2}IbR9|Z?]k[-A|q=yWW֟#YQ /˽N-mNGĻ,% 9s:kVy![K@GKzw.r4MO! gd4+I@ 2|,F}',lC R<-!G/@ [ ]!c:8ƛlk66Z2T:Ȅis/X hLT$}!Z@'gb."Mc95t][^bwη]Y5ۻwG>NEB_gQc䪗(l>IW9Zb6JP򐿏tר# k^ӇBN1ڲAUZYw8yh뫪՞Hh`I-Sb,Y/bdsh`{gz"Nd.I5'3V +KTD5 ,|B [(ZA#"Ap#&\۷1: u\֛?9g:WDUeF+S>}u pO>n%( yVQ/Q0hv'Ӣ+0|K|^g3KqC?}`DnLZPJX%8 |gOO@p~\׵j~nLVZ$uFNkWV:OSjJoBR1`"h'} PbЛ 5 qr卹%oY .nԟ_yQJK9#aT%emyǢM+HknڼtT=rɭlϽmQ5n qFT;Z3%6;Ƚum_Zwڣ>FWnZ2Qa~$ #w:9h;[[e։$5k#٥8]GKVb Nblua cJB-6TUŲ tǧ68ݮ@ 3K8̒P"phO &* _M_pU6n^ wg}J<4&t79|agϲ=v,ٳ *M`'>vH pZ;"HDѐZ(kA[4S#t43񘁶\y\u@oaϲ>,볺^)vRVਣDcleg,;g>;uH pmSs1tD[$$H,K:LKeH>J_lU'4F֪7.owGpXU,DJF", Ѳ?,:H٫id׵$o$dKgYuJ00iK2(KM?@pNh$8G(+]D"R'' ֮ D'$s&Ɲ/&߽r6%C8` y 6PrcEcZkGs HUP%>X2-A!R54V\YGAPV#JEL )!91iLMh )f=7`a˖zkI w8DD9/h9DP! McfHa eҷ+|Flʆoqۓ|EC+M+ A}P,3WQZƊkQ@샇v11*؞ZI9Mo lݛ%z+xwRһX j?my_bc"2=˦0K*:{m+ }=t 7]RR (@IGӺE {:Uzp\5[.]!9 ]k#!PIӽ&{ܢ ?A6&H"LPM"@\¥Rcl?ؽaX{'t+NoyǑ묧wo݌#sgem34Nh#騮'uS!'AĦs,7v+^ΖFQsP&s {Or9;{/e( Fmr`:P4.O]lʖ2UFZ+,OPxeEXgy%~Y}+\P/SkR{osA#Lr[cB V#\J_TT_ϖy* vf|&93&H^js] Ks9dt &DAEgLeӄXí1dlt 7PSX1Q~ D ZcYqI+΅Q# &ɋb̟/l@j_fTDpHTo_EᴐFf( UkIN-Cԋ1Ř jVw6ߎoOog`pzw{ R2D/x Qf%sz[ S `A>#]A @%g1$fs+Sʶ!膯 <-ɻ/ov!ï8m3ӖQ v修_׳zדd 5:ؼLj2gRI f.V5?=:kc}@e85GgO(jBYne`+zTV}qv5{[N{bЧҖ oνgmi7!iw/||oѿ Ij=o>J z%91|?޽Ct[7pQ.Bk7kWsC޳Q_8N/tvurqow˳ỵsbꊈ3A̋y۲ݐl+"t{Lec$^ЬNN8k]rsxvMe}vww 5Sdu;`A{3:0S˼xC̻=}{Y~Lwwb c(Z]}rdʿ\ ;嚿A ~,}V[.EK+J@@-q}yo\ɾKv. PBi:}u$UM󅏣_"9bo;Um(Nnn#oǾpޜIJGgrEDJ^AJB )Wq:wE3gD b4Y@ǕLwWpg ubQrUA]>urWࣦ.2XGE6!4zq3KmXkS-ި$vYmիJ)Ξ8> *W-.?gYH+ >"Q"qff ]~Tc6CjIdɠ #VU+3*3 /殘EXXlN"FݞUW{A+z06~;w= I5Ѹ={1ݶKkcn0/ ZzXi??GE[(U!DZфADHIDX!!B`9d[GD ܮεOD jz[^*N-&ݻfT;G6nWjםa{ Bp}pq¿G/uã}WCv1$N]~oTKG?oԾ6 n^w[Wfl! &urf:Yn,MdPawvV1Z=TOώ/Nϳҡ_/=>A'yI@*SNv}1""Ap$D i` ǜL«<'\>;=8΅l&`'{Mu:aB1 $T&':H p̐P"ĠRE! 
"5i o"5co.ޜc[EZр5jO??o:CcEO:םN4Kr??kk*݃{;Op׾>١4I׽k7f߽46=컎wcR Co쿿 ݷ& Tҿ;7 h譙<% <dиXpvaם-KC |ln KW!kIT32 UgtdXxp ;Ʊ5''0 ; W&N)p||=8v;}:!>}O'S:G?Ø8v_fSR$L:/ToA 2HGg! 5Ix~/< c`g;}>hL>>:er41/cwKK2%!N +]_2 qSK%R?rR_>|֙zR.̑KGdsҾw)xi>7F;ݢH=d<=:N .':{eu^ä_Jd!^Į68 a|pxM,bqL H+f ' (Q@ư[d9q~ڈrR7W{fǏ1r}DOĪ'Is*m -8ߕ\2R4E8xP(u1.{/es>O ȦANC8[av#Iw*10S'y^BTj}ąov}=MmNlhcg8OYV˾#Z#)&ۍWԈ&1%ߴGcK#0a-0f#Olh-̄PDK8oJJU%cNidLʰ,%L4S #oQrv1QF0"9^ܙ,WK8H9+ogrKAeY,S]߰ $&I6%4X&a$ac-qgȕ<{yb$b;cYvgXbg.^g! u1do\.=9KQUƸv;7!٫u!(*N>[t 7"W|wN≔$[ctk12P^4E;7w~GG}vT(gF3LQa&4XJCD(Hi"1a:^Q7g)o1!4z8H'rKmB'ҒIQ&˽1߱R0oaʒ0K"FYR.#B" BPsW .%iWaj.gS"<ѰDI۔8!f1-f1Iq‘#2*O2Ӛd.Z֬_H@|Xy$R>mg?5aB\& 5J@D(i(:c-"n-@FXHw]GI[&(k.ߢ4[B5#aG-H34ݢx@p'\-JGR,dxx-T6R.Ӗ0@/8.#-xij Ȟ5Ƒݒ$& cLBd \#@# "^'*&Vʜ)Se^#f&4b$ c7y:f  LjJbo. kjqc (ؖH!L]TlZTi?I>Şȉ#1+`΄ZTlLsO.:)\S:Ot'tǟOrpd3X11*n.o9qE&YρrEYyw# ח,/twsFwiQMWvMaovSD=S LAS[`BXT_|$P9GsҐ琧,9ݝ2):t2r%衭l>^G. ֆZ*-3<ԅSkC=O3g>>ԇi"dHUFul%i:\[pV` I.OPV9 .L0|[~0$YJތ0D)dBYO2?ozŹSgݛN/}F:/Uh"np5Azy1lrWm'!oQw6jz{Ӎ CL0 >7Sccg)<{h5g Wd70+mk'mQdBpNfel.`{m05 ~T< S.+.2ry’t"2E0zŤچ( |Dy֫ͅSdm֭OVp-$So}uS {֭( |DySԞMͺ?iʐ.˔s;2C i╱{Guԟ+D'Hkk&9ؤW7@bc0`-p ADhRdꖯK7r?S-•\q@ɷu7mRY`4Fs4f|Lֺ1P{h)MjixsymCrčFUڙ %CbjgkK!dȒQX=,y*ĔLG 0$hfû,g(gK8.Ū%^Kzz18Y>w.9~RRv*O6&EYW-zbHtpcfd\n'<$ ];AH2QH9"lX Uh M$J!.T% 0:fzK6!%KΠ^a]1Jr`PdMᕈU\鯛WhJ˗QHq54' ņݴ˙6ՒEH#r0{}R(]yZ -!Vvq=p>fӫe((%FBX8T8I $< @y@LhƓab0Ib1O`3HBX3u j0RLR/pKHR!K`l$c$K5XETY]_"l1V5Q· - Q Np8Du5P,`ؖq_#VejOilL5O%^Q2U_ɴFuv\nXtBhDHE/_ќ @TȊ˼G>TkT.#V"i<I7p\jE٭Wč'v,Bm:'&-" -a`R:h7k@s!1`w+OHBBd@.7Ybb4`aqBg24uăϼKK׸.+ )D/o__ _Ӊ^__gKǓ]6]5x0dM}R Re( v̰)dŵf--su[aJdŞǵSb#ZTŢl c(ń;=_F#!04S̀ Y .jc°EgNE|{dYUEvqR=:0Q^7wAQLdJb''l裡աxtmHT)w Qx28?3o9 'Q'YO-{ςaIZ/}s|&Lb<%ގ?jCg &K%YbYt\zHBa)Rś]pTDfDCVz`kMc͙7Ew dci}4C W>8c"i"[vf[%Ɋ\Ҭt=\'wۙ1lM,yeϲlז.7coV.8FN\VO"+kF/E $2=LXDx!{ VIf_(:`"dDZMeHE+U&޲\=<֝6絘E7pG DtF$?[26lam>M#l ,SWʂeV/g):{yZ2B*2:5dg65Ib )9%h3pXHpt؃lC3w3~icG(VȺ?rd1l؍6㵷;5t4-4ޅѺ~pñR8o;`P2Sb͇k@''#Gf:Kbc57qow84` rK֋T2YCۣo/;cͳD:.Uumʆtz)]>f}3@]a`*ATyQ"S"cb4 bA{x+ˬFڭvӿwJ1B=gaft /etY]n6)ڽL 
fg6MWg}wALMoY?.\稫n%t_t7&Eۏ 1QN}nDfs3(DE3Ti{擱@bp8 Plk6vhb͜.j2cқXY j$СqC!R# ?NVK=$(1e sl|IINaxzf0%oEXLחifEƋ L`ؙG z7j*fG@ 8df!#Ž&fN !po+|;v`wCZ9ν)Œ{F:KR u'}C+9>k͜}뉓%ݸ`[25ߏ۔%xrDT޿6f0w"v$*_:]V4X.o>{kɺ}r}-TYsgmK*ٌ_ƕg+\-On1}[sjަ/~pŕ5}RZxu 󷕯 8?dO^U6̉.FN )kG.+ȐGLQ\H:(齴AoڻYfpu~W*xxqyYVP^"E]nY~zpWUYo2t՛ B*$,*T!?).uFc9LHtֺe FsM4EkFh+5J|/\%x&%3}3zp HsUcwESƪ}[t"'}\q* s *ضցC?h1jSGjғeAR}0~y2m8n+]sqDYOjy..ӓ7eՓ Z@TSYً1'4H6_O_{wz5=Pa#NKZ0esv6~%i#2uT3XNs]Z[[rD(iH&A6JՉe]u0PMN6Hz;JwB'N%a4zZq+*U2 뤄ѶlAbWZ3?w-uevfkTHpCEszhHQ^:o<%=dRdku73j5 u{ğV(&fvh[h6 S86]Ap`i },QvC T:yp7'# WuyUWuy,??w _^[sv哾Gl.RyGΤ>ERj9Pij]  J9N#bhNtÃlCǣc@w>ÎMu{ M= i G` hmm.9mQJ .g` GgB2: ct(B1l 0Ja -E[oFOXp3Jd*G6T"8H"n5j3Vk:Azl=]$Gvzkۆ:VT;pwIB,O{Az${8Ef_9'mYyЪ``H#(},ha oߝhqҏwq KKy'<A}={9 ،;uռ4hvuѧ}'u뾓uIݺٺٺ`bU~eهHX~B%r1HME  |v[.;PE}x]+aϢh.?Tv:Ѣ6+=JfDID\ò:FNS6?{'`Gˉ4~#&0g_BOpܩ*ǜ-p$٪l(L&zք=g9CO<61Œ eNwLQ* J_PNOlFunq!`B04ś 6 X&9eb3x|0"bff{>*lK[ h_'~Ȇ,Ed-G_ o{4n ^٨G;Po,ONlhxݽ˦۾QIQIӣh*O}q=B'98  %jpP)[+R}j}`]%.`>yG%FTFT鄺1Pe ޲DYa`(L >YIE%Q*ҖH11 ^ ;;IQ$>r|J]cd' ~UMj_$/%<\Ln1oVnZUiUUM78:6ĩN_ۯn,mb_ K~_l_1`edJǕAVZ%d"FL,kÖ{*+2^(Š;+ l%QWU%ɑtV`r6T(۪+l䘓񾬚@Y$PWIHH=$LwGٔ_?6c\/:rU/VB*ê8UHalkVo֠.˴z(.|7 DQdeW- }xjd]<֬˨X=vf.sV1ƁMxD1,ň՜K< «[F+Z-;UXolx,6p@j){F1ͥ范( ݎi]0!!.;c)χ7Jm*iVIPPM2aFD|&@4BhI 雴R{BjYj9_: I~,]eZJ~p96DcWgї$@$m+&P^E%Q:D NU}j~uLD|;nVv?_6}oiKCN(ol9ZRƒ6eIqE2!9gLdeTakNB;1J8+5:%.֦rs E6{Fyk8wkf6ަeFiK>VRZ;6pbd 5n7X@terM}<{Oyi+!$8i\6I^!S€2d#Yr9|YdrYӉ),*X E6X+^xh2YlL"fc.kMR-)X|ԉ@6*yt'*4xH=EEGb/:U2b8IKpTG4T =r14X3z- ?M \Qg2Aeht0YʲDҖ]pv#.z^R4(Ė>fu`ms&8JhUyy/7*FYdFxN.QXB%ҽgiEF]-K`WFXL3crRg2ۮ{e(xCQ&su Xa8+EB/WUiٞ҅>Eѯr{\G6$?=:6 FK?~/HzߕX<ٝ]<;MşEoO-r?ޜ97x\b׋g7ԭqR`R}8{ԭE k}nH _rN0")+X%%N)2"S\g ]b/8W]Ij=O),uf#P Q3ѥI4ZP6;7#0c;*b-1 d HQBcS I0Ev W{"@R\l{xhvc"Eډ' 8VE;I =z̕_ep0 -gϋ7D;K/}[RB]Y&Tc^&a%1+\Wu8Ц8v%Px|"֠@+B.iz A$X}ViHVV"yL5*&򷋟g3׬QOᆰ-Mv]`Nڢ2G : ∡b`Z|Hg$T7|u.Tۆ _kĪܠUq{FY%-LuQޥkS-89cMJ`eL@.R)u޽Z J0yG.g&O̬t1{_gdẒ|qsdt;ߍ2G8w :ÿycM9$\o䔛ݘdzdj0ڽL٧ʷ[,MC]ӟ/ϗ^s.=ɪ sV߅NѵgtwseU(x@Y84zL/ћ~9X.?OQR`aJjL9~ ,6}MKs4V%@l0\2%[peRw8֜&pMZr )Z 
ch$Hgq2&7Kx"9R\6/OFSvȠ#gVsrL3~,rȀ[1Cee (<Ϝ! ٕb.g-WlB8 dK /(&P=a! 3f۽/"!ԭ}0f}kЭ|)fkYc06T1!^Q=>[Q@/GAj(x 3bv1V2s]1oMh~ S)l~2ula VE񶑼 B0f76' q$xkvT>UiCJOnoF?݃{xФF]ݭVkLFIMh.'hGOl?<҈X[ "\ K JH81bcC#$1a*{F͇M[D]*oiNGD.=kCzrCt@W],B$X-KA 5'KIgw8A( R>}NߺAHmf";y/a_(Le}`Ԯ:v!,3#d˖ҫOwLJ;lc ;<Ġ׌t$4)ϯ<5 㾵 ۃOP!nO (u^R+XD0NJ581-ܾ N7߫C }x:4EÊi BR"4 $!YB5L趦EzDK]g%z4Wt5>@-syN7%j NjR# d]ם7;T\ʜ*n1kn6Vl/Jz̊㓧:b Y#XjbQ&W2ۛن?ٛlQv&{WRpbMNx)H+#è4 [;~1EyJ"s:fĿ%ۈ#n2z_ݟ>-^*|f&#{;Vߍ=db3. /my/mAY[#_QںxlEPfw4 U8@`ޖ^9|.d^~ɸ-šp졬 fP(e92bBM{9f@pIM[ fpD6+V S˜I%I{(EtTMEkv \X{۲"0Aޢ4$=p;:A"Φrֶ6, h6 15; ,/8x-;?:Ҩt^xd]dUVT?/Psg.İ㜓Znm~96~OkmOzhk]v|@ιnIK:L&9BWzbj-=0[k:0foZ霉o0NZ_5JzaW6d)/6i*׼anÜ9G%pIqWӝ3F]rT]ٞ[C]f>(+G0Nv&Qˠ:F#|(m3Sq R{/6aRx!s ׈6 }FnۄS CGm@l|2cmC5mȊزMXchHD(58I#gM,/Jaṽ;$Ru۶:PB]aӪJvjYf%l0Å tZpDo(TD49' 5J63Ջ< o̜l` hNl9'r51GBB bmNjUdIm=D(J4GBq㐩hT*A(q3I n73"/XFi&F]/}Ɗ5581cwzC,kA3#49= 9=)0+_n5h="̊/4TS3`Ơ\ x(/22<ԡVp3l6;M_ݒjАc$>1^>AmO%evvU=%C"}J+P߳,ZIH%5UX@E@0aadkyТt s`6nPC$uIv$uuΌ3NSI -'<-&lv\wE?"ڤ EW93.8Ńc2R;L&<^q0(.rI!f{%r/ }Si>-wٟ2`˭]Zsx{uG~ן6WU^}B/pK L$"yӻ%2kW?TW_mӦ'-1O.o~܌HcGX$E2 SD4 H][o#7+^9{ZE~dv ], VFIdouq6Vw=nu#Y\H +K78N8KI+4:R*99rIQܳl-m:5oSʁw.av/<-,LuV{'~w/ЁJBIc]=SZfV45D07}FJ_{p:/Ai4B~gA1e= Epvx}$$Ha~oO1=zÐN{|@5>傰~6fFIf[JX@50˩D-z n!aI8E,Ճ8oV`dդVox,[o.O zhϟf77y?>osD+kҎzs|:QP!IUڸWxCJ}c{5V80k؛ [ u_2?VVOMia4s *,yȉ C8RAFgC"krcyjj1{ ( I.|&i5=aaƞG1RCg h[BBP4/~T 0V*PC_<9XX|UQȗnFҜI7MXUIW3p#jqԨπ; Ed(H%8ѦQ4JOѸIH2ӌ>3u mɴC5 ߚgp0HH-*e/c]Q2ṮWTkRIקMdK5}?-y#(-u4w#E$291$5q63#E76` uP\4U?ܢ]*Ç{߀lɘ4%^ix$nRR5.@"όi&aآ)"fX%2BdZ8IU_2 ;2yJCU-Cj&vS)8~*>#tEIFny"HE;j/\v5#o7e* x:ĴrE /ߺχv29m_K~lLV̴_M/J!A5Nk+PX{ H_n!2qNl>PR@~Xb_j]DEN2D R¦,ƣ9b)/<%NqS ( Vi&>!lXB|1֘&+rZ^%`L _}4BUE_A]-.F7a9Jug,<%*գjгdi_!S 'q+0^>tO݀K0$)55Hmヘ'S2}e.ώW'Zξ[TRQ͙ӡЏ<>]Ά&Jܟ}2-Nҝ,<:]qW"U9XgBƓ"< K_͞Z5W!r>1)x>В)6۫|"6ǧHAȏs;J?樿铍vϧjf ߀낕6>ܺ}燿M0}(EnZt2K1? 
&_B KiH`%ߘn])L?ϥg#}yWh^:R4vr?U_߁6b}4t !Lt"lN&Ƭ6tSq~p;U=@ɘ N'㤵Ҥ라|NŇQ%J%^R}ɷe~@=8%V,YEM0p҈ * sJgc^jӐۉ2^qC9aA|>8U*{+@FQ# Q?`>z"WMX)5"PF$MUe1U*n͢9,cߪ/H&UziՊ*b{z5L'R2NЦZQB5`3J QRq- s7*lkg `\)(MI~YgM$Fkٙ%8 hFVh@4(CTw~b FQ^'k$<&]ȕsb+,癆ğLt?ԈqNn1efWwN@4q/؎Ͱr]JwfcGoSƕ$sLd,iġQ98cCz|!F*׾KEY>~]Ul3HxGfMF1{,6!fZKk~ߩHb8U{Ctl8 F#-\\,2w׭ꐫT/ߊgwZnMA[4yMIq=WᵋM>%M;bODI:fI[H~2gҮ+T̑3A a-W2:ކh#+RJ#"Ɗe86~ˀsK~:JM:Rb-Ic4y(P9́>,D8hUzŰv%<Q^z0SVז\Kr5a:Ss2˄A:Ek3/ف>!:Rܸx@,`亣It [h&XTÜ1߳FU562Fc-n|Ր+7֙?v Ǐp\_\W[i+PFwúq0e`Z).rcOWRޘޏ`0cG?|!mB*ڕxuc?f/aWct@3.y0{6yB!m9n8}L1vD 2?΁hPf E/8v' y}=轛N'ޮ˼ i^47`lU&6v]CMu8{.22<$JvBʵzNx " k7E ,nU)Zȝa\z&H6 tN$& RL;ɂ#jr|]"AAW~. xhϋ/>ШtVU4@Et5M.! h0xMg:"TT.^5/ʗ9SA#=yp%>vp*p:|%#p +&:+yل^aGR-m黀.HKJߵQd#$cwhF^O& **L2}&7U?ޥ_i{:MF})sefoMf6۲]6If97\yˬĉ5JD08Rt׌Ij8\9GK{?XpM^l bSԽ]uO "mvBz ygo~K~{и:frgH*qpnW7zgM|ᖈkUjU$f\hKQj5z172Pͅ%&. (h\W ]~tgpn3А5600"'m:& (/Ѭ0:\\34YN`p}JFl}T0R>tCd#rsF@@PjM@bEbқ}*tF^&G+g%WRm@5;%OeELk5p2$:>Rkl"8ӧut_jǻN?wI<!WțGwk!*OYq^:{"߮γA"d,Ҫ@M Z(Q[I$~<]:)I6_iz_](ysWAO߾fN B"I8KuVqPGŠ0HĈ*j,[$!bU1r5a+,o7WsӷW6uc`}{Q.Ԃ Y1q% nϽ)fn90mMN9Ȱj!YoIkqOFrOH7g B}g͑?+!`4_R N*ޅ/i̶܉599֡d}oe4}bbp? y+˘d0#l@v9 JM[d{L+r<{|[h4%SW'} :9  /;Ŀ6ݬNp:Kx44?-Me3mdx1ܚ˪MkXxQew0M͉-DcFpDuET JtK]? 
=?"M#ScIzGPTf&(uczw PKj4'Y:c'@SZXںN6]er ŅS -g~bwFC^\6ҭ_3?k"RNo\[Y!xݤޒHm17mr.~~W\<ԽuoB7{%{G__A 2.29}N)Q$g#YT Ƙ ŋ{'=7I_B"%Z :=Sϼt"?7/7/S/0d;S,ijqXF2Nԥ0fg !%rp]b6u_t9Xv\~=x[~*)ı_DW&oWb qJ,G=ۉՐ 㛡 R{Ndw±)o&WXIDz6_5,?x xj:# T?vK34# b9\)wKG!\R$CVl420K#8i]93[mi %HTF F_iaCv Xea\kP0由0HadtbT5d#{Q~HxP)&Tq㉥O1%ɦ䃜%w<U o&5ńV]!xCq6$6@M"}VK6dQ6ZHsXu-޺DYE,tS#"V> `U*QfF$m]վ&4i=fN{˖?L`\2"B7)A0\iVѺ1E=qMǷyR9V^̿|_x6j ' O -Hi IuCf&&0-ە-Xll Q1c"hh:?\!,b2\œ$ OR$'EAR,NU`BK3Ih Liwޚ1OQ:R6]n_8 /u[!_] ek6[KU8}<#)jVk=KASB0O9ʄS|ւ?B-&PK+Cϴ: sO>#^Ȼ(^e^v$CBpɗ0D.14̨JN1L͓3LGMG+@(ٮl Y%bB&rsF|#?p6F7_ NnAn5Jh~0~ )JhGQHX v|0&oF:U J3\kCYjDJ{^)O؏ݞn[/.*)Z:'Åę㗀mܧ7b5_|P" 91٨e:`!+3N_[lcis{kVk-kc{sYo,lS]o['Iu^S(ǜKyV1> b^mgą#XHY;AZ`e")kю* 8-=ɤоieE>?y,cׄÞtǒ@Qwʊ|ҼU lH, nS]`3L;ĒbF}+GT<¯ %M=X3U^ޏ ?a2W"#ę(Gix)@OmI%^ֺ#| HsTzqM׌08F'0Ɛ ;~ "}{93pGV#YVw$˧ρmw$ЌĽ/b%Nސ ~$',UTq 96HoO!l Pfma)zaT4tL7C˙D rSӔԤFa(8 KD+b\ 42"4ZKOΘb32A^W4CHBS{>g6DuUHFR +2 1 Ril#_!2X`H1;c;/<[Ī%n'H%RVᖨRVƑqg 2BA%^=mT6iȤׁ3R$9!jŐYhzbǠN(ђDd(01!0$OR?"8,~Tls;;[YjǷٙOº9ٙ*2l=P;&?7&~ʇΚڿv3S3rN $#_>ü ]Z XӃHs|QtQtb7nl<{yiOl k 0HgE1hW`Jn&/ߖ&=cܧ^CzvH<:6`HMa2&;Cގ:'̲$$9iqm"ulHTؗY8dy{`EguHӬA<{r$P^&l0hRJ ' JkD!]%99iBTn1؆ݥys߰z4928GBWnW2^(NvMI0$RN[A'{%`8߫] ,`h!xX#>xZx#j2rʯD aKmN>'%,1ÜDŽI$rJ 0EgA*?TܤGMҜYŋqE 7>4.[ɚ%S=ٸ4VG9{Fh#0_Rfp~Z2yl _L==/'|U%V% >_IY #F}6"yS j4D Ǥ3wpkę82V1f.2R&h fS@h&dꝦ8gH|dCRY t e,$##D#l"'I.g^ٔdKIgN+2 -0ɼ"/ٳr51YpXV,F&m3 .w)V<}gm74Sઁ$-{ W#W3zL6.*ˈd]%c]ohJrBKi\-5]2sFiлnQ%VCPw\8=4-l[ &qE!h$Rm8j֚uDK2,k%#}BJn42ۙѻvQԒkѾ2𢇟aIʨ=EJ{c:˄|ti;TE8C r('[';ꌰ;4lxٶVOUQQ(FhĹAqooي6d }  2s/@ȺL(m sɄy£G<΄:c g'lNyh1,ɏ)/ r[3}_+[3Z{Ѐ󌡗ZTHM_<h:aq~RK~L.x5ȉ72+rISiP9οÕL3M WzJ>""&Tgi=-* !KU,XR>V+h%:-=Y e!ܠvߖ+H)}k"Qe iv-Tk.V+K۾,1<<6_}(&w'0 w4M>9#%~i)23Yv}߱rV2QX4]DĢovvAG}~KQ02h‰d4FEj=!Q&*-H9w!p]ʶK_VgqfN\&WEu1ax5}C_rO>-2*p:2MD&'h:NG^bk9ZC@mC{@TgkO>l@m[k;U@)Уd1`rLk5S;JDf1ɑզclfUi~ҭpsw*vP=d^M)ɣL쎓}aaNIE#֙|qW`lkzM j UDi~1(jpJa~QŔ~i-^*(і`BͳCwX+S/sfyHBEg0gCZI@Dg4ϔ;i#mV;&l7.U>^E x#|ZKjOhI=& YP&iD;%51GFM7 y$-C*)$2ָ>55,T hk8adc)`0 ,fbdM78(ױ,pf ̴/YPV_W*AH-Di"ΣZAN@[Ls:LJɯ)"Z:ٚH6Z( AZe ]&Yy3.]Ҋ> 
8]PSˬV9\Bֶُ جɂ߷{فtcV'ӫċPWr+ylI 2ȑl$WsO.((3p 7Ɨd+T6qVf͵[ U{^V֮Ds6\8BP\U [?TS Uoj0S=}B'"Yfbl]1fO뷱](KvvHW8D5Tc[ˌn쪟Au^f+{6~闋q57_4}wqCp0*E|v9;^WxWAPM7v܌<<Wdp~% .k<s(gc lO`<*Ekpֱswc _Gfm: ZuGhzPV/"Uo췭2Fo\xyPN)LP r,dh7o+ZvAGrlF=Wp4F0c^H3*{ rf8C+<xU W6,"12ϖ{&_awdK"KEmZ gs#h#0O!'!U]ZHhFdpap)z`MkQUu+p+aEb*aA&uTfFC}pkEi*uiqU,oBÅK sآq*΍X 8*(CdKGe$nC5RZqY8jnL:YYFQV,m./ad mϕK8WwD-㭖x]i!V/dwĭ $`K*~U{HG^癹UGUVbdTxIM#r 3mrA'Sp알/;aaJS޴ПƗ?+pfXQQd/KJM֛9&#{ V-OX+tFY{V5 >5ή0O$L>3sF + @v%搼UB񔐠 d%$XݮػHr+Bx0e^bl0`>dTj4Z̿/Z%TJ3ȓ#u9jO`)T%LcN̝'!_JM[WGt< m\޾=k_ <A͎; 7qs=*?#hm>\TUcJ\Af- )Rh`(GyiN찃V I,h8FɳU6*0AJlctJ]g0x#Ѕ* =ꕓsG!׬Xeة8K!M|Yy3&xsmף`)asJݬ GMwW|6R0ISɑKT13F &fijΡ; UeV`J#2$),GBƎAӠZ(NOk:7xVt'XVN>pf eURy*B5v$+$wC ɉgϾ99ϖ4"_ +/XonG].$$k-8תѤqo#ԝ6ou{͘fl]dȯ Z<5`#X5L.WM6P[chmQڋeĀXU Fa @#L7 kYϦj3b(5gyTi<[QȤ ߭n.HF~{1G"G 5Ulgv޷~~? iE5,)dY&S$/e4I1P "F: i:kGh!)X0Q' dhpñ٠Z"OCuE4`Xm˻'W=Hp2_YjZ!29QJ]W%e+k!ZZ̉@1Jy8ՉKST2(1yP=YY %2s*s;P.h!8eHbu;VtY%dۨ͹m\TwgIXT;p=<&>`ز[f*+Q VU.6D/^YcGYa-UT&O$9=PɩsQPlTW 9/WA\*a]K֕'5v1Ƀ[h!8mhV{ܭF5% F@ns_kpݿ[xѫf_pŠd%sodߗgllBx۾uw\~?;ptwyf8p .~\v˓sꑃqr{ƹ5olun{m!nV:T.Zh֡G{A\=tLJi߰GVʋVu;6I?z"# ݬ.ke5a!`tO[D Gut>u%k%!&m`2d(T(ji08G+F+PK:25"L2qY~{nmލh0>pQ,9ÑKw, L#b]-Js]>F #Gy!.+f? Q0@uUfk<ֆk4␔UZ@7*=D l"Dl))gP*>rr`5:nG SfR2(?*DN i\j`5l^j.[h3,'*/Mvx޹wZ7%+w)VMVΑ[ ~-6Ar8"$2ro[XQ .ݎʤKDF6K{>ZʋrB𯅓Mu0Zts+>F 7g!v㛾L=|9 >ɗ\4hLP0¸:ϊۑDbo׼=a 7\_ S#HWdAr q vdӛwEՒX*\ Xz ҍcm4T+#WS2"ˌF^Uk?o3a-Qj{GE .ɼ@`sY.>RvƎ!A]B^;ȗym4TIpek-x۝xʜ2.8FKphkӯB4Zxe,]6wꚁ@R-=9YrCWkΪ* fF䵣V gG.˷)d%ڤ :]O7S$#930&ۄ3z[s"0ZFp@X (Qމ%YȽ j4P;qz1NN*~ԁ|Ͳdg)d1u:J)Sڗ 2]fBےm{]0.+khmR43oa8a<&ץ,8L\c'r!=[Yji# ʯ(A8f( ,I;WxeG!XjaJ9Z ` }#Ѐ"cнMZ?`Flk.cw.z._:_l`^wY$e Ci{̭mpx/|T#WL8lmNm,tFYw(ԍ>0RӱGWͅ 5PKf3lD2{ll1'xx#Ulle%bl2S9ʬ*;_MuDvYX<'B'.*pzKv0}aCg c>bpJڙB#9ٹi uS:dŤMWl' e5. 
?Xg5V޼bܘwzbF94/Iibo:zv:zc$]Sg΁R5S4 K8lY=CBl;ՉsH)jHIK&iDS.':a$az)Ğ2֨(<1?M]k`G089GO3.ei;ך-pg~4ٗZ-Hs\"s_}t CEi *nu\ I姵`NEeMO:;o1p:(ygbճXJ{NC(sa(p9h!keѲkދgf#nٙbcwp,Ŕ.m5zVL+KӇ48|T^N.q~˧ϟY@ZUV`e+4" IpYFiŧ_O]Ų\rl)9 NJ+,sR`cT8~2}0=ŸeS>14eb[,r}mOdkSa,@ KWH1>^San ]x,IKD:uSaqFQw%][y[B%nَ) E P@7Ata^;W._@!7wPM O\VA+xAPJDkNVvcDJ\A̠@6(@C)4@^0#w DJ+]8K\ 1 |pabkln.sYqٲV6˵ Du2GE`T+GX]H@ZJ׮`o34GL<[ec /Xon 8oρhjL_a{+?Z~syt[mZ_ <E" P+J7+0cg|PW@Z\*UaƗKKS2sr^!!(< kUGugQZGN]QP#PUjAmjXE׹)$Tlש8WX E ~{VWZaݝl߆B㷭x렚˵e'Z#uLͨPf=G\:8%&yLI3@1|~2=DZ(uTmuTm|e'(bRց]Vr ej+Y,Jl%+JRUTZxcC'Ur3С4! eB2Xևbc4SQL -v*.Z!MT%3^LC/k)L>}O>dLki@oxw^fL9`Ф04cQB.Lk=@Gvf>()x l@wBR7+iЄp"q iHiWJj[u #`*0mgM@B |:TpL+crx{/ #Uic0Y DuTmuTm|u(9֪U3WRPg)ä|0#Kdp-09i ӀiPb &g# Ɗ RQV^JV^խHU˚JE9\Hsutd*PԲ| #YG/S3\U-sB/R.Z ju/ a? ~"ݳ ۽^US#gz0:~ E+(?z sfèNi?u;`=S-y4LgNߦmz(%upj;!YصYO/Sf8>dgR<t#tHY<܂EU2ۭV8dۅ6^C BKD5J’E>C#2}UD4R!*")@#b Lޅ$[tnz;g.B?jcy >:y<1 \ o3<\0\)!ЁL^aܑ܆ԋo*J ߥβhgx x;,Q~M@2r H 8Y]lFﺃM`Z|Jo6x73Y|eno6ntfKmoiBU$*BӞލ@WY8h XkJ;H=AǸ̇Q¡p,Cep2aUnQ.!^L4%LuA:ZfɳEhD@SN3M,F8"!V e QJesZդ| g6Y3mBpYd[p%Q;z(A{W(:NC pUT{ϨĖ_F V>co61S[tQ͛7y&CPG~ײf9oADqByD  NiǵilU0`:3h"04U;Bu-%~u1rLr:Xȡ9f,m36͠S10ZĻb`Ѽ ؜!e\`r&mEl,;DV dǑ!@P'iE]D(ɘq`x~a$a9L7/_8F ҥiOf0#{(Vʂ `r VuUKH,PQP⨽EPI>pJ X2-![LE=++mX <ݢ-:Yڢ-:YqNVRIòq %Nxi0\~,3)As0est00pPvrz#\U`{C<Öh,xx%ǀ5᮰`w ehWzso4 O0enyG^S}eO\rh>+%q"y\nw4+r#-xrDj>4LJӵE$MOTnQ (4G=HRq0rdμ}L9uZ6LF.80Y20I|hB]+b"oVknb$?(nUQ, v*ORfcV>x-B^X0jjV?hӛBD\RN5E[QDhs+ƢY.F`}q-+цRµ붸Pn?=Xb5vOyq\~OvXhv0AuV 5ckbIŰX9^{$ˠG *@eԽRuw!> ~{ʒʕ[\2RؿF֮2#ƚKUP}B+Gp\K@,E"B_ [tPDqـ(P܆6 Ž;J[)nܫHq!SXPT :P%u0SW9:kWkvZta,%jG1&"6k$g8q[(Q64ḏ~86 m8qֈƨDTR ܲ[Ic̋U8\l(Δ@Y@=e#-nsnIJAe]R z S*w\*]kr71A[@G!5Y`<]8o|Z NP.ZMQJdr=>$61Qwnx m(nCq{ww^8zPd"L̐M9ol`0 8sw/ Jh{Afh 1ç5a5V$RIԆD$c*` \EN<;T4͒-gTr=L!6Q}OCkZچ63Lɭa5si}}=%TV1e)aF1Pg] 7jJ#H/ JzQ%sKa9G}@_O,j!g]wCsaXHy6?{ǭ"nNZK]H~0gl6sW[,)8wHQ+ E3fU#dE.Q zWT_3gJsG%Ю6#ҎHш#ҎH;"툴7s_ |ֈn"..Z!]B"Qy3K8x{wfK']lkZo=*_G:-@4 5:nYف[-mG6>p.ǟRnI Z\7;m6twS]_6{]sSG>4wp 3ˠ2џ}ˋ @ۄ?HѫUrT74*)Rѫ|\bX9K٭b@{Zl3ÀOi0qkqpaZ>_ܝ{_vCzs; ( u5vmG:b( 
\H;:~;/!^FԌach߲k@-Ҟ8h+ r.mA+C%&Tl| s"$j */`>{k̈#>%fw2nvkGANzU9swk[I9hk:]hG ֱ3wF?.s`=A`/tws20TT&UUcDWTE`2,ivI%NIdAp/g]gD?H?#3H?7G? ; @VEwr8u+wʺl|1._H=VOe<@Cܽh?k'-뢺k={#2~p|ف5դA*'](B` VS!TkSehm3Z+YXaq)W8^1Wx8^qsI/T wퟬ5>btFہpMD 1V< qqqK#\oZNu4 ;,_*Alr`XިGN۞+u̱&ߘ*ʘk2#>%֚GvhGA(;qUHٹ MLp cq";;B5%dw GR>j qhHN>]nswNIйc @mk.VV Qs-1&Ƭb?~av=]:gn!9P蟵Xf.Ex.Ʊqb=Ϋ^KxcpE%w>΋JۅtgKO`E~=C|=[T[eG:T{HW!z88!LE7] ;FJs.ɒZyUKQ\MencN18sM"W )}?ejG}JP}LFvjG9(;"=]/FW vŘ܁NSPgi GXhw|&l9  b_SUXLfV4RVkZXQUfֶwf hC}9&)Z n[H# >%8ݑGIp$oy ҿYK;eȝLK`C0NN TsSG*QaLA bp 2?{a{hF(ĸ3}kKҺ<-DV٤Ig>تS1drr&DB} f%SkO87v3]FC o'xwG3=1 ٙVo`o[tO}=SWBK?=ŏ r1N>pZK){R2 ~qzG<?%Gx,R7r!iڗo8%/JW>}8~Is/9B#-vflg8~g8E⛈rNJ-9?[};_zJ[|1ɨoY_Y, 7ȏemy+Uq@U*6'ݑ4_ix{yXG餷2 ?4{X^%7]2m~lCx𻃃ÍO/67NM)޳DcRMvqfWh3;G;w8΃;КYVbuc.V~4L:F`d(GS, ULƥ$ά3^yg3{F%ZK]:8x91Evf4) `I讘 kyPi*#Edղdk l0 4JmFlD4x+f)k/*P=tCQ؜+Ph54t Mh2$nV+ (N5 Տ9ОGtALJn&f]E J桂dKBC- xjT9CT9>ز}A^\+:& ^ZTx 4]*m:-\"1nVNm҇tKJZ29ȁVE"$8SV;\)h-2^Q눙___k@軽|]RU.h.+ąJRXDxjYG{GLCk(^%h4{ pWh^"JرB]Y1R$hel,EW@l8A)LM`,l4a jZ%VWЊef?L̒h\Gq2v$Z@WHRX10W 6yAMBd"}ui=s#}Բ >|aE~"|53{r^YVZ$FTHQ^ctRGEzkpA#H*bYBx>0U 胓Zx:A;$K$ЍZH^Ikzǵa㼁EdMI^`pKQ:s &ceƉZ J DeօD#$նqr9s/,j^X,x'a?zz$wx:Zŗ)䤝f9nvj@5 { 6tr__zi,OvFJ|k];|iNmF9LO4wgJY?ƻghB&ovw&olIuPoyis>\5zʋ0$kOM's/_Լejd+ o'(K%"Nbwe\G%9%wrRWD.,BLAm,m{/)%WҜ]>$6NQk ɥ7#Q2"[@,mEU}OP~a JȢ eJIb)khANz_4(?^K w;aLa;TW0-H#۟+NERm۲ް$_@XtmǾ"﹭Ӌg\A|) aIvradf긔!+O^p_\)TBq$jO wAh KlN5O,f>,i`XҖ2l1\[X.9 Ka+gmI4[r \R[V?K84R |ʘH Z"@)Ah\,2̼=uʷ-Z)["%t3&h֒ 20[ӌWnu/8^;Xz*a #1䅺Z!xQ3^T0\Wْ[ZjN\PPqIٻn$W~BHi`ػX`'L63Q[n{U/n#N0["YUbBqIBmHkIi3 yø|l4-}!$P; JqGgPK/  ]ra.2 űSjv :Rv!w}qk`A!3ՓIBlq(!4y\[~-s=}Ң;vh)(|UA;C(́#Nsy )\o,iIn{ )"5[$E݅&`0 -\2U HA =$[6v {@qf<7DGPD|{de3Qv"13vRqjo^΢`vԢ3SFΑVV oarQYoqT@)~?n{|Xm#=>["BjHɶIw(;;@v(0e=4GN\HE6\r\tߺaeʡs9$]84f U#iHqk-oJC#]cg GE#랏ҍݲ> IWɱթ޺+F,èu];%+ͩS*򸋠ȪURY"di$)SRPV84f8 x=&r*/_iZ*#:ANFbBo!Pɨ=T>BpcwdwhnR8+3=aZt3dJ@Tn8SʭkΔMCOZv0g ']7xjAZIި`7={_qjo}ñh&šu,RPᬝ:mtϰY}%eCСI }-dt韬)o1Y%s'iև R}Ԙu\zʚNi "'DBf=r;eDYYH$m "iI>* Ɠxiy}CQ$9˲1~:^=jGmy77XD޼ EPQ6JZDeRhr~vxځCr -Òtqm&rr1ݘd 
8G$j(4JI'C^B0K3\8 }ܭX9$AR5Vܵ-i#w$'Lc=W(N@H;Fqmwaiyx~O5J={Bd^~ Rb}S屷z<野 # Bub";Ddָ*"@6E2gV6' N_r\.HRoIu9r-g:_qG|BI)Y;t7dt;g '[DS@/d!o8 R5D dݣKu|&KҢRH6fܐ{ RLکCހ_ϗ.qg^oYȕ#((vyEPQJa8G@7;$dE*HsEHz{{7ȹaРIK{CNHs2&Q$0 RG aS %%=ԕLQ)Vm+4Xk%&mR4P5YK A[MuǝLF83\$Vu3f Q3'82w,(n6 x )C* ~`{Ф?s ێ,7~>JcR3fjki+0H^(%q,:tza-9fhPqwpPk4#@cPY'w>{}It\\gIo$9`\3R=*1įQ &z48ۦHҰ9=[FPF;j]t,0EFqK8W)1цsسPg Yn>]dYRsiGF@v)iӘ47)x7 }+Mc-C..4ٯ| 4BWlN:ܖ3ݖDnt23E&n ,Z œ)gFVFc;Jk@+` |2=-iz+}${!"DUȂ}AQ} RE+]~vČcl~P;Դ)^*w~{L`a2B8 -o,176S 2qzp U8::I4M,ϷyqmI$w'e}R0r/J?F/ $*]]Z͜_/-M{\~so^ӷgF$͏~x\g'׷pV8˽T8IW+'8`Yu9ok^|WV߿9zK*'OڅpI댑XXOx4,hڍ6i~@}ha.vljK JƩ Q*Dr))4>G+8Il9s PfrUFZcz'RLwDt-ɟ]DX8;uVgJu~՘Y.weܛ-f?l4>YҶIa.W,{qvؽVkwӾX@K;<PR#()t9ۜSNWѬZ"xogQKcN7QwuFRYqN3՟@*߮y+$GE+Y39Ƕw`gm$ sl0Akegxr;eǒɓߍџ 0'![4SfY*vÁ*_tN~ۏ^:N[F%z> XQa4#j)hޟsOnyN w/,3 K•'>2V%Έ 6^q Q)(x]qMA6)jd?n B^k(Ӈ*-FVI5cᣠ=OV"Bf_zH8eڒ} Dc>4X}n#r+EwN2T tfU<94/]HFdmUUEEPt"A Y5|"A EO@6@r^c i %%}Z-JYOV|1;1"J7NgW8N?[pTa< aуQf7ǫյY},F1lLt!]kH2Qs[ig|=La#_9wͯ(:>"rx=yL9sw{*¼f{KFD+գoGDzIpusSV[Fx|XRQGm &**tct:Dm +q`rQԗŎwZ9P[9)OZ9-"КM9뫗p>v~85>Cu%}/9R#(C Ψ/P](CAJErLHHyp  ȇ;[D̽$CvTci'ho͜kXo̹s >sb kİEh9:]pM>#(A3=rH|R~>Vr/VkF^O:C4$PU!1$q0|XcSTo.YG\!ͱ@@pXX2m1\U388Ģưάڴ.ԙ%m´ L9cUug;ƴT_i 9֌ c*(t=.zys=w= f޾/܁>w~O?n̵o˫֯?z~}}~aQggg>8x곬֗\T`,*}ܼ^?kW:b%R#8uRp' $j[/QG? (aU4ؼdW,Wn{>m]!&a.V!g/f{%9 sgvģuԀm#,$'3і{Zq1IbU ȼ&pUuO*2n ՉwTU64fnقn6ܹӊ*OM'b)hT%XmS֐֧[F!OB]@~&P^AdNFKa9EOT~d"INW+U #ڴ |C(̬z"j2M6QN=N ӘC# )~zq=P{ƢS.@$a%ph2Cq3y>Pʡvd4 BMsN8JM9bܩV#Ys&ʪP;W\O\=| W۫O7*~;nmr9\'I^+:_ \`r.o?]gXkfq,ɸgb\jP EaWfRCF9?-ͷ2LrEX\jOzꙨkY y~*}0SJ>ufU<ߢ]+QdQp,N/i#1H\r9bV!p=[^_  )-GNkG}Opf[vVֶ&RwjI{J%9/>\^X:*Ft,(&ܓPXmZ`HB^k+(5WPWn$H0U&I$;GŒaqP! ?%}]G=Hspm 6;^v|s! CQ-I\o< \[Y+7BtU&'<4YukGA&Ck_hmBW Q !qʴhPH@Ju /T s7 C$23R9MA?|o󰓮'bOD^CW%v^R>[;{F,tS:Ya3pEG=5 E#TLHJsZf>49! D=Ҕ9-|K W:j, 4<(Q KKKUi`xncD#_\Ӣw50˻юgс3t"8ut~(iSҥL3L>Br` e) 㷞x29yp?e0[):Rh8_˂ (U %$ցn0W;zKCq"ոE GGslO>U|) T9leC#a=xqϤF&znίWmt/ FmW6nʥo!Uw hhYvNeh ܾͷVѨ:"H,Phh )fgm2Z3R:|<4B{$^-oJ&Fx|xj>Px;?zThnZZ RRpYZ+ mqTGLzٸSpo::[0*B! 
G>uL2!?p$vҵf{t߳08IKw8+: a!qB %h\˺Q pniE5:}=^Z)\!NcYSV@|u_,W]u(gbTgD9! 9H1 4Vdֆp>nZ>FZ5p\ CPy:v ߇a4 *YinK1>IQ*-ֆj^ Co%gF4%bwfw:"اaֵTQw @P[h4q4}О NɃ&+ ambNQ<4C\PP)cSK2H"qē2qv@͑fT"'kImS-Ef[VtwI ]ѱd_싦WiRCenny~Fv?kŗ;{[n"ANݤX0}oPyn}pШ?}ZMϯ M!+5OΝ 7+ ];`Eۖ>a펼[}+|9\JTBMM~= }NéV*a<=v)Ș@wʂ2Z<C&p &9{vLfu؊Ŗ WM2fn3#h"ԠWEv3a|<;^:RQ?iӅ E ΙJ(Q"C3&' [oA5ͬPE?yo 7~}“?ruH}D^|{WҘR@һԇTbP"wſ~`7LRv |}KwDt ]*EP2b,X BC*d g# mEp7-Bձϐá ?{H ˇ-M0!h FV.Z0E\m #kI)AWc܌1ް Z]ͩB͍tJMlj=o]J{BrR{m!?cJB]`>zk,5 `Յ䆊Q}UqQp^=%sԯu ʡm){IXdcKBF^>y⧌CRӒu \W>TKN(Bsi/{)Z) '&>Nnd:,bFS%t̕FFjBX۱hRL2 D$ YB "IRe(q)$D;"hf O Gz[kΜb^ q MMB W) Jd$d391 hgX̍-*XE co2O2mlr~RMffg@j Ʊ(dʩDRϼ\g`έrVv AM߼;-obB"ѯs_-+,m|t3f&z{13]q=|-n/+%1~ ^W}~Z>O7R_bw-D.|8.\pB>b_6r͒SUPQ&VZ8Jw[w !o(^r[$fH[ T<$t10O=<ޙrGM[n5d14!Gbql+p7Rbq"t2sCL3.H*I촠Zk*8Qq;dNJ-HUK Y JԿMAHBݛ~wVvi/-tpD1&Gg\|#F^8!/ 2WBI}!-'X Ooi3Ὰ+TǬhNP$8.2Xi%fi얘edi:|G#)4+q`{+,住uRkP8{GoqIvSbIe&fҹ8X"L1,C)p*` tbQ.S3fJUY".ƺ"c)4*\D̥W\$w5AXQqX=h>H!ȑ0(mm7d3s% 9bܱ\K'藾3/zR<u'ZDKs+3EYN52_y gR!ɱ4 rf5^#N@a:ֈHn:o 2,IU4L'c &@G8ŀ^z1Nw~e>.km `;i6 &+3Z>̺$9:3^$DMjD%,u"9hJǏ%R1F1C dN'jfw\/ B|쾡MX:b+!At(Flt{SjX.CX߱D I[(E%H_G,Z<+iRz26m Nh 8u/r^E(Ū{]=.zcҮ>U}Odv&ݯn\vSQ^oւL-d̿$p.W5_jI^W_B:9*Α>^z-RJ9!Sj֪`%HHx7aJNrAd'`J2WqC<koRBT:ge'?5["AeVD# 6DXV9di!-)"-fTqC-*>7߸qCģ-q;gIԛr ZjHX §7r,57 sE ֢GHuen5^TSK2#( `ZJdtQQ8@P7{MMonpl!9L0r$Duo.cـk07^7RRFmݽj&x_w}(+D{k +֏8GQ_ny'Dʹ@:nOgy{ԟ#;ٽi'Lט^~w,G2HWoYYW!לMٴksg\hej[ˀiu:[ϗ_n;2.R0ﻯn(86EB&0DqTp#3Ch˟} ^a|* %yf^~\MHؿh8%mJ:\?G~Xއ7< `%qu+XhK(vFg{xȊb&DvʓN%O.0K[&hR>ZW\>mkCvOS{E* >\ $)HwRm8N|k9xlv)fǖ[AI$%XWݸj-ZܑGόΩ,?Kxllp5k s,Q ǒƉY9T9$Đɚ=Θ'x}IG?k\bP;V16]KNBd@3^,^l:WF`3ըPdƢ/V<{}|]T'&`7v<;9 }Q_ySցas3!g $) މ0i e q >#]PD}$D0QmS*aA":+˨IÄ0om5<)ribFԕ/ 3pa(5pH_{dAvLzZFzTWX3D&K;}^8"U07@$η/c_p6WKߤ?yFwxG|IG|I9lbR8#E2bΆ`v(fz. 
`d-1pA--/<о~d^#, z(KO1J/} .Է~|L9*uS' `Ǫ8i|K1gk:xK1'2X vW}͏ye]񎓆\{}Isn{駌x{ӋҚ쳴YZ:K{$:x7T69M=4t 5c ,,ܡa`*""6jxnс/yw/Lu;k㐅v;Nr(B#}m!j<,W|zW6_•%,w 0+$ĻkԘLԤ|_dx_9BP.%hSs5F Px>KF :989_/YyXtN1@|t@+Ҥ;YhW!Wå2}_ggiw:bXCSq\sQi p,lj0YO\km?;=831|5%`>G+]\')8Vem est9zpg?sKxdP#Ox]3+ǢT+oڋ ;Ip,ՉS5fNhV5hkd@;xyw*Fb)*Q0ؓ-dPKYg:Z'& C2bG#qjq$8raWmCVtTk;6DXJ>+M7# !fM(S#lSA@-0Ŭ Q)@kSs`ٵ~M5ڡQ %>rDlGVJiq $l|0UJgȀh18eRzvm&$= 3Π(Dx>oSY_{Wm@_.>\"`ٛ 8bHņ@IiFy ߯3IVxwŧdYUuzPQ3; w2,?̰@x,IhNe!аQ ʗi}z7w:I~^A@@]'nt<.F[tw*i1 #  c˂-?_bTU7+0firkvg%C[77Οn,b[ 7횬bX}8_W9FSMC]4\E7=PᄑA<3/ډu}cfV 7gkO /]&t|X*}1PP [ޭ[FYl/6 )1-`yeQVd58;|F4٠*ӉHԉ(8y'c΃*(es;֘jm&+I)!IìtD^z3s|)!k-,(<)csQClfdC$YG̍Gdp d{a9N ba֧xH#hd`YbA{,mzN[×R!ȞPf=M%nS{t =pumwdNd?'}I.] /{MfeI˂Anbf`L{5KM4t)o$nJ`~_,[oL[PQZy**S@C9߂2ODQ)j}<PзuQv?4*'S@C9"ުZ7DSE :#꒭Q֣+uit_QCUR'TQٰ뙃KG3Rޚײ1mV@0ڱ79E6퉕pm,fB7|L~%|׬ή&(|0ቢhr‡6GT'%tFw XtϞv28|v~vT\ J9][@f:G>.\eJ }ᒝWY/7O~Z߿?x nB@bXTqq1,#9R r{*e^|ȼuJ$O+_ЕO]Ue`e,}43,V BC*N"Hes-s#{r>T8C!gj1 6 0|b`@Qm3̙_d@~YW1wF{db0L@,(#sL",wcH 6ՄdW GQ9@#sZZqNS =?`gࣘ(!I VQ5. w,\q`5!FpA|8uR Ɣ!n5(׾6BJfeP|ABHq'+w?| $`Eλ7(pO uJt `q=s;Ccn:l Pk`ZWN |@ IAe +3=vaۊ97:5&OfPQzj<C[>37{[FaaTjx'يUlyi{a*fh. *4Mi.;t TA[TVS8Jꭄ c !͕͕xÄ уDz*Ra-N4^:xhL)`{#xJ!LS{2E5"%'=rdb%tTk ^67&!9{<8ws6x/Q:pN5Jn4fC*c"{D ׋7BXl=;lw?_ rBQg̯NK0՗,-`.wR:zuGLj-Q,:1{ -']mD=>F!]& ރZQbUwo,U2bnoZEcY+59K]$! 
KZ8!aIz V%FfԮ5Zç;= T<h~T Ov B+nFO0zj{s:EI{=<Ubwxsr;h^$) mEOnJ~gmЋ._^9lp \C%w*=x |UoVRED(@)$0],b vX~7(RDWW21%(kڲLZ&h,GJz(lIuя@1͓#凍р~Ɣ (#̆PN["͸ t)Ze-61|xL!piIc;fL ^ \o1G|s,Ѷ<=2IK}SB6h!%dQp?]uj2o"d.=ʾypTL̸7g翘̞o>}3J7߮?xd*KYeNq:]tYN;T;NAr欓" 1+7NdyՌiĩRF(AY~`Ï4[=UR?/SҔF:{OH [+:jkCg0)e%x]h5}x"14;w_Z cXs CK+b ͝Ypp+iZ-O5qc&AF#r=2(rK޻GBV #rfdV"e#SMQ9 h9ͽ0#UH`,rDxqJ"Z+/`%XX9(imMH :JdniD%ݺ ŅdJn0RB) {ZNmy"}x*il&#Nv+(=Rδ%&r ^00hSrѕʜL@zuޟ.p,D:?w2tǖx̕`>@h' ;D*ki\+PCΥ\IjZN#FrLj\ɒ`j0{Z$H򬔻{q_jj5W `\J0逌ǚ;3 VbDg6&:n9 9ro.M5@j^Tn%펻1IJcPڵx0WSjh*-m-qCe噴 sk!f0c3 nL0-IBY [pzf%4-Gݳ]IB)G JqXL@(l{0#N&b")+*HO8,UZa!`9~3*9Rf.I-gnKqX4FZܻ{\s?0P7(o͟c N䩏RMR/~Y,p:ӡN}'XֽlQ(cQ0qÚQ,;}ZW~UR% MMqjnMR3 (s617%eΎ1q"(6!MPlh~EzJJ #)(AxSȩeWU93&z3gX忤s'1f9#2E9q.5!>=F_dj(r|6 ̚JO-g{ {é>k1Z"Qa}4@jY!I QjGΪ U`!_6)P_$#5rRF}:"C#T(V]]Rmtp˱cCRN{5?E`RBTV* %h\[c *skZ H*j3kImz7{ީV`s% \tI8rSLN\Ct IN1ao9&t l9uRY:C!!9m dDv%&]ZFh*yF e2J)-J2k=* ^#[|Z@Spj-Y2.ǽi$W|uX#Te'IR&I㇙$U H%|4obƗ|Q"-<2 ރ6#T?;𭿼&_UoON]օoj&-2 N*UVN5$0EIC5&Twfh\jOusu[AJjgu T<ƪ2zo׸SpळN) \tf蚹>paʼ$vym'+I."bf`pɽ lN~Ib-Dmp.Y<,ZD U߯Ì_cu?O?p w rK}ÃdIb[KO o#sǼ&y>`fuէWm}uqDnEqzuuϓS+`R RI2Q\DscD~a%fy %vϝ0Cq !%H2M 8׹4VB1\#h2 r\˵V@!\Rgk w1AWD\2ZRp`Z}NfkuvK~]:Aw«=@ x.%q_v&߬znj}w] -wjŭ]!毱@gcb67`:**f +ٷ+o e2#fX^#ۓ/Lt k֫U^fԤN%{}1{2ok&.u%_l<.B;Irmz3Bf; d.d>/\?xVԲ)uIů.>g8B^;L/n0ܤ IYf]ak(穦Hn C^$\}.gYd߻'8Y?|ct3%m9ްE@%ơP ekCa@zS~LVB(eˏBW .(^a(HEC. %R~ѷ oN.]R*!sa!ӜgƤC*MN91v3ULd2rlCQc_hy4i )MU3 .B(n Q-sd),UQ R-$͈Z(&y*P(IJJ "7nk˼Zye^B˼ɾ2S&Zu߄k]P/3I6byEr/>s.\W6Rjؕ FeZ}NtYǸ^LOVjf~ E~IJ[#8Mov[ϻZ&=?; q8K32!\ik( k 5ȥr>:'X10fk^?I M" ;6}jJ5 ~~_bif\ Ma"jWL! sgUC 8\8g0vsbi3]~~ͱ]|sN~Z.~:E?|B4qhO3|8qZt!8Eگ:g._AY Ni3+BQM^" 9 @bew>W T+7xXT𠠊P@vb}9dy#~Uj4Aw4hEA%@pl9q9֢T FM%%X fqi1FjC_ESjmӎ}}->-K]OBD;[גT! 
Bv e|>+H7M"CƙK%e+{` w=1 /{_-X @YOh]Ɛ!B4i\Uag Ϲ.UlzWRU-.XZ6 [ IlwH /Mɢ=0𳓴 ʲ+4fw~&֓_L']<9h$m5F)_P}9@R,ی'"tlG03ivUimtL㴳Ue=!F&wgZɛ\B !âV.Ik2Qy5~5_Koڰ[6䊬Dna իkvsEw_L۟M"떎,1pknEl˼ٽ2gإ*VSpY[fV] \ӟ/7 'xǗfQkHC\Ect A9x 0CnNlU[n;Spg-7к5!GQ:]6g|kX BT'*֭?>)3hݚА#WOPO7` k8 &Ƹ6qJ-6)x!Gu`(%xV`k#MY7뭡@!tӥXVaiI|0^)ߝE*?uez.DuLMkD^ _:&}e$]SE #L{ᥞ[JEK(D/4T}ɐL֖;1Ĭb"k6^8a=YӾij^aF]"rfթubJc<^c$,JW$e/wK +|q\2J XLeDœ&vEe)Gvyl!/ܚe{7x ȷ),L呴_@ռ@߈,ҒO9!Rk")yNjAAo1L9AǍFf:ߗ6EYZh> [pZxak{Kn{s߸֔wFh8{4hoہhcRr:ouڢ+;U1bon ڂ WznV81Iw~pU|eίޝ}Jg73w|v ΁W{(<EMrݮL]~.BJH^k 2F-)aXseujSCnxʔA@R'"D BL(jb@\S D*Ng2mV_MEXPe,GD H\3R٘ɁtRHc:w5SRAD;uNZUVW2?O?c Ugrff2x`:YpRkOtTm/ȝ`vήhM|Bol251Oύ"n ~Mz(T:'kO-ӯI 0H%vJ"v&2Q\DscD~arf94w`zJ AC$_m H.RAH }7vG.lExZ8Pa+ dB6jKT:! 20 A&"2U:#6OvbjB޾uW҈\ Άsms!8IRS~Dx Ls 4A4a,d<'v ~(%P%`AH&87\mlVv{5fZ,=5 `(!9A`Mxoc4@:O1&Zм{Q]Vϴz%m_;ǫzL)#v [pc{q: &0A(zd~|33qh) .v~c+-;x}_ )ܟ`Q#Y ^|f 8" 56=z'QAj#S&TT&jch ؼcxncŭkfW󘥎$zGIeGs9 7vlmx\ A!=h&](5#fz+Lַe pPY}f#D3<#L@  vqBz0a/n0 4-Iikƽd^Ҳ(?s,97]v\/ZEGї{ ` ǵ58k uNX2&p%'VJDb~ JI/W?~}>1 0MϾ_ߝ(!C`* $4 ZP&R䛓`5#~K |~bɩ:2nPH`+CH9 P{b$X,)g"|sIa\N͵6?&&?I/.\ W}U=AO&r)(:Kj5Z;aede[QR9RN~q@u>y?Z̯2;D&E& O2g&hڞ͋6'+5t=\gэVZA?"Č4|2=:E^L %)FtSփP.yˉߗR^IfRM]=V#eHJO  PZQPA20Alarn]&RBK e "&FtoLcy{fڸJ@ { IAA0c~`C6"+|̞.CT! RB6#P[A =hj1ĀI(xgIT #EqZ9-$%O ~ҋvnc93K@wm=n8P}jHa$9,yqdo! T!9ܝ oZ{>hx H 4fjA!ء;H,pL}ܼ"7{.{H,$M4y\sCaT :h%8NAx 3]_)[˳_A%}\jp.r"gjIAלZA h3V H׋UP:B0# Xd$Fh%0Cq6/;FdL8 E"pB-*䑲.\qEiB@5$ …:a9e%|J1=WM-#2& uS„1DPpjNYo ST[NZM<2N;1 *k `2)JһdRB٥!j ' 苫8.lN"3|+tZI;&u]3ZzNͩDͤ+G%w{4_?@Ta v|mU2>]r~✕d jx {rWAƏ~]gin4hE)џ7x˫3tƓ\^]^"Ig)-?B^߹cі! 
ŷW H]mg3W.pb}AzLʡfmY30\ѨV=j*,\ٍl ,O *驊48OLAz\h&~.A`&j 5x%۱C}ЙA4$0 ٨R -H7jOWR&j̒ᜦVi åX%n \1QLTϝL l FF1DNa ߚЂVx^G_l~Mt_7cbfVG"nt:F\:!#Aп{T1y4}u9<')GU:`ȥ;a72M"Í/ P >iRQT1آLpED+Ak`(9~bK+KQRڛSFqpcQfD` 9mpȴ-6z^~bnۣ=/O#ĞZ1$?C 8,BwDZ[Q6Qq?N\Ln&jA83UӲpևbh٨(Jf(x23h*jPFaT\jȆhS]0Cq(hMGST*q8%@јIP$hH\6(=)͙1ڔ+N7{T ʸLYJ#uRij:__-yOC\lx}O1*<nA\q/iChē?Ah /ю_~]_/~Owhf~wMk|ЏWI`8x{yat=x ga~wd* d,Clc{gf=q^9*O2y2Wr!Χa\6 s[Ƿv[7p  ~KS$EEX 8"I i5̹^B[kWJ\7'Cȸ4\P! J:'+Ji9JGxB-:[!٘M=:dwdyx^ ߽=:o: FK,"hjuSGZ+A^*1u1%iFSL---W+\ZKk9Rfkr ]A% (W;Gb`Hz٩<Kct~V?U77/_lo˩s=J]EǗd}}PH!6U%Z01kCBD]s 5Qq}r )VǓ|S0V' 5)r&SQ*,7Yސb;7>߅Wwrk]T;ySjkXI Ԩ6&QuhpoIZ=u C P|NKȈJ[Wu^1:+]&0tjf[f`& m_}[!|ֶx 8wp- =J(IRZōLֳH3$0tO3H˲WWfؕ]^ȚRdMk2QYicZ\u%*8,TR5]'i"&93 s)CemUඦ97`#loLmhЋ O6mzLӝAN“xu;RѸ³!Pev~a 6f?fNnɷcԁR"GJݺpHuxmLt%zsKQãʹƒ+yn04#)#z~ 9x|]ND o7.‹g/|Oe:G8Dz$/S[MllO\}#(cdH3!Qfj.+{Q\%YnBA<|.~_sqh.{{3\=ZEw,̃\Wh2=O\4O7%(cp3jњ`9>zVe8,=;(emʂ~=I7.]dJYnr9[W Nwn{$ B޴[ICև|"I*껭S-9vוP{v~jlmB+)J7!kǣ%n7d8qBQhFREDR4='i0ty$]<0j]u&`&ǎG|s ^Ns6DާEh]*ԜJ=f)'S=u4 "d}V'OɮYw,@V3]ZS*S-UY]z.N֟M4#l]){Y[B-; \#h>9n5^Z\{1c̼xumQnƷ˟3V˒[;x~W%rY{r)!JYnּ(]TR??dnpa22?79zIz,'#Y8Ƅ8I_Wy]]uuEu]GMOTѥT&. V@ؠtVx0˜:Gϻ/or L-7yNn%IǸH/^r_hkڤnWVO20:TT{&jqh/<0 OM6&&k)8b+%Kϧ]R("B# "`9'Ix*rKXR\2m!Fm)%q=xkVBڼUh{"J3\%U9Cg8 jPZ Pkr)ۆd xt*}y^pS(^P55;jP܁`ê GwwRsYS5jMo&fU}P8v=܉p ;!fr[u.+]PWJSnh7z {>֐>6D˩ܩ^>k{r'.CCx!Xaτcǽ&akOۇV Wm8W-:e6ȆwSc.j2.8{Dm\~*S@מMm,jrgiC,4;"oA%g{*zv^8`(q,,߂+-zUU[L#:Rfhewhu9#R(~Xv +}Q_#7\O?]nR't #2uT OXr"SK%sƘBP pFjm85 jp)/TFZ(:܀@xXFd$)kHBݘk)i9.y*th6OiNO<~mXak:r:;7~s7_>N-%/KZ??O Ksm]oGW}ۥG^`%܇u RגaR=3ÑM#p,q8]zu՟?t O檼. _2z|X? 5Nz4eY]!u5ۤo ;j$g@'E%Q BS m krA EAt%قq:c0+P70VT!1tʿ70x |y-l,N`N_D%"*U% *H0BaSY")( 1,AxyC,g|޽ @T7JFN[`z6}M${Y qg9K3BHf9ŞZAsm bQ!/ޒ"(l+:6MȡZ4:#NRvSƿsĚhC$01RrX+` Ub$ ym 6;H5;GGʞm@&US>YoXd2Z6N5G}B S5_Y4vZbBu\ScĐS7d%bJQD-&)\FҒV#E@rsGH%=\Uij GZțZ>~>8ǤUN9tLY{\[η4 BEz\,\q>_>s[`J֑soˢ wM1 +8$?ү)` WD\,HyC]kV~ؼ X]HQ8Ǥ)昗E\DLFF'ockpgw4{?\/ivn^Ϟ]/urKU(#fjASSi`x*!/)GqlA҂Zf ZF,z-Ú}wAaw̸XZ)fœ;xU5-+]I6iTjA&=OY.9˸Pm3)sp*BTSa +njzi+m'+(! 
g1j%vHjHAčDcoXJ0o ٌDJA2G*T8(5@=fX8$.aA*`zR\* $, [P' * $sǫ`OKFILXdR\Oo-zcF\Q'js{Gu33kޒDL%DHΆqwXj9Gznct=aV8wQ6 0t7 ˒vb˲HFg6vUZ^gDȲ҂{,4gKڵ ~nff旑 <.*ӻ&,mHj'_|u蝟LogtQ˘s73#Ov XteD{4/}{J*0Vbgi,B1.i'œ0- ~]0^r̀RǔֺUeL JxOjxz@G ?Bȥ$)减Ic.AuG #RgY.MmKֆAѱ w[\X:e $HwD:_^"(͛zIX?\T09:;ya cO*zqo=Da~Obb7`mVXe]WH.W3 X`PL5Mxl;Ili:rWgpܰ'n`gA{B:ftFsjN 9ew&$GjZGߙ]?brvw _zW-a0ea?S`~i 9^%)0eR`o;%ͽI<ؙ~a]EHJ7x(y:9Ψ^[U xrL.)gyIY'¥⺒pӨx@vTs$(y%lʅYmxwth>\^L ޟ}YScW/6Ȉx5Wcéے I:_2-w [P eͶT1ZGíGIԱLYn ŲX[%pDacF i$B{km}wC$y)M!wPk1PãLEf!҈JX4 YIJ"@M3(TMd\'WKw|cGe.crU\48%;>= ZHh@ m-5VZB`i-$YQLhW}%J'-ůsOF귍i=*X9LQXO5hB#FXʕ#7rek-vJ@ irq[96h\jxː:i}z-Z5nv`}r{xO42a'WCYP~5p3Y%??ߟebɁ5XGO?N ?__2\D"*'Ud;1oaƛ0[S\(t0,+t& ~i/}@^L3#a(y` Ro(PƁӥi՞\8{6>_" ^H7VNVOO,I:X\|_=~59W"){K޾y|L4./G?^݀27"btƓ"3o@"5H]!ZPXB=8w1|\ PMUp1-V!yw@3X2<'F6S(J#N@GkשSyvѠvaoLbtnk8L!MisZi_7qn0y_~Oݤ1iV"qom2㕆%I6383bGeF3w0Rٹfn5,3sl3KT4g-ㆌ"h4sHtؠ' GJu7Q+ʑyqXMe4D8lۍJ;t1Rߠ|j!6G% m3&{N#:kJH''T6EnUr2PeΩBKJ|]EU")G55ut*vS*[ )ak W2B=7 FҷBt)%c ,TЊkTs cU[ "\8͐ cUp&u P U*F6+{o9jZ)z MLu}˺.q:Mn?0Qp@43+' % ܱ"xHP"BfJ+9_ `8Nqwz9a#"@,81 jl`y;pI<75 |w `VpкB8[I-T[qA5GDtܫd%>P`\i PO8>@H9: 8y5vBNP"؁ƣY🫿"Ի*w'۟z)9ue]!-6*G`_C]otu.(]do9.1{#{9(Cd e:#g5!WΓTn$Ê)ŀ1f K>%޵E1USǜ#JIg Qfͽ HyO Nc'\ m{ I׌1j&2tĕ<GN4gL31$9Oo8m {{d6/,`?lQK2IMxHJHY %NDFFfFȽBW@ʴ`*p.[#x8JP6_Jq}W&F #m7dx*fRLL2?\U_Q>3;S_N@FKM7fؿyc"o_T7IFM3Lp I"56&Gsb4J#h"pe2LbnOͲVxB0j\}7 "0Y2P\> Ū2VmMClxKlV(%± 4I.1E>&$c!8M<fLb6( aI p\$+ G.#Y5+[Mhl5}̈N-]* ")rjJ3Ř8ZJML1Rr`x-Z+9/A R"" gYc zT.:^GpF :r#A)vr5  Ƶ$v{roZ,TK=^ԋ=^t7"_;;f-=P}U l*8)CáE)9/ؘY<~vȋ؋ȋ؋"ᆥyT2Hą&ŔEH|=$' X5}P7Z]ܓbwC, \Zr ʽ'rBފGHM$絝;ngL`wr9.>`^<' yZw%;}%J!TʍWQkT1G&z[4NNY5V:!3Z #/;46 >'ekO0\iPC)U Vr[[?)lKϾ_<:?DLqfYD5eq~ߤ4_sitYKMzoK#x~3=6"3vp1O?#cNCX\CFq,]_F{,& .NIZzĔHbB1(+sqao+X Ê+5_jD[Rrcݕ ?bf83-n i9%t8W6hy1;x5@kO}v7;]F4F9 ?~~P WG^r-D ¶3ual1BXBa+W&U k|WѼBM"]7.q~ۻۜ2799Ƨ/cǮߛ.._B d-ѝxx|Tyw4/HWo[Q)B%=*24O#.mE@yf'Gӱm2s!iP(WsKY,aaN"D#M{$T`N+I,DVOvQHh)ľc'5 ::CxDHԪOJJP|B2F/ "QpxjL=Euպ;TI]5N獢|Q $؄PMٞet >קg'M4-gQk|L\IY?BM.;2>1N艨$*\f $ǔ#,h:ʣG٫<}+͇s,ϩf16I0M{HKOR#}`xcy~z-#' 
`6\od  *(#I@$@9A@\(ɜ^q&thY ;$D5Q-SB@k@!SL- R")!KLcyln'o4E䳾jℰ1|ʒEGx瀇 ~$Z5u abҦ4HjBP3GRɮDg%Tn*'UcjUnLbԹgߢ@)}­\X-򿶒?HJXwV~̗eEٲu(1dWaH{ >ep;LQh^D}1P0dS"N,Hd3 (rN =1'1ZQkj97"%](tJ G'R 8!rDXt2@s-%2)H3C3@@=> 4TJ'6ZJ[PJqyMIK (z!Q ƹ6 V\qJ O Lr#vaӖL+Gl%~M6]d*YR}[#褾;T*h6RCeJQ Ƚv6PJZ~!VɩoL|=8kp%Ȏ&--`@f2JiY} K9H͈0]W^zRKFm>Zpa?hBaB8%34 AtxeVAw+]BWH1'|S u ++6u+]AiZ*NW ѫ[cn嵆ihpuhm`[M.V͗'*x-fv{k}(<ˏgԝ}3-JnTm7j0lNh-6b쇂"+猵WtUeVA1qd DX $]'(S!6:.2R DiXe4(^Σ|~]qtwv6dzۧ_$d~5DZELb_sJB Tъ k!M݌ RzC*(naf.S`sA8$="&'LSS̉9'_ںuag4^'!22|Fw HCpIuOIX-e/ņ%XIi&ẆH8zVK]JདྷAzs:3Dt2H:iM8\-PBO}ollloV5 =5PEJ8xW؛S׋\M4ΧL8U/E=XvQrx%6S *x۬)Y!b_ʤkOEG\qx8׌.eѥRÕmq(Q!ꖌ@~RO  5,ħԧZS-\pk٩&P-p,5@;Ą3@uH J+VZn삓\`{g)7&q08^HJ*mҀ/d xh i/Cv&mi/p@ ߢN1XߑJI͌+&@QSZ'<A8 IR$bӾ|KI@BT-Q& ާM ݶ@ERsүt?5߈,J{'5f ڲ0`/H]Mxo"{gJ?vDlI t!nO~QGhiؾXc+ R=g0 J8^r( !im 3}sz#8yy!'#`d/2;NNMT uSՊQ4K&#%ST's%9b{уU>J`;A]OՅ*`DĚ>/? X8ok>K=PژCdo$'q|Ya>'Мo׈4A/;|{j%-|y:Q=Y Pg' ĤRlGJ̵}=T7¹>[D3&'G$d\D`{) QDrk\L'a4ԟ6F\.~! i6C7Gs2^-\+_l|gښ6_asiWegk=vjn([(WHВeK 8h|3d1~zN煑<[2Ui!cÖ8t㍟d/׿^쿾g]sC|76\L@ 0.ɹk7;yRUoVQJِ]QY%YϜ[vp5>17;:>Q& !'77$2i+)Y L"TԂ"AqgRxQ)ro&20gvգI9+%\SD*sɕuPioidkp⇕hifP-z<5F7Y\$1ۛ ;T=ڸmq nɎk4$ ݚz%32M'!2SbATA ypR~-6_OI qi[=͆]l/wwl(Ra޳_?vAEA(׆i1Wgߣsn]0TFh|=迤A?|z7la>>;08VzM!r-ƃ )E7ʃ>aP8z%Wu2Su2![魝rڹ\1|^7rZ|7L;i?t4\Ś:!  9bk"S/n;%QfI($ h娏*W0ƭndYA@;(e^<z`R.O]3[q=yi$gAiX35/5Bw8͖]'I"8.4Œ_I|xJ:@8򲖊: AZCgux(W8g`WHۑ' $Uo[*i߃aua^}0MSiCZ@er0dQ[Tn )9tCIw4+y[-gc ǞT7"#WC*(i%զ^FAP * O%إLdTF匄|p&yWhςDV SyKML⩍r=p \,!Pk$=c|̽I 뜥̫1 ̙%t38~;?X0 Q̏H2JJn ZT9 i~Zsg9XCB"%w>7Z\ZQZhI]j}dץy4VץZ í@SSXbLM9? ".w52myGT8? YfsyO%%bYܼbE{ViE{Vm%сj'Yn(GIG1H$\$8MQ|W`v0ϐL-Y4}wE@1$pi6lLsvW;nB㇯ȇ(2ݗҟw^tRұhIF@9e|.Np3Jsi4x&}9WRT ^SIkPFi^ "jDRQU1F . sBlL1<h6CF]i|T }ʼnj ⪰W +͐5mA< DI͖]#:ru= \fhpYI; 4ږɀ[jztQW)&6Oy+) 8|` d#VunnG@%? 
!s kՒ`н&wI,`\,syur^Gg"DpƠbUU(Qzet+!|hq(@iWvч*eupǛOMtKR,+dΡ V;'V Ϥ-P5$)Y**'Ū!pѵ@ 5EOQ* " hjHQP%L;4K@dmn_IG5%:/ҺI̡ xAÈOhTRz1ajBPbmA A3RցJ-T*fS,,2(ˆM3[Nv)aP ]}% 9XήxMoX5`<*x$P!W}~Z˺=S"I{}icPu =ͮ?h+Հt_>MV[=CϼŸVȢɢ$c/bWi}+ qN(WE-}v4ݱ aץ)߉u66lk>l8t" ;$nCK[ Pݗ>koa4׊}Hq0 i@PBtHś/ f~a7-@:>LHuY/a#g&Tq~govkręb <1MT9z?vqqy5 6s)S8ibEuԆlU!k؊=]0Y JfB<@Hh*!<-QbRfKhsF>ď}V΍'L3hx]5o+\.~WO[OVbCB`7-3mV~a;#VyS5bp_^\x*H)Еŷ=Di 7!w&w.l_O7>~kcY ji|f2v%*aET‘C͛m( Vp.!P( X.<*ąYz3wDKH!Ν$rڭ.PÚ7o#:VIǶVSF]fmoԌ#.LS6 B h*5@ Hl+Տ+0[-P*EV@ՆJ@nzPG<Vw?vkX΄d\8QՊjx8)m!! }CQ8GP҂3LqU,m* /7kE"s P n8y#w7&-Qw XT|lAZI~ -.4}ƜBo.NKoOg A鲤2$k;'\)-y%V󝾹zWqVb<ݴ% y"H+C5Š4}Fv:y xڭyڭ y"zL&%WGH,#R@_Hs,,jQg IbH5 qdf.\WNoƷESп ÌT 3'w`FS˱H2?Ď*TCįlT Mhu+|̲7s:PH(RNn:f<5ETeZ.Sŏ׿B4KrP5_M8%.j@EeC:̅ fc?K)XZi2:ǽ^\l%nbF ^jN{ޜCAJӕjsDJ^3(*S%[6xfJUŕW h+m*qZmysuge^z:uE@~GP%/G-(cژE''C4Bj @b'/{EYR`Ĉn઴/+'9 q-M6ү\L5 tCrZK]a l*wesLhS@ajŢL!PwyJo|Nzç!Ůze&3 8۟o?#q}vQ \ Fe83uexYؔFf/.A[#T rr27E$  }3Djε{ꞷҢ Ң""%R.$qJzM4:P";a&u\e7y3DPI2˝,mVF(õ.F Yj |4V{K'輗^ڥ.ڜBP^zmpPF+G.`0(OW`ц X 4EWPL&Iv7R˚Qvv';y3qo~ TD7g ŽSx6?~#K[B}\VRwWDw}ܟdS2d >ը(_-5E]rug '1[Fw9jacXg. ^}BoN{Yo+1߽UmaQ Uhq Ҵ^UvfY9$\RLGmێ.+%K*Hv Im3-,V=~~tv=>4|2|znwloR)Hx9}Oasl0IS!tڴUتMOb@hkGlCC8 (Pw:)YK5t;gڐ#t`VX(ϵWmẈ!Ap|TBzB@*8"9aGڱq8b;w- 0NxJrRm5Xs*\I kX9h.Drrn!O}"-ewQOqv=݇*a\*;>^Qz7[):vtwՉؖ3ԭ8?M[Ͳ=gLyǍ+l79Crp"a80/l3^i/~ŖFjFnc8bU,XwÎ@>B.AK]AS7ozSwgs[qQ>2n~bpn_}n{umz~p6=H6d[HvRTշpW$M=we0BD81\\1?rk mԴ< >x:pTriӾpLΣo|{hxc N.͚/9/[R"ûXN&J=P'M+t̘`4F᪊CEl9)P&Q'ejH*q騫zj6qia V{G8M"r㜀K+YI,ҏkDi.) iѝFeyW"Sh *]E):EaTV$Je =6aE(@S#8 AQǜ\"26ʝȍfn'HevMT3h䑦@OwlQ D_ד9.Tdu] v7/ꀿLQU{TZ!"m]Y5Jk\?ǘTſNb0G=jm| W{o^}z[k(s7B P;5x K6S:U{Խ x/f]E*-!.|X᫜j1yu#(zܘD|*4QGV*1\ f]ԊrSZhMдd)FE+DTr4ᱡ7H֐&m.2aQKX͠goeg"(Ps6'}4H? !*f4 <}xL9O eIQ:վf!>r{w]}-Y`FZyI Kjq,A߇9Js,BHsL=X3XMJ: \ת,*k3Й1R1J}<ķ!ykXwkC1Ԕ>v1ɠ\BT!p`d~Q`)9H m&rv*ސ޳\"pwʂ|Ohׂ x0,d2ps4R I/rH;a~8|StȽcTz8}3h-K'G8K9߮N*z*X&ԬQge :2~6ΏuoVGxE,߼aΤy49\i 4/~ a3 UƠ0ϧaR&Č (vV]kNIЪ~.i# F<ԔC ֙d C8p L^c)09#Ji3y0ɓ9 %QZ`(딈< (I-`$p+k*KDNw'M&ZKB+H/)*@cX7CpI Fˬv(6%$-B. 
yBha w(a#J ZBUL|^ObUR$ y(Dԕ\HP'a^SU"TXDj!(cmE9^Ln<.ܐw[A`v,72c ROK;"(RE\*TͩM"x\Swq !FF CbR>h)t-^cz* TIT8e3գ_-;М ynÕW7J#) 3t Bxy0!bhW/]?=V}pvA욌BqTȡз0EcY"q&Q=5BҎ_׵4P&f (>ћK8ȴ?-Q[V> F2MMڸa SotGȴ>"<lHKDG#]ZE :l^ t&ڠ s.73qc+ T>1C.%zTDwp6\L\w͎[<[ϻ]]ޭg= TL\fkI"/n+[d8w1[}i}Z0eCȖAw9Z]蓢 zO @$.уI 8~Mdް̳`:fK/4}ZpE4!Xeu T!ZPV F3$&+*MTUjhH7PGrO4f0iD4]c[T'zü{Wo khtLR&8nO+fo~- &[٫T0wow/B oqANnyy'}߱L0?-j~?x=YE u'b>RQsA*ԽQ-g!JI V.R I7.=d 7LU|I+ѼPo>tmV;,ʵlcin}]5&IEԲU3D~qH;='\rCGZlߤ R9-h+(uݢp/6jc(W˴Tݧ:6 `S.*)VjzmœV.'LP?iYx,*a|ڌk&8Ǎ<~AiW vZTj~ŒWӹqjihtqP˺Y:d⟂CQ&}P:,*`RAYKɇ{o NSkp>D+RgXjžTYoܾ@5,2rIX>hM1)"8W5=!&S:"`8:2!9@@Y58.2J^25-ioAkJ[!bǵ̈mB@DA- 6:5P~'a5i~?`\"d=@k,V!DRy-e,Pƃ9! \L+cCܙhl _P۸~t7=! RWk2]z毓uKIowW3E͋nuzޏj~"?TcLk_w1N{'CBe+_vϫOo{ޫx?zjfAjO*NABCF{5vR+yFV61BІfNZTY*)r&p u 6(mEFe £p8;VhQNiޞTd nT.\۝j/2ϧa\ie(TsS2YbҰzT(;$v٢زYśFxSb&QϿj84!a>%$ $y 59YGү[+)Mp}{Fu3!Y Ǯ9(\\; T5f ouЇQ?Oj7:<D. N9ؚ1ggC92xp4`lI:>|>NW%VHW}Lk2PUS6JGFDVE#*}."c_jq5_顚2ɉk<.60_G- u`ߏQMG׉O 4$$e˟R`%Lj+$z/7.9a~"7ShM6$ZoͮyhhGuŁr`cצ^T0-h(/'O\Nyf3/,Sp[ xeY*l޿_YrT?{WǍO—b / ٻO Ȗs+HrK3#=-نG3꧊Ū"rn1ʥ;l\1qh#ru@ِ 81zcM3E"K]zB`si  @Ѩl`\CR"(dJKGiAXBL Gy`&:*?@d% 0tmqKInD p)|ZXVipФm`)H(‚swA;_=0M)k5 r;Rhłsn*\= G}j]\Arof&_lmYsN2ZwB?SKœR UA2GlQɨe**4Bh6'>l*pշ&ʳl鄡@+. |snsÃUþJzȉ ;g`dUf m 2+RCnяУ&S/VfduTDnҚݘ<}ipƉzz|uq$$S=MgT&.ht).9}L>h=ҡ1[Qhɭ/y yEnB[Wv*voQzNKȘ%(u*K ! 
SP EtWe֒sDH+P9EI[W%nT=NuΓ.-E`XgJו磌bwMWeߝ𚹀*WiFIȌej4"v ^rJfF0Ifyr>°GRc,ÁOF)E:_kp"-d UI) 2b3ZYR>>/SSC\pj{^[t9PU'=a"AmB%BNw4W!}pWVH6W05߁kB KYPLКDJ,bp4fnL/0Q {5zY>*!m4zhyD`Nkp.gtFy!?* Vqy`3BHCZɑRZhdPIx\( !:tV\&iRtt}̀ݘ1He9'fxNO^JDSVT(m1YN^&u fmW8>ܠ]~҈^9CJe89o]-)2&$F0VWߜnN^V]\Kc(bՂ3mQUOrB^C€1 >J0r&;HCPeFGG`$09g)K1 R+i[eL<;/,4hy8xo4RE()?mM:)U7[uG%'9tf/Uѻez&fr=~(тrN1 t?E&cP!GP1Aj^fؔ<0XDl#٬Lk[ hRT 8AHz<%~F/ EZDq@CՔZMjoj)dTJD&4crgEcP{d ({ RLM5RK|] SRB;O.W oZC Ӗ/D7]Q|a ێ/*~+4oeei$wp}^7cquuA3VmڢS&n/Z݆/Dl XaĚw.[+5>ma5]m Mɦ`u.{nԘN3x=Vfc{nCXn۔Rہ-Wz-8b+{t.pʠ8lPy( ,Ӏ}j}r>'U,Scؠk桎 4!dkUvҼyD@'j 3e99KXь둏U24(v)̽4.NKNb\9#!e̎+ާ2Sx~M!]xSYaM~>2ݶc}[=l #L1T4 U^Q+02Ÿ [md=h;㋾4NE*ȷjeijfer_)ko.\Y^m*G~e zEZǢȠR}J ."-Dx^H._tYÃ|rqyuקe]ƾ[Ω u t}pl87!Z2`W|S?T 7ՖlLgww9v3{ÊU9ԂmIY ̧]wɝ^}/,ˢקbՋ@Tb{ +GR|&6(UʨY@4ȣ $ Z'M2 NÒHK$\>8Q\|92IXy$@1 Y Z5/Ԃ+,1=a,jYह0PlMpRbg?5J&OyUځd>41l$ C%JsAuݖ7#5mi4+9QE@$,eF>l 2c YBrB%Y>R@#)4%VB)VZ.+P_*dtOPpJGHةIfF/iQٽJ+Yպe.%|݁ |+JD/-pSIi#GvnF E1p}u~Mk:x}o嬔ggշDJoJM6rOߟs>*Nw%DV|_=~ϕ2~< uU__)P59~:|HzeGwop>|;+!]R|./dq?Zv .r>P%iawoҌ7~pۻo$_-\Z:K.tA1v K)1+ci>>weGYRJF6u%(pm&heL‘h:0s}iUK=mQNؔ^6_$Mdd"Q;`Lg,rdFz-e"ZY.D wubƱ8uh1z)۵ɖq3 m V $<,_HF-_trzF׆҅C$!g!}d~UV0c:q#Ipy?Yt2.zɆF?6ra3 :.&jeOfm[C2BFA:pe>;SDVgo +s9[}zov,@+kU/;bRiq`]; H&a,,$Hf85 xp-NYI9F\dW42 MRk)27Y2F &`G4yzG]tpa[#z7ΐi%Ħe{2jq@-`ULc448wT5*?ijw\+mZuZ;W<1NT{gjmW5߶JjѨN:X5GԻKm)5ӏ[qku#]؏#\5kxa i}u^Wݫ.WbL?RR؉lvhW="GjWG| wj<҆mC[1Y*4-&p$ {K2 0˅u^a+oXWMjz3r4(S:L6H$Nh#!#(f|Ihֈޱ \6VAugK=' |TzG=4´@)cbc$ʡOWl㨕\+8%,Gp_P=Y;{q}*[1i>}*7Nx{U٧r M=ƛ.f˖ @- WL+-Z ԕIWkY7Ў-41ܱo+:w(f-Z y{f{Ȼq,, hF Z/qCCJ!paC{['>2CuaPBb.AR`|U 'r][s7+,l|6p:Vjd ,%);Ti oCr(a8h| {zw91S!UGL[:%r=1Fٮöz'FSn9x vo3҆OtjC*H&@,_O3rQ+W ;pͯrhu9BnF8opܔg eár vT '}Q|-r 9H5mryV\fYqO0B"ƛn1׈lu;.Y#dʹĢz˼uKz_g&==߷R J -wZ^;%ԈՒ L J|QgC0S7>F*k8x_tF9{[CqJz[/NrZ1fM؆Hس/kdQwMjfNnܯ-\0y4-I݊(k`Hұ˙7eyCuSg?S l^byE5g#X:a׸g[]'#JYk&} RW0&mye5pBk?ބEq: O@c/yɝC>.LM07x7xK=qc+J-3F'RRZypp(6M:ռXp҇eNtٙcmT F/:i[YlV|oiLSTc˵~V!Kpls5RWSJMI/Qt^bRѣIu7k^whRCTvݽ7X]]Dk&cfr@z{RpRf&9wǙ!lYh K҇s-؛`#X *E T\ެEf:sgxYN<O!ќ%FJ]1ȈX 
)X$`|I0*MU)hn)|ܴ?F|Fܔ[1rˍ5w*bQTVQŋ20G "]c[6\iBA=jhl ډF`cP]y̘R6wY3>oSd|?rvƓ#0p{Up9w~~pwF {'+qK^"c'VOέD9}5˷bUsm~:p޼%cㅜ".+C;G6uB&w!D1rk߹Oޙ-|uݭDɕAhťkA8k$ձv[7Ud*fb}xKa+R)+cHl]wqE9f]7ɾr&tK|-vIM2;f#.|^ F5HnwetToyX9Xp$SGKcnU1sn-mjܚv~E0R!!/\D!O&ȕ9vATv;isiPڭzxvCB^V)cSp [U bDeOݪ,nuH eJU on%T6,8U]?5BTVo#X74ieG7 wl7k>[le`<~sWg>~{Pf2|gB& IjUB:_#$wcKAJ<̦vZ]jϕ"c Z/y.P bBc/J_Vj~fmu'^E=6; ~nE RC^Q_/\oZ`;ga?@M`c(4 ®aC2C{+k%RRͩlE*Tr,8j#u0?'lzT6{\SA ԉ'N{ S4J"2^<ܵ6;թ"}T(؟*Xp?[^dE_PXB#zz79X$;8zFPܢnlpAP{XVEZ?NljD\g* gEm&ak ֳCnwJ$0C {2R5K(^%RpRXq';iJ+ln* 9D,结ՃُXXmMvup@Xb]r8!ʺ]s8mIJzvU_ !/|e} iKX_;/oс Zlp. Bx:O M &no>=$9HT{Ip)H NDJ 8B'8ijB%b0%}Ey zk)읪]v_b"e~Apz6/ ʓ9#&!KɝWj}cc Ve28-\nۘP5RP1`ShrMҔK&',2IPbǒJ0JI ^є =y #b-x)!)xk4O,ӫfp Z#z;ânO8悇T݂e_)--88IFuQgP Yn9c'S}"Z -8p19rkp H<E2ƙ"x$ 'Gf$rw{q.հKS>U2.y3BxS# !V+5|ګ4-$=)5&1#%$y ::l ̅NךHsҭnEaErFz?Wge(f_}kM7mj渚OއY8unz55o5W' Iƛ/ b>'0 !~h( @]+x x>'DQṾ+<CG6> "9ʰ9A48GYV&%~I5ak|~2}RJ&áQxOP21nEJ)SQ?5O*US++I笖_%uzAW$ʍRHPlc0 ?Tק]/^^\T>b-]+7 F2XCcj-oڐw Nno#D/cu~R%n9h[LDcc\$;t&5tn#VPk@^VO)Yy#w5RZM]Kܒ_fzk+_k9 iA*5AEZ(Hvwuxu.nPZRS}Gz=̒* U_1p0\Ӆ_ǝɯe!P\~ɯ& Pc5${($†vO^ydTj1: &X+@2bBˆMbH]WvOٔX #tߗSСVK+$ٕ^[B;JHXIiK?{Ƒp 9JwmIyP$Çg(jDΐ==YDKZտ$bX(!C-!H!S$ $PD Les@PuJ7=RrQuz)g&ﬗdKɇ$䉋hc&z=z>{nsz!s&=F>r_ e[,R*BjOlz)b/Q}G5vz [*}'DAǟ00EJ]8D:%cg_!)I ABl.@IDQT` _bO͟%Z}Ö=ʴ IBۓ?Lۺk"-E Ŗ@ S/%~; ]PKsXG m7z-1 &Ur3leV%B@$RQyi&1U$"!PǖV*B>͖}@+*xTLіrgTW[>/$tuZ;jrV<lp}@H9"Op &&ߴ: zո+ǦR!A=by.u`0ʴ0Fl DTi$@Gi,IT8+f&҈H!C)TR``j#XAzIBJHSQM?8@ DHP2fepkG?( G6j|;U \#l/Qqn[U\'?F0'$ʼV/ ԎHà$5ZDA L[0 a;0\H"I+a 4$EX <,OUyM w]9A Đ e{1\*pcD3Hs]1L5:QI$$M8h}O2'A 'ZTXpD@)!APqije/#*>AV@$Qs90%Q-L5hdvA?2r-2BE i@^ @55fF 1&1`(ˆ'@r)>j0biB̄ājm5w\0o~F ,iccbsZ=ahh450C* Iņ=Tb,ieLNž 1 4]o@P{m@jC%T~[օÆyj#_Ca+Fs"T3eQ )\X~lU[ɻ`+O9`OK>hlh9YW^ѽ{1½clmZ C,quH 5Yr۾ƅbyКm@[xEPvֻf>< 9 T讷A=5P~Ķݍݶ~S{dP[v[#:}pKȻ\AYwGwr2$䉋LQ2v+19jiHdfu4nbrB<^,wY~pCyo}b'G5 w@|O;/B6L$M{>y]+H[Sn弣gu η[t55w‡hnAdMB8/8>/cI1GS|<[ݨXĄFg$V)t9g9L[g$efIE$^qTb A-Y d1Z,;-߮pg+iQ(ާ<6$ =a,2*]1 w6sՑZ]bWg!cҏP-gؙDpǔ lG$X_Q}O5ąZG6RjƍLJ˜cq R!XEBTiYR*RHƀC 
P]l8%@}$^ j?uuCRG kmȐQPhUdDQ)!MX Q,Dj A@'aVyu'@1e/ppBNhxcT#J?ݜ,CY٥wΧP u@NS53I&}TMrC wSxYMVdD:ZMr&=~;Y鉓QR$BILvA@m B$`y rVV*p5٣ >^L7'U[z! Ԣy:II /=C8,6}V'YG`,%Vؖ d*;PCІ ?|f=֓#A@Y9o^E`olk鴕$[}?Y]1ǐ!gX @]@=D9s2q1H1/6ޫzcA-۸ŧ1b$(k@0N(CJx2 wyLoy`cАs>,_ R_z2z:X~OZ~o偖mOհެeNE|8o1#n7ƣg}B*d.R>gq?L_tG0UBBKLu+Y`P |01m jMj!b] -۬m@Н mt~Zy }0" >` 29mQYiƘُmtl #mN{`_BXVR*r;/#%#F6m6  ØOw?U|U{fewʰZo߭_ B1l~[ɺѝor2zpD=9?HH$;{Xo|t 8J$;/^3K,-G/QWeD,M=^䅐{pFy=\7i7ZVGtM+U\;.^oeOslS#ر Ԏ`uBAڧVrJQmS6^ɤBq*ݥ} ɩH VQ`Gh*$6Jk ѿ*m[ۧmcyu5\GMǺ@(C@c0;A^'|L{ӓ u'99U޻ϳyr 5 ?~-~p#sa׳l15WL Ti{}rnf?ϟ/c96B&;rMteA- j=#mZ͗qThWJn3o*7FC'hRտZ/\̅Y$7)ޕ6l3sdRCȤ&)Jj˱l%a c@^S콝m83 9`t2Ze7,ܧ;3 r4`zoIUgqwvvhOő57en˗ګBn]pbsx}HTZGUܻYȉή,ϛ^QBN8 8?GSK?_aX'>$Ik1(wWx\zN!7{Vr=xajV/BgO{M&cRq;L2Ep{^2}:o Ñ}>8>;F?3ߙpN3; &v rƸ6⛇CߺCE%{Bg YQlQ8Y+uϤםI mRSV3z~y !5p@yV  ِv@܌%ᰇQVe UԲ Ɇi!.Ƭ~ۧYH>KZߩ'5L $R&9܊[ҷ(‹Gxjy`_H-;_BSI~0;I,kCqy.3Y#3cx/~^baqey2} UO'{/ -rUmcyncaquw|e/./ ^V0i xRq "s5|aq^Sd &a/ASMĕu֋c>BMP mQ䩀(PZaFiP]WM5U^4k31 LxLWiԔHΑ)+;}mtRQOx0Cde|+Zz"VZ]';:b=u@0Q x?ܣh35qfo6h( 镁\ڛXEIfdI!ċ)a'j"W0 7lz4!.*`́ld 'T-o/e7#`hVf3u7c[taf k l^˨,lAWChױᾰ"$^Xe+$NKoާJ+"/sF~bJm,ݛ25G>RL3||TȊ+ BN!b]CubYVutgVôjIHRp%՘HE瑔YMt D\oe(rDZZ+%J{thw0?6pra9lJ3,_u0xmWf锄R,b$saCI6d G&4TFZ >$i$7G>цX`V5,b(,nZE'ynSr oY ;if<7xΤNtvwx%5sU}UXEEE c‡J.2ANA2M+TWac܇ۨ$ھ65plMQ9/Y\Oziuظ\ڻ]5r3!)']p=h܆t3!y^ @*-c"]OmP5<>ɵ7 ~%V&E\a㯝 n~׽HyOAa㷲'yϯ[#nc$_mr'no,NomESMD3!)x4Q'e w{>x66>}q׺Q|׎ԁn<8Kĭ{O&gtx[&5ݸK5~xn4zݾn8硟orqg7Y5zOJ+;xERJ&!'SAAy0-&ChtR C`|/;x#ltwVvvנc/wztm83h+slh_Fz9\/\7<}c}N2z`o?23mn {ϸWͬ3ΘvnzͤMkn[v+[ˈ:?9uuO Zߵ^l˿9u#u~*Nۍpx|ϰ?kvmLZao9O)ufq;-ȍ^0Nd@KA;p3>;e-u64"1Xٟ'pytyp0cؔP|.Ƿ&;C{RP>,(R$E~h^@w~(!}Pc(/a,c\~?7\P6Fma+*D/FMY1[qU w=M&^W E G#5pJ2&Mq(G Q}vM}oR-W([-F Q !C:52w*익^Ӗ^u/Z~FTrZߟN߂Ymި4lPߪ;f4_ W +V>p9A6yZs1H?!E“TD""}!|/ O|2`4Th#PHoJ+"ˣÏBj&hb70!2R2cCR&Vc$ I@s"b, M ,Z=LULm+qWnFЪTA2X02BPQFZK-b!o ۈ _g`*gA J1C6?V%?is\s%u>}>& @i-RGJZ C}I㟸q. )BeQ &*ڸ>G; bL!7DA)'A*^ǥn?kz.| dX@ uŖ@c)PʝѠ2`!G4E0x Qd Cf[ 9y.~{mn]E=2ߜw~>KcB38NEF7Ã;! p븂^vvTC+_~2+N:\^SCޯW@c݆hY!TS2vr?‘m<;c* eeè(F. 
y^Gec"ܷwڕ@uߍ@jTG6 H2@f? NsQQ{6@{=L/(˙N#j'ɸ$m%[T4qEi<Q3&6(^6@pw3{kw5AfI'd}, <"1H׽@*|;Pvf(U^:Oes@,IS H7F6~@d{"?,o[6WGBn@87#(~uO3$deryjO C8<(!7[/=zDn%f!lNH2Tk"a6F v1i$zL . D0b03H׈EOIHJ3>G#DՅy-hJ3"C̼04cA‰3K,GqR)$Kfj좙$-QCD%=tY]+4 v0g>|Ysd>v1k lCs7=KԟA>lΥwy+Ѹ7?~#.ȥ?.]E6fdRk:h2Wd6x1^T=M<Ģ zYG6Qr j>J"m&v} y71}6?5oUǟ>n[? {M3'(Ab!6X!ܔLlBNH_Јj8o;+ylqބf uMWy3}zf$/?AW$oS X= ln"0*7@hcښ8_A%qŀ>=CRDN^bUbB qD-n,X%"o-ָMY~ܶܡU.GnV6DP93n˽"wwgǵbs)kQ]gph-e=]NZiv:ۍ%{%pQa!x X;DzliM&" ıL zL.&f-сj'BcIjoH0>!"xn_S˜jl TGAgzՒJzFTi;"Ή,1T)F;ƃ bQx܁ܪHST'ւj-'WӀLc惯tQۛqTdIZّg19b,ÎYM^^ڴ)1*2`KkB[WIHڢUrjgAyYRfNs_ZApEu* EM$gbT)Ȯj iJ>b E˩'\҄ psF 7!rofJIQ2e-(BOi (ǂQ2;k\:l>ɀGάg%K"1\M: ГCF|0`@p aR֊;DN"Sʉ㻵y>U`gF0 Zs b/q@4Z \xLnTCfqq4ŶA+JhZz@D`HMkR #thu"Q8 HR(V9֕D8ԛwu$vP lGUC5L-.QPC7l~O4"mrmya Ӛ ء76w 72RRy$JNYm7_g-! ?نx^d;FH.UjqtTx :nڂd+N0:o`1EWn 䛵Ȳ7 "a z+R8GE@xƐ!7: ǕͻiqΫ}j˪2ý(wAƑ}pTSjLm )iٸwFn1..8* 1`Geղ4oQ(t$RanO{ykYCLՀP8K衳eA6sf_~ps yڹTz6@u< N+m$ WJ.QkLb`: !s2x_iޫMotUmLo4y?mflەj[1-J'm3K'bH',{/3 V405kwDnv/r-Qvf^ B5Ad)`zeTg"T7zG Vcnҿs-E xƠ!7TʥzW! njZfk:@ SUDe*,X..9Yqr#v4̂5GsSZ4Pbb&"h7Jfe5mAtV) ;,V-!&8 I/ e6R)PZBk6TS gT1y#vRc4@d,c)U{â\N}p"c>zI*#$"PZfeKo!CR8qp$ Qg #`8H4Ii q!98BPUM? 
:i֟acF4i *;srUD_'2]XKJhV;J@EdhVJB|#7J 7B&*=(Qmfb\\ar {IĄ_FP_vS g6RTf6hrmb F/52cmdքIjcx6qKIl2p)=Gk؀OR8B=:20o܁&]a$%!aoذxק{6- lP-jo?T[S^hY9k q%eN tZhv;c~~{S0EO5ر>Ma8ǘ " oUcXUscU,J6OGCH9W!cvr#ퟗf174wÍ4~>P"JCtK*(P#GH*VDRe9C~,cQ忼^Q%/Cs{=:-DZO3 e>{^.gWDHpFQ)r+#M,y}d&de(V+}Uy^__Lz9ZDM_Ǹ>;*~ Z'jZ d„ -pid h}— _f+|e חyejI!8#J 6) e"3O:Jh*/j>`fZZI=T2I3d?7/~⻯~YL)q/2>('1OڲbFo>]ED z(H\ )xAhTH@"D E !^Z!CO y赵3\no*[A-\E*MrJӫV<̕jW]Iي\(Ldo*EqA FFZ0K۰cjeEsLC%/}rk\Lx"wa{׀7T䘼^wr(kXZEoR/nCZ⧕ PEM)'\M~Hސ{F-૭O>tYWi>^jʻDId6ca; ~51FbЯU{~/櫼R!584kŴE!Hzj- ᇍ% [nXUd] ;r5AkjMlBHb1WTB&@C3Xh0TC"4p8:0CP̐DJY=OPcQKA$j&-LjAH!8rApe ŐLJLF< 9eT 6Q 1 ӳTB0yz:¢I2HQ.Nʨu/(U#Z}@T(ˌ<efA(~u"i!!Rಖb i` `2Qy_ZA_q0P)I4IKp s&JcG~>+ZsΩ.B %C8ހl5 C%g&U3?4?q#+acY`/3{L_6dwȒVL߯e2EU]]]Ͻ L] 40v39uz:{ TY'#xjl.t ^Ch}eI2PLObA)0 }{0Qخo!71U`'K*[,m" TҡQ8F^lSʄhfLP9I0 X?M "^p<")p֛0iS00q6xnO <=0 $ CVˇ;NW%ѭ6Y:07>ڭCzX!Q_"A\ M׷٭>tF;ZgÕ"wW\N,/ KZbdu6 g2C@Q3Dv 4v`P:@N_o?;!s78X5_&ֽAM м#?4>r\FAv|;)z-[*5z2|ъ(d>-D4F`ş)n,Kާ 3O4r8})q^;O38#wiWsV 7GQ ;s1Dm6_oG<\ZZVM߮Qv`6Q=?Ef /%i$tsc溲c2zu|7 Eqy[|؞Y!̞HϬ‹UxDZw'+o]Qxyƀ!H ?= SCD~p#WK]w?<ҒidB`P?v3˷ϪC[x>˶L渋<}3a<0Lf h{'G3e X$ƪkۏ2yYM)vQi3ٰe4:(.$n'GYvpa|;o)"l~vuzŔq.P Guť/ z^4Ƃ| Tgbͤ@cG֢c#ϸwܣ( >EO痈lD}*1DsG!}>DIF|<&:DSzG /PT@32oDDGWqɧ3AűQ-Gawg.3B1?rNjÝz3Pw̩֚o!皿tdQVc|rJ,ߟv.{~MhI(mE+wbYGlU( %j ZhL-#7>p6omW4Wد]Qu(ǣ̳{Vۮw\H6tP[ltȃt؅A1}9_q"k[O]:Bt\. F 4pfv GR1ݎ%=b&^1l$}e22T;RqP M$RxSi2௡5)6;KTH}_q' Roxiц^n:r@uR"/mgΑvL< k պPm9v#R.. es#jBLb}VguȤHѲL>]="oo>E-ʭ7]bb:rd/K~Oԡ]vE l. rcU6higvNGmj`ր Œ@{h6pVW~r^иwY#>,LJ/ 5H@| 8*Wp MiM.5HdzhR'84%ϣ¡`D.6Mxߟn4OCAW>7 S3ѐl- ;IS0Pz~g<,bƛ4܄^XQS3[MҩKVd|$ܗdw(nB-*Ad"+]Y? ǽ| _z{?*pxˆʼnaMܨ΁ؿ$_2=r9,*^M|Ü88XT_¶YD(-Ey^s2+_fl!2_Ҟ" ":񯤱Gq_]|7|+3)jD ƦFi$½:e><8 Ji&4s{ @XűE V0g3rm-'fՂS`tW'eJ[@ZOU4Ư9C1ҡc+CY+LK rpRXlQZZ^Ry' vPdC?[-%nmJ Zv 8;!%)i"TdZFb! 1`fX. apzbr6)p̍h'$0. :d̲/kؽ>Ϲ }?wkL<\3}7Dk/xG޽xvW\`r#oz? 
r ͇Mn #^35w0;P7%"XA[TSA_ 9 b۬7YqSTE+ʒ91&5nPħ vT؉iJLTrḆW qsDtC&sݰ|B9Յr.+i}Pw&(&.]5D)RdYfH\'cD23f =6'twB'rmȶ}g%j٧Y(!n!hfv!Kw3Yhpч|e'+y^"7ף'O)'&|l|Wj*^-nnJ)YGџR|`-Tja~"70sJ9X Ȇ6Jfno&9F}: \Lo\*ql#~(~ڈ218lVAS0mh'eNEuJ8҈kL2$>J=lE 4Op\$rYm}rC TF۔cZyAQ8 ,xdl9YDb uH+n#Hѯh4|JӮ2_<@]˿Hs jP'=3 w&/q%O1m}[hN@,MYJ鋻"+9Kdaj[;)ǙcTL&)XM(JSRb;I9\A=E/}GuQ#NJ†`ug~ː^]a1aѓRcnP0%./uiDDu 񰱥e|g rd|"R{*44L{L޸ėݰ{[ZHNO|v2ª|Bu?{rGQ=UAGs623H8RJ2@+95)'j\Q%35 TЌcybDj{)W4ٳf#%e0R-X0)̫舅mGY}uL&;Q]T%ò# h]jɘ(7+o{0h7S; sl1X޻[ 7s?s{f<-Þ|Vsvn˞af"G9#ͮ[.;nܺmfKe*Nhubo:#kc"CTc."[a™(ΕFq%#㥀PL)aJiTJ*3Wp1!='Z?vdWt ߳B0Oo 8zeF,$%B^{V3^Y%rµB Yϼ`` QǭF3"EgiU( 9 .%$V3,!MJ߇H|ƞlRr>L$@KKљi`R_U$9cE5y\s-g)SDcDlM[us^֋,R٘ &贉hLkZ4 2\@B T7ጾ=[mzt Rbؐ?#B1_s!q~45TެJȉH 2: ==ikz˭E*6h' 4Xaām e]rHX F=MV/8##TA눂IGԚ-yy<R+R֖51#8vD}H-'e5A+_]iK[@ !3"e_9$yҟVUOICaÿF3u=Q9ŦA {}jRiIl0nUb,Ju( TNd"Hhe{e7&$ AHZ FG!]3MkknGE-ʸ_RĻ/dq$HZIvsj(٦ PXU8&׍F_/NjsF!t=ǣ"i~(cHwKG =νbe]jh.^-[g^]ZF@U4wSYԻK 4ĥOڞnju#AVB~ɦ~SD4ZO  7txMz'?b><ڹ#CގA(=BQfxR; 4R!s8 69H$`&/TЅF9$0RH P`4d Id@ PЧeAwH̃&b3d9JL;N&@'R 1F#χbFN 4͞1,䅛hMuMXK >w tBh[R"R{.OB^k]F w tBhθ:Ÿ3bX 7 îw$gnNhMN̻zzM4ȦV=Ĉe&Hgw&10⽬_zerh[`^~_@|Ο+Ηuep`$xarhEb ' l xCHt 2wKs- 櫚+&5TcKmG rDtR'=9b'Wzʏ~h݇Z,F'\=U y GQ:/AѢ{iLuXMt2DP]U𸎪a鹚at EQpS.R R:߄*OVG]=zxRC۾Y ・rFRGqB ĉҥEgbD6\}KPچC3ly Č[#vi[Y%ޚ,bq_) N!g%O|.=|1)d\(^dn9]qgh$84>) @w?wa*J,E^?|d~Mtl_yJjŸ譸g̅)Y b4IQ& #\Idg@x+jw>b;W >;&{ὗ_(.'G;B$q\n_iʇyJ{^ߍyKv6/o6{~ߔEG!i&I"_fa!%evE !Of`N9T\u9 =o՜ŪO =1teo4{i⤎~:G+AA.oij^i>D|\( MtL5l\ňUװF]`t40rSBBa~H h0&̉)0-0nJM.ww4Tөԉrf{iڻQ_qQIH5':_OMw< " џUaC:FHQ(WtL#~I'C@'wd=yw:wDŽMJFF˂Ih6C ! :A@?%X7z?*W?i3׋j{ߩkbz=cƏڨ]~:ub58ψe1zxm 3Ϲl4l_ZXшٞ=,,u eغ;Γ$) "6bB]Ɠ(j^G3Bd%Mv,ĵ|xW{5u.3=bVFW"EgeM( rweD<ˁ3c=# %M!hro})fm/oߗ%h(ldN ^yYy 7"]\gz!,';S#'k_c$հ}H)Օ$\&"k3X0Z BĀ,L(9@vB8z$%s)EaE7+YT3w vc mwrԘ?to^]ݫx_r+^m＀#rxiOVʞRWӍ)P3HhCM0ͅ!EQp<(/.5+L}c N~?7gw9j9֣?ȘenǶ2֦5̎G "h5(Y0楁̐R(.CZpǶ q.Xގ2[LJ=_m{0&~h]͚LdbsQdJ[$TLPB\v! 
AE,,){E7WXbjzQ똷I9͌N\) H{+_ g 9H% <+zl0~Uo[nO+3Rf5ԗNhjk=2\U$q^;t:w cko$Zs!i0ZBXXzTls FnN$Ħ$Lܑ0H[q@x9głlNS{xĂrx"g]/"tiwZ9xoyrծj٩7PhܙvA,Fg;jFxs.|ʃrEKq+7MiקQnFi2(YxM(G?#Pѷ'2}#QRiSk\X #zcˁx]<) \dl2ɟ vzg zMn5y&O+\cMlv;c(x)VUS]5^cңe@^1)h^})GW_a ө`x8s\ Sч^Q. }E~4kA8?6Teҳ ..,f r?HS)w|W',S6?I {$s2.Ja[rΛ|܇Inmo\r j~c*T-خ^CMujIFA7nTR$Wd}@]j'*KiG6TU#PJES.@p|XR{JAp&_x%lF '7U#ÔA<ә'>bk^[*]0tdfI_5ZS%2%BɄDjIK[L[DZuCZw2aliAԚ [m)F8e%~UM"lBTg\![u% .GG?5X/Iq̘ctJߢUaw?Yi$z\Wy /x3qŏ~9 Ae W{pl0ɧs1 E~oy;CO3#l4=h+8A]vtwO+(,zg3tAq%}y O9~5N3I"c_!fHXSaC4V$p@w- 3d譞>j-;o5-fcoB@㝕}?<|;$xCE_Y:y5^A0RՒn'jgv-_ZzR -Z:!\SQ 9ҕJAljQ@r7+ t:*?x=FRQ`e(2GMfh3a%"$S' ʸ0f8K#Z! X2`K3Z`"`6"5YjOt'DlLJJOG10Ç_M]ffe^flo2bDɆTLIܡw8{['"{S. ˤx_E+=\Nr^i-6GH;A Jh7g&ϣ{4~Mݤ|{gKopF7_ߏg7Q#*G gd)ВےSm|m)-6VtbNc|@U*p~>b<#=w5E#1obMfz }h8tPph NKM0x `طk=b5!rz @>HyZnpҿ8Mi]#s۾SI)ä9RX͈ &|GTz/wɄdB1rtC`8s0VJS%')aIj0K\ aQZQQ.ع 曡3#h<-3w`*EqiĐ,DgP1.Ҥ$#x 92Y2·XqRW%[B:BnjW&=gCd$ 0N 3I圲﨣S}WuC\b! y mw h_b~@yw w 6ǟ)|U*X_q`s8h"8N)f h.x@6V]F^mPN0Vk`Q_` *{?wnzfW* 83S&ژicI&$϶_kMz7s"g܎c[D~twXfi, 1!ع9I΂nDLӓBn¦5NĔ:߂ۨXg!$pI7i!Zo]E eOpԪD8V^SZX? VAłafa)ښ5Zu NM~$cXmkcK_s& g186[ 7={dPŸ~Ju((Ut',,s~f)Cp"Μ13 (F!Fr`Y㌬|0]/*ћ)z,F _R8x| <_2 ie ̾ƭk{e,S]ߋ޻땅FjiM`Vg?áY}>Ѐȿr`];qzLo5^_ϒ@M#iV[7,6ԭ&-M" >l l/jOFr-r[5M+2P^}T. 
e;W?v2gg%o 6/g_I~e.;®nԿr&`ǹt0K|8 B8x־LY>1O [=AJYtN#-!5V%hoʧ;v2ZJЪ#QB'6V)ku.u1uie:W2j#Q󴲮ݠXc@:qXUTFv}|۹+}y=E7ݶ#4kUھ'\J{ŞˇFZo=ڵ{SSR8gZT7?<9r 8\2UM.= 9wFjl6?*-drћ[x..S[ގgôLfKCc >O~,5@P 4|2 ]^|P>= 2ubP,eE&>*OX XSjM+,ZҐtrruBukA4}GvukA4}Gv=gUB釶n+RZ64䅫:%%%)^ :?v/dHuW Iq`l/n§[1O>ލ>]I8k̓Ybm]%R Ib"ּPjL>O(WqCWUlOq\RJU\E%CvSƐ HIqKaq&FljqNρNi͟9Fի%y$Y >N:TW~v=^q: ?S˛d%a /MNP _[&ݗHy8&7"mH,\mP-eT#ȣ e&6ł+m[K*%G*IW4Jv/N;Hĉ_R-A9Mz8 DI>VM굎} )X#WBgIFu*fv sePSJ08)bF#^'sORh BcOLgDPC%& K6;KG 4!|Z(0Įbu!R?KMZ u5>!T*mF%03JĊTd&KYI3PKXNj5J%gI6`0GrYؘݳ i Hb(_~eltJXɌI3mCf 4%&1-R{a%P$U)3BQ!Fdr#(-Q^:ZZ.Nuj/TV1VGY3JQa # 9!J"&6w 2:xt9!1 fysûmP%R5:o*8JZITokX~hWzB9!w6;9E5RݕgKLTvAwTbDK%:%ꁠ0´::ӝv2ZYKq%ȉu6Al֓Ԣr:A3W-T0+.G*A@+*vu}VG.mP0l;ԦzR~o/ggS ؁rĹy Ô<ʾ޵GWTkFNdS 4|GZEG^:=ZE-h WF:ɯ N5 Duu;2wfݚZ64䅫h#=MIԙlA4}Gv]Yi֭ y*NI8-י =?Vg2du&9Sa3WjT)Th&./{ R&N QWR27f:0hj'ro Yӯ @f(Bp[B[!{L4KD8HiD*LodmMU?Wjf*|YjS۬9p$%Hw͍Fqx܌*U&/7[.w${7נ$D (+cO7 i\۫[gXl-F 7HrYݏo!쪤T*9!OAE6N}LJR)C Ak>?]# I1,HC,-yTIsJ"ļ26 Q',*gYd8]p5C2 f$Lkgry[EШO2{:kPR>:ռL0t$zȋ LjO!z*Ԝ?ᾳzȋ_ Khc͘&] RzzB: R B̪ˌI*J#lŅ*|JH.UY1kDs(l^2Bp6JBeY媫@)fE߰%)*.I d4KpI \ 9uFM1*J߃OZSʥ9/Kj5ہQ¨2&9Z0*͔J+*#s8Č#q"9C,y(?UA(4@@Q(mv` e:c>Dʩ%۩EA}Nmp32 F_l47++N-q@еYSD]HCr\5%* 0r|?Cve?P%;!Q~&C!7rr[ŷy}I}pp #]B5TS#qCH OeB%:6jl-P_ӊè sMJRah &YﱥUbJee-9 RPtL"wЖ:p>An18lGM K2{ NN w]Lf&(_NZ֤m;:'apĨaK0ᑈQ㴥tzxƱu*IQfi If3&55tnR tЛC Bj('T SV[o\(0 %*7J|:Ns:4&@eg#V0Ը5p7)eڂ-)H՗Dq *r2[)آK/*sʹ(@5)0WHFJ.iۿh̑0Z(F ZUK%ΓJBK*QۿA ˇ{r9,o]{I*% F4dR7H(Qeaȋ{7G4&D߬> KC4C4R35Li R wLMVq&ae'8a3/ ¤?V+83*tey굽w =lOk07 4+2H8%`>J{SY]-s.`F XXK`l%=;.ʌ< %bphJA4#ud|DqFLr;m}ϣvSfM_+?i4+}~qe29ON5cIvʣ/fޭLL+ J̞ a#v4VV 2fKZɐi%*Ň\|Ȫq,r1a :r>6T/h 4K&8;$.'8brF:k+KL6/z[/nqy}+zlUUmdea?>Z'y3/?~[USYn 'd7do}K뇫_K&p0e#'Dh&_]9r]Q[ RSfzJ v;R+};Ͱ~.SḘrfԌuҳRaV*MxEVzHԔ46+=G+f9=p2 +=$u[j*d+=?+easBS𥇥nK-%\|y[) R2!7]} ?ٟ9Sl u[R ٞ5k`נ|AE3Zh%&Ɛ{௒or8{И§vJ$MhuJ?]FTҦOQp 8M! bVRzJi'D.Fih0fҎUӦ4=n )^(86onq[귀;VGaܷB:Zɹ &GWIX+,I x}{$МH3rڅK/??EG;^QՃ9H9{*B! 
Jan 26 15:34:01 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 26 15:34:01 crc restorecon[4691]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml
not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 15:34:01 crc 
restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 15:34:01 
crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 15:34:01 crc 
restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 15:34:01 crc 
restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 15:34:01 crc 
restorecon[4691]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 
15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 15:34:01 crc
restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 15:34:01 crc 
restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 26 15:34:01 crc 
restorecon[4691]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 15:34:01 crc 
restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 
15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c6 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc 
restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 
crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc 
restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:01 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc 
restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc 
restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc 
restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 15:34:02 crc restorecon[4691]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 15:34:02 crc 
restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 15:34:02 crc restorecon[4691]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 15:34:02 crc restorecon[4691]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 15:34:02 crc restorecon[4691]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Jan 26 15:34:02 crc kubenswrapper[4896]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 26 15:34:02 crc kubenswrapper[4896]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Jan 26 15:34:02 crc kubenswrapper[4896]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 26 15:34:02 crc kubenswrapper[4896]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
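[Editor's note: the deprecation warnings above say these flags should move into the file passed via --config. As an illustrative sketch only (not this cluster's actual /etc/kubernetes/kubelet.conf), the flagged options could be expressed as KubeletConfiguration fields like so, using the values seen later in this log:]

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# replaces --container-runtime-endpoint=/var/run/crio/crio.sock
containerRuntimeEndpoint: "unix:///var/run/crio/crio.sock"
# replaces --volume-plugin-dir
volumePluginDir: "/etc/kubernetes/kubelet-plugins/volume/exec"
# replaces --register-with-taints=node-role.kubernetes.io/master=:NoSchedule
registerWithTaints:
  - key: "node-role.kubernetes.io/master"
    effect: "NoSchedule"
```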
Jan 26 15:34:02 crc kubenswrapper[4896]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 26 15:34:02 crc kubenswrapper[4896]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.453925    4896 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461512    4896 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461537    4896 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461544    4896 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461550    4896 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461556    4896 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461561    4896 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461567    4896 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461572    4896 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461581    4896 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461604    4896 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461609    4896 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461614    4896 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461619    4896 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461624    4896 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461629    4896 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461634    4896 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461639    4896 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461644    4896 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461649    4896 feature_gate.go:330] unrecognized feature gate: Example
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461654    4896 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461659    4896 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461664    4896 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461668    4896 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461673    4896 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461678    4896 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461685    4896 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461689    4896 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461694    4896 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461699    4896 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461704    4896 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461709    4896 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461715    4896 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461721    4896 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461726    4896 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461732    4896 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461739    4896 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461745    4896 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461751    4896 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461757    4896 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461762    4896 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461768    4896 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461775    4896 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461780    4896 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461786    4896 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461791    4896 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461796    4896 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461801    4896 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461806    4896 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461812    4896 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461817    4896 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461822    4896 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461827    4896 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461831    4896 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461836    4896 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461841    4896 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461845    4896 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461850    4896 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461855    4896 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461860    4896 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461864    4896 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461869    4896 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461874    4896 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461878    4896 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461883    4896 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461887    4896 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461893    4896 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461897    4896 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461902    4896 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461907    4896 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461911    4896 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.461916    4896 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462009    4896 flags.go:64] FLAG: --address="0.0.0.0"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462020    4896 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462029    4896 flags.go:64] FLAG: --anonymous-auth="true"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462036    4896 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462043    4896 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462050    4896 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462057    4896 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462064    4896 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462070    4896 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462076    4896 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462082    4896 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462088    4896 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462094    4896 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462101    4896 flags.go:64] FLAG: --cgroup-root=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462107    4896 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462113    4896 flags.go:64] FLAG: --client-ca-file=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462118    4896 flags.go:64] FLAG: --cloud-config=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462124    4896 flags.go:64] FLAG: --cloud-provider=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462129    4896 flags.go:64] FLAG: --cluster-dns="[]"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462136    4896 flags.go:64] FLAG: --cluster-domain=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462142    4896 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462148    4896 flags.go:64] FLAG: --config-dir=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462153    4896 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462159    4896 flags.go:64] FLAG: --container-log-max-files="5"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462167    4896 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462172    4896 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462178    4896 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462184    4896 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462190    4896 flags.go:64] FLAG: --contention-profiling="false"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462196    4896 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462201    4896 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462207    4896 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462213    4896 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462220    4896 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462226    4896 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462231    4896 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462237    4896 flags.go:64] FLAG: --enable-load-reader="false"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462243    4896 flags.go:64] FLAG: --enable-server="true"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462248    4896 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462255    4896 flags.go:64] FLAG: --event-burst="100"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462261    4896 flags.go:64] FLAG: --event-qps="50"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462273    4896 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462279    4896 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462285    4896 flags.go:64] FLAG: --eviction-hard=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462292    4896 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462297    4896 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462304    4896 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462310    4896 flags.go:64] FLAG: --eviction-soft=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462315    4896 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462321    4896 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462326    4896 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462331    4896 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462337    4896 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462343    4896 flags.go:64] FLAG: --fail-swap-on="true"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462348    4896 flags.go:64] FLAG: --feature-gates=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462355    4896 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462360    4896 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462366    4896 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462372    4896 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462378    4896 flags.go:64] FLAG: --healthz-port="10248"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462383    4896 flags.go:64] FLAG: --help="false"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462389    4896 flags.go:64] FLAG: --hostname-override=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462395    4896 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462400    4896 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462407    4896 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462413    4896 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462420    4896 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462425    4896 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462431    4896 flags.go:64] FLAG: --image-service-endpoint=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462436    4896 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462442    4896 flags.go:64] FLAG: --kube-api-burst="100"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462447    4896 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462453    4896 flags.go:64] FLAG: --kube-api-qps="50"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462459    4896 flags.go:64] FLAG: --kube-reserved=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462465    4896 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462470    4896 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462476    4896 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462485    4896 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462491    4896 flags.go:64] FLAG: --lock-file=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462497    4896 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462502    4896 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462508    4896 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462516    4896 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462522    4896 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462528    4896 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462534    4896 flags.go:64] FLAG: --logging-format="text"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462540    4896 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462546    4896 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462552    4896 flags.go:64] FLAG: --manifest-url=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462557    4896 flags.go:64] FLAG: --manifest-url-header=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462565    4896 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462570    4896 flags.go:64] FLAG: --max-open-files="1000000"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462581    4896 flags.go:64] FLAG: --max-pods="110"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462603    4896 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462609    4896 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462615    4896 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462620    4896 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462626    4896 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462631    4896 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462637    4896 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462650    4896 flags.go:64] FLAG: --node-status-max-images="50"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462656    4896 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462661    4896 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462667    4896 flags.go:64] FLAG: --pod-cidr=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462673    4896 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462680    4896 flags.go:64] FLAG: --pod-manifest-path=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462686    4896 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462691    4896 flags.go:64] FLAG: --pods-per-core="0"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462697    4896 flags.go:64] FLAG: --port="10250"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462703    4896 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462709    4896 flags.go:64] FLAG: --provider-id=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462714    4896 flags.go:64] FLAG: --qos-reserved=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462719    4896 flags.go:64] FLAG: --read-only-port="10255"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462726    4896 flags.go:64] FLAG: --register-node="true"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462732    4896 flags.go:64] FLAG: --register-schedulable="true"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462737    4896 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462747    4896 flags.go:64] FLAG: --registry-burst="10"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462753    4896 flags.go:64] FLAG: --registry-qps="5"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462758    4896 flags.go:64] FLAG: --reserved-cpus=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462766    4896 flags.go:64] FLAG: --reserved-memory=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462772    4896 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462778    4896 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462784    4896 flags.go:64] FLAG: --rotate-certificates="false"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462789    4896 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462795    4896 flags.go:64] FLAG: --runonce="false"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462801    4896 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462806    4896 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462812    4896 flags.go:64] FLAG: --seccomp-default="false"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462818    4896 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462823    4896 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462829    4896 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462835    4896 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462841    4896 flags.go:64] FLAG: --storage-driver-password="root"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462846    4896 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462851    4896 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462857    4896 flags.go:64] FLAG: --storage-driver-user="root"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462863    4896 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462868    4896 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462874    4896 flags.go:64] FLAG: --system-cgroups=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462879    4896 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462888    4896 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462894    4896 flags.go:64] FLAG: --tls-cert-file=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462899    4896 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462906    4896 flags.go:64] FLAG: --tls-min-version=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462911    4896 flags.go:64] FLAG: --tls-private-key-file=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462917    4896 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462922    4896 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462928    4896 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462934    4896 flags.go:64] FLAG: --v="2"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462942    4896 flags.go:64] FLAG: --version="false"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462949    4896 flags.go:64] FLAG: --vmodule=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462956    4896 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.462962    4896 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463088    4896 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463094    4896 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463099    4896 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463106    4896 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463112    4896 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463117    4896 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463122    4896 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463126 4896 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463132 4896 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463136 4896 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463141 4896 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463146 4896 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463151 4896 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463156 4896 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463161 4896 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463165 4896 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463170 4896 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463175 4896 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463180 4896 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463185 4896 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463190 4896 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463195 4896 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463199 4896 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463204 4896 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463209 4896 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463214 4896 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463219 4896 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463224 4896 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463230 4896 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463235 4896 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463239 4896 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463245 4896 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463250 4896 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463255 4896 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463260 4896 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463265 4896 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463270 4896 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463277 4896 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463283 4896 feature_gate.go:330] unrecognized feature gate: Example
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463289 4896 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463295 4896 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463302 4896 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463308 4896 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463314 4896 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463319 4896 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463324 4896 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463330 4896 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463335 4896 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463340 4896 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463345 4896 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463351 4896 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463357 4896 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463362 4896 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463368 4896 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463374 4896 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463380 4896 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463385 4896 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463390 4896 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463394 4896 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463399 4896 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463440 4896 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463446 4896 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463451 4896 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463457 4896 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463464 4896 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463470 4896 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463475 4896 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463481 4896 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463487 4896 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463492 4896 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.463497 4896 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.463506 4896 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.473652 4896 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.473701 4896 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473795 4896 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473803 4896 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473807 4896 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473811 4896 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473815 4896 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473819 4896 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473823 4896 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473827 4896 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473830 4896 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473834 4896 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473838 4896 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473841 4896 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473845 4896 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473849 4896 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473852 4896 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473856 4896 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473859 4896 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473863 4896 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473866 4896 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473891 4896 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473896 4896 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473900 4896 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473904 4896 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473908 4896 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473912 4896 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473916 4896 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473919 4896 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473923 4896 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473926 4896 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473930 4896 feature_gate.go:330] unrecognized feature gate: Example
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473934 4896 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473940 4896 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473945 4896 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473950 4896 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473953 4896 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473957 4896 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473961 4896 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473965 4896 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473968 4896 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473972 4896 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473976 4896 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473979 4896 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473983 4896 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473986 4896 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473990 4896 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473994 4896 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.473999 4896 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474002 4896 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474009 4896 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474013 4896 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474017 4896 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474026 4896 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474033 4896 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474037 4896 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474043 4896 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474047 4896 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474051 4896 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474055 4896 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474058 4896 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474062 4896 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474065 4896 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474069 4896 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474072 4896 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474076 4896 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474080 4896 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474083 4896 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474088 4896 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474095 4896 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474099 4896 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474103 4896 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474108 4896 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 26 15:34:02 crc systemd[1]: Started Kubernetes Kubelet.
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.474116 4896 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474228 4896 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474235 4896 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474239 4896 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474243 4896 feature_gate.go:330] unrecognized feature gate: Example
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474247 4896 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474251 4896 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474254 4896 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474257 4896 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474261 4896 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474264 4896 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474268 4896 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474271 4896 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474275 4896 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474279 4896 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474282 4896 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474286 4896 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474289 4896 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474293 4896 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474296 4896 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474300 4896 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474303 4896 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474307 4896 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474310 4896 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474316 4896 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474320 4896 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474324 4896 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474327 4896 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474330 4896 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474334 4896 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474337 4896 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474341 4896 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474344 4896 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474348 4896 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474351 4896 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474354 4896 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474358 4896 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474362 4896 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474367 4896 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474371 4896 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474375 4896 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474379 4896 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474384 4896 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474387 4896 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474390 4896 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474394 4896 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474397 4896 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474401 4896 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474404 4896 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474408 4896 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474411 4896 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474416 4896 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474419 4896 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474423 4896 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474426 4896 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474430 4896 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474433 4896 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474439 4896 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474443 4896 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474448 4896 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474453 4896 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474457 4896 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474462 4896 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474466 4896 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474470 4896 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474474 4896 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474478 4896 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474481 4896 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474485 4896 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474489 4896 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474493 4896 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.474497 4896 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.474503 4896 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.475301 4896 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.479023 4896 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.479156 4896 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.479878 4896 server.go:997] "Starting client certificate rotation"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.479906 4896 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.480325 4896 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-08 16:36:07.609733075 +0000 UTC
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.480405 4896 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.488969 4896 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 26 15:34:02 crc kubenswrapper[4896]: E0126 15:34:02.491028 4896 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.494768 4896 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.502401 4896 log.go:25] "Validated CRI v1 runtime API"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.525714 4896 log.go:25] "Validated CRI v1 image API"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.527196 4896 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.530310 4896 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-26-15-29-02-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.530346 4896 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}]
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.549091 4896 manager.go:217] Machine: {Timestamp:2026-01-26 15:34:02.545106716 +0000 UTC m=+0.326987129 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:6ce3bfcf-cf26-46a6-add0-2b999cc5fad1 BootID:adc9c92c-63cf-439c-8587-8eafa1c0384d Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:f5:37:06 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:f5:37:06 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:1b:f0:ef Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:37:05:c9 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:d1:e5:8e Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:88:fc:18 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:92:da:7c:5c:73:e5 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:16:0a:26:21:f0:a6 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.549532 4896 manager_no_libpfm.go:29] cAdvisor is build without cgo
and/or libpfm support. Perf event counters are not available. Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.549884 4896 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.550720 4896 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.550957 4896 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.551000 4896 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":nul
l,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.551351 4896 topology_manager.go:138] "Creating topology manager with none policy" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.551362 4896 container_manager_linux.go:303] "Creating device plugin manager" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.551756 4896 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.551793 4896 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.551997 4896 state_mem.go:36] "Initialized new in-memory state store" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.552122 4896 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.555022 4896 kubelet.go:418] "Attempting to sync node with API server" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.559226 4896 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.559257 4896 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.559273 4896 kubelet.go:324] "Adding apiserver pod source" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.559285 4896 apiserver.go:42] "Waiting for node sync before watching apiserver pods" 
Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.562313 4896 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Jan 26 15:34:02 crc kubenswrapper[4896]: E0126 15:34:02.562388 4896 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.562574 4896 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.562313 4896 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Jan 26 15:34:02 crc kubenswrapper[4896]: E0126 15:34:02.562773 4896 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.563097 4896 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.563902 4896 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.565012 4896 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.565040 4896 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.565053 4896 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.565065 4896 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.565083 4896 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.565111 4896 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.565124 4896 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.565150 4896 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.565164 4896 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.565176 4896 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.565192 4896 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.565202 4896 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.565911 4896 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.568238 4896 server.go:1280] "Started kubelet" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.574739 4896 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.582761 4896 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.582947 4896 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.583449 4896 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.585949 4896 server.go:460] "Adding debug handlers to kubelet server" Jan 26 15:34:02 crc kubenswrapper[4896]: E0126 15:34:02.585026 4896 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.154:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e51cc8c0fe211 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 15:34:02.568213009 +0000 UTC m=+0.350093412,LastTimestamp:2026-01-26 15:34:02.568213009 +0000 UTC m=+0.350093412,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.590646 4896 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.590673 4896 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.590936 4896 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.591006 4896 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.591077 4896 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.590932 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 14:53:01.59668363 +0000 UTC Jan 26 15:34:02 crc kubenswrapper[4896]: E0126 15:34:02.590943 4896 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.591595 4896 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Jan 26 15:34:02 crc kubenswrapper[4896]: E0126 15:34:02.591684 4896 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" Jan 26 15:34:02 crc kubenswrapper[4896]: E0126 15:34:02.592815 4896 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="200ms" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.593042 4896 factory.go:55] Registering systemd factory Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.593065 4896 factory.go:221] Registration of the systemd container factory successfully Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.595773 4896 factory.go:153] Registering CRI-O factory Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.595830 4896 factory.go:221] Registration of the crio container factory successfully Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.595930 4896 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.595962 4896 factory.go:103] Registering Raw factory Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.595985 4896 manager.go:1196] Started watching for new ooms in manager Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.596890 4896 manager.go:319] Starting recovery of all containers Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.601147 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.601484 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" 
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.601501 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.601514 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.601529 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.601541 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.601556 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.601569 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.601603 4896 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.601619 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.601634 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.601648 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.601660 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.601674 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.601688 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.601701 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.601714 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.601726 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.601738 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.601752 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.601763 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.601795 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.601807 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.601821 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.601833 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.601845 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.601864 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.601879 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.601932 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.601945 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.601958 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.601970 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.601987 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" 
seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602001 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602014 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602026 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602038 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602051 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602064 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602078 4896 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602089 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602101 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602113 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602126 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602137 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602149 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602160 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602172 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602185 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602198 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602210 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602221 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602236 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602252 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602265 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602275 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602286 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602298 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602309 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602320 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602334 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602345 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602355 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602366 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602378 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602389 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602399 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602410 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602421 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602433 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602447 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602458 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602472 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602482 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602501 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602511 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602522 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602532 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602543 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602555 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602568 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602742 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602762 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602773 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602784 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602795 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602808 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602820 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602832 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602843 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602854 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.602867 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.603226 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.603262 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.603293 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.603308 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.603331 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.603353 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.603367 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.603386 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.603400 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.603414 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.603434 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.603447 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.603477 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.603499 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.603514 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.603534 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.603555 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.603569 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.603616 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.603639 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.603657 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.603675 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.604146 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.604169 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.604198 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.604218 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.604246 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.604262 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.604278 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.604299 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.604313 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.604332 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.604350 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.604398 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.604427 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.604443 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.604463 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.604477 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.604490 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.604511 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.604524 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.604544 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.604573 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.604605 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.604626 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.604642 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.604663 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.604679 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.604698 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.604716 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.604731 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.604751 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.604768 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.604785 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.604811 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.604825 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.604849 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.604865 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.604880 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.604901 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.604919 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.604941 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.604957 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.604972 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.604991 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.605006 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.605027 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.605080 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.605095 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.605119 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.605136 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.605149 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.605169 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.605184 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.605207 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.605222 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.605236 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"
volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.605258 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.605275 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.605295 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.605310 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.605323 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.605345 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.605359 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.605380 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.605394 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.605409 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.605429 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.605445 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" 
volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.605465 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.605482 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.605496 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.605519 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.605533 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.608988 4896 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.609035 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.609054 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.609105 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.609124 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.609138 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.609157 4896 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.609171 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.609185 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.609203 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.609217 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.609235 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.609253 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.609269 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.609287 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.609302 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.609319 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.609338 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.609353 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.609401 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.609416 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.609436 4896 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.609451 4896 reconstruct.go:97] "Volume reconstruction finished" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.609459 4896 reconciler.go:26] "Reconciler: start to sync state" Jan 26 15:34:02 crc kubenswrapper[4896]: E0126 15:34:02.724125 4896 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.727299 4896 manager.go:324] Recovery completed Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.741694 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.746116 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.746350 
4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.746450 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.747521 4896 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.747542 4896 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.747571 4896 state_mem.go:36] "Initialized new in-memory state store" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.756136 4896 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.757938 4896 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.758016 4896 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 26 15:34:02 crc kubenswrapper[4896]: I0126 15:34:02.758046 4896 kubelet.go:2335] "Starting kubelet main sync loop" Jan 26 15:34:02 crc kubenswrapper[4896]: E0126 15:34:02.758110 4896 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 26 15:34:02 crc kubenswrapper[4896]: W0126 15:34:02.758667 4896 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Jan 26 15:34:02 crc kubenswrapper[4896]: E0126 15:34:02.758737 4896 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" Jan 26 15:34:02 crc kubenswrapper[4896]: E0126 15:34:02.794274 4896 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="400ms" Jan 26 15:34:02 crc kubenswrapper[4896]: E0126 15:34:02.824367 4896 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 15:34:02 crc kubenswrapper[4896]: E0126 15:34:02.858442 4896 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 26 15:34:02 crc kubenswrapper[4896]: E0126 15:34:02.924973 4896 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 15:34:03 crc kubenswrapper[4896]: E0126 15:34:03.025814 4896 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 15:34:03 crc kubenswrapper[4896]: E0126 15:34:03.058863 4896 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 26 15:34:03 crc kubenswrapper[4896]: E0126 15:34:03.127014 4896 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 15:34:03 crc kubenswrapper[4896]: E0126 15:34:03.196100 4896 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="800ms" Jan 26 15:34:03 crc kubenswrapper[4896]: E0126 15:34:03.227227 4896 kubelet_node_status.go:503] "Error getting the 
current node from lister" err="node \"crc\" not found" Jan 26 15:34:03 crc kubenswrapper[4896]: E0126 15:34:03.328339 4896 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 15:34:03 crc kubenswrapper[4896]: E0126 15:34:03.429063 4896 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 15:34:03 crc kubenswrapper[4896]: I0126 15:34:03.449571 4896 policy_none.go:49] "None policy: Start" Jan 26 15:34:03 crc kubenswrapper[4896]: I0126 15:34:03.450864 4896 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 26 15:34:03 crc kubenswrapper[4896]: I0126 15:34:03.450945 4896 state_mem.go:35] "Initializing new in-memory state store" Jan 26 15:34:03 crc kubenswrapper[4896]: E0126 15:34:03.459144 4896 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 26 15:34:03 crc kubenswrapper[4896]: I0126 15:34:03.508703 4896 manager.go:334] "Starting Device Plugin manager" Jan 26 15:34:03 crc kubenswrapper[4896]: I0126 15:34:03.508760 4896 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 26 15:34:03 crc kubenswrapper[4896]: I0126 15:34:03.508774 4896 server.go:79] "Starting device plugin registration server" Jan 26 15:34:03 crc kubenswrapper[4896]: I0126 15:34:03.509202 4896 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 26 15:34:03 crc kubenswrapper[4896]: I0126 15:34:03.509226 4896 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 26 15:34:03 crc kubenswrapper[4896]: I0126 15:34:03.509898 4896 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 26 15:34:03 crc kubenswrapper[4896]: I0126 15:34:03.510048 4896 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 26 
15:34:03 crc kubenswrapper[4896]: I0126 15:34:03.510059 4896 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 26 15:34:03 crc kubenswrapper[4896]: E0126 15:34:03.517946 4896 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 15:34:03 crc kubenswrapper[4896]: I0126 15:34:03.576123 4896 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Jan 26 15:34:03 crc kubenswrapper[4896]: I0126 15:34:03.591260 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 18:11:48.09002951 +0000 UTC Jan 26 15:34:03 crc kubenswrapper[4896]: I0126 15:34:03.609509 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:03 crc kubenswrapper[4896]: I0126 15:34:03.610985 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:03 crc kubenswrapper[4896]: I0126 15:34:03.611024 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:03 crc kubenswrapper[4896]: I0126 15:34:03.611033 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:03 crc kubenswrapper[4896]: I0126 15:34:03.611061 4896 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 15:34:03 crc kubenswrapper[4896]: E0126 15:34:03.611755 4896 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.154:6443: connect: connection refused" node="crc" Jan 26 15:34:03 crc kubenswrapper[4896]: W0126 
15:34:03.702405 4896 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Jan 26 15:34:03 crc kubenswrapper[4896]: E0126 15:34:03.702570 4896 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" Jan 26 15:34:03 crc kubenswrapper[4896]: I0126 15:34:03.812263 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:03 crc kubenswrapper[4896]: I0126 15:34:03.814349 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:03 crc kubenswrapper[4896]: I0126 15:34:03.814422 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:03 crc kubenswrapper[4896]: I0126 15:34:03.814650 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:03 crc kubenswrapper[4896]: I0126 15:34:03.814695 4896 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 15:34:03 crc kubenswrapper[4896]: E0126 15:34:03.815424 4896 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.154:6443: connect: connection refused" node="crc" Jan 26 15:34:03 crc kubenswrapper[4896]: W0126 15:34:03.944094 4896 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Jan 26 15:34:03 crc kubenswrapper[4896]: E0126 15:34:03.944279 4896 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" Jan 26 15:34:03 crc kubenswrapper[4896]: E0126 15:34:03.997919 4896 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="1.6s" Jan 26 15:34:04 crc kubenswrapper[4896]: W0126 15:34:04.153034 4896 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Jan 26 15:34:04 crc kubenswrapper[4896]: E0126 15:34:04.153171 4896 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.216311 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.218088 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:04 crc 
kubenswrapper[4896]: I0126 15:34:04.218141 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.218162 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.218208 4896 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 15:34:04 crc kubenswrapper[4896]: E0126 15:34:04.219188 4896 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.154:6443: connect: connection refused" node="crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.259297 4896 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.259522 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.261267 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.261312 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.261324 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.261503 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 
15:34:04.262032 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.262084 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.262933 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.262961 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.262970 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.263116 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.263262 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.263296 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.264011 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.264082 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.264100 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.382488 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.382768 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.382842 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.382853 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.383105 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.383239 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.383259 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:04 crc kubenswrapper[4896]: W0126 15:34:04.383420 4896 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Jan 26 15:34:04 crc kubenswrapper[4896]: E0126 15:34:04.383613 4896 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.383780 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.383962 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.384083 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.385717 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.385768 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.385786 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.386036 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.386094 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.386101 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.386107 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.386119 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.386048 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.387163 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.387193 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.387204 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.387287 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.387317 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.387329 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.387734 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.387786 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.388454 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.388480 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.388489 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.548575 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.548825 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.548871 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.549170 4896 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.549290 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.549420 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.549475 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.549536 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.549599 4896 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.549664 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.549686 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.549703 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.549808 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.549839 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.549862 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.550556 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.576024 4896 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.591661 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 02:18:06.162164681 +0000 UTC Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.598394 4896 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 26 15:34:04 crc kubenswrapper[4896]: E0126 15:34:04.599711 4896 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.651491 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.651650 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.651690 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.651725 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.651762 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.651795 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.651804 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.651809 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.651826 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.651891 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.651913 4896 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.651816 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.651860 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.651932 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.651996 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.651998 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.651943 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.652076 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.652111 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.652134 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.652117 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.652197 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.652174 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.652142 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.652286 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.652317 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.652373 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.652519 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.689446 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.703540 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.722019 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.736220 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: W0126 15:34:04.761887 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-83b42d42ecfe248968c860c6cbddd005bb9f0c8ae33d8e42e816b50311dd9c14 WatchSource:0}: Error finding container 83b42d42ecfe248968c860c6cbddd005bb9f0c8ae33d8e42e816b50311dd9c14: Status 404 returned error can't find the container with id 83b42d42ecfe248968c860c6cbddd005bb9f0c8ae33d8e42e816b50311dd9c14 Jan 26 15:34:04 crc kubenswrapper[4896]: W0126 15:34:04.763915 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-c8a5a8bc5167787496ac77793720a0b48959181af7ed02811a1fc0eac8116051 WatchSource:0}: Error finding container c8a5a8bc5167787496ac77793720a0b48959181af7ed02811a1fc0eac8116051: Status 404 returned error can't find the container with id c8a5a8bc5167787496ac77793720a0b48959181af7ed02811a1fc0eac8116051 Jan 26 15:34:04 crc kubenswrapper[4896]: W0126 15:34:04.770604 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-2c957da81fb19d36021eef73cf38646530e01ca867216998c2126fa032564a43 WatchSource:0}: Error finding container 2c957da81fb19d36021eef73cf38646530e01ca867216998c2126fa032564a43: Status 404 returned error can't find the container with id 2c957da81fb19d36021eef73cf38646530e01ca867216998c2126fa032564a43 Jan 26 15:34:04 crc kubenswrapper[4896]: W0126 15:34:04.781700 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-8f446a2a40ffd05b565bd920335c23e18883c0007fa73f1a5a975de6c2dde102 
WatchSource:0}: Error finding container 8f446a2a40ffd05b565bd920335c23e18883c0007fa73f1a5a975de6c2dde102: Status 404 returned error can't find the container with id 8f446a2a40ffd05b565bd920335c23e18883c0007fa73f1a5a975de6c2dde102 Jan 26 15:34:04 crc kubenswrapper[4896]: I0126 15:34:04.852619 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 26 15:34:04 crc kubenswrapper[4896]: W0126 15:34:04.870968 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-8fde1dbd54b852bc008b0e4a68fe75b8c7efddeb97f0abc0c1c3838748f7d928 WatchSource:0}: Error finding container 8fde1dbd54b852bc008b0e4a68fe75b8c7efddeb97f0abc0c1c3838748f7d928: Status 404 returned error can't find the container with id 8fde1dbd54b852bc008b0e4a68fe75b8c7efddeb97f0abc0c1c3838748f7d928 Jan 26 15:34:05 crc kubenswrapper[4896]: I0126 15:34:05.019773 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:05 crc kubenswrapper[4896]: I0126 15:34:05.023495 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:05 crc kubenswrapper[4896]: I0126 15:34:05.023542 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:05 crc kubenswrapper[4896]: I0126 15:34:05.023555 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:05 crc kubenswrapper[4896]: I0126 15:34:05.023603 4896 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 15:34:05 crc kubenswrapper[4896]: E0126 15:34:05.024115 4896 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.154:6443: connect: connection 
refused" node="crc" Jan 26 15:34:05 crc kubenswrapper[4896]: W0126 15:34:05.424877 4896 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Jan 26 15:34:05 crc kubenswrapper[4896]: E0126 15:34:05.424957 4896 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" Jan 26 15:34:05 crc kubenswrapper[4896]: I0126 15:34:05.576374 4896 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Jan 26 15:34:05 crc kubenswrapper[4896]: I0126 15:34:05.591994 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 20:07:21.215108316 +0000 UTC Jan 26 15:34:05 crc kubenswrapper[4896]: E0126 15:34:05.598850 4896 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="3.2s" Jan 26 15:34:05 crc kubenswrapper[4896]: I0126 15:34:05.767255 4896 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="1bda36b1477e471a7ccf49ca2d8d6e8ae8b1248b9ca0c9ebfadeddfc8361ce99" exitCode=0 Jan 26 15:34:05 crc kubenswrapper[4896]: I0126 15:34:05.767338 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"1bda36b1477e471a7ccf49ca2d8d6e8ae8b1248b9ca0c9ebfadeddfc8361ce99"} Jan 26 15:34:05 crc kubenswrapper[4896]: I0126 15:34:05.767531 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2c957da81fb19d36021eef73cf38646530e01ca867216998c2126fa032564a43"} Jan 26 15:34:05 crc kubenswrapper[4896]: I0126 15:34:05.767717 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:05 crc kubenswrapper[4896]: I0126 15:34:05.772476 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:05 crc kubenswrapper[4896]: I0126 15:34:05.772519 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:05 crc kubenswrapper[4896]: I0126 15:34:05.772533 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:05 crc kubenswrapper[4896]: I0126 15:34:05.774225 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"07566f6d2a52a9395b03e0b759a1caccf5eaff6a1c17488e536ccbb81abdf683"} Jan 26 15:34:05 crc kubenswrapper[4896]: I0126 15:34:05.774263 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d"} Jan 26 15:34:05 crc kubenswrapper[4896]: I0126 15:34:05.774279 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c8a5a8bc5167787496ac77793720a0b48959181af7ed02811a1fc0eac8116051"} Jan 26 15:34:05 crc kubenswrapper[4896]: I0126 15:34:05.777448 4896 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021" exitCode=0 Jan 26 15:34:05 crc kubenswrapper[4896]: I0126 15:34:05.777520 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021"} Jan 26 15:34:05 crc kubenswrapper[4896]: I0126 15:34:05.777551 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"83b42d42ecfe248968c860c6cbddd005bb9f0c8ae33d8e42e816b50311dd9c14"} Jan 26 15:34:05 crc kubenswrapper[4896]: I0126 15:34:05.777707 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:05 crc kubenswrapper[4896]: I0126 15:34:05.778529 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:05 crc kubenswrapper[4896]: I0126 15:34:05.778559 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:05 crc kubenswrapper[4896]: I0126 15:34:05.778569 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:05 crc kubenswrapper[4896]: I0126 15:34:05.780181 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:05 crc kubenswrapper[4896]: I0126 
15:34:05.780256 4896 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="532e14883a0e6336a0dec0763ce9a7346d0b1e164cf66eb49d5d6213ca6f7458" exitCode=0 Jan 26 15:34:05 crc kubenswrapper[4896]: I0126 15:34:05.780298 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"532e14883a0e6336a0dec0763ce9a7346d0b1e164cf66eb49d5d6213ca6f7458"} Jan 26 15:34:05 crc kubenswrapper[4896]: I0126 15:34:05.780340 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8fde1dbd54b852bc008b0e4a68fe75b8c7efddeb97f0abc0c1c3838748f7d928"} Jan 26 15:34:05 crc kubenswrapper[4896]: I0126 15:34:05.780508 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:05 crc kubenswrapper[4896]: I0126 15:34:05.781285 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:05 crc kubenswrapper[4896]: I0126 15:34:05.781304 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:05 crc kubenswrapper[4896]: I0126 15:34:05.781312 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:05 crc kubenswrapper[4896]: I0126 15:34:05.782105 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:05 crc kubenswrapper[4896]: I0126 15:34:05.782123 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:05 crc kubenswrapper[4896]: I0126 15:34:05.782131 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 
15:34:05 crc kubenswrapper[4896]: I0126 15:34:05.783231 4896 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="1571b82bdb2146ea567601eba84a682772c095b380beb40b1692fc4aa54ba492" exitCode=0 Jan 26 15:34:05 crc kubenswrapper[4896]: I0126 15:34:05.783264 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"1571b82bdb2146ea567601eba84a682772c095b380beb40b1692fc4aa54ba492"} Jan 26 15:34:05 crc kubenswrapper[4896]: I0126 15:34:05.783285 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"8f446a2a40ffd05b565bd920335c23e18883c0007fa73f1a5a975de6c2dde102"} Jan 26 15:34:05 crc kubenswrapper[4896]: I0126 15:34:05.783352 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:05 crc kubenswrapper[4896]: I0126 15:34:05.785678 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:05 crc kubenswrapper[4896]: I0126 15:34:05.785737 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:05 crc kubenswrapper[4896]: I0126 15:34:05.785757 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:06 crc kubenswrapper[4896]: I0126 15:34:06.576049 4896 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Jan 26 15:34:06 crc kubenswrapper[4896]: W0126 15:34:06.578291 4896 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Jan 26 15:34:06 crc kubenswrapper[4896]: E0126 15:34:06.578456 4896 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" Jan 26 15:34:06 crc kubenswrapper[4896]: I0126 15:34:06.594596 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 13:37:09.6871272 +0000 UTC Jan 26 15:34:06 crc kubenswrapper[4896]: I0126 15:34:06.624798 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:06 crc kubenswrapper[4896]: I0126 15:34:06.626075 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:06 crc kubenswrapper[4896]: I0126 15:34:06.626108 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:06 crc kubenswrapper[4896]: I0126 15:34:06.626116 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:06 crc kubenswrapper[4896]: I0126 15:34:06.626136 4896 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 15:34:06 crc kubenswrapper[4896]: E0126 15:34:06.626649 4896 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.154:6443: connect: connection refused" node="crc" Jan 26 15:34:06 crc kubenswrapper[4896]: 
W0126 15:34:06.675911 4896 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Jan 26 15:34:06 crc kubenswrapper[4896]: E0126 15:34:06.675980 4896 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" Jan 26 15:34:06 crc kubenswrapper[4896]: I0126 15:34:06.789007 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e2b2b5ee1925b1757a952b907f462ef1a57ad4eb8d5c982cec773d9441734f14"} Jan 26 15:34:06 crc kubenswrapper[4896]: I0126 15:34:06.789451 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4c061d1bfd5c72108933d5679a19f46b22ac255228f478eb91087c8dacf666cb"} Jan 26 15:34:06 crc kubenswrapper[4896]: I0126 15:34:06.789469 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b1df0c37f97b6286fb28426cd8256db5ba87b97337962fa952ba3a5e8c9bf399"} Jan 26 15:34:06 crc kubenswrapper[4896]: I0126 15:34:06.790006 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:06 crc kubenswrapper[4896]: I0126 15:34:06.791171 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 15:34:06 crc kubenswrapper[4896]: I0126 15:34:06.791202 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:06 crc kubenswrapper[4896]: I0126 15:34:06.791214 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:06 crc kubenswrapper[4896]: I0126 15:34:06.792765 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fe26f12afeaf65aeadfc14051c732f0b408333e053d56510d2a5a64f4823bde1"} Jan 26 15:34:06 crc kubenswrapper[4896]: I0126 15:34:06.792805 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5ef4ea94d232dd91ce5b11d7f70742155c2978217895faecdbd060d4eac503b9"} Jan 26 15:34:06 crc kubenswrapper[4896]: I0126 15:34:06.792879 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:06 crc kubenswrapper[4896]: I0126 15:34:06.793486 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:06 crc kubenswrapper[4896]: I0126 15:34:06.793513 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:06 crc kubenswrapper[4896]: I0126 15:34:06.793522 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:06 crc kubenswrapper[4896]: I0126 15:34:06.795073 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a"} Jan 26 15:34:06 crc kubenswrapper[4896]: I0126 15:34:06.795126 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a"} Jan 26 15:34:06 crc kubenswrapper[4896]: I0126 15:34:06.795148 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64"} Jan 26 15:34:06 crc kubenswrapper[4896]: I0126 15:34:06.797166 4896 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a529aaa29ae21bdab2df567a1f2bff5e2e8273d5aa9c642907c999dcb077b1d0" exitCode=0 Jan 26 15:34:06 crc kubenswrapper[4896]: I0126 15:34:06.797222 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a529aaa29ae21bdab2df567a1f2bff5e2e8273d5aa9c642907c999dcb077b1d0"} Jan 26 15:34:06 crc kubenswrapper[4896]: I0126 15:34:06.797326 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:06 crc kubenswrapper[4896]: I0126 15:34:06.798096 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:06 crc kubenswrapper[4896]: I0126 15:34:06.798120 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:06 crc kubenswrapper[4896]: I0126 15:34:06.798129 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 26 15:34:06 crc kubenswrapper[4896]: I0126 15:34:06.799392 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"6f8a1554a2edf53cb6ac26eb535f0ecf2557dfe251f6517f7aa8661283e6ad61"} Jan 26 15:34:06 crc kubenswrapper[4896]: I0126 15:34:06.799448 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:06 crc kubenswrapper[4896]: I0126 15:34:06.800108 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:06 crc kubenswrapper[4896]: I0126 15:34:06.800210 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:06 crc kubenswrapper[4896]: I0126 15:34:06.800375 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:06 crc kubenswrapper[4896]: W0126 15:34:06.987534 4896 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Jan 26 15:34:06 crc kubenswrapper[4896]: E0126 15:34:06.987619 4896 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" Jan 26 15:34:07 crc kubenswrapper[4896]: E0126 15:34:07.037341 4896 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.154:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e51cc8c0fe211 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 15:34:02.568213009 +0000 UTC m=+0.350093412,LastTimestamp:2026-01-26 15:34:02.568213009 +0000 UTC m=+0.350093412,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 15:34:07 crc kubenswrapper[4896]: I0126 15:34:07.595783 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 18:54:56.385378007 +0000 UTC Jan 26 15:34:07 crc kubenswrapper[4896]: I0126 15:34:07.807207 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79"} Jan 26 15:34:07 crc kubenswrapper[4896]: I0126 15:34:07.807258 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21"} Jan 26 15:34:07 crc kubenswrapper[4896]: I0126 15:34:07.807361 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:07 crc kubenswrapper[4896]: I0126 15:34:07.808507 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:07 crc kubenswrapper[4896]: I0126 15:34:07.808534 
4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:07 crc kubenswrapper[4896]: I0126 15:34:07.808547 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:07 crc kubenswrapper[4896]: I0126 15:34:07.811347 4896 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="42feef0144651000175d410cbfa359bf193c633df74f174391b207e6f594ea9d" exitCode=0 Jan 26 15:34:07 crc kubenswrapper[4896]: I0126 15:34:07.811414 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"42feef0144651000175d410cbfa359bf193c633df74f174391b207e6f594ea9d"} Jan 26 15:34:07 crc kubenswrapper[4896]: I0126 15:34:07.811444 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:07 crc kubenswrapper[4896]: I0126 15:34:07.811615 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:07 crc kubenswrapper[4896]: I0126 15:34:07.812649 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:07 crc kubenswrapper[4896]: I0126 15:34:07.812741 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:07 crc kubenswrapper[4896]: I0126 15:34:07.812776 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:07 crc kubenswrapper[4896]: I0126 15:34:07.812857 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:07 crc kubenswrapper[4896]: I0126 15:34:07.812906 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 15:34:07 crc kubenswrapper[4896]: I0126 15:34:07.812926 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:08 crc kubenswrapper[4896]: I0126 15:34:08.596557 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 13:29:03.032847497 +0000 UTC Jan 26 15:34:08 crc kubenswrapper[4896]: I0126 15:34:08.819311 4896 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 15:34:08 crc kubenswrapper[4896]: I0126 15:34:08.819399 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:08 crc kubenswrapper[4896]: I0126 15:34:08.819671 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4bb4b26156eebe104fa7d48c28ed4a08235b86559e08a00f0ad0309dbe50b33c"} Jan 26 15:34:08 crc kubenswrapper[4896]: I0126 15:34:08.819731 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"25fbe9a2849497daf60146732051caa58ee0bea6d8f1cc7c9997290c5e382c9b"} Jan 26 15:34:08 crc kubenswrapper[4896]: I0126 15:34:08.820765 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:08 crc kubenswrapper[4896]: I0126 15:34:08.820828 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:08 crc kubenswrapper[4896]: I0126 15:34:08.820853 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:08 crc kubenswrapper[4896]: I0126 15:34:08.902118 4896 certificate_manager.go:356] 
kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 26 15:34:08 crc kubenswrapper[4896]: I0126 15:34:08.961854 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 15:34:08 crc kubenswrapper[4896]: I0126 15:34:08.962087 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:08 crc kubenswrapper[4896]: I0126 15:34:08.963657 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:08 crc kubenswrapper[4896]: I0126 15:34:08.963724 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:08 crc kubenswrapper[4896]: I0126 15:34:08.963749 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:09 crc kubenswrapper[4896]: I0126 15:34:09.597376 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 11:52:59.000043243 +0000 UTC Jan 26 15:34:09 crc kubenswrapper[4896]: I0126 15:34:09.657801 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 15:34:09 crc kubenswrapper[4896]: I0126 15:34:09.826005 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"13a8a120178ee8138e55bda65d5961982be475b4869c84dd87ccbbb6323ce323"} Jan 26 15:34:09 crc kubenswrapper[4896]: I0126 15:34:09.826061 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:09 crc kubenswrapper[4896]: I0126 15:34:09.826065 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"85ad7e201f7fe5178266b227227936ded00706faac9aed3a761171442dde253a"} Jan 26 15:34:09 crc kubenswrapper[4896]: I0126 15:34:09.826189 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"03cf245537deb1adf1d9428c2540f5d05fd11fc83b7bbb7e3d589ccbe72a403e"} Jan 26 15:34:09 crc kubenswrapper[4896]: I0126 15:34:09.826211 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:09 crc kubenswrapper[4896]: I0126 15:34:09.826731 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:09 crc kubenswrapper[4896]: I0126 15:34:09.826908 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:09 crc kubenswrapper[4896]: I0126 15:34:09.826939 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:09 crc kubenswrapper[4896]: I0126 15:34:09.826948 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:09 crc kubenswrapper[4896]: I0126 15:34:09.827779 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:09 crc kubenswrapper[4896]: I0126 15:34:09.827805 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:09 crc kubenswrapper[4896]: I0126 15:34:09.827814 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:09 crc kubenswrapper[4896]: I0126 15:34:09.827831 4896 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 15:34:09 crc 
kubenswrapper[4896]: I0126 15:34:09.827812 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:09 crc kubenswrapper[4896]: I0126 15:34:09.827915 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:09 crc kubenswrapper[4896]: I0126 15:34:09.827926 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:09 crc kubenswrapper[4896]: I0126 15:34:09.880337 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 26 15:34:10 crc kubenswrapper[4896]: I0126 15:34:10.598042 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 08:54:18.428288563 +0000 UTC Jan 26 15:34:10 crc kubenswrapper[4896]: I0126 15:34:10.828160 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:10 crc kubenswrapper[4896]: I0126 15:34:10.831741 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:10 crc kubenswrapper[4896]: I0126 15:34:10.831865 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:10 crc kubenswrapper[4896]: I0126 15:34:10.831889 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:11 crc kubenswrapper[4896]: I0126 15:34:11.347359 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 26 15:34:11 crc kubenswrapper[4896]: I0126 15:34:11.598908 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 02:57:05.143293042 +0000 UTC 
Jan 26 15:34:11 crc kubenswrapper[4896]: I0126 15:34:11.830831 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:11 crc kubenswrapper[4896]: I0126 15:34:11.831785 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:11 crc kubenswrapper[4896]: I0126 15:34:11.831821 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:11 crc kubenswrapper[4896]: I0126 15:34:11.831831 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:12 crc kubenswrapper[4896]: I0126 15:34:12.111137 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 15:34:12 crc kubenswrapper[4896]: I0126 15:34:12.111394 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:12 crc kubenswrapper[4896]: I0126 15:34:12.113302 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:12 crc kubenswrapper[4896]: I0126 15:34:12.113361 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:12 crc kubenswrapper[4896]: I0126 15:34:12.113375 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:12 crc kubenswrapper[4896]: I0126 15:34:12.116430 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 15:34:12 crc kubenswrapper[4896]: I0126 15:34:12.235908 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:34:12 crc kubenswrapper[4896]: 
I0126 15:34:12.236149 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:12 crc kubenswrapper[4896]: I0126 15:34:12.237492 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:12 crc kubenswrapper[4896]: I0126 15:34:12.237561 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:12 crc kubenswrapper[4896]: I0126 15:34:12.237622 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:12 crc kubenswrapper[4896]: I0126 15:34:12.261527 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:34:12 crc kubenswrapper[4896]: I0126 15:34:12.451831 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:34:12 crc kubenswrapper[4896]: I0126 15:34:12.599075 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 06:52:32.271159603 +0000 UTC Jan 26 15:34:12 crc kubenswrapper[4896]: I0126 15:34:12.833565 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:12 crc kubenswrapper[4896]: I0126 15:34:12.833721 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:12 crc kubenswrapper[4896]: I0126 15:34:12.833751 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:12 crc kubenswrapper[4896]: I0126 15:34:12.834523 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:12 crc kubenswrapper[4896]: I0126 15:34:12.834553 4896 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:12 crc kubenswrapper[4896]: I0126 15:34:12.834562 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:12 crc kubenswrapper[4896]: I0126 15:34:12.835428 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:12 crc kubenswrapper[4896]: I0126 15:34:12.835459 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:12 crc kubenswrapper[4896]: I0126 15:34:12.835468 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:12 crc kubenswrapper[4896]: I0126 15:34:12.835427 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:12 crc kubenswrapper[4896]: I0126 15:34:12.835524 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:12 crc kubenswrapper[4896]: I0126 15:34:12.835550 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:13 crc kubenswrapper[4896]: E0126 15:34:13.518095 4896 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 15:34:13 crc kubenswrapper[4896]: I0126 15:34:13.600197 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 07:37:22.342831966 +0000 UTC Jan 26 15:34:13 crc kubenswrapper[4896]: I0126 15:34:13.660686 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 15:34:13 crc kubenswrapper[4896]: 
I0126 15:34:13.836376 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:13 crc kubenswrapper[4896]: I0126 15:34:13.836409 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:13 crc kubenswrapper[4896]: I0126 15:34:13.837900 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:13 crc kubenswrapper[4896]: I0126 15:34:13.837974 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:13 crc kubenswrapper[4896]: I0126 15:34:13.837998 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:13 crc kubenswrapper[4896]: I0126 15:34:13.838146 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:13 crc kubenswrapper[4896]: I0126 15:34:13.838219 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:13 crc kubenswrapper[4896]: I0126 15:34:13.838233 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:14 crc kubenswrapper[4896]: I0126 15:34:14.600926 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 03:39:45.464901436 +0000 UTC Jan 26 15:34:15 crc kubenswrapper[4896]: I0126 15:34:15.489827 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 15:34:15 crc kubenswrapper[4896]: I0126 15:34:15.490269 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:15 crc kubenswrapper[4896]: I0126 15:34:15.491807 4896 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:15 crc kubenswrapper[4896]: I0126 15:34:15.491877 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:15 crc kubenswrapper[4896]: I0126 15:34:15.491901 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:15 crc kubenswrapper[4896]: I0126 15:34:15.601280 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 07:55:22.898524762 +0000 UTC Jan 26 15:34:16 crc kubenswrapper[4896]: I0126 15:34:16.601998 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 03:31:33.20915866 +0000 UTC Jan 26 15:34:16 crc kubenswrapper[4896]: I0126 15:34:16.661259 4896 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 15:34:16 crc kubenswrapper[4896]: I0126 15:34:16.661502 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 15:34:17 crc kubenswrapper[4896]: I0126 15:34:17.578022 4896 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with 
statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 26 15:34:17 crc kubenswrapper[4896]: I0126 15:34:17.578093 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 26 15:34:17 crc kubenswrapper[4896]: I0126 15:34:17.590678 4896 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 26 15:34:17 crc kubenswrapper[4896]: I0126 15:34:17.590757 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 26 15:34:17 crc kubenswrapper[4896]: I0126 15:34:17.603497 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 21:09:23.855484711 +0000 UTC Jan 26 15:34:18 crc kubenswrapper[4896]: I0126 15:34:18.608565 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 22:02:14.448268011 +0000 UTC Jan 26 15:34:19 crc kubenswrapper[4896]: I0126 15:34:19.609399 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline 
is 2025-11-07 09:20:40.363261561 +0000 UTC Jan 26 15:34:19 crc kubenswrapper[4896]: I0126 15:34:19.662500 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 15:34:19 crc kubenswrapper[4896]: I0126 15:34:19.662658 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:19 crc kubenswrapper[4896]: I0126 15:34:19.663862 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:19 crc kubenswrapper[4896]: I0126 15:34:19.664500 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:19 crc kubenswrapper[4896]: I0126 15:34:19.664549 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:20 crc kubenswrapper[4896]: I0126 15:34:20.610251 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 15:52:39.641094264 +0000 UTC Jan 26 15:34:21 crc kubenswrapper[4896]: I0126 15:34:21.279108 4896 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 26 15:34:21 crc kubenswrapper[4896]: I0126 15:34:21.279188 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 26 15:34:21 crc kubenswrapper[4896]: I0126 15:34:21.380294 4896 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 26 15:34:21 crc kubenswrapper[4896]: I0126 15:34:21.380535 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:21 crc kubenswrapper[4896]: I0126 15:34:21.381997 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:21 crc kubenswrapper[4896]: I0126 15:34:21.382082 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:21 crc kubenswrapper[4896]: I0126 15:34:21.382160 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:21 crc kubenswrapper[4896]: I0126 15:34:21.396449 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 26 15:34:21 crc kubenswrapper[4896]: I0126 15:34:21.610518 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 08:36:39.415374596 +0000 UTC Jan 26 15:34:21 crc kubenswrapper[4896]: I0126 15:34:21.857258 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:21 crc kubenswrapper[4896]: I0126 15:34:21.858331 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:21 crc kubenswrapper[4896]: I0126 15:34:21.858412 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:21 crc kubenswrapper[4896]: I0126 15:34:21.858451 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:22 crc kubenswrapper[4896]: I0126 15:34:22.237105 4896 patch_prober.go:28] interesting pod/kube-apiserver-crc 
container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 26 15:34:22 crc kubenswrapper[4896]: I0126 15:34:22.237184 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 26 15:34:22 crc kubenswrapper[4896]: I0126 15:34:22.266194 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:34:22 crc kubenswrapper[4896]: I0126 15:34:22.266432 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:22 crc kubenswrapper[4896]: I0126 15:34:22.266697 4896 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 26 15:34:22 crc kubenswrapper[4896]: I0126 15:34:22.266751 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 26 15:34:22 crc kubenswrapper[4896]: I0126 15:34:22.267832 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:22 crc kubenswrapper[4896]: I0126 15:34:22.267894 4896 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:22 crc kubenswrapper[4896]: I0126 15:34:22.267908 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:22 crc kubenswrapper[4896]: I0126 15:34:22.272309 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:34:22 crc kubenswrapper[4896]: E0126 15:34:22.669874 4896 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 26 15:34:22 crc kubenswrapper[4896]: I0126 15:34:22.669968 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 03:16:25.13214664 +0000 UTC Jan 26 15:34:22 crc kubenswrapper[4896]: I0126 15:34:22.688685 4896 trace.go:236] Trace[1962806060]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 15:34:10.800) (total time: 11888ms): Jan 26 15:34:22 crc kubenswrapper[4896]: Trace[1962806060]: ---"Objects listed" error: 11888ms (15:34:22.688) Jan 26 15:34:22 crc kubenswrapper[4896]: Trace[1962806060]: [11.888418558s] [11.888418558s] END Jan 26 15:34:22 crc kubenswrapper[4896]: I0126 15:34:22.688925 4896 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 26 15:34:22 crc kubenswrapper[4896]: I0126 15:34:22.689141 4896 trace.go:236] Trace[178538227]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 15:34:11.318) (total time: 11370ms): Jan 26 15:34:22 crc kubenswrapper[4896]: Trace[178538227]: ---"Objects listed" error: 11370ms (15:34:22.689) Jan 26 15:34:22 crc kubenswrapper[4896]: Trace[178538227]: 
[11.37045556s] [11.37045556s] END Jan 26 15:34:22 crc kubenswrapper[4896]: I0126 15:34:22.689258 4896 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 26 15:34:22 crc kubenswrapper[4896]: E0126 15:34:22.689469 4896 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 26 15:34:22 crc kubenswrapper[4896]: I0126 15:34:22.690200 4896 trace.go:236] Trace[31719567]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 15:34:09.836) (total time: 12853ms): Jan 26 15:34:22 crc kubenswrapper[4896]: Trace[31719567]: ---"Objects listed" error: 12853ms (15:34:22.690) Jan 26 15:34:22 crc kubenswrapper[4896]: Trace[31719567]: [12.853720097s] [12.853720097s] END Jan 26 15:34:22 crc kubenswrapper[4896]: I0126 15:34:22.690334 4896 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 26 15:34:22 crc kubenswrapper[4896]: I0126 15:34:22.695266 4896 trace.go:236] Trace[1065666975]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 15:34:12.646) (total time: 10048ms): Jan 26 15:34:22 crc kubenswrapper[4896]: Trace[1065666975]: ---"Objects listed" error: 10048ms (15:34:22.695) Jan 26 15:34:22 crc kubenswrapper[4896]: Trace[1065666975]: [10.048836119s] [10.048836119s] END Jan 26 15:34:22 crc kubenswrapper[4896]: I0126 15:34:22.695297 4896 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 26 15:34:22 crc kubenswrapper[4896]: I0126 15:34:22.704156 4896 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 26 15:34:22 crc kubenswrapper[4896]: I0126 15:34:22.711967 4896 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 26 15:34:22 crc 
kubenswrapper[4896]: I0126 15:34:22.749887 4896 csr.go:261] certificate signing request csr-tllk6 is approved, waiting to be issued Jan 26 15:34:22 crc kubenswrapper[4896]: I0126 15:34:22.756377 4896 csr.go:257] certificate signing request csr-tllk6 is issued Jan 26 15:34:22 crc kubenswrapper[4896]: I0126 15:34:22.977131 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.574690 4896 apiserver.go:52] "Watching apiserver" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.577694 4896 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.577998 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"] Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.578317 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.578341 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 15:34:23 crc kubenswrapper[4896]: E0126 15:34:23.578371 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.578498 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:23 crc kubenswrapper[4896]: E0126 15:34:23.578525 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.579008 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.579530 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.579631 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:23 crc kubenswrapper[4896]: E0126 15:34:23.579868 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.580899 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.581510 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.582020 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.582182 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.582313 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.582495 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.582678 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.582726 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.583239 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.591929 4896 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 26 15:34:23 crc kubenswrapper[4896]: 
I0126 15:34:23.611008 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.611057 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.611084 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.611105 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.611127 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.611148 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: 
\"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.611170 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.611190 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.611211 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.611235 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.611258 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 
15:34:23.611278 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.611297 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.611318 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.611338 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.611357 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.611378 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" 
(UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.611402 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.611425 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.611447 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.611476 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.611498 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.611526 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" 
(UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.611560 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.611620 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.611651 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.611672 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.611729 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 
15:34:23.611750 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.611771 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.611791 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.611812 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.611815 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.611863 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.611875 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.611886 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.611909 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.611958 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: 
\"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.611982 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.612012 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.612060 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.612082 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.612102 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.612103 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.612124 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.612144 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.612168 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.612190 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.612211 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.612210 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.612236 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.612282 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.612708 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.612805 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.612928 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.612960 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.613099 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.613108 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.613175 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.613239 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.613276 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.613435 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.613430 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.613479 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.613495 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.613523 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.613724 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.613800 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.613822 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.613995 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.614111 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.614161 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.614865 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.615013 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.615020 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.615036 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.615188 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.615265 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.615287 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.615507 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.612231 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.615649 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.615688 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.615719 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.615737 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.615750 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.615765 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.615809 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.615820 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.615859 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.615920 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.616016 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.616049 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.616078 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" 
(UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.616112 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.616143 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.616181 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.616191 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.616210 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.616242 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.616271 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.616301 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.616331 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.616364 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.616397 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.616431 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.616463 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.616492 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.616521 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: 
\"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.616551 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.616564 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.616609 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.616642 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.616672 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 15:34:23 crc kubenswrapper[4896]: 
I0126 15:34:23.616703 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.616733 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.616794 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.616826 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.616856 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.616886 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.616918 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.616949 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.616981 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.617014 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.617048 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.617081 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.617114 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.617147 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.617180 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.617213 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.617247 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: 
\"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.617293 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.617326 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.617359 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.617390 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.617422 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.617454 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.617484 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.617516 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.617546 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.617607 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.617641 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 15:34:23 
crc kubenswrapper[4896]: I0126 15:34:23.617670 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.617753 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.617788 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.617887 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.617926 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.617961 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: 
\"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.617997 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.617009 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.619386 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.617422 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.617480 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.617549 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.619720 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.617550 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.619749 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.617727 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.619766 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.617754 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.618464 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.618500 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.618640 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.618835 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.618766 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.618878 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.618943 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.619386 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.617235 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.620024 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.620189 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.620313 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.620338 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.620391 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.620646 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.620808 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.620839 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.621301 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.621449 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.621850 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.621871 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.621928 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.621965 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.621998 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.622022 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.622030 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.622079 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.622089 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.622107 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.622131 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.622191 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.622214 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.622241 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.622343 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.622369 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.622393 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.622414 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.622428 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.622437 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.622459 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.622492 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.622517 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.622538 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.622559 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.622598 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.622763 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.622994 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.623153 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.623252 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.623292 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.623478 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.623700 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.623899 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.623940 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.624291 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.624791 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.625031 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.625277 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.625568 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.625648 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.626076 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.626186 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.626236 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.627684 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.627954 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.628318 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.629136 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.629170 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.629502 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.629698 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.629912 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.630200 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.630233 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.630253 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.630420 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.630546 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.630670 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.630770 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.630961 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.631073 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.631286 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.631484 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.631573 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.622626 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.631723 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.631751 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.631795 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.631856 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.631878 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.631887 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.631916 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.631945 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.631970 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632002 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632030 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632054 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632077 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632147 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632174 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632202 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632237 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632267 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632291 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632299 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632312 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632335 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632359 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: E0126 15:34:23.632371 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:34:24.132355122 +0000 UTC m=+21.914235515 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632389 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632408 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632424 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632439 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632454 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632472 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632488 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632516 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632532 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632547 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632565 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632605 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632634 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632650 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632666 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632682 4896 reconciler_common.go:159]
"operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632699 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632725 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632740 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632755 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632701 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: 
"a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632770 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632811 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632842 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632859 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632874 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632891 4896 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632906 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632900 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632922 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.632940 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.633000 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.633037 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.633081 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.633112 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.633116 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.633250 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.633401 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.633424 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.633438 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.633487 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.633513 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.633571 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.633622 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.633628 4896 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.633651 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.633768 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.633797 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.633846 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.633872 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") 
pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.633897 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.633945 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634014 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634024 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634055 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634104 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634139 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634159 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: 
\"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634168 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634188 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634220 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634230 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634249 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634275 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634297 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634320 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634343 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") 
pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634367 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634390 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634543 4896 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634559 4896 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634593 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634608 4896 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634620 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634631 4896 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634642 4896 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634652 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634665 4896 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634675 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634685 4896 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc 
kubenswrapper[4896]: I0126 15:34:23.634694 4896 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634705 4896 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634720 4896 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634732 4896 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634742 4896 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634752 4896 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634777 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634790 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: 
\"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634801 4896 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634812 4896 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634824 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634836 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634848 4896 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634859 4896 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634870 4896 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 
15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634882 4896 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634893 4896 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634903 4896 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634914 4896 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634925 4896 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634938 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634951 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634963 4896 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634979 4896 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634991 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635005 4896 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635017 4896 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635029 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635041 4896 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635053 4896 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635065 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635076 4896 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635089 4896 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635106 4896 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635118 4896 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635129 4896 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635142 4896 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath 
\"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635153 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635165 4896 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635176 4896 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635188 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635199 4896 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635210 4896 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635221 4896 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635231 4896 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635243 4896 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635254 4896 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635265 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635276 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635287 4896 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635299 4896 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635312 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635325 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635336 4896 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635349 4896 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635364 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635376 4896 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635388 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635400 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: 
\"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635411 4896 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635423 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635434 4896 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635445 4896 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635455 4896 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635468 4896 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635482 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath 
\"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635495 4896 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635506 4896 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635516 4896 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635528 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635540 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635551 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635565 4896 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635594 4896 reconciler_common.go:293] "Volume detached for volume 
\"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635607 4896 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635620 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635633 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635645 4896 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635656 4896 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635668 4896 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635679 4896 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635690 4896 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635702 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635712 4896 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635731 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635743 4896 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635756 4896 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635773 4896 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") 
on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635785 4896 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635797 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635809 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635821 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635832 4896 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635845 4896 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635856 4896 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 
15:34:23.635867 4896 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635879 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635891 4896 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635902 4896 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635916 4896 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635929 4896 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635940 4896 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635952 4896 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635963 4896 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635975 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635987 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635999 4896 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.636010 4896 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.636021 4896 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.636031 4896 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node 
\"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.636042 4896 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.636054 4896 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.636066 4896 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.636079 4896 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.636090 4896 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.636435 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.639251 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: 
\"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.634294 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635463 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635802 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.635713 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.636931 4896 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.641976 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.643799 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.636001 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.636057 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.636549 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.637115 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.637442 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.637492 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.637731 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.637979 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.638028 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.638356 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: E0126 15:34:23.638451 4896 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.644868 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: E0126 15:34:23.644888 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:24.144733049 +0000 UTC m=+21.926613432 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.638784 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.638837 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.639074 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.639104 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.639313 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.639461 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: E0126 15:34:23.639695 4896 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 15:34:23 crc kubenswrapper[4896]: E0126 15:34:23.644990 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-26 15:34:24.144980676 +0000 UTC m=+21.926861139 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.639781 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.639829 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.640040 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.640072 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.640341 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.640445 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.641131 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.641478 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.641664 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.641664 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.641683 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.642040 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.642466 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.642612 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.642855 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.642568 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.643107 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.643521 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.645401 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.645555 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.646833 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.646315 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.648470 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.649946 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.651914 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.652228 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.653895 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.655019 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: E0126 15:34:23.659942 4896 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 15:34:23 crc kubenswrapper[4896]: E0126 15:34:23.659977 4896 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 15:34:23 crc kubenswrapper[4896]: E0126 15:34:23.659993 4896 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:23 crc kubenswrapper[4896]: E0126 15:34:23.660060 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:24.160038687 +0000 UTC m=+21.941919160 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.660552 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.660682 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.661164 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.661216 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.661799 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.661889 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.662940 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.667124 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.669114 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.669538 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.670223 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.670261 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 03:32:05.115737447 +0000 UTC Jan 26 15:34:23 crc kubenswrapper[4896]: E0126 15:34:23.671737 4896 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 15:34:23 crc kubenswrapper[4896]: E0126 15:34:23.671788 4896 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 15:34:23 crc kubenswrapper[4896]: E0126 15:34:23.671826 4896 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:23 crc kubenswrapper[4896]: E0126 15:34:23.671900 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:24.171873523 +0000 UTC m=+21.953753986 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.672548 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.673722 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.676642 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.676897 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.676954 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.677716 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.680089 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.680953 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.681886 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.682667 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.682760 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.683059 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.687449 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.687731 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.690087 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.692780 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.693316 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.697814 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.700106 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.701796 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.711547 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42ec8793-6e16-4368-84e3-9c3007499c92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:0
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.725450 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.738829 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.738938 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.738978 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.738987 4896 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.738996 4896 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.739006 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.739018 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.739053 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.739188 4896 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.739890 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740027 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740098 4896 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740111 4896 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740120 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740128 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740137 4896 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740145 4896 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740153 4896 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc 
kubenswrapper[4896]: I0126 15:34:23.740161 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740170 4896 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740179 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740188 4896 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740196 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740206 4896 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740214 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740221 4896 reconciler_common.go:293] "Volume detached for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740229 4896 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740237 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740245 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740254 4896 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740263 4896 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740273 4896 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740284 4896 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on 
node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740295 4896 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740306 4896 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740315 4896 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740322 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740330 4896 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740338 4896 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740347 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 26 
15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740355 4896 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740363 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740371 4896 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740382 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740392 4896 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740403 4896 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740413 4896 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740423 4896 reconciler_common.go:293] "Volume 
detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740431 4896 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740439 4896 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740448 4896 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740456 4896 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740465 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740473 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740482 4896 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740491 4896 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740500 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740508 4896 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740517 4896 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740527 4896 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740536 4896 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740544 4896 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath 
\"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740553 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740562 4896 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740570 4896 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740600 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740616 4896 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740630 4896 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740639 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 
15:34:23.740648 4896 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740657 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740666 4896 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.740674 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.752143 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a110465b-91d9-4e70-ac2f-7e804c58b445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07566f6d2a52a9395b03e0b759a1caccf5eaff6a1c17488e536ccbb81abdf683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ef4ea94d232dd91ce5b11d7f70742155c2978217895faecdbd060d4eac503b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe26f12afeaf65aeadfc14051c732f0b408333e053d56510d2a5a64f4823bde1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.758188 4896 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-26 15:29:22 +0000 UTC, rotation deadline is 2026-10-13 12:01:27.675622662 +0000 UTC Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.758233 4896 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6236h27m3.917392123s for next certificate rotation Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.762949 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.773224 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.782897 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.794075 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42ec8793-6e16-4368-84e3-9c3007499c92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:0
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.805740 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.817196 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.829200 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 15:34:23 crc kubenswrapper[4896]: E0126 15:34:23.867199 4896 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.891382 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.903135 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 15:34:23 crc kubenswrapper[4896]: I0126 15:34:23.909050 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 15:34:23 crc kubenswrapper[4896]: W0126 15:34:23.911258 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-58f9f98dc99644228464b98ea63bb0c0a4a861a63a51582ea1148620e12fe87f WatchSource:0}: Error finding container 58f9f98dc99644228464b98ea63bb0c0a4a861a63a51582ea1148620e12fe87f: Status 404 returned error can't find the container with id 58f9f98dc99644228464b98ea63bb0c0a4a861a63a51582ea1148620e12fe87f Jan 26 15:34:23 crc kubenswrapper[4896]: W0126 15:34:23.918140 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-31c6ea92dfc623748a1fcf9ff8a17cb7fcf80c7a6437a5d656fb263ea44dc21f WatchSource:0}: Error finding container 31c6ea92dfc623748a1fcf9ff8a17cb7fcf80c7a6437a5d656fb263ea44dc21f: Status 404 returned error can't find the container with id 31c6ea92dfc623748a1fcf9ff8a17cb7fcf80c7a6437a5d656fb263ea44dc21f Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.143664 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:34:24 crc 
kubenswrapper[4896]: E0126 15:34:24.143895 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:34:25.143864175 +0000 UTC m=+22.925744578 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.244882 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.244945 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.244967 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod 
\"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.245009 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:24 crc kubenswrapper[4896]: E0126 15:34:24.245121 4896 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 15:34:24 crc kubenswrapper[4896]: E0126 15:34:24.245184 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:25.245171592 +0000 UTC m=+23.027051985 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 15:34:24 crc kubenswrapper[4896]: E0126 15:34:24.245238 4896 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 15:34:24 crc kubenswrapper[4896]: E0126 15:34:24.245255 4896 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 15:34:24 crc kubenswrapper[4896]: E0126 15:34:24.245262 4896 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 15:34:24 crc kubenswrapper[4896]: E0126 15:34:24.245273 4896 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 15:34:24 crc kubenswrapper[4896]: E0126 15:34:24.245291 4896 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:24 crc kubenswrapper[4896]: E0126 15:34:24.245321 4896 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 15:34:24 crc kubenswrapper[4896]: E0126 15:34:24.245332 4896 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:25.245324925 +0000 UTC m=+23.027205318 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:24 crc kubenswrapper[4896]: E0126 15:34:24.245266 4896 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:24 crc kubenswrapper[4896]: E0126 15:34:24.245348 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:25.245340536 +0000 UTC m=+23.027220929 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 15:34:24 crc kubenswrapper[4896]: E0126 15:34:24.245359 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:25.245354386 +0000 UTC m=+23.027234779 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.670908 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 01:28:01.983537442 +0000 UTC Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.692256 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-6scjz"] Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.692806 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-6scjz" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.695250 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.695763 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.695989 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.748982 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hmth\" (UniqueName: \"kubernetes.io/projected/afbe83ed-0fcd-48ca-b184-7c0fb7fda819-kube-api-access-8hmth\") pod \"node-resolver-6scjz\" (UID: \"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\") " pod="openshift-dns/node-resolver-6scjz" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.749048 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/afbe83ed-0fcd-48ca-b184-7c0fb7fda819-hosts-file\") pod \"node-resolver-6scjz\" (UID: \"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\") " pod="openshift-dns/node-resolver-6scjz" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.761545 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:24 crc kubenswrapper[4896]: E0126 15:34:24.761678 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.761743 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:24 crc kubenswrapper[4896]: E0126 15:34:24.761798 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.762062 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a110465b-91d9-4e70-ac2f-7e804c58b445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07566f6d2a52a9395b03e0b759a1caccf5eaff6a1c17488e536ccbb81abdf683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ef4ea94d232dd91ce5b11d7f70742155c2978217895faecdbd060d4eac503b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe26f12afeaf65aeadfc14051c732f0b408333e053d56510d2a5a64f4823bde1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.763364 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.764047 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.765488 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.766388 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.767680 4896 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.768488 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.769333 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.770543 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.771336 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.777878 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.778461 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.781921 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.782533 4896 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.783900 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.785110 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.786046 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.787541 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.788187 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.789038 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.792516 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.793497 4896 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.794819 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.795515 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.797251 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.797830 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.798552 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.804514 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.804988 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.805550 4896 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.805905 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.806011 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.806430 4896 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.806527 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.807784 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 26 
15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.808373 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.808764 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.809839 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.810419 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.813180 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.813874 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.815019 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.815464 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 26 
15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.816725 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.817347 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.819170 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.819991 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.821297 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.821916 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.822848 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.823366 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 26 
15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.823874 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.824109 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.824334 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.824839 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.825439 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.827571 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 
15:34:24.836171 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.846781 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6scjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6scjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.849487 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hmth\" (UniqueName: \"kubernetes.io/projected/afbe83ed-0fcd-48ca-b184-7c0fb7fda819-kube-api-access-8hmth\") pod \"node-resolver-6scjz\" (UID: \"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\") " pod="openshift-dns/node-resolver-6scjz" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.849529 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/afbe83ed-0fcd-48ca-b184-7c0fb7fda819-hosts-file\") pod \"node-resolver-6scjz\" (UID: \"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\") " pod="openshift-dns/node-resolver-6scjz" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.849606 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/afbe83ed-0fcd-48ca-b184-7c0fb7fda819-hosts-file\") pod \"node-resolver-6scjz\" (UID: \"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\") " pod="openshift-dns/node-resolver-6scjz" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.859542 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42ec8793-6e16-4368-84e3-9c3007499c92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b3
35e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.868896 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"aada299986480ba3bfa9aa9cfe46bd872ed0a103e9ea37702ded749f32d20db6"} Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.870548 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hmth\" (UniqueName: \"kubernetes.io/projected/afbe83ed-0fcd-48ca-b184-7c0fb7fda819-kube-api-access-8hmth\") pod \"node-resolver-6scjz\" (UID: \"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\") " pod="openshift-dns/node-resolver-6scjz" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.871414 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3"} Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.871482 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"888e118ba95f9e18734df91b182870684554ae1e715e117eb3c12d2229a919ad"} Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.871494 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"31c6ea92dfc623748a1fcf9ff8a17cb7fcf80c7a6437a5d656fb263ea44dc21f"} Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.873198 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"14000ba2479d1ec77f9f59b70d6d25df8bceef937950e7402df8a276502e60cc"} Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.873234 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"58f9f98dc99644228464b98ea63bb0c0a4a861a63a51582ea1148620e12fe87f"} Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.877505 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.892441 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.910557 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.923286 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42ec8793-6e16-4368-84e3-9c3007499c92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:0
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.935236 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14000ba2479d1ec77f9f59b70d6d25df8bceef937950e7402df8a276502e60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.947001 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77325
7453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888e118ba95f9e18734df91b182870684554ae1e715e117eb3c12d2229a919ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-26T15:34:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.958141 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.968727 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a110465b-91d9-4e70-ac2f-7e804c58b445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07566f6d2a52a9395b03e0b759a1caccf5eaff6a1c17488e536ccbb81abdf683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ef4ea94d232dd91ce5b11d7f70742155c2978217895faecdbd060d4eac503b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe26f12afeaf65aeadfc14051c732f0b408333e053d56510d2a5a64f4823bde1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.980624 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:24 crc kubenswrapper[4896]: I0126 15:34:24.996013 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.012478 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-6scjz" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.072479 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:25Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:25 crc kubenswrapper[4896]: W0126 15:34:25.087617 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafbe83ed_0fcd_48ca_b184_7c0fb7fda819.slice/crio-64a5f9b9237297f3c422f6a290b79e03569c2a9f064a2eb81fe73e5e42ab6317 WatchSource:0}: Error finding container 64a5f9b9237297f3c422f6a290b79e03569c2a9f064a2eb81fe73e5e42ab6317: Status 404 returned error can't find the container with id 64a5f9b9237297f3c422f6a290b79e03569c2a9f064a2eb81fe73e5e42ab6317 Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.094367 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6scjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6scjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:25Z is after 2025-08-24T17:21:41Z"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.151932 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 15:34:25 crc kubenswrapper[4896]: E0126 15:34:25.152082 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:34:27.152056806 +0000 UTC m=+24.933937199 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.252537 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.252623 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.252660 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.252698 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 15:34:25 crc kubenswrapper[4896]: E0126 15:34:25.252710 4896 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 26 15:34:25 crc kubenswrapper[4896]: E0126 15:34:25.252736 4896 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 26 15:34:25 crc kubenswrapper[4896]: E0126 15:34:25.252736 4896 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 26 15:34:25 crc kubenswrapper[4896]: E0126 15:34:25.252749 4896 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 15:34:25 crc kubenswrapper[4896]: E0126 15:34:25.252770 4896 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 26 15:34:25 crc kubenswrapper[4896]: E0126 15:34:25.252795 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:27.252778909 +0000 UTC m=+25.034659302 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 15:34:25 crc kubenswrapper[4896]: E0126 15:34:25.252818 4896 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 26 15:34:25 crc kubenswrapper[4896]: E0126 15:34:25.252823 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:27.252806459 +0000 UTC m=+25.034686852 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 26 15:34:25 crc kubenswrapper[4896]: E0126 15:34:25.252840 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:27.25283455 +0000 UTC m=+25.034714943 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 26 15:34:25 crc kubenswrapper[4896]: E0126 15:34:25.252841 4896 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 26 15:34:25 crc kubenswrapper[4896]: E0126 15:34:25.252858 4896 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 15:34:25 crc kubenswrapper[4896]: E0126 15:34:25.252913 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:27.252897811 +0000 UTC m=+25.034778214 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.488250 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-9nd8b"]
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.488890 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-9nd8b"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.490658 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.494422 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.494996 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-nrqhw"]
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.495301 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.496170 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.504283 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.505182 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.505852 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.509368 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.509546 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-hw55b"]
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.510323 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-hw55b"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.511956 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.511985 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gdszn"]
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.512983 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.521864 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.534012 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.534279 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.534429 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.534654 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.534811 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.534995 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.535100 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.535204 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.535619 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.549762 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.554575 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8c4023ce-9d03-491a-bbc6-d5afffb92f34-host-var-lib-kubelet\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.554635 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/17760139-6c26-4a89-a7ab-4e6a3d2cc516-cnibin\") pod \"multus-additional-cni-plugins-hw55b\" (UID: \"17760139-6c26-4a89-a7ab-4e6a3d2cc516\") " pod="openshift-multus/multus-additional-cni-plugins-hw55b"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.554717 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8c4023ce-9d03-491a-bbc6-d5afffb92f34-multus-cni-dir\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.554985 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9qnn\" (UniqueName: \"kubernetes.io/projected/17760139-6c26-4a89-a7ab-4e6a3d2cc516-kube-api-access-q9qnn\") pod \"multus-additional-cni-plugins-hw55b\" (UID: \"17760139-6c26-4a89-a7ab-4e6a3d2cc516\") " pod="openshift-multus/multus-additional-cni-plugins-hw55b"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.555030 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlc2j\" (UniqueName: \"kubernetes.io/projected/0eae0e2b-9d04-4999-b78c-c70aeee09235-kube-api-access-rlc2j\") pod \"machine-config-daemon-nrqhw\" (UID: \"0eae0e2b-9d04-4999-b78c-c70aeee09235\") " pod="openshift-machine-config-operator/machine-config-daemon-nrqhw"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.555059 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/17760139-6c26-4a89-a7ab-4e6a3d2cc516-cni-binary-copy\") pod \"multus-additional-cni-plugins-hw55b\" (UID: \"17760139-6c26-4a89-a7ab-4e6a3d2cc516\") " pod="openshift-multus/multus-additional-cni-plugins-hw55b"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.555095 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8c4023ce-9d03-491a-bbc6-d5afffb92f34-os-release\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.555136 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/8c4023ce-9d03-491a-bbc6-d5afffb92f34-multus-daemon-config\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.555164 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0eae0e2b-9d04-4999-b78c-c70aeee09235-mcd-auth-proxy-config\") pod \"machine-config-daemon-nrqhw\" (UID: \"0eae0e2b-9d04-4999-b78c-c70aeee09235\") " pod="openshift-machine-config-operator/machine-config-daemon-nrqhw"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.555219 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv4gq\" (UniqueName: \"kubernetes.io/projected/8c4023ce-9d03-491a-bbc6-d5afffb92f34-kube-api-access-nv4gq\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.555255 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/17760139-6c26-4a89-a7ab-4e6a3d2cc516-tuning-conf-dir\") pod \"multus-additional-cni-plugins-hw55b\" (UID: \"17760139-6c26-4a89-a7ab-4e6a3d2cc516\") " pod="openshift-multus/multus-additional-cni-plugins-hw55b"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.555289 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8c4023ce-9d03-491a-bbc6-d5afffb92f34-cnibin\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.555322 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8c4023ce-9d03-491a-bbc6-d5afffb92f34-host-run-netns\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.555366 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c4023ce-9d03-491a-bbc6-d5afffb92f34-etc-kubernetes\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.555409 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/8c4023ce-9d03-491a-bbc6-d5afffb92f34-host-run-k8s-cni-cncf-io\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.555446 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/8c4023ce-9d03-491a-bbc6-d5afffb92f34-multus-socket-dir-parent\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.555472 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8c4023ce-9d03-491a-bbc6-d5afffb92f34-multus-conf-dir\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.555500 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0eae0e2b-9d04-4999-b78c-c70aeee09235-rootfs\") pod \"machine-config-daemon-nrqhw\" (UID: \"0eae0e2b-9d04-4999-b78c-c70aeee09235\") " pod="openshift-machine-config-operator/machine-config-daemon-nrqhw"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.555523 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/17760139-6c26-4a89-a7ab-4e6a3d2cc516-os-release\") pod \"multus-additional-cni-plugins-hw55b\" (UID: \"17760139-6c26-4a89-a7ab-4e6a3d2cc516\") " pod="openshift-multus/multus-additional-cni-plugins-hw55b"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.555589 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8c4023ce-9d03-491a-bbc6-d5afffb92f34-host-var-lib-cni-bin\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.555614 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/8c4023ce-9d03-491a-bbc6-d5afffb92f34-host-run-multus-certs\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.555636 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8c4023ce-9d03-491a-bbc6-d5afffb92f34-system-cni-dir\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.555656 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/8c4023ce-9d03-491a-bbc6-d5afffb92f34-host-var-lib-cni-multus\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.555692 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0eae0e2b-9d04-4999-b78c-c70aeee09235-proxy-tls\") pod \"machine-config-daemon-nrqhw\" (UID: \"0eae0e2b-9d04-4999-b78c-c70aeee09235\") " pod="openshift-machine-config-operator/machine-config-daemon-nrqhw"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.555723 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/17760139-6c26-4a89-a7ab-4e6a3d2cc516-system-cni-dir\") pod \"multus-additional-cni-plugins-hw55b\" (UID: \"17760139-6c26-4a89-a7ab-4e6a3d2cc516\") " pod="openshift-multus/multus-additional-cni-plugins-hw55b"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.555763 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8c4023ce-9d03-491a-bbc6-d5afffb92f34-cni-binary-copy\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.555787 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/8c4023ce-9d03-491a-bbc6-d5afffb92f34-hostroot\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.555807 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/17760139-6c26-4a89-a7ab-4e6a3d2cc516-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-hw55b\" (UID: \"17760139-6c26-4a89-a7ab-4e6a3d2cc516\") " pod="openshift-multus/multus-additional-cni-plugins-hw55b"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.596376 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14000ba2479d1ec77f9f59b70d6d25df8bceef937950e7402df8a276502e60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:25Z is after 2025-08-24T17:21:41Z"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.627541 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://888e118ba95f9e18734df91b182870684554ae1e715e117eb3c12d2229a919ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:25Z is after 2025-08-24T17:21:41Z"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.644002 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:25Z is after 2025-08-24T17:21:41Z"
Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.656146 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42ec8793-6e16-4368-84e3-9c3007499c92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:0
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:25Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.656354 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.656394 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8c4023ce-9d03-491a-bbc6-d5afffb92f34-cni-binary-copy\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.656417 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-run-ovn\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.656436 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-host-cni-bin\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.656460 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/8c4023ce-9d03-491a-bbc6-d5afffb92f34-hostroot\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.656483 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/17760139-6c26-4a89-a7ab-4e6a3d2cc516-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-hw55b\" (UID: \"17760139-6c26-4a89-a7ab-4e6a3d2cc516\") " pod="openshift-multus/multus-additional-cni-plugins-hw55b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.656504 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-systemd-units\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.656527 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8c4023ce-9d03-491a-bbc6-d5afffb92f34-host-var-lib-kubelet\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.656543 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/8c4023ce-9d03-491a-bbc6-d5afffb92f34-hostroot\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 
15:34:25.656548 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/17760139-6c26-4a89-a7ab-4e6a3d2cc516-cnibin\") pod \"multus-additional-cni-plugins-hw55b\" (UID: \"17760139-6c26-4a89-a7ab-4e6a3d2cc516\") " pod="openshift-multus/multus-additional-cni-plugins-hw55b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.656569 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-ovn-node-metrics-cert\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.656593 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8c4023ce-9d03-491a-bbc6-d5afffb92f34-host-var-lib-kubelet\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.656615 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8c4023ce-9d03-491a-bbc6-d5afffb92f34-multus-cni-dir\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.656623 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/17760139-6c26-4a89-a7ab-4e6a3d2cc516-cnibin\") pod \"multus-additional-cni-plugins-hw55b\" (UID: \"17760139-6c26-4a89-a7ab-4e6a3d2cc516\") " pod="openshift-multus/multus-additional-cni-plugins-hw55b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.656662 4896 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-q9qnn\" (UniqueName: \"kubernetes.io/projected/17760139-6c26-4a89-a7ab-4e6a3d2cc516-kube-api-access-q9qnn\") pod \"multus-additional-cni-plugins-hw55b\" (UID: \"17760139-6c26-4a89-a7ab-4e6a3d2cc516\") " pod="openshift-multus/multus-additional-cni-plugins-hw55b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.656708 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-ovnkube-config\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.656726 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-ovnkube-script-lib\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.656747 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlc2j\" (UniqueName: \"kubernetes.io/projected/0eae0e2b-9d04-4999-b78c-c70aeee09235-kube-api-access-rlc2j\") pod \"machine-config-daemon-nrqhw\" (UID: \"0eae0e2b-9d04-4999-b78c-c70aeee09235\") " pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.656764 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/17760139-6c26-4a89-a7ab-4e6a3d2cc516-cni-binary-copy\") pod \"multus-additional-cni-plugins-hw55b\" (UID: \"17760139-6c26-4a89-a7ab-4e6a3d2cc516\") " pod="openshift-multus/multus-additional-cni-plugins-hw55b" Jan 26 15:34:25 
crc kubenswrapper[4896]: I0126 15:34:25.656780 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-node-log\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.656801 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-run-systemd\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.656817 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8c4023ce-9d03-491a-bbc6-d5afffb92f34-os-release\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.656836 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/8c4023ce-9d03-491a-bbc6-d5afffb92f34-multus-daemon-config\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.656896 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8c4023ce-9d03-491a-bbc6-d5afffb92f34-multus-cni-dir\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.656907 4896 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-host-run-netns\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.656955 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-env-overrides\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.656979 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0eae0e2b-9d04-4999-b78c-c70aeee09235-mcd-auth-proxy-config\") pod \"machine-config-daemon-nrqhw\" (UID: \"0eae0e2b-9d04-4999-b78c-c70aeee09235\") " pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.657000 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-log-socket\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.657022 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv4gq\" (UniqueName: \"kubernetes.io/projected/8c4023ce-9d03-491a-bbc6-d5afffb92f34-kube-api-access-nv4gq\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.657047 4896 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/17760139-6c26-4a89-a7ab-4e6a3d2cc516-tuning-conf-dir\") pod \"multus-additional-cni-plugins-hw55b\" (UID: \"17760139-6c26-4a89-a7ab-4e6a3d2cc516\") " pod="openshift-multus/multus-additional-cni-plugins-hw55b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.657065 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-host-kubelet\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.657081 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-host-cni-netd\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.657106 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8c4023ce-9d03-491a-bbc6-d5afffb92f34-cnibin\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.657120 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8c4023ce-9d03-491a-bbc6-d5afffb92f34-host-run-netns\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.657134 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c4023ce-9d03-491a-bbc6-d5afffb92f34-etc-kubernetes\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.657151 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-run-openvswitch\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.657166 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5jvk\" (UniqueName: \"kubernetes.io/projected/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-kube-api-access-b5jvk\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.657180 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8c4023ce-9d03-491a-bbc6-d5afffb92f34-cni-binary-copy\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.657198 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/8c4023ce-9d03-491a-bbc6-d5afffb92f34-host-run-k8s-cni-cncf-io\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.657221 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/8c4023ce-9d03-491a-bbc6-d5afffb92f34-host-run-k8s-cni-cncf-io\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.657226 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-host-slash\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.657230 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8c4023ce-9d03-491a-bbc6-d5afffb92f34-os-release\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.657251 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-host-run-ovn-kubernetes\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.657280 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c4023ce-9d03-491a-bbc6-d5afffb92f34-etc-kubernetes\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.657288 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8c4023ce-9d03-491a-bbc6-d5afffb92f34-cnibin\") pod \"multus-9nd8b\" (UID: 
\"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.657300 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8c4023ce-9d03-491a-bbc6-d5afffb92f34-host-run-netns\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.657310 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-var-lib-openvswitch\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.657335 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/8c4023ce-9d03-491a-bbc6-d5afffb92f34-multus-socket-dir-parent\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.657351 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8c4023ce-9d03-491a-bbc6-d5afffb92f34-multus-conf-dir\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.657367 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0eae0e2b-9d04-4999-b78c-c70aeee09235-rootfs\") pod \"machine-config-daemon-nrqhw\" (UID: \"0eae0e2b-9d04-4999-b78c-c70aeee09235\") " pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" Jan 
26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.657372 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/8c4023ce-9d03-491a-bbc6-d5afffb92f34-multus-daemon-config\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.657386 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/17760139-6c26-4a89-a7ab-4e6a3d2cc516-tuning-conf-dir\") pod \"multus-additional-cni-plugins-hw55b\" (UID: \"17760139-6c26-4a89-a7ab-4e6a3d2cc516\") " pod="openshift-multus/multus-additional-cni-plugins-hw55b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.657401 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/17760139-6c26-4a89-a7ab-4e6a3d2cc516-os-release\") pod \"multus-additional-cni-plugins-hw55b\" (UID: \"17760139-6c26-4a89-a7ab-4e6a3d2cc516\") " pod="openshift-multus/multus-additional-cni-plugins-hw55b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.657409 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0eae0e2b-9d04-4999-b78c-c70aeee09235-rootfs\") pod \"machine-config-daemon-nrqhw\" (UID: \"0eae0e2b-9d04-4999-b78c-c70aeee09235\") " pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.657436 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/8c4023ce-9d03-491a-bbc6-d5afffb92f34-multus-socket-dir-parent\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.657445 
4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8c4023ce-9d03-491a-bbc6-d5afffb92f34-multus-conf-dir\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.657475 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8c4023ce-9d03-491a-bbc6-d5afffb92f34-host-var-lib-cni-bin\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.657489 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/17760139-6c26-4a89-a7ab-4e6a3d2cc516-os-release\") pod \"multus-additional-cni-plugins-hw55b\" (UID: \"17760139-6c26-4a89-a7ab-4e6a3d2cc516\") " pod="openshift-multus/multus-additional-cni-plugins-hw55b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.657497 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8c4023ce-9d03-491a-bbc6-d5afffb92f34-host-var-lib-cni-bin\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.657520 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/8c4023ce-9d03-491a-bbc6-d5afffb92f34-host-run-multus-certs\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.657551 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/8c4023ce-9d03-491a-bbc6-d5afffb92f34-system-cni-dir\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.657592 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/8c4023ce-9d03-491a-bbc6-d5afffb92f34-host-var-lib-cni-multus\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.657620 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0eae0e2b-9d04-4999-b78c-c70aeee09235-proxy-tls\") pod \"machine-config-daemon-nrqhw\" (UID: \"0eae0e2b-9d04-4999-b78c-c70aeee09235\") " pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.657643 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/8c4023ce-9d03-491a-bbc6-d5afffb92f34-host-var-lib-cni-multus\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.657643 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/17760139-6c26-4a89-a7ab-4e6a3d2cc516-system-cni-dir\") pod \"multus-additional-cni-plugins-hw55b\" (UID: \"17760139-6c26-4a89-a7ab-4e6a3d2cc516\") " pod="openshift-multus/multus-additional-cni-plugins-hw55b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.657672 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/17760139-6c26-4a89-a7ab-4e6a3d2cc516-system-cni-dir\") pod \"multus-additional-cni-plugins-hw55b\" (UID: \"17760139-6c26-4a89-a7ab-4e6a3d2cc516\") " pod="openshift-multus/multus-additional-cni-plugins-hw55b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.657682 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8c4023ce-9d03-491a-bbc6-d5afffb92f34-system-cni-dir\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.657675 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-etc-openvswitch\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.657728 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0eae0e2b-9d04-4999-b78c-c70aeee09235-mcd-auth-proxy-config\") pod \"machine-config-daemon-nrqhw\" (UID: \"0eae0e2b-9d04-4999-b78c-c70aeee09235\") " pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.657624 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/8c4023ce-9d03-491a-bbc6-d5afffb92f34-host-run-multus-certs\") pod \"multus-9nd8b\" (UID: \"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.657956 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/17760139-6c26-4a89-a7ab-4e6a3d2cc516-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-hw55b\" (UID: \"17760139-6c26-4a89-a7ab-4e6a3d2cc516\") " pod="openshift-multus/multus-additional-cni-plugins-hw55b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.658107 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/17760139-6c26-4a89-a7ab-4e6a3d2cc516-cni-binary-copy\") pod \"multus-additional-cni-plugins-hw55b\" (UID: \"17760139-6c26-4a89-a7ab-4e6a3d2cc516\") " pod="openshift-multus/multus-additional-cni-plugins-hw55b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.667440 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0eae0e2b-9d04-4999-b78c-c70aeee09235-proxy-tls\") pod \"machine-config-daemon-nrqhw\" (UID: \"0eae0e2b-9d04-4999-b78c-c70aeee09235\") " pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.671317 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 23:16:22.932900997 +0000 UTC Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.675672 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlc2j\" (UniqueName: \"kubernetes.io/projected/0eae0e2b-9d04-4999-b78c-c70aeee09235-kube-api-access-rlc2j\") pod \"machine-config-daemon-nrqhw\" (UID: \"0eae0e2b-9d04-4999-b78c-c70aeee09235\") " pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.679932 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv4gq\" (UniqueName: \"kubernetes.io/projected/8c4023ce-9d03-491a-bbc6-d5afffb92f34-kube-api-access-nv4gq\") pod \"multus-9nd8b\" (UID: 
\"8c4023ce-9d03-491a-bbc6-d5afffb92f34\") " pod="openshift-multus/multus-9nd8b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.681773 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9qnn\" (UniqueName: \"kubernetes.io/projected/17760139-6c26-4a89-a7ab-4e6a3d2cc516-kube-api-access-q9qnn\") pod \"multus-additional-cni-plugins-hw55b\" (UID: \"17760139-6c26-4a89-a7ab-4e6a3d2cc516\") " pod="openshift-multus/multus-additional-cni-plugins-hw55b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.688281 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:25Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.699233 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:25Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.710852 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:25Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.722325 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6scjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6scjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:25Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.733907 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nd8b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c4023ce-9d03-491a-bbc6-d5afffb92f34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nv4gq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nd8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:25Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.745399 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a110465b-91d9-4e70-ac2f-7e804c58b445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07566f6d2a52a9395b03e0b759a1caccf5eaff6a1c17488e536ccbb81abdf683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ef4ea94d232dd91ce5b11d7f70742155c2978217895faecdbd060d4eac503b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe26f12afeaf65aeadfc14051c732f0b408333e053d56510d2a5a64f4823bde1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:25Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.758300 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.758401 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-etc-openvswitch\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.758431 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: E0126 15:34:25.758431 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.758448 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-run-ovn\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.758463 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-host-cni-bin\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.758477 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-systemd-units\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.758492 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-ovn-node-metrics-cert\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.758508 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-ovnkube-config\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.758522 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-ovnkube-script-lib\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.758544 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-node-log\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.758559 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-run-systemd\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.758593 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-env-overrides\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.758608 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-host-run-netns\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: 
I0126 15:34:25.758622 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-log-socket\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.758636 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-host-kubelet\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.758650 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-host-cni-netd\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.758665 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-run-openvswitch\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.758685 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5jvk\" (UniqueName: \"kubernetes.io/projected/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-kube-api-access-b5jvk\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.758698 4896 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-host-slash\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.758712 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-host-run-ovn-kubernetes\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.758728 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-var-lib-openvswitch\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.758772 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-var-lib-openvswitch\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.758806 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-run-systemd\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.758847 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-etc-openvswitch\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.758881 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.758914 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-run-ovn\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.758943 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-host-cni-bin\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.758972 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-systemd-units\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.759219 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-env-overrides\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.759252 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-host-run-netns\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.759272 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-log-socket\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.759293 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-host-kubelet\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.759504 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-host-cni-netd\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.759524 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-run-openvswitch\") pod \"ovnkube-node-gdszn\" (UID: 
\"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.759677 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-host-slash\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.759703 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-host-run-ovn-kubernetes\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.759727 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-node-log\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.760185 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-ovnkube-config\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.760327 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-ovnkube-script-lib\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc 
kubenswrapper[4896]: I0126 15:34:25.762614 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-ovn-node-metrics-cert\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.767856 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17760139-6c26-4a89-a7ab-4e6a3d2cc516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hw55b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:25Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.774202 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5jvk\" (UniqueName: \"kubernetes.io/projected/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-kube-api-access-b5jvk\") pod \"ovnkube-node-gdszn\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.780683 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42ec8793-6e16-4368-84e3-9c3007499c92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:0
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:25Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.790734 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14000ba2479d1ec77f9f59b70d6d25df8bceef937950e7402df8a276502e60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:25Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.801938 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:25Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.802417 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-9nd8b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.809169 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.812799 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:25Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:25 crc kubenswrapper[4896]: W0126 15:34:25.814719 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c4023ce_9d03_491a_bbc6_d5afffb92f34.slice/crio-4832ff901ca20b9eb2bb1fb71353f6de9d247d05d7ea2a2f1943a796b16eae95 WatchSource:0}: Error finding container 4832ff901ca20b9eb2bb1fb71353f6de9d247d05d7ea2a2f1943a796b16eae95: Status 404 returned error can't find the container with id 4832ff901ca20b9eb2bb1fb71353f6de9d247d05d7ea2a2f1943a796b16eae95 Jan 26 15:34:25 crc kubenswrapper[4896]: W0126 15:34:25.823163 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0eae0e2b_9d04_4999_b78c_c70aeee09235.slice/crio-cbb191fbaf2783b42a3634be4892f5dff0b56f73a65de8505fec9d3875b2d795 WatchSource:0}: Error finding container cbb191fbaf2783b42a3634be4892f5dff0b56f73a65de8505fec9d3875b2d795: Status 404 returned error can't find the container with id 
cbb191fbaf2783b42a3634be4892f5dff0b56f73a65de8505fec9d3875b2d795 Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.826009 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-hw55b" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.826007 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:25Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.834212 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.840744 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6scjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6scjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:25Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.856177 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nd8b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c4023ce-9d03-491a-bbc6-d5afffb92f34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nv4gq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nd8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:25Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.875693 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a110465b-91d9-4e70-ac2f-7e804c58b445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07566f6d2a52a9395b03e0b759a1caccf5eaff6a1c17488e536ccbb81abdf683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ef4ea94d232dd91ce5b11d7f70742155c2978217895faecdbd060d4eac503b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe26f12afeaf65aeadfc14051c732f0b408333e053d56510d2a5a64f4823bde1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:25Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.877383 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns/node-resolver-6scjz" event={"ID":"afbe83ed-0fcd-48ca-b184-7c0fb7fda819","Type":"ContainerStarted","Data":"4e9045598fc712efd551a21223c28ddfb8e1eec08598019d90140992164802d2"} Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.877434 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-6scjz" event={"ID":"afbe83ed-0fcd-48ca-b184-7c0fb7fda819","Type":"ContainerStarted","Data":"64a5f9b9237297f3c422f6a290b79e03569c2a9f064a2eb81fe73e5e42ab6317"} Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.892307 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerStarted","Data":"cbb191fbaf2783b42a3634be4892f5dff0b56f73a65de8505fec9d3875b2d795"} Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.897166 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" event={"ID":"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8","Type":"ContainerStarted","Data":"b69775f7b723ba75176fb53988b385d82913cef3d27db601904ac4035de2ee74"} Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.901959 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" event={"ID":"17760139-6c26-4a89-a7ab-4e6a3d2cc516","Type":"ContainerStarted","Data":"3b30c0707d8de6e3ca2155ba9a705cbe05fefe1fbbd6a78466d11a1ddc634d95"} Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.902902 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9nd8b" event={"ID":"8c4023ce-9d03-491a-bbc6-d5afffb92f34","Type":"ContainerStarted","Data":"4832ff901ca20b9eb2bb1fb71353f6de9d247d05d7ea2a2f1943a796b16eae95"} Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.940902 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:25Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:25 crc kubenswrapper[4896]: I0126 15:34:25.957259 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0eae0e2b-9d04-4999-b78c-c70aeee09235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nrqhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:25Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4896]: I0126 15:34:26.021802 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gdszn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:25Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4896]: I0126 15:34:26.039168 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888e118ba95f9e18734df91b182870684554ae1e715e117eb3c12d2229a919ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4896]: I0126 15:34:26.058480 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888e118ba95f9e18734df91b182870684554ae1e715e117eb3c12d2229a919ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4896]: I0126 15:34:26.072539 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4896]: I0126 15:34:26.085286 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0eae0e2b-9d04-4999-b78c-c70aeee09235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nrqhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4896]: I0126 15:34:26.105305 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gdszn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4896]: I0126 15:34:26.121900 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17760139-6c26-4a89-a7ab-4e6a3d2cc516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hw55b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4896]: I0126 15:34:26.137214 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42ec8793-6e16-4368-84e3-9c3007499c92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4896]: I0126 15:34:26.148668 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14000ba2479d1ec77f9f59b70d6d25df8bceef937950e7402df8a276502e60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4896]: I0126 15:34:26.169299 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a110465b-91d9-4e70-ac2f-7e804c58b445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07566f6d2a52a9395b03e0b759a1caccf5eaff6a1c17488e536ccbb81abdf683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ef4ea94d232dd91ce5b11d7f70742155c2978217895faecdbd060d4eac503b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe26f12afeaf65aeadfc14051c732f0b408333e053d56510d2a5a64f4823bde1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4896]: I0126 15:34:26.181354 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4896]: I0126 15:34:26.193763 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4896]: I0126 15:34:26.204984 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4896]: I0126 15:34:26.212569 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6scjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9045598fc712efd551a21223c28ddfb8e1eec08598019d90140992164802d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6scjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4896]: I0126 15:34:26.223302 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nd8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c4023ce-9d03-491a-bbc6-d5afffb92f34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nv4gq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nd8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4896]: I0126 15:34:26.672483 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 13:18:52.830230986 +0000 UTC Jan 26 15:34:26 crc kubenswrapper[4896]: I0126 15:34:26.761174 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:26 crc kubenswrapper[4896]: E0126 15:34:26.761299 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:26 crc kubenswrapper[4896]: I0126 15:34:26.762099 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:26 crc kubenswrapper[4896]: E0126 15:34:26.762160 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:26 crc kubenswrapper[4896]: I0126 15:34:26.907925 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerStarted","Data":"e28317b792a293f783a15979c5a9d6acd520f15b8796087a49b0ed98f69a8921"} Jan 26 15:34:26 crc kubenswrapper[4896]: I0126 15:34:26.907979 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerStarted","Data":"8fed1d8bacfa3bfc8b5c910ea870d72978016ab308a31c95d7f0e6d92321c939"} Jan 26 15:34:26 crc kubenswrapper[4896]: I0126 15:34:26.908983 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"89be9b4e464bc55d82f3a1ad5911e48bafd6841c1919cb6c81a1a5758f43e8e4"} Jan 26 15:34:26 crc kubenswrapper[4896]: I0126 15:34:26.910205 4896 generic.go:334] "Generic (PLEG): container finished" podID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerID="8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8" exitCode=0 Jan 26 15:34:26 crc kubenswrapper[4896]: I0126 15:34:26.910247 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" event={"ID":"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8","Type":"ContainerDied","Data":"8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8"} Jan 26 15:34:26 crc kubenswrapper[4896]: I0126 15:34:26.912352 4896 generic.go:334] "Generic (PLEG): container finished" podID="17760139-6c26-4a89-a7ab-4e6a3d2cc516" containerID="17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b" exitCode=0 Jan 26 15:34:26 crc kubenswrapper[4896]: I0126 15:34:26.912422 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" event={"ID":"17760139-6c26-4a89-a7ab-4e6a3d2cc516","Type":"ContainerDied","Data":"17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b"} Jan 26 15:34:26 crc kubenswrapper[4896]: I0126 15:34:26.913770 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9nd8b" event={"ID":"8c4023ce-9d03-491a-bbc6-d5afffb92f34","Type":"ContainerStarted","Data":"b5d897bdfadb589d224a8832ee5e76309be4d623122e94eb88a240bfd2362bed"} Jan 26 15:34:26 crc kubenswrapper[4896]: I0126 15:34:26.932099 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17760139-6c26-4a89-a7ab-4e6a3d2cc516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin 
routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"nam
e\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hw55b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4896]: I0126 15:34:26.945316 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42ec8793-6e16-4368-84e3-9c3007499c92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:0
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4896]: I0126 15:34:26.964650 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14000ba2479d1ec77f9f59b70d6d25df8bceef937950e7402df8a276502e60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4896]: I0126 15:34:26.978300 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:26 crc kubenswrapper[4896]: I0126 15:34:26.990393 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6scjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9045598fc712efd551a21223c28ddfb8e1eec08598019d90140992164802d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6scjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:26Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.003708 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nd8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c4023ce-9d03-491a-bbc6-d5afffb92f34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nv4gq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nd8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.019357 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a110465b-91d9-4e70-ac2f-7e804c58b445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07566f6d2a52a9395b03e0b759a1caccf5eaff6a1c17488e536ccbb81abdf683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ef4ea94d232dd91ce5b11d7f70742155c2978217895faecdbd060d4eac503b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe26f12afeaf65aeadfc14051c732f0b408333e053d56510d2a5a64f4823bde1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.034407 4896 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.047099 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.066805 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gdszn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.081135 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888e118ba95f9e18734df91b182870684554ae1e715e117eb3c12d2229a919ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.093040 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.108868 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0eae0e2b-9d04-4999-b78c-c70aeee09235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28317b792a293f783a15979c5a9d6acd520f15b8796087a49b0ed98f69a8921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fed1d8bacfa3bfc8b5c910ea870d72978016ab3
08a31c95d7f0e6d92321c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nrqhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.121494 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888e118ba95f9e18734df91b182870684554ae1e715e117eb3c12d2229a919ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.133891 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89be9b4e464bc55d82f3a1ad5911e48bafd6841c1919cb6c81a1a5758f43e8e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T15:34:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.146443 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0eae0e2b-9d04-4999-b78c-c70aeee09235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28317b792a293f783a15979c5a9d6acd520f15b8796087a49b0ed98f69a8921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fed1d8bacfa3bfc8b5c910ea870d72978016ab308a31c95d7f0e6d92321c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nrqhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.174367 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gdszn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.176785 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:34:27 crc kubenswrapper[4896]: E0126 15:34:27.177545 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:34:31.177524916 +0000 UTC m=+28.959405309 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.195705 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17760139-6c26-4a89-a7ab-4e6a3d2cc516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hw55b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.208172 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42ec8793-6e16-4368-84e3-9c3007499c92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:0
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.222174 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14000ba2479d1ec77f9f59b70d6d25df8bceef937950e7402df8a276502e60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.241119 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a110465b-91d9-4e70-ac2f-7e804c58b445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07566f6d2a52a9395b03e0b759a1caccf5eaff6a1c17488e536ccbb81abdf683\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ef4ea94d232dd91ce5b11d7f70742155c2978217895faecdbd060d4eac503b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe26f12afeaf65aeadfc14051c732f0b408333e053d56510d2a5a64f4823bde1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.263207 4896 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.278128 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.278495 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.278529 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.278618 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:27 crc kubenswrapper[4896]: E0126 15:34:27.278798 4896 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 15:34:27 crc kubenswrapper[4896]: E0126 15:34:27.278840 4896 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 15:34:27 crc kubenswrapper[4896]: E0126 15:34:27.278849 4896 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 15:34:27 crc kubenswrapper[4896]: E0126 15:34:27.278891 4896 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 15:34:27 crc kubenswrapper[4896]: E0126 15:34:27.278904 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-26 15:34:31.278880603 +0000 UTC m=+29.060761046 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 15:34:27 crc kubenswrapper[4896]: E0126 15:34:27.278908 4896 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:27 crc kubenswrapper[4896]: E0126 15:34:27.278961 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:31.278944105 +0000 UTC m=+29.060824608 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:27 crc kubenswrapper[4896]: E0126 15:34:27.278859 4896 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 15:34:27 crc kubenswrapper[4896]: E0126 15:34:27.278988 4896 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:27 crc kubenswrapper[4896]: E0126 15:34:27.279017 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:31.279009436 +0000 UTC m=+29.060889919 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:27 crc kubenswrapper[4896]: E0126 15:34:27.279234 4896 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 15:34:27 crc kubenswrapper[4896]: E0126 15:34:27.279361 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:31.279338924 +0000 UTC m=+29.061219307 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.281497 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.352976 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.395604 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6scjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9045598fc712efd551a21223c28ddfb8e1eec08598019d90140992164802d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6scjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.408695 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nd8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c4023ce-9d03-491a-bbc6-d5afffb92f34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d897bdfadb589d224a8832ee5e76309be4d623122e94eb88a240bfd2362bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nv4gq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nd8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.470832 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-bzzr5"] Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.471154 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-bzzr5" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.473295 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.473535 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.474717 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.475089 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.484999 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888e118ba95f9e18734df91b182870684554ae1e715e117eb3c12d2229a919ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.495740 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89be9b4e464bc55d82f3a1ad5911e48bafd6841c1919cb6c81a1a5758f43e8e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T15:34:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.507600 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0eae0e2b-9d04-4999-b78c-c70aeee09235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28317b792a293f783a15979c5a9d6acd520f15b8796087a49b0ed98f69a8921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fed1d8bacfa3bfc8b5c910ea870d72978016ab308a31c95d7f0e6d92321c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nrqhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.525961 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gdszn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.541864 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17760139-6c26-4a89-a7ab-4e6a3d2cc516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hw55b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.553152 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzzr5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76f90dd1-9706-47ef-b243-e24f185d0340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr2bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzzr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.566358 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14000ba2479d1ec77f9f59b70d6d25df8bceef937950e7402df8a276502e60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.580638 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/76f90dd1-9706-47ef-b243-e24f185d0340-serviceca\") pod \"node-ca-bzzr5\" (UID: \"76f90dd1-9706-47ef-b243-e24f185d0340\") " pod="openshift-image-registry/node-ca-bzzr5" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.580923 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/76f90dd1-9706-47ef-b243-e24f185d0340-host\") pod \"node-ca-bzzr5\" (UID: \"76f90dd1-9706-47ef-b243-e24f185d0340\") " pod="openshift-image-registry/node-ca-bzzr5" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.581130 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr2bb\" (UniqueName: \"kubernetes.io/projected/76f90dd1-9706-47ef-b243-e24f185d0340-kube-api-access-hr2bb\") pod \"node-ca-bzzr5\" (UID: \"76f90dd1-9706-47ef-b243-e24f185d0340\") " pod="openshift-image-registry/node-ca-bzzr5" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.581100 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42ec8793-6e16-4368-84e3-9c3007499c92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:0
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.592946 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.604500 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.616443 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.626252 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6scjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9045598fc712efd551a21223c28ddfb8e1eec08598019d90140992164802d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6scjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.641592 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nd8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c4023ce-9d03-491a-bbc6-d5afffb92f34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d897bdfadb589d224a8832ee5e76309be4d623122e94eb88a240bfd2362bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nv4gq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nd8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.654128 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a110465b-91d9-4e70-ac2f-7e804c58b445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07566f6d2a52a9395b03e0b759a1caccf5eaff6a1c17488e536ccbb81abdf683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-polic
y-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ef4ea94d232dd91ce5b11d7f70742155c2978217895faecdbd060d4eac503b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe26f12afeaf65aeadfc14051c732f0b408333e053d56510d2a5a64f4823bde1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.673272 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 19:58:42.168040414 +0000 UTC Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.681794 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-hr2bb\" (UniqueName: \"kubernetes.io/projected/76f90dd1-9706-47ef-b243-e24f185d0340-kube-api-access-hr2bb\") pod \"node-ca-bzzr5\" (UID: \"76f90dd1-9706-47ef-b243-e24f185d0340\") " pod="openshift-image-registry/node-ca-bzzr5" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.681948 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/76f90dd1-9706-47ef-b243-e24f185d0340-serviceca\") pod \"node-ca-bzzr5\" (UID: \"76f90dd1-9706-47ef-b243-e24f185d0340\") " pod="openshift-image-registry/node-ca-bzzr5" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.682112 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/76f90dd1-9706-47ef-b243-e24f185d0340-host\") pod \"node-ca-bzzr5\" (UID: \"76f90dd1-9706-47ef-b243-e24f185d0340\") " pod="openshift-image-registry/node-ca-bzzr5" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.682248 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/76f90dd1-9706-47ef-b243-e24f185d0340-host\") pod \"node-ca-bzzr5\" (UID: \"76f90dd1-9706-47ef-b243-e24f185d0340\") " pod="openshift-image-registry/node-ca-bzzr5" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.683417 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/76f90dd1-9706-47ef-b243-e24f185d0340-serviceca\") pod \"node-ca-bzzr5\" (UID: \"76f90dd1-9706-47ef-b243-e24f185d0340\") " pod="openshift-image-registry/node-ca-bzzr5" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.701485 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr2bb\" (UniqueName: \"kubernetes.io/projected/76f90dd1-9706-47ef-b243-e24f185d0340-kube-api-access-hr2bb\") pod \"node-ca-bzzr5\" (UID: \"76f90dd1-9706-47ef-b243-e24f185d0340\") " 
pod="openshift-image-registry/node-ca-bzzr5" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.758607 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:27 crc kubenswrapper[4896]: E0126 15:34:27.758744 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.812600 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-bzzr5" Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.960505 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" event={"ID":"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8","Type":"ContainerStarted","Data":"75a326550b388ea7e5eea65a62c945fe87ba4ee09b82f0ca590226d51db74a91"} Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.960555 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" event={"ID":"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8","Type":"ContainerStarted","Data":"f957437952e418fe12314db00c66884b604eaf77dbee831de77ee2a4e085c803"} Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.967084 4896 generic.go:334] "Generic (PLEG): container finished" podID="17760139-6c26-4a89-a7ab-4e6a3d2cc516" containerID="0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153" exitCode=0 Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.967137 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" 
event={"ID":"17760139-6c26-4a89-a7ab-4e6a3d2cc516","Type":"ContainerDied","Data":"0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153"} Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.969874 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-bzzr5" event={"ID":"76f90dd1-9706-47ef-b243-e24f185d0340","Type":"ContainerStarted","Data":"e8673101f6616af486766fae06b75f9213cfb470c3918b9feab03c393bb67793"} Jan 26 15:34:27 crc kubenswrapper[4896]: I0126 15:34:27.983660 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42ec8793-6e16-4368-84e3-9c3007499c92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a
8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:28 crc kubenswrapper[4896]: I0126 15:34:28.000087 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14000ba2479d1ec77f9f59b70d6d25df8bceef937950e7402df8a276502e60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:28 crc kubenswrapper[4896]: I0126 15:34:28.013303 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:28Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:28 crc kubenswrapper[4896]: I0126 15:34:28.025307 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:28Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:28 crc kubenswrapper[4896]: I0126 15:34:28.035916 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:28Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:28 crc kubenswrapper[4896]: I0126 15:34:28.048664 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6scjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9045598fc712efd551a21223c28ddfb8e1eec08598019d90140992164802d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6scjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:28Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:28 crc kubenswrapper[4896]: I0126 15:34:28.062350 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nd8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c4023ce-9d03-491a-bbc6-d5afffb92f34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d897bdfadb589d224a8832ee5e76309be4d623122e94eb88a240bfd2362bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nv4gq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nd8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:28Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:28 crc kubenswrapper[4896]: I0126 15:34:28.076567 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a110465b-91d9-4e70-ac2f-7e804c58b445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07566f6d2a52a9395b03e0b759a1caccf5eaff6a1c17488e536ccbb81abdf683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-polic
y-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ef4ea94d232dd91ce5b11d7f70742155c2978217895faecdbd060d4eac503b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe26f12afeaf65aeadfc14051c732f0b408333e053d56510d2a5a64f4823bde1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:28Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:28 crc kubenswrapper[4896]: I0126 15:34:28.096087 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89be9b4e464bc55d82f3a1ad5911e48bafd6841c1919cb6c81a1a5758f43e8e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T15:34:28Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:28 crc kubenswrapper[4896]: I0126 15:34:28.107751 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0eae0e2b-9d04-4999-b78c-c70aeee09235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28317b792a293f783a15979c5a9d6acd520f15b8796087a49b0ed98f69a8921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fed1d8bacfa3bfc8b5c910ea870d72978016ab308a31c95d7f0e6d92321c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nrqhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:28Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:28 crc kubenswrapper[4896]: I0126 15:34:28.125585 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gdszn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:28Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:28 crc kubenswrapper[4896]: I0126 15:34:28.150782 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888e118ba95f9e18734df91b182870684554ae1e715e117eb3c12d2229a919ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:28Z is after 2025-08-24T17:21:41Z" Jan 26 
15:34:28 crc kubenswrapper[4896]: I0126 15:34:28.159896 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzzr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76f90dd1-9706-47ef-b243-e24f185d0340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr2bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzzr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:28Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:28 crc kubenswrapper[4896]: I0126 15:34:28.178823 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17760139-6c26-4a89-a7ab-4e6a3d2cc516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hw55b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:28Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:28 crc kubenswrapper[4896]: I0126 15:34:28.675258 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 03:43:04.061698716 +0000 UTC Jan 26 15:34:28 crc kubenswrapper[4896]: I0126 15:34:28.759137 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:28 crc kubenswrapper[4896]: I0126 15:34:28.759162 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:28 crc kubenswrapper[4896]: E0126 15:34:28.759267 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:28 crc kubenswrapper[4896]: E0126 15:34:28.759417 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:28 crc kubenswrapper[4896]: I0126 15:34:28.975059 4896 generic.go:334] "Generic (PLEG): container finished" podID="17760139-6c26-4a89-a7ab-4e6a3d2cc516" containerID="87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8" exitCode=0 Jan 26 15:34:28 crc kubenswrapper[4896]: I0126 15:34:28.975116 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" event={"ID":"17760139-6c26-4a89-a7ab-4e6a3d2cc516","Type":"ContainerDied","Data":"87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8"} Jan 26 15:34:28 crc kubenswrapper[4896]: I0126 15:34:28.976908 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-bzzr5" event={"ID":"76f90dd1-9706-47ef-b243-e24f185d0340","Type":"ContainerStarted","Data":"490b3a9d324e3b07e4dd8f017414406c4a86d87092c9b931813d8b3c8f4586ba"} Jan 26 15:34:28 crc kubenswrapper[4896]: I0126 15:34:28.981661 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" event={"ID":"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8","Type":"ContainerStarted","Data":"a7bb5d0fd3d779d1861fdd69f46697e53173c508525fb96bb7c8825505e05e1d"} Jan 26 15:34:28 crc kubenswrapper[4896]: I0126 15:34:28.981713 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" event={"ID":"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8","Type":"ContainerStarted","Data":"67feca97cda454cd70acfad46a99dd5696618f8d1f1e3d887a0c32ae9b6a475f"} Jan 26 15:34:28 crc kubenswrapper[4896]: I0126 15:34:28.981733 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" event={"ID":"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8","Type":"ContainerStarted","Data":"13e5f096fb36bb92606a247123774c6155ae2811324579470faf1c04456da53f"} Jan 26 15:34:28 crc 
kubenswrapper[4896]: I0126 15:34:28.981748 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" event={"ID":"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8","Type":"ContainerStarted","Data":"406b020065f8bf0ba4a4cccd4acff46627b58f12033ca230665dbbf3a2a1e195"} Jan 26 15:34:28 crc kubenswrapper[4896]: I0126 15:34:28.989453 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888e118ba95f9e18734df91b182870684554ae1e715e117eb3c12d2229a919ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:28Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.004492 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89be9b4e464bc55d82f3a1ad5911e48bafd6841c1919cb6c81a1a5758f43e8e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T15:34:29Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.016969 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0eae0e2b-9d04-4999-b78c-c70aeee09235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28317b792a293f783a15979c5a9d6acd520f15b8796087a49b0ed98f69a8921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fed1d8bacfa3bfc8b5c910ea870d72978016ab308a31c95d7f0e6d92321c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nrqhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:29Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.036779 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gdszn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:29Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.053699 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17760139-6c26-4a89-a7ab-4e6a3d2cc516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f7
3e321e500461c05ed84eb120bb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hw55b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:29Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.065915 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzzr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76f90dd1-9706-47ef-b243-e24f185d0340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr2bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzzr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:29Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.079200 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42ec8793-6e16-4368-84e3-9c3007499c92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:0
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:29Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.090386 4896 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.091531 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14000ba2479d1ec77f9f59b70d6d25df8bceef937950e7402df8a276502e60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:29Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.092698 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.093121 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.093170 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.093290 4896 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.100086 4896 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.100375 4896 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.101487 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.101533 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.101542 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.101572 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.101600 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:29Z","lastTransitionTime":"2026-01-26T15:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.101922 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6scjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9045598fc712efd551a21223c28ddfb8e1eec08598019d90140992164802d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmth\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6scjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:29Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.115422 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nd8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c4023ce-9d03-491a-bbc6-d5afffb92f34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d897bdfadb589d224a8832ee5e76309be4d623122e94eb88a240bfd2362bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nv4gq\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nd8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:29Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:29 crc kubenswrapper[4896]: E0126 15:34:29.118084 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"adc9c92c-63cf-439c-8587-8eafa1c0384d\\\",\\\"systemUUID\\\":\\\"6ce3bfcf-cf26-46a6-add0-2b999cc5fad1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:29Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.121167 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.121198 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.121209 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.121232 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.121244 4896 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:29Z","lastTransitionTime":"2026-01-26T15:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.127618 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a110465b-91d9-4e70-ac2f-7e804c58b445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07566f6d2a52a9395b03e0b759a1caccf5eaff6a1c17488e536ccbb81abdf683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ef4ea94d232dd91ce5b11d7f70742155c2978217895faecdbd060d4eac503b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://fe26f12afeaf65aeadfc14051c732f0b408333e053d56510d2a5a64f4823bde1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:29Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:29 crc kubenswrapper[4896]: E0126 15:34:29.131700 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"adc9c92c-63cf-439c-8587-8eafa1c0384d\\\",\\\"systemUUID\\\":\\\"6ce3bfcf-cf26-46a6-add0-2b999cc5fad1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:29Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.134769 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.134796 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.134807 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.134822 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.134834 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:29Z","lastTransitionTime":"2026-01-26T15:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.141121 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:29Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:29 crc kubenswrapper[4896]: E0126 15:34:29.148850 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"adc9c92c-63cf-439c-8587-8eafa1c0384d\\\",\\\"systemUUID\\\":\\\"6ce3bfcf-cf26-46a6-add0-2b999cc5fad1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:29Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.152364 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.152393 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.152402 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.152417 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.152427 4896 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:29Z","lastTransitionTime":"2026-01-26T15:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.158425 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:29Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:29 crc kubenswrapper[4896]: E0126 15:34:29.161811 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"adc9c92c-63cf-439c-8587-8eafa1c0384d\\\",\\\"systemUUID\\\":\\\"6ce3bfcf-cf26-46a6-add0-2b999cc5fad1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:29Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.164069 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.164096 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.164104 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.164116 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.164125 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:29Z","lastTransitionTime":"2026-01-26T15:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.169325 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:29Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:29 crc kubenswrapper[4896]: E0126 15:34:29.175467 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"adc9c92c-63cf-439c-8587-8eafa1c0384d\\\",\\\"systemUUID\\\":\\\"6ce3bfcf-cf26-46a6-add0-2b999cc5fad1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:29Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:29 crc kubenswrapper[4896]: E0126 15:34:29.175648 4896 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.177362 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.177394 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.177403 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.177417 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.177428 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:29Z","lastTransitionTime":"2026-01-26T15:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.181266 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42ec8793-6e16-4368-84e3-9c3007499c92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:29Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.192611 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14000ba2479d1ec77f9f59b70d6d25df8bceef937950e7402df8a276502e60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:29Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.202643 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a110465b-91d9-4e70-ac2f-7e804c58b445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07566f6d2a52a9395b03e0b759a1caccf5eaff6a1c17488e536ccbb81abdf683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ef4ea94d232dd91ce5b11d7f70742155c2978217895faecdbd060d4eac503b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe26f12afeaf65aeadfc14051c732f0b408333e053d56510d2a5a64f4823bde1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:29Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.213194 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:29Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.223119 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:29Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.233193 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:29Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.246020 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6scjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9045598fc712efd551a21223c28ddfb8e1eec08598019d90140992164802d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6scjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:29Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.258942 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nd8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c4023ce-9d03-491a-bbc6-d5afffb92f34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d897bdfadb589d224a8832ee5e76309be4d623122e94eb88a240bfd2362bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nv4gq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nd8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:29Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.271717 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name
\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888e118ba95f9e18734df91b182870684554ae1e715e117eb3c12d2229a919ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:29Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.280029 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.280078 4896 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.280090 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.280105 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.280118 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:29Z","lastTransitionTime":"2026-01-26T15:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.283884 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89be9b4e464bc55d82f3a1ad5911e48bafd6841c1919cb6c81a1a5758f43e8e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T15:34:29Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.294933 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0eae0e2b-9d04-4999-b78c-c70aeee09235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28317b792a293f783a15979c5a9d6acd520f15b8796087a49b0ed98f69a8921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fed1d8bacfa3bfc8b5c910ea870d72978016ab308a31c95d7f0e6d92321c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nrqhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:29Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.311609 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gdszn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:29Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.326535 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17760139-6c26-4a89-a7ab-4e6a3d2cc516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f7
3e321e500461c05ed84eb120bb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hw55b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:29Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.336191 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzzr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76f90dd1-9706-47ef-b243-e24f185d0340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://490b3a9d324e3b07e4dd8f017414406c4a86d87092c9b931813d8b3c8f4586ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr2bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzzr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:29Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.382263 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.382302 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.382313 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.382328 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.382339 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:29Z","lastTransitionTime":"2026-01-26T15:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.484931 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.484992 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.485009 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.485033 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.485053 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:29Z","lastTransitionTime":"2026-01-26T15:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.586786 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.586816 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.586825 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.586837 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.586845 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:29Z","lastTransitionTime":"2026-01-26T15:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.675708 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 01:22:19.914212162 +0000 UTC Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.690266 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.690291 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.690300 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.690314 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.690322 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:29Z","lastTransitionTime":"2026-01-26T15:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.759131 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:29 crc kubenswrapper[4896]: E0126 15:34:29.759259 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.792598 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.792668 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.792682 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.792698 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.792710 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:29Z","lastTransitionTime":"2026-01-26T15:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.895136 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.895194 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.895210 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.895236 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.895253 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:29Z","lastTransitionTime":"2026-01-26T15:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.987822 4896 generic.go:334] "Generic (PLEG): container finished" podID="17760139-6c26-4a89-a7ab-4e6a3d2cc516" containerID="5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7" exitCode=0 Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.987888 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" event={"ID":"17760139-6c26-4a89-a7ab-4e6a3d2cc516","Type":"ContainerDied","Data":"5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7"} Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.997411 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.997471 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.997488 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.997512 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:29 crc kubenswrapper[4896]: I0126 15:34:29.997530 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:29Z","lastTransitionTime":"2026-01-26T15:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.003027 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:30Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.027828 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:30Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.067316 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6scjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9045598fc712efd551a21223c28ddfb8e1eec08598019d90140992164802d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6scjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:30Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.085962 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nd8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c4023ce-9d03-491a-bbc6-d5afffb92f34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d897bdfadb589d224a8832ee5e76309be4d623122e94eb88a240bfd2362bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nv4gq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nd8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:30Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.098406 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a110465b-91d9-4e70-ac2f-7e804c58b445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07566f6d2a52a9395b03e0b759a1caccf5eaff6a1c17488e536ccbb81abdf683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-polic
y-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ef4ea94d232dd91ce5b11d7f70742155c2978217895faecdbd060d4eac503b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe26f12afeaf65aeadfc14051c732f0b408333e053d56510d2a5a64f4823bde1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:30Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.099954 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.099983 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.099995 
4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.100011 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.100022 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:30Z","lastTransitionTime":"2026-01-26T15:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.112395 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:30Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.125619 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0eae0e2b-9d04-4999-b78c-c70aeee09235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28317b792a293f783a15979c5a9d6acd520f15b8796087a49b0ed98f69a8921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fed1d8bacfa3bfc8b5c910ea870d72978016ab3
08a31c95d7f0e6d92321c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nrqhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:30Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.147242 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gdszn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:30Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.161510 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888e118ba95f9e18734df91b182870684554ae1e715e117eb3c12d2229a919ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:30Z is after 2025-08-24T17:21:41Z" Jan 26 
15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.172923 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89be9b4e464bc55d82f3a1ad5911e48bafd6841c1919cb6c81a1a5758f43e8e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:30Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.185711 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17760139-6c26-4a89-a7ab-4e6a3d2cc516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-hw55b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:30Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.195120 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzzr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76f90dd1-9706-47ef-b243-e24f185d0340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://490b3a9d324e3b07e4dd8f017414406c4a86d87092c9b931813d8b3c8f4586ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr2bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzzr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:30Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.202185 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.202215 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.202226 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.202243 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.202254 4896 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:30Z","lastTransitionTime":"2026-01-26T15:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.209994 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42ec8793-6e16-4368-84e3-9c3007499c92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5
c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:30Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.222040 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14000ba2479d1ec77f9f59b70d6d25df8bceef937950e7402df8a276502e60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:30Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.306062 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.306111 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.306126 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.306147 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.306162 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:30Z","lastTransitionTime":"2026-01-26T15:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.410106 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.410173 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.410190 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.410218 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.410237 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:30Z","lastTransitionTime":"2026-01-26T15:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.512866 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.512907 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.512918 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.512933 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.512947 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:30Z","lastTransitionTime":"2026-01-26T15:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.616485 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.616550 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.616566 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.616619 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.616638 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:30Z","lastTransitionTime":"2026-01-26T15:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.676180 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 21:36:22.74379004 +0000 UTC Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.719443 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.719492 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.719505 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.719525 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.719539 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:30Z","lastTransitionTime":"2026-01-26T15:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.758416 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.758428 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:30 crc kubenswrapper[4896]: E0126 15:34:30.758625 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:30 crc kubenswrapper[4896]: E0126 15:34:30.759299 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.822926 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.822975 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.822991 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.823010 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.823021 4896 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:30Z","lastTransitionTime":"2026-01-26T15:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.926257 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.926309 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.926319 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.926334 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.926344 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:30Z","lastTransitionTime":"2026-01-26T15:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.994721 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" event={"ID":"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8","Type":"ContainerStarted","Data":"d3d3b4d4d136ea02114fd816ba32cc0a4d38c1b2d8df7968e426c038ae37dbd1"} Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.997338 4896 generic.go:334] "Generic (PLEG): container finished" podID="17760139-6c26-4a89-a7ab-4e6a3d2cc516" containerID="88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c" exitCode=0 Jan 26 15:34:30 crc kubenswrapper[4896]: I0126 15:34:30.997380 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" event={"ID":"17760139-6c26-4a89-a7ab-4e6a3d2cc516","Type":"ContainerDied","Data":"88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c"} Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.016944 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42ec8793-6e16-4368-84e3-9c3007499c92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:0
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:31Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.029020 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.029067 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.029079 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.029103 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.029118 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:31Z","lastTransitionTime":"2026-01-26T15:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.033124 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14000ba2479d1ec77f9f59b70d6d25df8bceef937950e7402df8a276502e60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:31Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.046329 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a110465b-91d9-4e70-ac2f-7e804c58b445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07566f6d2a52a9395b03e0b759a1caccf5eaff6a1c17488e536ccbb81abdf683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ef4ea94d232dd91ce5b11d7f70742155c2978217895faecdbd060d4eac503b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\
\\":\\\"cri-o://fe26f12afeaf65aeadfc14051c732f0b408333e053d56510d2a5a64f4823bde1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:31Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.059220 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:31Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.072565 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:31Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.086125 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:31Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.098002 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6scjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9045598fc712efd551a21223c28ddfb8e1eec08598019d90140992164802d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6scjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:31Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.112070 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nd8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c4023ce-9d03-491a-bbc6-d5afffb92f34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d897bdfadb589d224a8832ee5e76309be4d623122e94eb88a240bfd2362bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nv4gq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nd8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:31Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.126465 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name
\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888e118ba95f9e18734df91b182870684554ae1e715e117eb3c12d2229a919ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:31Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.131857 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.131898 4896 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.131908 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.131924 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.131934 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:31Z","lastTransitionTime":"2026-01-26T15:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.140090 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89be9b4e464bc55d82f3a1ad5911e48bafd6841c1919cb6c81a1a5758f43e8e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T15:34:31Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.151505 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0eae0e2b-9d04-4999-b78c-c70aeee09235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28317b792a293f783a15979c5a9d6acd520f15b8796087a49b0ed98f69a8921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fed1d8bacfa3bfc8b5c910ea870d72978016ab308a31c95d7f0e6d92321c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nrqhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:31Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.172146 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gdszn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:31Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.186978 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17760139-6c26-4a89-a7ab-4e6a3d2cc516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f7
3e321e500461c05ed84eb120bb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-26T15:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hw55b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:31Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.198672 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzzr5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76f90dd1-9706-47ef-b243-e24f185d0340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://490b3a9d324e3b07e4dd8f017414406c4a86d87092c9b931813d8b3c8f4586ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr2bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzzr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:31Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.234568 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.234674 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.234695 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.234720 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.234738 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:31Z","lastTransitionTime":"2026-01-26T15:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.264568 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:34:31 crc kubenswrapper[4896]: E0126 15:34:31.264917 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:34:39.264891169 +0000 UTC m=+37.046771562 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.337214 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.337265 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.337278 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.337298 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:31 crc kubenswrapper[4896]: 
I0126 15:34:31.337313 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:31Z","lastTransitionTime":"2026-01-26T15:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.365056 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.365099 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.365123 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.365142 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:31 crc kubenswrapper[4896]: E0126 15:34:31.365209 4896 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 15:34:31 crc kubenswrapper[4896]: E0126 15:34:31.365258 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:39.365242292 +0000 UTC m=+37.147122685 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 15:34:31 crc kubenswrapper[4896]: E0126 15:34:31.365371 4896 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 15:34:31 crc kubenswrapper[4896]: E0126 15:34:31.365494 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:39.365465638 +0000 UTC m=+37.147346071 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 15:34:31 crc kubenswrapper[4896]: E0126 15:34:31.365499 4896 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 15:34:31 crc kubenswrapper[4896]: E0126 15:34:31.365526 4896 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 15:34:31 crc kubenswrapper[4896]: E0126 15:34:31.365542 4896 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:31 crc kubenswrapper[4896]: E0126 15:34:31.365598 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:39.365565351 +0000 UTC m=+37.147445824 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:31 crc kubenswrapper[4896]: E0126 15:34:31.365499 4896 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 15:34:31 crc kubenswrapper[4896]: E0126 15:34:31.365643 4896 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 15:34:31 crc kubenswrapper[4896]: E0126 15:34:31.365664 4896 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:31 crc kubenswrapper[4896]: E0126 15:34:31.365728 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:39.365713684 +0000 UTC m=+37.147594107 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.440510 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.440559 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.440567 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.440606 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.440615 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:31Z","lastTransitionTime":"2026-01-26T15:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.544166 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.544286 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.544308 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.544331 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.544347 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:31Z","lastTransitionTime":"2026-01-26T15:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.647423 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.647467 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.647475 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.647494 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.647508 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:31Z","lastTransitionTime":"2026-01-26T15:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.676804 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 01:39:53.497660828 +0000 UTC Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.750686 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.750737 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.750750 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.750767 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.750780 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:31Z","lastTransitionTime":"2026-01-26T15:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.759034 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:31 crc kubenswrapper[4896]: E0126 15:34:31.759201 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.854905 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.854985 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.855006 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.855042 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.855067 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:31Z","lastTransitionTime":"2026-01-26T15:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.958270 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.958570 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.958617 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.958634 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:31 crc kubenswrapper[4896]: I0126 15:34:31.958645 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:31Z","lastTransitionTime":"2026-01-26T15:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.003890 4896 generic.go:334] "Generic (PLEG): container finished" podID="17760139-6c26-4a89-a7ab-4e6a3d2cc516" containerID="2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e" exitCode=0 Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.003939 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" event={"ID":"17760139-6c26-4a89-a7ab-4e6a3d2cc516","Type":"ContainerDied","Data":"2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e"} Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.019106 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42ec8793-6e16-4368-84e3-9c3007499c92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMoun
ts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720
243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.033985 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14000ba2479d1ec77f9f59b70d6d25df8bceef937950e7402df8a276502e60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.046798 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.059413 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.060957 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.060983 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.060991 4896 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.061005 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.061018 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:32Z","lastTransitionTime":"2026-01-26T15:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.071353 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6scjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9045598fc712efd551a21223c28ddfb8e1eec
08598019d90140992164802d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6scjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.088191 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nd8b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c4023ce-9d03-491a-bbc6-d5afffb92f34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d897bdfadb589d224a8832ee5e76309be4d623122e94eb88a240bfd2362bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nv4gq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nd8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.102938 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a110465b-91d9-4e70-ac2f-7e804c58b445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07566f6d2a52a9395b03e0b759a1caccf5eaff6a1c17488e536ccbb81abdf683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ef4ea94d232dd91ce5b11d7f70742155c2978217895faecdbd060d4eac503b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe26f12afeaf65aeadfc14051c732f0b408333e053d56510d2a5a64f4823bde1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.117298 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.131954 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0eae0e2b-9d04-4999-b78c-c70aeee09235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28317b792a293f783a15979c5a9d6acd520f15b8796087a49b0ed98f69a8921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fed1d8bacfa3bfc8b5c910ea870d72978016ab3
08a31c95d7f0e6d92321c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nrqhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.151182 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gdszn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.163164 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.163197 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.163206 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.163221 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.163232 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:32Z","lastTransitionTime":"2026-01-26T15:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.165898 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888e118ba95f9e18734df91b182870684554ae1e715e117eb3c12d2229a919ad\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.179158 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89be9b4e464bc55d82f3a1ad5911e48bafd6841c1919cb6c81a1a5758f43e8e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T15:34:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.195009 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17760139-6c26-4a89-a7ab-4e6a3d2cc516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hw55b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.206922 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzzr5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76f90dd1-9706-47ef-b243-e24f185d0340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://490b3a9d324e3b07e4dd8f017414406c4a86d87092c9b931813d8b3c8f4586ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr2bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzzr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.265034 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.265082 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.265101 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.265123 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.265140 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:32Z","lastTransitionTime":"2026-01-26T15:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.367821 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.367868 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.367882 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.367903 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.367920 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:32Z","lastTransitionTime":"2026-01-26T15:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.470105 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.470150 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.470164 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.470189 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.470201 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:32Z","lastTransitionTime":"2026-01-26T15:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.481602 4896 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.573503 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.573545 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.573556 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.573570 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.573620 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:32Z","lastTransitionTime":"2026-01-26T15:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.676703 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.676745 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.676757 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.676773 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.676783 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:32Z","lastTransitionTime":"2026-01-26T15:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.677140 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 01:14:14.209170233 +0000 UTC Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.759126 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.759197 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:32 crc kubenswrapper[4896]: E0126 15:34:32.759286 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:32 crc kubenswrapper[4896]: E0126 15:34:32.759488 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.771237 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888e118ba95f9e18734df91b182870684554ae1e715e117eb3c12d2229a919ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.779086 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.779116 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.779124 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.779139 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.779150 4896 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:32Z","lastTransitionTime":"2026-01-26T15:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.785441 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89be9b4e464bc55d82f3a1ad5911e48bafd6841c1919cb6c81a1a5758f43e8e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.798703 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0eae0e2b-9d04-4999-b78c-c70aeee09235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28317b792a293f783a15979c5a9d6acd520f15b8796087a49b0ed98f69a8921\\\",\\\"image\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fed1d8bacfa3bfc8b5c910ea870d72978016ab308a31c95d7f0e6d92321c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-nrqhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.822154 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gdszn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.839115 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17760139-6c26-4a89-a7ab-4e6a3d2cc516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hw55b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.852499 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzzr5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76f90dd1-9706-47ef-b243-e24f185d0340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://490b3a9d324e3b07e4dd8f017414406c4a86d87092c9b931813d8b3c8f4586ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr2bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzzr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.870729 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42ec8793-6e16-4368-84e3-9c3007499c92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.881722 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.881770 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.881786 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.881806 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.881822 4896 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:32Z","lastTransitionTime":"2026-01-26T15:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.886287 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14000ba2479d1ec77f9f59b70d6d25df8bceef937950e7402df8a276502e60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.903059 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a110465b-91d9-4e70-ac2f-7e804c58b445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07566f6d2a52a9395b03e0b759a1caccf5eaff6a1c17488e536ccbb81abdf683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f
9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ef4ea94d232dd91ce5b11d7f70742155c2978217895faecdbd060d4eac503b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe26f12afeaf65aeadfc14051c732f0b408333e053d56510d2a5a64f4823bde1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.918729 4896 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.934320 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.947345 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.959364 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6scjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9045598fc712efd551a21223c28ddfb8e1eec08598019d90140992164802d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6scjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.976528 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nd8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c4023ce-9d03-491a-bbc6-d5afffb92f34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d897bdfadb589d224a8832ee5e76309be4d623122e94eb88a240bfd2362bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nv4gq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nd8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.984338 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.984374 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.984384 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.984399 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:32 crc kubenswrapper[4896]: I0126 15:34:32.984410 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:32Z","lastTransitionTime":"2026-01-26T15:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.012354 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" event={"ID":"17760139-6c26-4a89-a7ab-4e6a3d2cc516","Type":"ContainerStarted","Data":"3cfe145d703f9d67a08ff728a5a585033b34d14d145b2bd70f79c02dc0950761"} Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.029893 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:33Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.042685 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6scjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9045598fc712efd551a21223c28ddfb8e1eec08598019d90140992164802d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6scjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:33Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.059752 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nd8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c4023ce-9d03-491a-bbc6-d5afffb92f34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d897bdfadb589d224a8832ee5e76309be4d623122e94eb88a240bfd2362bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nv4gq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nd8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:33Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.076222 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a110465b-91d9-4e70-ac2f-7e804c58b445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07566f6d2a52a9395b03e0b759a1caccf5eaff6a1c17488e536ccbb81abdf683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-polic
y-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ef4ea94d232dd91ce5b11d7f70742155c2978217895faecdbd060d4eac503b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe26f12afeaf65aeadfc14051c732f0b408333e053d56510d2a5a64f4823bde1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:33Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.086228 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.086264 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.086274 
4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.086292 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.086305 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:33Z","lastTransitionTime":"2026-01-26T15:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.090286 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:33Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.107441 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:33Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.125192 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gdszn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:33Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.138987 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888e118ba95f9e18734df91b182870684554ae1e715e117eb3c12d2229a919ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:33Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.151955 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89be9b4e464bc55d82f3a1ad5911e48bafd6841c1919cb6c81a1a5758f43e8e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T15:34:33Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.163695 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0eae0e2b-9d04-4999-b78c-c70aeee09235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28317b792a293f783a15979c5a9d6acd520f15b8796087a49b0ed98f69a8921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fed1d8bacfa3bfc8b5c910ea870d72978016ab308a31c95d7f0e6d92321c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nrqhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:33Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.176769 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17760139-6c26-4a89-a7ab-4e6a3d2cc516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cfe145d703f9d67a08ff728a5a585033b34d14d145b2bd70f79c02dc0950761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d57c
97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:30Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hw55b\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:33Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.187882 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzzr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76f90dd1-9706-47ef-b243-e24f185d0340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://490b3a9d324e3b07e4dd8f017414406c4a86d87092c9b931813d8b3c8f4586ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr2bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzzr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:33Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.188664 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.188690 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.188698 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.188711 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.188720 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:33Z","lastTransitionTime":"2026-01-26T15:34:33Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.202791 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42ec8793-6e16-4368-84e3-9c3007499c92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\
\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c
7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42
137eb593745da31021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:33Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.215221 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14000ba2479d1ec77f9f59b70d6d25df8bceef937950e7402df8a276502e60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:33Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.291360 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.291399 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.291408 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.291423 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.291434 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:33Z","lastTransitionTime":"2026-01-26T15:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.394085 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.394144 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.394164 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.394196 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.394217 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:33Z","lastTransitionTime":"2026-01-26T15:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.496761 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.496807 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.496821 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.496838 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.496849 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:33Z","lastTransitionTime":"2026-01-26T15:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.598978 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.599297 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.599309 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.599326 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.599338 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:33Z","lastTransitionTime":"2026-01-26T15:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.677593 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 23:47:45.329188228 +0000 UTC Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.701941 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.701968 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.701977 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.701989 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.701997 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:33Z","lastTransitionTime":"2026-01-26T15:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.758529 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:33 crc kubenswrapper[4896]: E0126 15:34:33.758755 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.806462 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.806518 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.806532 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.806553 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.806574 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:33Z","lastTransitionTime":"2026-01-26T15:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.908743 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.908806 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.908829 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.908857 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:33 crc kubenswrapper[4896]: I0126 15:34:33.908877 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:33Z","lastTransitionTime":"2026-01-26T15:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.011097 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.011190 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.011226 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.011259 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.011282 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:34Z","lastTransitionTime":"2026-01-26T15:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.024012 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" event={"ID":"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8","Type":"ContainerStarted","Data":"0ed7301ec64acae16473fa633c53ea0c48896dd6465a2996ddd98906e623044d"} Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.047014 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17760139-6c26-4a89-a7ab-4e6a3d2cc516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cfe145d703f9d67a08ff728a5a585033b34d14d145b2bd70f79c02dc0950761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed
6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\
"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"nam
e\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hw55b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:34Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.061652 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzzr5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76f90dd1-9706-47ef-b243-e24f185d0340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://490b3a9d324e3b07e4dd8f017414406c4a86d87092c9b931813d8b3c8f4586ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr2bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzzr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:34Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.081304 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42ec8793-6e16-4368-84e3-9c3007499c92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:34Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.098338 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14000ba2479d1ec77f9f59b70d6d25df8bceef937950e7402df8a276502e60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:34Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.113908 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.113979 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.113993 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.114015 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.114029 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:34Z","lastTransitionTime":"2026-01-26T15:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.119026 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a110465b-91d9-4e70-ac2f-7e804c58b445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07566f6d2a52a9395b03e0b759a1caccf5eaff6a1c17488e536ccbb81abdf683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0e5a1b182c
162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ef4ea94d232dd91ce5b11d7f70742155c2978217895faecdbd060d4eac503b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe26f12afeaf65aeadfc14051c732f0b408333e053d56510d2a5a64f4823bde1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:34Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.133361 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:34Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.147299 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:34Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.164936 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:34Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.176658 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6scjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9045598fc712efd551a21223c28ddfb8e1eec08598019d90140992164802d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6scjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:34Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.190327 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nd8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c4023ce-9d03-491a-bbc6-d5afffb92f34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d897bdfadb589d224a8832ee5e76309be4d623122e94eb88a240bfd2362bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nv4gq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nd8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:34Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.207472 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name
\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888e118ba95f9e18734df91b182870684554ae1e715e117eb3c12d2229a919ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:34Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.216987 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.217203 4896 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.217324 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.217451 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.217560 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:34Z","lastTransitionTime":"2026-01-26T15:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.221699 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89be9b4e464bc55d82f3a1ad5911e48bafd6841c1919cb6c81a1a5758f43e8e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T15:34:34Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.240069 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0eae0e2b-9d04-4999-b78c-c70aeee09235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28317b792a293f783a15979c5a9d6acd520f15b8796087a49b0ed98f69a8921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fed1d8bacfa3bfc8b5c910ea870d72978016ab308a31c95d7f0e6d92321c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nrqhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:34Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.267257 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406b020065f8bf0ba4a4cccd4acff46627b58f12033ca230665dbbf3a2a1e195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e5f096fb36bb92606a247123774c6155ae2811324579470faf1c04456da53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7bb5d0fd3d779d1861fdd69f46697e53173c508525fb96bb7c8825505e05e1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feca97cda454cd70acfad46a99dd5696618f8d1f1e3d887a0c32ae9b6a475f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a326550b388ea7e5eea65a62c945fe87ba4ee09b82f0ca590226d51db74a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957437952e418fe12314db00c66884b604eaf77dbee831de77ee2a4e085c803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ed7301ec64acae16473fa633c53ea0c48896dd6465a2996ddd98906e623044d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3b4d4d136ea02114fd816ba32cc0a4d38c1b2d8df7968e426c038ae37dbd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gdszn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:34Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.320442 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.320471 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.320481 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.320494 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.320502 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:34Z","lastTransitionTime":"2026-01-26T15:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.422397 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.422465 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.422476 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.422491 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.422500 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:34Z","lastTransitionTime":"2026-01-26T15:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.524794 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.524837 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.524849 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.524866 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.524879 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:34Z","lastTransitionTime":"2026-01-26T15:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.627848 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.627905 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.627925 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.627947 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.627963 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:34Z","lastTransitionTime":"2026-01-26T15:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.678233 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 08:46:38.41802644 +0000 UTC Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.731000 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.731086 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.731097 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.731114 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.731126 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:34Z","lastTransitionTime":"2026-01-26T15:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.760063 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.760135 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:34 crc kubenswrapper[4896]: E0126 15:34:34.760251 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:34 crc kubenswrapper[4896]: E0126 15:34:34.760418 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.834083 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.834155 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.834170 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.834193 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.834208 4896 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:34Z","lastTransitionTime":"2026-01-26T15:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.936647 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.936697 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.936709 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.936726 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:34 crc kubenswrapper[4896]: I0126 15:34:34.936738 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:34Z","lastTransitionTime":"2026-01-26T15:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.026599 4896 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.026991 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.027062 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.038965 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.038995 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.039003 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.039016 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.039025 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:35Z","lastTransitionTime":"2026-01-26T15:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.051969 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.058882 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.066089 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42ec8793-6e16-4368-84e3-9c3007499c92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\"
:true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"con
tainerID\\\":\\\"cri-o://5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},
\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.077729 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14000ba2479d1ec77f9f59b70d6d25df8bceef937950e7402df8a276502e60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.088800 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.100287 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.110629 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6scjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9045598fc712efd551a21223c28ddfb8e1eec08598019d90140992164802d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6scjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.123637 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nd8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c4023ce-9d03-491a-bbc6-d5afffb92f34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d897bdfadb589d224a8832ee5e76309be4d623122e94eb88a240bfd2362bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nv4gq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nd8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.134772 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a110465b-91d9-4e70-ac2f-7e804c58b445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07566f6d2a52a9395b03e0b759a1caccf5eaff6a1c17488e536ccbb81abdf683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-polic
y-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ef4ea94d232dd91ce5b11d7f70742155c2978217895faecdbd060d4eac503b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe26f12afeaf65aeadfc14051c732f0b408333e053d56510d2a5a64f4823bde1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.142020 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.142064 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.142079 
4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.142098 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.142111 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:35Z","lastTransitionTime":"2026-01-26T15:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.148103 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.159435 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0eae0e2b-9d04-4999-b78c-c70aeee09235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28317b792a293f783a15979c5a9d6acd520f15b8796087a49b0ed98f69a8921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fed1d8bacfa3bfc8b5c910ea870d72978016ab3
08a31c95d7f0e6d92321c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nrqhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.176877 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406b020065f8bf0ba4a4cccd4acff46627b58f12033ca230665dbbf3a2a1e195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e5f096fb36bb92606a247123774c6155ae2811324579470faf1c04456da53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7bb5d0fd3d779d1861fdd69f46697e53173c508525fb96bb7c8825505e05e1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feca97cda454cd70acfad46a99dd5696618f8d1f1e3d887a0c32ae9b6a475f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a326550b388ea7e5eea65a62c945fe87ba4ee09b82f0ca590226d51db74a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957437952e418fe12314db00c66884b604eaf77dbee831de77ee2a4e085c803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ed7301ec64acae16473fa633c53ea0c48896dd6465a2996ddd98906e623044d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3b4d4d136ea02114fd816ba32cc0a4d38c1b2d8df7968e426c038ae37dbd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b1
7b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gdszn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.188753 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:2
4Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888e118ba95f9e18734df91b182870684554ae1e715e117eb3c12d2229a919ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.199715 4896 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89be9b4e464bc55d82f3a1ad5911e48bafd6841c1919cb6c81a1a5758f43e8e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.213886 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17760139-6c26-4a89-a7ab-4e6a3d2cc516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cfe145d703f9d67a08ff728a5a585033b34d14d145b2bd70f79c02dc0950761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:32Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e5431
9f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hw55b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.221875 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzzr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76f90dd1-9706-47ef-b243-e24f185d0340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://490b3a9d324e3b07e4dd8f017414406c4a86d87092c9b931813d8b3c8f4586ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr2bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzzr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.233359 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42ec8793-6e16-4368-84e3-9c3007499c92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:0
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.244350 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.244400 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.244409 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.244424 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.244433 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:35Z","lastTransitionTime":"2026-01-26T15:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.246146 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14000ba2479d1ec77f9f59b70d6d25df8bceef937950e7402df8a276502e60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.256820 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nd8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c4023ce-9d03-491a-bbc6-d5afffb92f34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d897bdfadb589d224a8832ee5e76309be4d623122e94eb88a240bfd2362bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nv4gq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15
:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nd8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.267681 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a110465b-91d9-4e70-ac2f-7e804c58b445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07566f6d2a52a9395b03e0b759a1caccf5eaff6a1c17488e536ccbb81abdf683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tr
ue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ef4ea94d232dd91ce5b11d7f70742155c2978217895faecdbd060d4eac503b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://fe26f12afeaf65aeadfc14051c732f0b408333e053d56510d2a5a64f4823bde1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.277640 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.288857 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.300286 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.308761 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6scjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9045598fc712efd551a21223c28ddfb8e1eec08598019d90140992164802d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6scjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.321459 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888e118ba95f9e18734df91b182870684554ae1e715e117eb3c12d2229a919ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.332791 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89be9b4e464bc55d82f3a1ad5911e48bafd6841c1919cb6c81a1a5758f43e8e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T15:34:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.342539 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0eae0e2b-9d04-4999-b78c-c70aeee09235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28317b792a293f783a15979c5a9d6acd520f15b8796087a49b0ed98f69a8921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fed1d8bacfa3bfc8b5c910ea870d72978016ab308a31c95d7f0e6d92321c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nrqhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.346996 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 
15:34:35.347035 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.347044 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.347057 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.347066 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:35Z","lastTransitionTime":"2026-01-26T15:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.363710 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406b020065f8bf0ba4a4cccd4acff46627b58f12033ca230665dbbf3a2a1e195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e5f096fb36bb92606a247123774c6155ae2811324579470faf1c04456da53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7bb5d0fd3d779d1861fdd69f46697e53173c508525fb96bb7c8825505e05e1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feca97cda454cd70acfad46a99dd5696618f8d1f1e3d887a0c32ae9b6a475f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a326550b388ea7e5eea65a62c945fe87ba4ee09b82f0ca590226d51db74a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957437952e418fe12314db00c66884b604eaf77dbee831de77ee2a4e085c803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ed7301ec64acae16473fa633c53ea0c48896dd6465a2996ddd98906e623044d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3b4d4d136ea02114fd816ba32cc0a4d38c1b2d8df7968e426c038ae37dbd1\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gdszn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.381261 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17760139-6c26-4a89-a7ab-4e6a3d2cc516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cf
e145d703f9d67a08ff728a5a585033b34d14d145b2bd70f79c02dc0950761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hw55b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.392958 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzzr5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76f90dd1-9706-47ef-b243-e24f185d0340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://490b3a9d324e3b07e4dd8f017414406c4a86d87092c9b931813d8b3c8f4586ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr2bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzzr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:35Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.451774 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.451821 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.451832 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.451848 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.451859 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:35Z","lastTransitionTime":"2026-01-26T15:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.554726 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.554767 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.554781 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.554799 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.554812 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:35Z","lastTransitionTime":"2026-01-26T15:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.656786 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.657067 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.657191 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.657316 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.657430 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:35Z","lastTransitionTime":"2026-01-26T15:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.679550 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 11:40:28.893604416 +0000 UTC Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.758391 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:35 crc kubenswrapper[4896]: E0126 15:34:35.758511 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.759877 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.760020 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.760115 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.760217 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.760293 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:35Z","lastTransitionTime":"2026-01-26T15:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.862749 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.862785 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.862794 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.862809 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.862818 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:35Z","lastTransitionTime":"2026-01-26T15:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.965688 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.965736 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.965752 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.965774 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:35 crc kubenswrapper[4896]: I0126 15:34:35.965792 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:35Z","lastTransitionTime":"2026-01-26T15:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.028956 4896 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.068222 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.068260 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.068274 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.068293 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.068305 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:36Z","lastTransitionTime":"2026-01-26T15:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.170998 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.171034 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.171043 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.171077 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.171086 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:36Z","lastTransitionTime":"2026-01-26T15:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.273446 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.273734 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.273895 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.274020 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.274137 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:36Z","lastTransitionTime":"2026-01-26T15:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.377028 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.377085 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.377102 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.377125 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.377142 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:36Z","lastTransitionTime":"2026-01-26T15:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.480204 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.480261 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.480282 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.480309 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.480327 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:36Z","lastTransitionTime":"2026-01-26T15:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.583500 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.583620 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.583645 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.583675 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.583696 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:36Z","lastTransitionTime":"2026-01-26T15:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.679868 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 21:33:46.568590085 +0000 UTC Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.686265 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.686432 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.686492 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.686552 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.686633 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:36Z","lastTransitionTime":"2026-01-26T15:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.759257 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:36 crc kubenswrapper[4896]: E0126 15:34:36.759372 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.759675 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:36 crc kubenswrapper[4896]: E0126 15:34:36.759941 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.788652 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.788691 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.788705 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.788723 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.788736 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:36Z","lastTransitionTime":"2026-01-26T15:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.891849 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.891897 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.891908 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.891925 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.891935 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:36Z","lastTransitionTime":"2026-01-26T15:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.994892 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.994933 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.994942 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.994957 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:36 crc kubenswrapper[4896]: I0126 15:34:36.994966 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:36Z","lastTransitionTime":"2026-01-26T15:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.040801 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gdszn_e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8/ovnkube-controller/0.log" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.044821 4896 generic.go:334] "Generic (PLEG): container finished" podID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerID="0ed7301ec64acae16473fa633c53ea0c48896dd6465a2996ddd98906e623044d" exitCode=1 Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.044885 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" event={"ID":"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8","Type":"ContainerDied","Data":"0ed7301ec64acae16473fa633c53ea0c48896dd6465a2996ddd98906e623044d"} Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.046229 4896 scope.go:117] "RemoveContainer" containerID="0ed7301ec64acae16473fa633c53ea0c48896dd6465a2996ddd98906e623044d" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.060337 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:37Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.074297 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:37Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.088947 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:37Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.096958 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.097024 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.097035 4896 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.097052 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.097066 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:37Z","lastTransitionTime":"2026-01-26T15:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.100236 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6scjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9045598fc712efd551a21223c28ddfb8e1eec
08598019d90140992164802d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6scjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:37Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.112489 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nd8b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c4023ce-9d03-491a-bbc6-d5afffb92f34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d897bdfadb589d224a8832ee5e76309be4d623122e94eb88a240bfd2362bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nv4gq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nd8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:37Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.124057 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a110465b-91d9-4e70-ac2f-7e804c58b445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07566f6d2a52a9395b03e0b759a1caccf5eaff6a1c17488e536ccbb81abdf683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ef4ea94d232dd91ce5b11d7f70742155c2978217895faecdbd060d4eac503b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe26f12afeaf65aeadfc14051c732f0b408333e053d56510d2a5a64f4823bde1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:37Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.136457 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888e118ba95f9e18734df91b182870684554ae1e715e117eb3c12d2229a919ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:37Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.146796 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89be9b4e464bc55d82f3a1ad5911e48bafd6841c1919cb6c81a1a5758f43e8e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T15:34:37Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.156825 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0eae0e2b-9d04-4999-b78c-c70aeee09235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28317b792a293f783a15979c5a9d6acd520f15b8796087a49b0ed98f69a8921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fed1d8bacfa3bfc8b5c910ea870d72978016ab308a31c95d7f0e6d92321c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nrqhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:37Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.159198 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9vpq"] Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 
15:34:37.159567 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9vpq" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.161184 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.162151 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.175825 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406b020065f8bf0ba4a4cccd4acff46627b58f12033ca230665dbbf3a2a1e195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e5f096fb36bb92606a247123774c6155ae2811324579470faf1c04456da53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7bb5d0fd3d779d1861fdd69f46697e53173c508525fb96bb7c8825505e05e1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feca97cda454cd70acfad46a99dd5696618f8d1f1e3d887a0c32ae9b6a475f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a326550b388ea7e5eea65a62c945fe87ba4ee09b82f0ca590226d51db74a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957437952e418fe12314db00c66884b604eaf77dbee831de77ee2a4e085c803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ed7301ec64acae16473fa633c53ea0c48896dd6465a2996ddd98906e623044d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed7301ec64acae16473fa633c53ea0c48896dd6465a2996ddd98906e623044d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:36Z\\\",\\\"message\\\":\\\"I0126 15:34:36.097713 6152 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 15:34:36.097738 6152 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 15:34:36.097749 6152 
handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 15:34:36.097780 6152 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 15:34:36.098560 6152 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 15:34:36.098567 6152 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 15:34:36.098562 6152 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 15:34:36.098657 6152 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 15:34:36.098673 6152 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 15:34:36.098571 6152 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 15:34:36.098708 6152 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 15:34:36.098781 6152 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 15:34:36.098807 6152 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 15:34:36.098824 6152 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 15:34:36.098850 6152 factory.go:656] Stopping watch factory\\\\nI0126 15:34:36.098880 6152 ovnkube.go:599] Stopped ovnkube\\\\nI0126 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3b4d4d136ea02114fd816ba32cc0a4d38c1b2d8df7968e426c038ae37dbd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gdszn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:37Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.199954 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.199996 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.200027 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.200052 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.200063 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:37Z","lastTransitionTime":"2026-01-26T15:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.213537 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17760139-6c26-4a89-a7ab-4e6a3d2cc516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cfe145d703f9d67a08ff728a5a585033b34d14d145b2bd70f79c02dc0950761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"n
ame\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3
bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hw55b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:37Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.222748 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-w9vpq\" (UID: \"f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9vpq" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.222793 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-w9vpq\" (UID: \"f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9vpq" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.222811 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4-env-overrides\") pod \"ovnkube-control-plane-749d76644c-w9vpq\" (UID: \"f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9vpq" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.222837 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hklx\" (UniqueName: \"kubernetes.io/projected/f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4-kube-api-access-7hklx\") pod \"ovnkube-control-plane-749d76644c-w9vpq\" (UID: \"f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9vpq" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.236054 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzzr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76f90dd1-9706-47ef-b243-e24f185d0340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://490b3a9d324e3b07e4dd8f017414406c4a86d87092c9b931813d8b3c8f4586ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr2bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzzr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:37Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.248968 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14000ba2479d1ec77f9f59b70d6d25df8bceef937950e7402df8a276502e60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:37Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.262469 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42ec8793-6e16-4368-84e3-9c3007499c92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-c
rc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\
\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:37Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.274792 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a110465b-91d9-4e70-ac2f-7e804c58b445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07566f6d2a52a9395b03e0b759a1caccf5eaff6a1c17488e536ccbb81abdf683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ef4ea94d232dd91ce5b11d7f70742155c2978217895faecdbd060d4eac503b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe26f12afeaf65aeadfc14051c732f0b408333e053d56510d2a5a64f4823bde1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:37Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.286900 4896 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:37Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.297493 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:37Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.301934 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.301977 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.301987 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:37 crc 
kubenswrapper[4896]: I0126 15:34:37.302003 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.302013 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:37Z","lastTransitionTime":"2026-01-26T15:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.310146 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:37Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.319166 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6scjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9045598fc712efd551a21223c28ddfb8e1eec08598019d90140992164802d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6scjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:37Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.324372 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-w9vpq\" (UID: \"f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9vpq" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.324416 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-w9vpq\" (UID: \"f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9vpq" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.324436 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4-env-overrides\") pod \"ovnkube-control-plane-749d76644c-w9vpq\" (UID: \"f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9vpq" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.324462 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hklx\" (UniqueName: 
\"kubernetes.io/projected/f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4-kube-api-access-7hklx\") pod \"ovnkube-control-plane-749d76644c-w9vpq\" (UID: \"f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9vpq" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.325490 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-w9vpq\" (UID: \"f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9vpq" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.325625 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4-env-overrides\") pod \"ovnkube-control-plane-749d76644c-w9vpq\" (UID: \"f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9vpq" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.330634 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-w9vpq\" (UID: \"f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9vpq" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.333551 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nd8b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c4023ce-9d03-491a-bbc6-d5afffb92f34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d897bdfadb589d224a8832ee5e76309be4d623122e94eb88a240bfd2362bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nv4gq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nd8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:37Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.342915 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hklx\" (UniqueName: 
\"kubernetes.io/projected/f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4-kube-api-access-7hklx\") pod \"ovnkube-control-plane-749d76644c-w9vpq\" (UID: \"f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9vpq" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.348156 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888e118ba95f9e18734df91b182870684554ae1e715e117eb3c12d2229a919ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:37Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.361096 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89be9b4e464bc55d82f3a1ad5911e48bafd6841c1919cb6c81a1a5758f43e8e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T15:34:37Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.373904 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0eae0e2b-9d04-4999-b78c-c70aeee09235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28317b792a293f783a15979c5a9d6acd520f15b8796087a49b0ed98f69a8921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fed1d8bacfa3bfc8b5c910ea870d72978016ab308a31c95d7f0e6d92321c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nrqhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:37Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.391491 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406b020065f8bf0ba4a4cccd4acff46627b58f12033ca230665dbbf3a2a1e195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e5f096fb36bb92606a247123774c6155ae2811324579470faf1c04456da53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7bb5d0fd3d779d1861fdd69f46697e53173c508525fb96bb7c8825505e05e1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feca97cda454cd70acfad46a99dd5696618f8d1f1e3d887a0c32ae9b6a475f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a326550b388ea7e5eea65a62c945fe87ba4ee09b82f0ca590226d51db74a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957437952e418fe12314db00c66884b604eaf77dbee831de77ee2a4e085c803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ed7301ec64acae16473fa633c53ea0c48896dd6465a2996ddd98906e623044d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed7301ec64acae16473fa633c53ea0c48896dd6465a2996ddd98906e623044d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:36Z\\\",\\\"message\\\":\\\"I0126 15:34:36.097713 6152 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 15:34:36.097738 6152 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 15:34:36.097749 6152 
handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 15:34:36.097780 6152 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 15:34:36.098560 6152 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 15:34:36.098567 6152 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 15:34:36.098562 6152 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 15:34:36.098657 6152 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 15:34:36.098673 6152 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 15:34:36.098571 6152 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 15:34:36.098708 6152 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 15:34:36.098781 6152 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 15:34:36.098807 6152 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 15:34:36.098824 6152 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 15:34:36.098850 6152 factory.go:656] Stopping watch factory\\\\nI0126 15:34:36.098880 6152 ovnkube.go:599] Stopped ovnkube\\\\nI0126 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3b4d4d136ea02114fd816ba32cc0a4d38c1b2d8df7968e426c038ae37dbd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gdszn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:37Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.406277 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17760139-6c26-4a89-a7ab-4e6a3d2cc516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cfe145d703f9d67a08ff728a5a585033b34d14d145b2bd70f79c02dc0950761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d57c
97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:30Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hw55b\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:37Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.407668 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.407715 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.407732 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.407755 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.407772 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:37Z","lastTransitionTime":"2026-01-26T15:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.417817 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzzr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76f90dd1-9706-47ef-b243-e24f185d0340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://490b3a9d324e3b07e4dd8f017414406c4a86d87092c9b931813d8b3c8f4586ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr2bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzzr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:37Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.429403 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9vpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hklx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hklx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w9vpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:37Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.447007 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42ec8793-6e16-4368-84e3-9c3007499c92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335
e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources
\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:37Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.464869 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14000ba2479d1ec77f9f59b70d6d25df8bceef937950e7402df8a276502e60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:37Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.471026 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9vpq" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.511371 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.511425 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.511439 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.511463 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.511482 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:37Z","lastTransitionTime":"2026-01-26T15:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.614680 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.614721 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.614755 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.614775 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.614788 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:37Z","lastTransitionTime":"2026-01-26T15:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.680027 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 11:18:25.939806233 +0000 UTC Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.716743 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.716772 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.716781 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.716801 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.716813 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:37Z","lastTransitionTime":"2026-01-26T15:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.758378 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:37 crc kubenswrapper[4896]: E0126 15:34:37.758500 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.820377 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.820433 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.820449 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.820470 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.820485 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:37Z","lastTransitionTime":"2026-01-26T15:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.923440 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.923477 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.923487 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.923503 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:37 crc kubenswrapper[4896]: I0126 15:34:37.923512 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:37Z","lastTransitionTime":"2026-01-26T15:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.025989 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.026133 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.026148 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.026166 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.026178 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:38Z","lastTransitionTime":"2026-01-26T15:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.049518 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gdszn_e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8/ovnkube-controller/0.log" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.052355 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" event={"ID":"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8","Type":"ContainerStarted","Data":"5564bdc8e306cc8be9e13425383378713a5ee6c9c1bba7d8b893f3c07b451310"} Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.052444 4896 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.053344 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9vpq" event={"ID":"f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4","Type":"ContainerStarted","Data":"075217ec760a4943202c3b8232455f8493209a0e7a6782e37a8d1ad1fdd6b945"} Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.064357 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888e118ba95f9e18734df91b182870684554ae1e715e117eb3c12d2229a919ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:38Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.075425 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89be9b4e464bc55d82f3a1ad5911e48bafd6841c1919cb6c81a1a5758f43e8e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T15:34:38Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.087528 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0eae0e2b-9d04-4999-b78c-c70aeee09235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28317b792a293f783a15979c5a9d6acd520f15b8796087a49b0ed98f69a8921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fed1d8bacfa3bfc8b5c910ea870d72978016ab308a31c95d7f0e6d92321c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nrqhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:38Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.105883 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406b020065f8bf0ba4a4cccd4acff46627b58f12033ca230665dbbf3a2a1e195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e5f096fb36bb92606a247123774c6155ae2811324579470faf1c04456da53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7bb5d0fd3d779d1861fdd69f46697e53173c508525fb96bb7c8825505e05e1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feca97cda454cd70acfad46a99dd5696618f8d1f1e3d887a0c32ae9b6a475f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a326550b388ea7e5eea65a62c945fe87ba4ee09b82f0ca590226d51db74a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957437952e418fe12314db00c66884b604eaf77dbee831de77ee2a4e085c803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5564bdc8e306cc8be9e13425383378713a5ee6c9c1bba7d8b893f3c07b451310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed7301ec64acae16473fa633c53ea0c48896dd6465a2996ddd98906e623044d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:36Z\\\",\\\"message\\\":\\\"I0126 15:34:36.097713 6152 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 15:34:36.097738 6152 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 15:34:36.097749 6152 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 15:34:36.097780 6152 handler.go:208] Removed *v1.Pod event 
handler 6\\\\nI0126 15:34:36.098560 6152 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 15:34:36.098567 6152 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 15:34:36.098562 6152 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 15:34:36.098657 6152 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 15:34:36.098673 6152 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 15:34:36.098571 6152 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 15:34:36.098708 6152 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 15:34:36.098781 6152 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 15:34:36.098807 6152 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 15:34:36.098824 6152 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 15:34:36.098850 6152 factory.go:656] Stopping watch factory\\\\nI0126 15:34:36.098880 6152 ovnkube.go:599] Stopped ovnkube\\\\nI0126 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3b4d4d136ea02114fd816ba32cc0a4d38c1b2d8df7968e426c038ae37dbd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gdszn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:38Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.121156 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17760139-6c26-4a89-a7ab-4e6a3d2cc516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cfe145d703f9d67a08ff728a5a585033b34d14d145b2bd70f79c02dc0950761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d57c
97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:30Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hw55b\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:38Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.128972 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.129002 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.129011 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.129024 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.129033 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:38Z","lastTransitionTime":"2026-01-26T15:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.132762 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzzr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76f90dd1-9706-47ef-b243-e24f185d0340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://490b3a9d324e3b07e4dd8f017414406c4a86d87092c9b931813d8b3c8f4586ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr2bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzzr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:38Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.143767 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9vpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hklx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hklx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w9vpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:38Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.144822 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.157327 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42ec8793-6e16-4368-84e3-9c3007499c92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06b
c35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb
593745da31021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:38Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.170541 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14000ba2479d1ec77f9f59b70d6d25df8bceef937950e7402df8a276502e60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:38Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.188297 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a110465b-91d9-4e70-ac2f-7e804c58b445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07566f6d2a52a9395b03e0b759a1caccf5eaff6a1c17488e536ccbb81abdf683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ef4ea94d232dd91ce5b11d7f70742155c2978217895faecdbd060d4eac503b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe26f12afeaf65aeadfc14051c732f0b408333e053d56510d2a5a64f4823bde1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:38Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.202071 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:38Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.213409 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:38Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.224403 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:38Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.231761 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.231810 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.231821 4896 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.231838 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.231854 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:38Z","lastTransitionTime":"2026-01-26T15:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.236248 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6scjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9045598fc712efd551a21223c28ddfb8e1eec
08598019d90140992164802d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6scjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:38Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.256044 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nd8b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c4023ce-9d03-491a-bbc6-d5afffb92f34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d897bdfadb589d224a8832ee5e76309be4d623122e94eb88a240bfd2362bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nv4gq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nd8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:38Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.334168 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:38 crc 
kubenswrapper[4896]: I0126 15:34:38.334208 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.334219 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.334233 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.334244 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:38Z","lastTransitionTime":"2026-01-26T15:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.436500 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.436561 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.436569 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.436595 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.436604 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:38Z","lastTransitionTime":"2026-01-26T15:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.539199 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.539251 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.539264 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.539285 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.539304 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:38Z","lastTransitionTime":"2026-01-26T15:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.642680 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.642717 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.642726 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.642742 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.642753 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:38Z","lastTransitionTime":"2026-01-26T15:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.680572 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 23:48:05.152946696 +0000 UTC Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.744995 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.745044 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.745055 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.745074 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.745088 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:38Z","lastTransitionTime":"2026-01-26T15:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.759302 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:38 crc kubenswrapper[4896]: E0126 15:34:38.759458 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.759511 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:38 crc kubenswrapper[4896]: E0126 15:34:38.759669 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.847835 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.847883 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.847896 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.848204 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.848226 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:38Z","lastTransitionTime":"2026-01-26T15:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.951103 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.951158 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.951170 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.951188 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:38 crc kubenswrapper[4896]: I0126 15:34:38.951200 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:38Z","lastTransitionTime":"2026-01-26T15:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.053828 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.053879 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.053892 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.053916 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.053932 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:39Z","lastTransitionTime":"2026-01-26T15:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.057993 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9vpq" event={"ID":"f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4","Type":"ContainerStarted","Data":"7acb4be352fbed65c91662337b76d78a598651bf312d91b40b1b40072ebeb926"} Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.058047 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9vpq" event={"ID":"f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4","Type":"ContainerStarted","Data":"66e16b4fdfc2afd884bb10a8365b77cd655a1838988e4d1efd3db6582375a8c9"} Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.060297 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gdszn_e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8/ovnkube-controller/1.log" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.061055 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gdszn_e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8/ovnkube-controller/0.log" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.063864 4896 generic.go:334] "Generic (PLEG): container finished" podID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerID="5564bdc8e306cc8be9e13425383378713a5ee6c9c1bba7d8b893f3c07b451310" exitCode=1 Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.063903 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" event={"ID":"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8","Type":"ContainerDied","Data":"5564bdc8e306cc8be9e13425383378713a5ee6c9c1bba7d8b893f3c07b451310"} Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.063953 4896 scope.go:117] "RemoveContainer" containerID="0ed7301ec64acae16473fa633c53ea0c48896dd6465a2996ddd98906e623044d" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.064615 
4896 scope.go:117] "RemoveContainer" containerID="5564bdc8e306cc8be9e13425383378713a5ee6c9c1bba7d8b893f3c07b451310" Jan 26 15:34:39 crc kubenswrapper[4896]: E0126 15:34:39.064787 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-gdszn_openshift-ovn-kubernetes(e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.072320 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzzr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76f90dd1-9706-47ef-b243-e24f185d0340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://490b3a9d324e3b07e4dd8f017414406c4a86d87092c9b931813d8b3c8f4586ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr2bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzzr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.086417 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9vpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e16b4fdfc2afd884bb10a8365b77cd655a1838988e4d1efd3db6582375a8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hklx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7acb4be352fbed65c91662337b76d78a59865
1bf312d91b40b1b40072ebeb926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hklx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w9vpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.103407 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17760139-6c26-4a89-a7ab-4e6a3d2cc516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cfe145d703f9d67a08ff728a5a585033b34d14d145b2bd70f79c02dc0950761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d57c
97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:30Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hw55b\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.122827 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42ec8793-6e16-4368-84e3-9c3007499c92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\
\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.138998 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14000ba2479d1ec77f9f59b70d6d25df8bceef937950e7402df8a276502e60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.157447 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.157489 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.157502 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.157522 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.157536 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:39Z","lastTransitionTime":"2026-01-26T15:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.160272 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.176970 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.191231 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.205045 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6scjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9045598fc712efd551a21223c28ddfb8e1eec08598019d90140992164802d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6scjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.219810 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nd8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c4023ce-9d03-491a-bbc6-d5afffb92f34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d897bdfadb589d224a8832ee5e76309be4d623122e94eb88a240bfd2362bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nv4gq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nd8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.234424 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a110465b-91d9-4e70-ac2f-7e804c58b445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07566f6d2a52a9395b03e0b759a1caccf5eaff6a1c17488e536ccbb81abdf683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-polic
y-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ef4ea94d232dd91ce5b11d7f70742155c2978217895faecdbd060d4eac503b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe26f12afeaf65aeadfc14051c732f0b408333e053d56510d2a5a64f4823bde1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.248798 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89be9b4e464bc55d82f3a1ad5911e48bafd6841c1919cb6c81a1a5758f43e8e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.259781 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.259824 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.259835 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.259853 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.259866 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:39Z","lastTransitionTime":"2026-01-26T15:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.264482 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0eae0e2b-9d04-4999-b78c-c70aeee09235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28317b792a293f783a15979c5a9d6acd520f15b8796087a49b0ed98f69a8921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fed1d8bacfa3bfc8b5c910ea870d72978016ab308a31c95d7f0e6d92321c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nrqhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.287357 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.287416 4896 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.287435 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.287459 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.287478 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:39Z","lastTransitionTime":"2026-01-26T15:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.288588 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406b020065f8bf0ba4a4cccd4acff46627b58f12033ca230665dbbf3a2a1e195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e5f096fb36bb92606a247123774c6155ae2811324579470faf1c04456da53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7bb5d0fd3d779d1861fdd69f46697e53173c508525fb96bb7c8825505e05e1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feca97cda454cd70acfad46a99dd5696618f8d1f1e3d887a0c32ae9b6a475f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a326550b388ea7e5eea65a62c945fe87ba4ee09b82f0ca590226d51db74a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957437952e418fe12314db00c66884b604eaf77dbee831de77ee2a4e085c803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5564bdc8e306cc8be9e13425383378713a5ee6c9c1bba7d8b893f3c07b451310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed7301ec64acae16473fa633c53ea0c48896dd6465a2996ddd98906e623044d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:36Z\\\",\\\"message\\\":\\\"I0126 15:34:36.097713 6152 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0126 15:34:36.097738 6152 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 15:34:36.097749 6152 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 15:34:36.097780 6152 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 15:34:36.098560 6152 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 15:34:36.098567 6152 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 15:34:36.098562 6152 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 15:34:36.098657 6152 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 15:34:36.098673 6152 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 15:34:36.098571 6152 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 15:34:36.098708 6152 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 15:34:36.098781 6152 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 15:34:36.098807 6152 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 15:34:36.098824 6152 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 15:34:36.098850 6152 factory.go:656] Stopping watch factory\\\\nI0126 15:34:36.098880 6152 ovnkube.go:599] Stopped ovnkube\\\\nI0126 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3b4d4d136ea02114fd816ba32cc0a4d38c1b2d8df7968e426c038ae37dbd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gdszn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: E0126 15:34:39.304677 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"adc9c92c-63cf-439c-8587-8eafa1c0384d\\\",\\\"systemUUID\\\":\\\"6ce3bfcf-cf26-46a6-add0-2b999cc5fad1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.306423 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888e118ba95f9e18734df91b182870684554ae1e715e117eb3c12d2229a919ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.310499 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.310552 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:39 crc 
kubenswrapper[4896]: I0126 15:34:39.310573 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.310645 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.310669 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:39Z","lastTransitionTime":"2026-01-26T15:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:39 crc kubenswrapper[4896]: E0126 15:34:39.325223 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"adc9c92c-63cf-439c-8587-8eafa1c0384d\\\",\\\"systemUUID\\\":\\\"6ce3bfcf-cf26-46a6-add0-2b999cc5fad1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.326926 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nd8b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c4023ce-9d03-491a-bbc6-d5afffb92f34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d897bdfadb589d224a8832ee5e76309be4d623122e94eb88a240bfd2362bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nv4gq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nd8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.329564 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:39 crc 
kubenswrapper[4896]: I0126 15:34:39.329641 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.329664 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.329693 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.329709 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:39Z","lastTransitionTime":"2026-01-26T15:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.344659 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a110465b-91d9-4e70-ac2f-7e804c58b445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07566f6d2a52a9395b03e0b759a1caccf5eaff6a1c17488e536ccbb81abdf683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ef4ea94d232dd91ce5b11d7f70742155c2978217895faecdbd060d4eac503b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe26f12afeaf65aeadfc14051c732f0b408333e053d56510d2a5a64f4823bde1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.346866 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:34:39 crc kubenswrapper[4896]: E0126 15:34:39.347165 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:34:55.347145342 +0000 UTC m=+53.129025735 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:34:39 crc kubenswrapper[4896]: E0126 15:34:39.347625 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"adc9c92c-63cf-439c-8587-8eafa1c0384d\\\",\\\"systemUUID\\\":\\\"6ce3bfcf-cf26-46a6-add0-2b999cc5fad1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.351450 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.351536 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.351561 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.351628 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.351667 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:39Z","lastTransitionTime":"2026-01-26T15:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.357706 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: E0126 15:34:39.364632 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"adc9c92c-63cf-439c-8587-8eafa1c0384d\\\",\\\"systemUUID\\\":\\\"6ce3bfcf-cf26-46a6-add0-2b999cc5fad1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.368951 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.369045 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.369060 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.369106 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.369121 4896 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:39Z","lastTransitionTime":"2026-01-26T15:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.369499 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.372508 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-klrrb"] Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.373032 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:34:39 crc kubenswrapper[4896]: E0126 15:34:39.373096 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.381337 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: E0126 15:34:39.383856 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"adc9c92c-63cf-439c-8587-8eafa1c0384d\\\",\\\"systemUUID\\\":\\\"6ce3bfcf-cf26-46a6-add0-2b999cc5fad1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: E0126 15:34:39.384007 4896 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.385558 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.385746 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.385869 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.386006 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.386121 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:39Z","lastTransitionTime":"2026-01-26T15:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.393469 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6scjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9045598fc712efd551a21223c28ddfb8e1eec08598019d90140992164802d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmth\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6scjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.405517 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888e118ba95f9e18734df91b182870684554ae1e715e117eb3c12d2229a919ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 
15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.416357 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89be9b4e464bc55d82f3a1ad5911e48bafd6841c1919cb6c81a1a5758f43e8e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.427671 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0eae0e2b-9d04-4999-b78c-c70aeee09235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28317b792a293f783a15979c5a9d6acd520f15b8796087a49b0ed98f69a8921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\
\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fed1d8bacfa3bfc8b5c910ea870d72978016ab308a31c95d7f0e6d92321c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nrqhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc 
kubenswrapper[4896]: I0126 15:34:39.443859 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406b020065f8bf0ba4a4cccd4acff46627b58f12033ca230665dbbf3a2a1e195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e5f096fb36bb92606a247123774c6155ae2811324579470faf1c04456da53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7bb5d0fd3d779d1861fdd69f46697e53173c508525fb96bb7c8825505e05e1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feca97cda454cd70acfad46a99dd5696618f8d1f1e3d887a0c32ae9b6a475f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a326550b388ea7e5eea65a62c945fe87ba4ee09b82f0ca590226d51db74a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957437952e418fe12314db00c66884b604eaf77dbee831de77ee2a4e085c803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5564bdc8e306cc8be9e13425383378713a5ee6c9c1bba7d8b893f3c07b451310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed7301ec64acae16473fa633c53ea0c48896dd6465a2996ddd98906e623044d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:36Z\\\",\\\"message\\\":\\\"I0126 15:34:36.097713 6152 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 15:34:36.097738 6152 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 15:34:36.097749 6152 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 15:34:36.097780 6152 handler.go:208] Removed *v1.Pod event 
handler 6\\\\nI0126 15:34:36.098560 6152 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 15:34:36.098567 6152 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 15:34:36.098562 6152 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 15:34:36.098657 6152 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 15:34:36.098673 6152 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 15:34:36.098571 6152 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 15:34:36.098708 6152 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 15:34:36.098781 6152 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 15:34:36.098807 6152 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 15:34:36.098824 6152 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 15:34:36.098850 6152 factory.go:656] Stopping watch factory\\\\nI0126 15:34:36.098880 6152 ovnkube.go:599] Stopped ovnkube\\\\nI0126 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5564bdc8e306cc8be9e13425383378713a5ee6c9c1bba7d8b893f3c07b451310\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:38Z\\\",\\\"message\\\":\\\"rc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:38Z is after 2025-08-24T17:21:41Z]\\\\nI0126 15:34:38.395551 6287 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} 
options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3b4d4d136ea02114fd816ba32cc0a4d38c1b2d8df7968e426c038ae37dbd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\
\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gdszn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 
2025-08-24T17:21:41Z"
Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.447525 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fbeb890e-90af-4b15-a106-27b03465209f-metrics-certs\") pod \"network-metrics-daemon-klrrb\" (UID: \"fbeb890e-90af-4b15-a106-27b03465209f\") " pod="openshift-multus/network-metrics-daemon-klrrb"
Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.447558 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.447608 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmxts\" (UniqueName: \"kubernetes.io/projected/fbeb890e-90af-4b15-a106-27b03465209f-kube-api-access-rmxts\") pod \"network-metrics-daemon-klrrb\" (UID: \"fbeb890e-90af-4b15-a106-27b03465209f\") " pod="openshift-multus/network-metrics-daemon-klrrb"
Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.447629 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.447652 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.447672 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 15:34:39 crc kubenswrapper[4896]: E0126 15:34:39.447716 4896 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 26 15:34:39 crc kubenswrapper[4896]: E0126 15:34:39.447772 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:55.447755222 +0000 UTC m=+53.229635615 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 26 15:34:39 crc kubenswrapper[4896]: E0126 15:34:39.447775 4896 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 26 15:34:39 crc kubenswrapper[4896]: E0126 15:34:39.447777 4896 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 26 15:34:39 crc kubenswrapper[4896]: E0126 15:34:39.447818 4896 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 26 15:34:39 crc kubenswrapper[4896]: E0126 15:34:39.447834 4896 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 15:34:39 crc kubenswrapper[4896]: E0126 15:34:39.447897 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:55.447875305 +0000 UTC m=+53.229755778 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 15:34:39 crc kubenswrapper[4896]: E0126 15:34:39.447790 4896 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 26 15:34:39 crc kubenswrapper[4896]: E0126 15:34:39.447937 4896 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 15:34:39 crc kubenswrapper[4896]: E0126 15:34:39.447971 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:55.447963247 +0000 UTC m=+53.229843750 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 15:34:39 crc kubenswrapper[4896]: E0126 15:34:39.447724 4896 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 26 15:34:39 crc kubenswrapper[4896]: E0126 15:34:39.448015 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 15:34:55.448006368 +0000 UTC m=+53.229886871 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.458097 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17760139-6c26-4a89-a7ab-4e6a3d2cc516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cfe145d703f9d67a08ff728a5a585033b34d14d145b2bd70f79c02dc0950761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc
84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\
\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hw55b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.467697 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzzr5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76f90dd1-9706-47ef-b243-e24f185d0340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://490b3a9d324e3b07e4dd8f017414406c4a86d87092c9b931813d8b3c8f4586ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr2bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzzr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.478227 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9vpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e16b4fdfc2afd884bb10a8365b77cd655a1838988e4d1efd3db6582375a8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hklx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7acb4be352fbed65c91662337b76d78a598651bf312d91b40b1b40072ebeb926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hklx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w9vpq\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z"
Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.489275 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.489306 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.489318 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.489336 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.489347 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:39Z","lastTransitionTime":"2026-01-26T15:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.492367 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42ec8793-6e16-4368-84e3-9c3007499c92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.507256 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14000ba2479d1ec77f9f59b70d6d25df8bceef937950e7402df8a276502e60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.521446 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89be9b4e464bc55d82f3a1ad5911e48bafd6841c1919cb6c81a1a5758f43e8e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.530394 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0eae0e2b-9d04-4999-b78c-c70aeee09235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28317b792a293f783a15979c5a9d6acd520f15b8796087a49b0ed98f69a8921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fed1d8bacfa3bfc8b5c910ea870d72978016ab308a31c95d7f0e6d92321c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nrqhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.546319 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406b020065f8bf0ba4a4cccd4acff46627b58f12033ca230665dbbf3a2a1e195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e5f096fb36bb92606a247123774c6155ae2811324579470faf1c04456da53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7bb5d0fd3d779d1861fdd69f46697e53173c508525fb96bb7c8825505e05e1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feca97cda454cd70acfad46a99dd5696618f8d1f1e3d887a0c32ae9b6a475f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a326550b388ea7e5eea65a62c945fe87ba4ee09b82f0ca590226d51db74a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957437952e418fe12314db00c66884b604eaf77dbee831de77ee2a4e085c803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5564bdc8e306cc8be9e13425383378713a5ee6c9c1bba7d8b893f3c07b451310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed7301ec64acae16473fa633c53ea0c48896dd6465a2996ddd98906e623044d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:36Z\\\",\\\"message\\\":\\\"I0126 15:34:36.097713 6152 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 15:34:36.097738 6152 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 15:34:36.097749 6152 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 15:34:36.097780 6152 handler.go:208] Removed *v1.Pod event 
handler 6\\\\nI0126 15:34:36.098560 6152 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 15:34:36.098567 6152 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 15:34:36.098562 6152 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 15:34:36.098657 6152 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 15:34:36.098673 6152 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 15:34:36.098571 6152 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 15:34:36.098708 6152 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 15:34:36.098781 6152 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 15:34:36.098807 6152 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 15:34:36.098824 6152 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 15:34:36.098850 6152 factory.go:656] Stopping watch factory\\\\nI0126 15:34:36.098880 6152 ovnkube.go:599] Stopped ovnkube\\\\nI0126 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5564bdc8e306cc8be9e13425383378713a5ee6c9c1bba7d8b893f3c07b451310\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:38Z\\\",\\\"message\\\":\\\"rc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:38Z is after 2025-08-24T17:21:41Z]\\\\nI0126 15:34:38.395551 6287 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} 
options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3b4d4d136ea02114fd816ba32cc0a4d38c1b2d8df7968e426c038ae37dbd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\
\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gdszn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 
2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.548543 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fbeb890e-90af-4b15-a106-27b03465209f-metrics-certs\") pod \"network-metrics-daemon-klrrb\" (UID: \"fbeb890e-90af-4b15-a106-27b03465209f\") " pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.548598 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmxts\" (UniqueName: \"kubernetes.io/projected/fbeb890e-90af-4b15-a106-27b03465209f-kube-api-access-rmxts\") pod \"network-metrics-daemon-klrrb\" (UID: \"fbeb890e-90af-4b15-a106-27b03465209f\") " pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:34:39 crc kubenswrapper[4896]: E0126 15:34:39.548700 4896 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 15:34:39 crc kubenswrapper[4896]: E0126 15:34:39.548766 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fbeb890e-90af-4b15-a106-27b03465209f-metrics-certs podName:fbeb890e-90af-4b15-a106-27b03465209f nodeName:}" failed. No retries permitted until 2026-01-26 15:34:40.048747611 +0000 UTC m=+37.830628004 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fbeb890e-90af-4b15-a106-27b03465209f-metrics-certs") pod "network-metrics-daemon-klrrb" (UID: "fbeb890e-90af-4b15-a106-27b03465209f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.558101 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888e118ba95f9e18734df91b182870684554ae1e715e117eb3c12d2229a919ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.564060 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmxts\" (UniqueName: \"kubernetes.io/projected/fbeb890e-90af-4b15-a106-27b03465209f-kube-api-access-rmxts\") pod \"network-metrics-daemon-klrrb\" (UID: \"fbeb890e-90af-4b15-a106-27b03465209f\") " pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 
15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.569410 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzzr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76f90dd1-9706-47ef-b243-e24f185d0340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://490b3a9d324e3b07e4dd8f017414406c4a86d87092c9b931813d8b3c8f4586ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-hr2bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzzr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.579629 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9vpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e16b4fdfc2afd884bb10a8365b77cd655a1838988e4d1efd3db
6582375a8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hklx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7acb4be352fbed65c91662337b76d78a598651bf312d91b40b1b40072ebeb926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hklx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"po
dIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w9vpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.591268 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.591296 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.591326 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.591343 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.591355 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:39Z","lastTransitionTime":"2026-01-26T15:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.594925 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17760139-6c26-4a89-a7ab-4e6a3d2cc516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cfe145d703f9d67a08ff728a5a585033b34d14d145b2bd70f79c02dc0950761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hw55b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.604947 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-klrrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbeb890e-90af-4b15-a106-27b03465209f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-klrrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc 
kubenswrapper[4896]: I0126 15:34:39.621023 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42ec8793-6e16-4368-84e3-9c3007499c92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9bee2388196
2f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/k
ube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.633457 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14000ba2479d1ec77f9f59b70d6d25df8bceef937950e7402df8a276502e60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.650277 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.660451 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.672221 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.680944 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 11:17:30.826604776 +0000 UTC Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.681044 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6scjz" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9045598fc712efd551a21223c28ddfb8e1eec08598019d90140992164802d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\
\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6scjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.694111 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.694153 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.694166 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.694187 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.694201 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:39Z","lastTransitionTime":"2026-01-26T15:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.697470 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nd8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c4023ce-9d03-491a-bbc6-d5afffb92f34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d897bdfadb589d224a8832ee5e76309be4d623122e94eb88a240bfd2362bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nv4gq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nd8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z 
is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.710438 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a110465b-91d9-4e70-ac2f-7e804c58b445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07566f6d2a52a9395b03e0b759a1caccf5eaff6a1c17488e536ccbb81abdf683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0e5a1b182c162f44
f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ef4ea94d232dd91ce5b11d7f70742155c2978217895faecdbd060d4eac503b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe26f12afeaf65aeadfc14051c732f0b408333e053d56510d2a5a64f4823bde1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a5
78bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:39Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.777814 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.777896 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:39 crc kubenswrapper[4896]: E0126 15:34:39.778139 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:39 crc kubenswrapper[4896]: E0126 15:34:39.778446 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.802085 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.802204 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.802218 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.802244 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.802259 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:39Z","lastTransitionTime":"2026-01-26T15:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.905870 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.906195 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.906303 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.906404 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:39 crc kubenswrapper[4896]: I0126 15:34:39.906545 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:39Z","lastTransitionTime":"2026-01-26T15:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.009038 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.009114 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.009128 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.009152 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.009170 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:40Z","lastTransitionTime":"2026-01-26T15:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.068297 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gdszn_e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8/ovnkube-controller/1.log" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.072304 4896 scope.go:117] "RemoveContainer" containerID="5564bdc8e306cc8be9e13425383378713a5ee6c9c1bba7d8b893f3c07b451310" Jan 26 15:34:40 crc kubenswrapper[4896]: E0126 15:34:40.072462 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-gdszn_openshift-ovn-kubernetes(e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.081069 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fbeb890e-90af-4b15-a106-27b03465209f-metrics-certs\") pod \"network-metrics-daemon-klrrb\" (UID: \"fbeb890e-90af-4b15-a106-27b03465209f\") " pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:34:40 crc kubenswrapper[4896]: E0126 15:34:40.081274 4896 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 15:34:40 crc kubenswrapper[4896]: E0126 15:34:40.081351 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fbeb890e-90af-4b15-a106-27b03465209f-metrics-certs podName:fbeb890e-90af-4b15-a106-27b03465209f nodeName:}" failed. No retries permitted until 2026-01-26 15:34:41.081330442 +0000 UTC m=+38.863210835 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fbeb890e-90af-4b15-a106-27b03465209f-metrics-certs") pod "network-metrics-daemon-klrrb" (UID: "fbeb890e-90af-4b15-a106-27b03465209f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.086359 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6scjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9045598fc712efd551a21223c28ddfb8e1eec08598019d90140992164802d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"st
artedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6scjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.102662 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nd8b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c4023ce-9d03-491a-bbc6-d5afffb92f34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d897bdfadb589d224a8832ee5e76309be4d623122e94eb88a240bfd2362bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nv4gq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nd8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.111957 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:40 crc 
kubenswrapper[4896]: I0126 15:34:40.112001 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.112012 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.112029 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.112039 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:40Z","lastTransitionTime":"2026-01-26T15:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.118752 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a110465b-91d9-4e70-ac2f-7e804c58b445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07566f6d2a52a9395b03e0b759a1caccf5eaff6a1c17488e536ccbb81abdf683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ef4ea94d232dd91ce5b11d7f70742155c2978217895faecdbd060d4eac503b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe26f12afeaf65aeadfc14051c732f0b408333e053d56510d2a5a64f4823bde1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.133669 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.149687 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.162460 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.177121 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888e118ba95f9e18734df91b182870684554ae1e715e117eb3c12d2229a919ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.194919 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89be9b4e464bc55d82f3a1ad5911e48bafd6841c1919cb6c81a1a5758f43e8e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T15:34:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.206167 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0eae0e2b-9d04-4999-b78c-c70aeee09235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28317b792a293f783a15979c5a9d6acd520f15b8796087a49b0ed98f69a8921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fed1d8bacfa3bfc8b5c910ea870d72978016ab308a31c95d7f0e6d92321c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nrqhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.214281 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 
15:34:40.214315 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.214322 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.214337 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.214347 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:40Z","lastTransitionTime":"2026-01-26T15:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.224391 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406b020065f8bf0ba4a4cccd4acff46627b58f12033ca230665dbbf3a2a1e195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e5f096fb36bb92606a247123774c6155ae2811324579470faf1c04456da53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7bb5d0fd3d779d1861fdd69f46697e53173c508525fb96bb7c8825505e05e1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feca97cda454cd70acfad46a99dd5696618f8d1f1e3d887a0c32ae9b6a475f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a326550b388ea7e5eea65a62c945fe87ba4ee09b82f0ca590226d51db74a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957437952e418fe12314db00c66884b604eaf77dbee831de77ee2a4e085c803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5564bdc8e306cc8be9e13425383378713a5ee6c9c1bba7d8b893f3c07b451310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5564bdc8e306cc8be9e13425383378713a5ee6c9c1bba7d8b893f3c07b451310\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:38Z\\\",\\\"message\\\":\\\"rc annotations: Internal error occurred: failed calling webhook 
\\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:38Z is after 2025-08-24T17:21:41Z]\\\\nI0126 15:34:38.395551 6287 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gdszn_openshift-ovn-kubernetes(e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3b4d4d136ea02114fd816ba32cc0a4d38c1b2d8df7968e426c038ae37dbd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471
806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gdszn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.243138 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17760139-6c26-4a89-a7ab-4e6a3d2cc516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cfe145d703f9d67a08ff728a5a585033b34d14d145b2bd70f79c02dc0950761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d57c
97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:30Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hw55b\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.254263 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzzr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76f90dd1-9706-47ef-b243-e24f185d0340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://490b3a9d324e3b07e4dd8f017414406c4a86d87092c9b931813d8b3c8f4586ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr2bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzzr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.267303 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9vpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e16b4fdfc2afd884bb10a8365b77cd655a1838988e4d1efd3db6582375a8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hklx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7acb4be352fbed65c91662337b76d78a59865
1bf312d91b40b1b40072ebeb926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hklx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w9vpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.282004 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42ec8793-6e16-4368-84e3-9c3007499c92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:0
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.296030 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14000ba2479d1ec77f9f59b70d6d25df8bceef937950e7402df8a276502e60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.308760 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-klrrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbeb890e-90af-4b15-a106-27b03465209f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-klrrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:40Z is after 
2025-08-24T17:21:41Z" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.316885 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.316922 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.316931 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.316946 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.316956 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:40Z","lastTransitionTime":"2026-01-26T15:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.419564 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.419969 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.419979 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.419995 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.420005 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:40Z","lastTransitionTime":"2026-01-26T15:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.524249 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.524325 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.524347 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.524379 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.524419 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:40Z","lastTransitionTime":"2026-01-26T15:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.627159 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.627224 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.627246 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.627271 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.627290 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:40Z","lastTransitionTime":"2026-01-26T15:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.681819 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 05:09:19.653293601 +0000 UTC Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.729502 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.729541 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.729552 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.729567 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.729610 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:40Z","lastTransitionTime":"2026-01-26T15:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.759386 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.759431 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:34:40 crc kubenswrapper[4896]: E0126 15:34:40.759560 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:40 crc kubenswrapper[4896]: E0126 15:34:40.759783 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.831436 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.831835 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.831863 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.831885 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.831901 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:40Z","lastTransitionTime":"2026-01-26T15:34:40Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.934487 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.934568 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.934610 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.934630 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:40 crc kubenswrapper[4896]: I0126 15:34:40.934642 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:40Z","lastTransitionTime":"2026-01-26T15:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.037435 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.037513 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.037537 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.037566 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.037647 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:41Z","lastTransitionTime":"2026-01-26T15:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.090231 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fbeb890e-90af-4b15-a106-27b03465209f-metrics-certs\") pod \"network-metrics-daemon-klrrb\" (UID: \"fbeb890e-90af-4b15-a106-27b03465209f\") " pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:34:41 crc kubenswrapper[4896]: E0126 15:34:41.090443 4896 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 15:34:41 crc kubenswrapper[4896]: E0126 15:34:41.090529 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fbeb890e-90af-4b15-a106-27b03465209f-metrics-certs podName:fbeb890e-90af-4b15-a106-27b03465209f nodeName:}" failed. No retries permitted until 2026-01-26 15:34:43.090506755 +0000 UTC m=+40.872387248 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fbeb890e-90af-4b15-a106-27b03465209f-metrics-certs") pod "network-metrics-daemon-klrrb" (UID: "fbeb890e-90af-4b15-a106-27b03465209f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.140251 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.140293 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.140301 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.140318 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.140330 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:41Z","lastTransitionTime":"2026-01-26T15:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.243439 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.243519 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.243537 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.243558 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.243609 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:41Z","lastTransitionTime":"2026-01-26T15:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.347900 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.347944 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.347954 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.347972 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.347983 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:41Z","lastTransitionTime":"2026-01-26T15:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.451626 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.451669 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.451683 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.451702 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.451716 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:41Z","lastTransitionTime":"2026-01-26T15:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.555317 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.555354 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.555380 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.555397 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.555416 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:41Z","lastTransitionTime":"2026-01-26T15:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.658162 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.658204 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.658239 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.658255 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.658268 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:41Z","lastTransitionTime":"2026-01-26T15:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.681993 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 13:09:49.704502868 +0000 UTC Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.759141 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.759191 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:41 crc kubenswrapper[4896]: E0126 15:34:41.759389 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:41 crc kubenswrapper[4896]: E0126 15:34:41.760099 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.760512 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.760553 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.760566 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.760622 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.760638 4896 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:41Z","lastTransitionTime":"2026-01-26T15:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.863394 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.863504 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.863523 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.863547 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.863567 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:41Z","lastTransitionTime":"2026-01-26T15:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.966154 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.966192 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.966200 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.966212 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:41 crc kubenswrapper[4896]: I0126 15:34:41.966220 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:41Z","lastTransitionTime":"2026-01-26T15:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.069080 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.069130 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.069146 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.069171 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.069187 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:42Z","lastTransitionTime":"2026-01-26T15:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.171247 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.171293 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.171305 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.171321 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.171333 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:42Z","lastTransitionTime":"2026-01-26T15:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.273964 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.274009 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.274020 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.274037 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.274048 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:42Z","lastTransitionTime":"2026-01-26T15:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.376246 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.376283 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.376296 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.376312 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.376322 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:42Z","lastTransitionTime":"2026-01-26T15:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.478623 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.478709 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.478720 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.478739 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.478751 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:42Z","lastTransitionTime":"2026-01-26T15:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.582448 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.582520 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.582542 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.582570 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.582634 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:42Z","lastTransitionTime":"2026-01-26T15:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.682301 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 23:51:54.265642412 +0000 UTC Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.685532 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.685697 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.685719 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.685744 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.685770 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:42Z","lastTransitionTime":"2026-01-26T15:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.759062 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:42 crc kubenswrapper[4896]: E0126 15:34:42.759316 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.759422 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:34:42 crc kubenswrapper[4896]: E0126 15:34:42.759632 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.785900 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzzr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76f90dd1-9706-47ef-b243-e24f185d0340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://490b3a9d324e3b07e4dd8f017414406c4a86d87092c9b931813d8b3c8f4586ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\
"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr2bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzzr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.788556 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.788653 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.788680 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.788710 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.788741 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:42Z","lastTransitionTime":"2026-01-26T15:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.799284 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9vpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e16b4fdfc2afd884bb10a8365b77cd655a1838988e4d1efd3db6582375a8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hklx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7acb4be352fbed65c91662337b76d78a598651bf312d91b40b1b40072ebeb926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hklx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w9vpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.814251 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17760139-6c26-4a89-a7ab-4e6a3d2cc516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cfe145d703f9d67a08ff728a5a585033b34d14d145b2bd70f79c02dc0950761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f
49bb6d803dc6638a83f3eb5f22b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"20
26-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:3
0Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hw55b\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.827218 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-klrrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbeb890e-90af-4b15-a106-27b03465209f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-klrrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:42 crc 
kubenswrapper[4896]: I0126 15:34:42.842811 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42ec8793-6e16-4368-84e3-9c3007499c92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9bee2388196
2f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/k
ube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.857439 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14000ba2479d1ec77f9f59b70d6d25df8bceef937950e7402df8a276502e60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.871478 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.882695 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.891864 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.891904 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.891913 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.891927 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.891940 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:42Z","lastTransitionTime":"2026-01-26T15:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.895002 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.905877 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6scjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9045598fc712efd551a21223c28ddfb8e1eec08598019d90140992164802d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6scjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.916224 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nd8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c4023ce-9d03-491a-bbc6-d5afffb92f34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d897bdfadb589d224a8832ee5e76309be4d623122e94eb88a240bfd2362bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nv4gq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nd8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.928257 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a110465b-91d9-4e70-ac2f-7e804c58b445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07566f6d2a52a9395b03e0b759a1caccf5eaff6a1c17488e536ccbb81abdf683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-polic
y-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ef4ea94d232dd91ce5b11d7f70742155c2978217895faecdbd060d4eac503b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe26f12afeaf65aeadfc14051c732f0b408333e053d56510d2a5a64f4823bde1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.942844 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89be9b4e464bc55d82f3a1ad5911e48bafd6841c1919cb6c81a1a5758f43e8e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T15:34:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.952445 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0eae0e2b-9d04-4999-b78c-c70aeee09235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28317b792a293f783a15979c5a9d6acd520f15b8796087a49b0ed98f69a8921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fed1d8bacfa3bfc8b5c910ea870d72978016ab308a31c95d7f0e6d92321c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nrqhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.978949 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406b020065f8bf0ba4a4cccd4acff46627b58f12033ca230665dbbf3a2a1e195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e5f096fb36bb92606a247123774c6155ae2811324579470faf1c04456da53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7bb5d0fd3d779d1861fdd69f46697e53173c508525fb96bb7c8825505e05e1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feca97cda454cd70acfad46a99dd5696618f8d1f1e3d887a0c32ae9b6a475f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a326550b388ea7e5eea65a62c945fe87ba4ee09b82f0ca590226d51db74a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957437952e418fe12314db00c66884b604eaf77dbee831de77ee2a4e085c803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5564bdc8e306cc8be9e13425383378713a5ee6c9c1bba7d8b893f3c07b451310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5564bdc8e306cc8be9e13425383378713a5ee6c9c1bba7d8b893f3c07b451310\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:38Z\\\",\\\"message\\\":\\\"rc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:38Z is after 
2025-08-24T17:21:41Z]\\\\nI0126 15:34:38.395551 6287 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gdszn_openshift-ovn-kubernetes(e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3b4d4d136ea02114fd816ba32cc0a4d38c1b2d8df7968e426c038ae37dbd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471
806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gdszn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.995323 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.995414 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.995433 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.995483 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.995498 4896 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:42Z","lastTransitionTime":"2026-01-26T15:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:42 crc kubenswrapper[4896]: I0126 15:34:42.997346 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888e118ba95f9e18734df91b182870684554ae1e715e117eb3c12d2229a919ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.098196 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.098423 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.098459 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.098494 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.098517 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:43Z","lastTransitionTime":"2026-01-26T15:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.163147 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fbeb890e-90af-4b15-a106-27b03465209f-metrics-certs\") pod \"network-metrics-daemon-klrrb\" (UID: \"fbeb890e-90af-4b15-a106-27b03465209f\") " pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:34:43 crc kubenswrapper[4896]: E0126 15:34:43.163410 4896 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 15:34:43 crc kubenswrapper[4896]: E0126 15:34:43.163542 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fbeb890e-90af-4b15-a106-27b03465209f-metrics-certs podName:fbeb890e-90af-4b15-a106-27b03465209f nodeName:}" failed. No retries permitted until 2026-01-26 15:34:47.163506077 +0000 UTC m=+44.945386530 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fbeb890e-90af-4b15-a106-27b03465209f-metrics-certs") pod "network-metrics-daemon-klrrb" (UID: "fbeb890e-90af-4b15-a106-27b03465209f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.202729 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.202840 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.202862 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.202924 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.202948 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:43Z","lastTransitionTime":"2026-01-26T15:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.306777 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.306869 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.306887 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.306910 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.306958 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:43Z","lastTransitionTime":"2026-01-26T15:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.410279 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.410342 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.410365 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.410396 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.410418 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:43Z","lastTransitionTime":"2026-01-26T15:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.514032 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.514139 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.514173 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.514205 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.514227 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:43Z","lastTransitionTime":"2026-01-26T15:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.616753 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.616800 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.616818 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.616842 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.616859 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:43Z","lastTransitionTime":"2026-01-26T15:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.682968 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 06:47:38.291532863 +0000 UTC Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.719855 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.719910 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.719928 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.719944 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.719956 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:43Z","lastTransitionTime":"2026-01-26T15:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.759300 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.759345 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:43 crc kubenswrapper[4896]: E0126 15:34:43.759507 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:43 crc kubenswrapper[4896]: E0126 15:34:43.759616 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.823159 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.823208 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.823221 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.823239 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.823251 4896 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:43Z","lastTransitionTime":"2026-01-26T15:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.925771 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.925828 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.925844 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.925866 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:43 crc kubenswrapper[4896]: I0126 15:34:43.925884 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:43Z","lastTransitionTime":"2026-01-26T15:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.030037 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.030130 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.030184 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.030208 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.030223 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:44Z","lastTransitionTime":"2026-01-26T15:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.178086 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.178153 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.178176 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.178204 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.178224 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:44Z","lastTransitionTime":"2026-01-26T15:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.280948 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.281011 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.281032 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.281049 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.281061 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:44Z","lastTransitionTime":"2026-01-26T15:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.383385 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.383457 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.383479 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.383507 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.383529 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:44Z","lastTransitionTime":"2026-01-26T15:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.485897 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.485934 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.485946 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.485963 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.485974 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:44Z","lastTransitionTime":"2026-01-26T15:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.589799 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.589911 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.589929 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.589950 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.589963 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:44Z","lastTransitionTime":"2026-01-26T15:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.683856 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 18:28:04.446798627 +0000 UTC Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.692839 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.692975 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.692995 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.693019 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.693038 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:44Z","lastTransitionTime":"2026-01-26T15:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.758705 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.758762 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:44 crc kubenswrapper[4896]: E0126 15:34:44.758903 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:34:44 crc kubenswrapper[4896]: E0126 15:34:44.759044 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.796409 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.796471 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.796486 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.796505 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.796518 4896 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:44Z","lastTransitionTime":"2026-01-26T15:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.899122 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.899171 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.899186 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.899203 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:44 crc kubenswrapper[4896]: I0126 15:34:44.899215 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:44Z","lastTransitionTime":"2026-01-26T15:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.002478 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.002534 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.002549 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.002567 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.002597 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:45Z","lastTransitionTime":"2026-01-26T15:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.105359 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.105421 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.105441 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.105466 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.105483 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:45Z","lastTransitionTime":"2026-01-26T15:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.208008 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.208103 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.208130 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.208160 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.208181 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:45Z","lastTransitionTime":"2026-01-26T15:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.311120 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.311175 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.311191 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.311215 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.311231 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:45Z","lastTransitionTime":"2026-01-26T15:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.414016 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.414095 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.414119 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.414150 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.414173 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:45Z","lastTransitionTime":"2026-01-26T15:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.517140 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.517196 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.517219 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.517244 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.517267 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:45Z","lastTransitionTime":"2026-01-26T15:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.620384 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.620485 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.620501 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.620523 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.620539 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:45Z","lastTransitionTime":"2026-01-26T15:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.684477 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 11:20:38.738595965 +0000 UTC Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.723736 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.723803 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.723820 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.723842 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.723865 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:45Z","lastTransitionTime":"2026-01-26T15:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.758319 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.758386 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:45 crc kubenswrapper[4896]: E0126 15:34:45.758485 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:45 crc kubenswrapper[4896]: E0126 15:34:45.758707 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.827301 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.827354 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.827367 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.827388 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.827402 4896 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:45Z","lastTransitionTime":"2026-01-26T15:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.930528 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.930634 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.930653 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.930676 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:45 crc kubenswrapper[4896]: I0126 15:34:45.930696 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:45Z","lastTransitionTime":"2026-01-26T15:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.034075 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.034139 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.034149 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.034164 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.034183 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:46Z","lastTransitionTime":"2026-01-26T15:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.137779 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.138159 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.138508 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.138760 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.138944 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:46Z","lastTransitionTime":"2026-01-26T15:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.242054 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.242101 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.242114 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.242132 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.242147 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:46Z","lastTransitionTime":"2026-01-26T15:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.344884 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.345151 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.345241 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.345366 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.345469 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:46Z","lastTransitionTime":"2026-01-26T15:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.447522 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.447618 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.447645 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.447673 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.447693 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:46Z","lastTransitionTime":"2026-01-26T15:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.550023 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.550094 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.550119 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.550146 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.550162 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:46Z","lastTransitionTime":"2026-01-26T15:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.652872 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.652919 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.652931 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.652947 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.652959 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:46Z","lastTransitionTime":"2026-01-26T15:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.685654 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 10:19:35.194014187 +0000 UTC Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.756648 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.756684 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.756694 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.756709 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.756718 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:46Z","lastTransitionTime":"2026-01-26T15:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.758655 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.758720 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:34:46 crc kubenswrapper[4896]: E0126 15:34:46.758827 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:46 crc kubenswrapper[4896]: E0126 15:34:46.758980 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.859121 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.859166 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.859177 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.859192 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.859203 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:46Z","lastTransitionTime":"2026-01-26T15:34:46Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.961646 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.961681 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.961689 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.961711 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:46 crc kubenswrapper[4896]: I0126 15:34:46.961730 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:46Z","lastTransitionTime":"2026-01-26T15:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.067881 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.067933 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.067949 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.067972 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.067984 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:47Z","lastTransitionTime":"2026-01-26T15:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.170840 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.170926 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.170957 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.170990 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.171012 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:47Z","lastTransitionTime":"2026-01-26T15:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.204472 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fbeb890e-90af-4b15-a106-27b03465209f-metrics-certs\") pod \"network-metrics-daemon-klrrb\" (UID: \"fbeb890e-90af-4b15-a106-27b03465209f\") " pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:34:47 crc kubenswrapper[4896]: E0126 15:34:47.204756 4896 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 15:34:47 crc kubenswrapper[4896]: E0126 15:34:47.204863 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fbeb890e-90af-4b15-a106-27b03465209f-metrics-certs podName:fbeb890e-90af-4b15-a106-27b03465209f nodeName:}" failed. No retries permitted until 2026-01-26 15:34:55.204836083 +0000 UTC m=+52.986716516 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fbeb890e-90af-4b15-a106-27b03465209f-metrics-certs") pod "network-metrics-daemon-klrrb" (UID: "fbeb890e-90af-4b15-a106-27b03465209f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.274495 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.274565 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.274639 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.274671 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.274695 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:47Z","lastTransitionTime":"2026-01-26T15:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.377865 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.377934 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.377952 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.377979 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.378016 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:47Z","lastTransitionTime":"2026-01-26T15:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.481484 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.481542 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.481564 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.481619 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.481639 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:47Z","lastTransitionTime":"2026-01-26T15:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.584720 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.584797 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.584816 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.584839 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.584856 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:47Z","lastTransitionTime":"2026-01-26T15:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.685858 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 00:18:39.443323588 +0000 UTC Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.687450 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.687501 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.687515 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.687535 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.687549 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:47Z","lastTransitionTime":"2026-01-26T15:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.759306 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.759393 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:47 crc kubenswrapper[4896]: E0126 15:34:47.759514 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:47 crc kubenswrapper[4896]: E0126 15:34:47.759688 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.790567 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.790675 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.790707 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.790736 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.790758 4896 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:47Z","lastTransitionTime":"2026-01-26T15:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.893764 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.893841 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.893875 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.893909 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.893934 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:47Z","lastTransitionTime":"2026-01-26T15:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.997448 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.997520 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.997631 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.997666 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:47 crc kubenswrapper[4896]: I0126 15:34:47.997688 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:47Z","lastTransitionTime":"2026-01-26T15:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.100441 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.100524 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.100549 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.100617 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.100639 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:48Z","lastTransitionTime":"2026-01-26T15:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.203533 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.203614 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.203625 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.203642 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.203653 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:48Z","lastTransitionTime":"2026-01-26T15:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.306707 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.306801 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.306825 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.306857 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.306880 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:48Z","lastTransitionTime":"2026-01-26T15:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.410086 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.410164 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.410187 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.410215 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.410239 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:48Z","lastTransitionTime":"2026-01-26T15:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.516758 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.516798 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.516811 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.516828 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.516840 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:48Z","lastTransitionTime":"2026-01-26T15:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.619020 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.619066 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.619081 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.619103 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.619120 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:48Z","lastTransitionTime":"2026-01-26T15:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.686324 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 07:35:11.210741449 +0000 UTC Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.722071 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.722129 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.722146 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.722168 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.722186 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:48Z","lastTransitionTime":"2026-01-26T15:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.758837 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:48 crc kubenswrapper[4896]: E0126 15:34:48.759030 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.759558 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:34:48 crc kubenswrapper[4896]: E0126 15:34:48.759755 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.824619 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.824659 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.824674 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.824696 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.824711 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:48Z","lastTransitionTime":"2026-01-26T15:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.927782 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.927819 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.927860 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.927877 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:48 crc kubenswrapper[4896]: I0126 15:34:48.927888 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:48Z","lastTransitionTime":"2026-01-26T15:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.030956 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.031034 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.031051 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.031075 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.031092 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:49Z","lastTransitionTime":"2026-01-26T15:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.133239 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.133319 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.133340 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.133364 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.133380 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:49Z","lastTransitionTime":"2026-01-26T15:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.235999 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.236042 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.236054 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.236073 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.236086 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:49Z","lastTransitionTime":"2026-01-26T15:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.338659 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.338709 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.338721 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.338740 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.338753 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:49Z","lastTransitionTime":"2026-01-26T15:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.441163 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.441199 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.441209 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.441223 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.441234 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:49Z","lastTransitionTime":"2026-01-26T15:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.544275 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.544321 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.544333 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.544346 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.544355 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:49Z","lastTransitionTime":"2026-01-26T15:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.586306 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.586665 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.586808 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.586900 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.587057 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:49Z","lastTransitionTime":"2026-01-26T15:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:49 crc kubenswrapper[4896]: E0126 15:34:49.603666 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"adc9c92c-63cf-439c-8587-8eafa1c0384d\\\",\\\"systemUUID\\\":\\\"6ce3bfcf-cf26-46a6-add0-2b999cc5fad1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:49Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.607891 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.607922 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.607929 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.607943 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.607953 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:49Z","lastTransitionTime":"2026-01-26T15:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:49 crc kubenswrapper[4896]: E0126 15:34:49.620489 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"adc9c92c-63cf-439c-8587-8eafa1c0384d\\\",\\\"systemUUID\\\":\\\"6ce3bfcf-cf26-46a6-add0-2b999cc5fad1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:49Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.624098 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.624134 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.624147 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.624166 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.624178 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:49Z","lastTransitionTime":"2026-01-26T15:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:49 crc kubenswrapper[4896]: E0126 15:34:49.641159 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"adc9c92c-63cf-439c-8587-8eafa1c0384d\\\",\\\"systemUUID\\\":\\\"6ce3bfcf-cf26-46a6-add0-2b999cc5fad1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:49Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.644736 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.644781 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.644798 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.644816 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.644829 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:49Z","lastTransitionTime":"2026-01-26T15:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:49 crc kubenswrapper[4896]: E0126 15:34:49.659168 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"adc9c92c-63cf-439c-8587-8eafa1c0384d\\\",\\\"systemUUID\\\":\\\"6ce3bfcf-cf26-46a6-add0-2b999cc5fad1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:49Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.663522 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.663566 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.663595 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.663612 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.663624 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:49Z","lastTransitionTime":"2026-01-26T15:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:49 crc kubenswrapper[4896]: E0126 15:34:49.677856 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"adc9c92c-63cf-439c-8587-8eafa1c0384d\\\",\\\"systemUUID\\\":\\\"6ce3bfcf-cf26-46a6-add0-2b999cc5fad1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:49Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:49 crc kubenswrapper[4896]: E0126 15:34:49.678168 4896 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.680027 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.680069 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.680077 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.680092 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.680102 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:49Z","lastTransitionTime":"2026-01-26T15:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.686677 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 11:15:45.116218062 +0000 UTC Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.758736 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.758844 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:49 crc kubenswrapper[4896]: E0126 15:34:49.758878 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:49 crc kubenswrapper[4896]: E0126 15:34:49.759005 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.782321 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.782365 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.782377 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.782397 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.782412 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:49Z","lastTransitionTime":"2026-01-26T15:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.886780 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.886834 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.886847 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.886865 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.887337 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:49Z","lastTransitionTime":"2026-01-26T15:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.990899 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.990962 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.990979 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.991002 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:49 crc kubenswrapper[4896]: I0126 15:34:49.991022 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:49Z","lastTransitionTime":"2026-01-26T15:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.094299 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.094347 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.094365 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.094388 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.094400 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:50Z","lastTransitionTime":"2026-01-26T15:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.197104 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.197136 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.197146 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.197162 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.197171 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:50Z","lastTransitionTime":"2026-01-26T15:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.300492 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.300557 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.300574 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.300625 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.300642 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:50Z","lastTransitionTime":"2026-01-26T15:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.403991 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.404040 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.404050 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.404072 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.404087 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:50Z","lastTransitionTime":"2026-01-26T15:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.506803 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.506874 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.506886 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.506900 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.506912 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:50Z","lastTransitionTime":"2026-01-26T15:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.609418 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.609501 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.609519 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.609542 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.609559 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:50Z","lastTransitionTime":"2026-01-26T15:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.687785 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 05:26:10.745463293 +0000 UTC Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.712431 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.712559 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.712623 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.712658 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.712677 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:50Z","lastTransitionTime":"2026-01-26T15:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.807949 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.808126 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.807979 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:50 crc kubenswrapper[4896]: E0126 15:34:50.808243 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:50 crc kubenswrapper[4896]: E0126 15:34:50.808452 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:50 crc kubenswrapper[4896]: E0126 15:34:50.808704 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.816248 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.816290 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.816301 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.816317 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.816329 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:50Z","lastTransitionTime":"2026-01-26T15:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.919219 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.919292 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.919314 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.919344 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:50 crc kubenswrapper[4896]: I0126 15:34:50.919365 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:50Z","lastTransitionTime":"2026-01-26T15:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.021794 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.021851 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.021866 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.021891 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.021906 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:51Z","lastTransitionTime":"2026-01-26T15:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.124719 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.124778 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.124789 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.124806 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.124817 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:51Z","lastTransitionTime":"2026-01-26T15:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.227600 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.227665 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.227677 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.227700 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.227723 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:51Z","lastTransitionTime":"2026-01-26T15:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.330460 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.330508 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.330519 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.330534 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.330543 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:51Z","lastTransitionTime":"2026-01-26T15:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.432566 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.432627 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.432635 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.432648 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.432657 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:51Z","lastTransitionTime":"2026-01-26T15:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.535239 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.535279 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.535290 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.535308 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.535320 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:51Z","lastTransitionTime":"2026-01-26T15:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.638388 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.638791 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.638811 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.638828 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.638841 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:51Z","lastTransitionTime":"2026-01-26T15:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.688428 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 05:24:00.146993953 +0000 UTC Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.741947 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.742010 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.742034 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.742063 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.742084 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:51Z","lastTransitionTime":"2026-01-26T15:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.759049 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:51 crc kubenswrapper[4896]: E0126 15:34:51.759155 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.845036 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.845099 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.845120 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.845155 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.845175 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:51Z","lastTransitionTime":"2026-01-26T15:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.947704 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.947752 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.947769 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.947794 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:51 crc kubenswrapper[4896]: I0126 15:34:51.947811 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:51Z","lastTransitionTime":"2026-01-26T15:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.050751 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.050818 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.050839 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.050867 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.050887 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:52Z","lastTransitionTime":"2026-01-26T15:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.153952 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.154002 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.154014 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.154031 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.154043 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:52Z","lastTransitionTime":"2026-01-26T15:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.256846 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.256895 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.256905 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.256918 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.256926 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:52Z","lastTransitionTime":"2026-01-26T15:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.359853 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.360074 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.360137 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.360197 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.360278 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:52Z","lastTransitionTime":"2026-01-26T15:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.464841 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.464890 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.464907 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.464929 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.464943 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:52Z","lastTransitionTime":"2026-01-26T15:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.568256 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.568288 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.568297 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.568310 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.568319 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:52Z","lastTransitionTime":"2026-01-26T15:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.670286 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.670336 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.670352 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.670372 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.670389 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:52Z","lastTransitionTime":"2026-01-26T15:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.689135 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 14:22:25.741623221 +0000 UTC Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.758630 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.758764 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.758842 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:34:52 crc kubenswrapper[4896]: E0126 15:34:52.758858 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:52 crc kubenswrapper[4896]: E0126 15:34:52.758900 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:52 crc kubenswrapper[4896]: E0126 15:34:52.758976 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.773487 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.773816 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.774026 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.774245 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.774440 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:52Z","lastTransitionTime":"2026-01-26T15:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.783960 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42ec8793-6e16-4368-84e3-9c3007499c92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:52Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.802811 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14000ba2479d1ec77f9f59b70d6d25df8bceef937950e7402df8a276502e60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:52Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.816443 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-klrrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbeb890e-90af-4b15-a106-27b03465209f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-klrrb\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:52Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.830684 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a110465b-91d9-4e70-ac2f-7e804c58b445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07566f6d2a52a9395b03e0b759a1caccf5eaff6a1c17488e536ccbb81abdf683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ef4ea94d232dd91ce5b11d7f70742155c2978217895faecdbd060d4eac503b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe26f12afeaf65aeadfc14051c732f0b408333e053d56510d2a5a64f4823bde1\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:52Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.846499 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:52Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.861721 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:52Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.876384 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:52Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.877820 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.877863 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.877878 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.877899 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.877915 4896 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:52Z","lastTransitionTime":"2026-01-26T15:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.887468 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6scjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9045598fc712efd551a21223c28ddfb8e1eec08598019d90140992164802d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6scjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:52Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.900558 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nd8b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c4023ce-9d03-491a-bbc6-d5afffb92f34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d897bdfadb589d224a8832ee5e76309be4d623122e94eb88a240bfd2362bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nv4gq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nd8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:52Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.913069 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888e118ba95f9e18734df91b182870684554ae1e715e117eb3c12d2229a919ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:52Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.925071 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89be9b4e464bc55d82f3a1ad5911e48bafd6841c1919cb6c81a1a5758f43e8e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T15:34:52Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.937327 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0eae0e2b-9d04-4999-b78c-c70aeee09235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28317b792a293f783a15979c5a9d6acd520f15b8796087a49b0ed98f69a8921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fed1d8bacfa3bfc8b5c910ea870d72978016ab308a31c95d7f0e6d92321c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nrqhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:52Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.963633 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406b020065f8bf0ba4a4cccd4acff46627b58f12033ca230665dbbf3a2a1e195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e5f096fb36bb92606a247123774c6155ae2811324579470faf1c04456da53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7bb5d0fd3d779d1861fdd69f46697e53173c508525fb96bb7c8825505e05e1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feca97cda454cd70acfad46a99dd5696618f8d1f1e3d887a0c32ae9b6a475f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a326550b388ea7e5eea65a62c945fe87ba4ee09b82f0ca590226d51db74a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957437952e418fe12314db00c66884b604eaf77dbee831de77ee2a4e085c803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5564bdc8e306cc8be9e13425383378713a5ee6c9c1bba7d8b893f3c07b451310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5564bdc8e306cc8be9e13425383378713a5ee6c9c1bba7d8b893f3c07b451310\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:38Z\\\",\\\"message\\\":\\\"rc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:38Z is after 
2025-08-24T17:21:41Z]\\\\nI0126 15:34:38.395551 6287 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gdszn_openshift-ovn-kubernetes(e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3b4d4d136ea02114fd816ba32cc0a4d38c1b2d8df7968e426c038ae37dbd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471
806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gdszn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:52Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.980043 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.980065 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.980074 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.980088 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.980096 4896 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:52Z","lastTransitionTime":"2026-01-26T15:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:52 crc kubenswrapper[4896]: I0126 15:34:52.987051 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17760139-6c26-4a89-a7ab-4e6a3d2cc516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cfe145d703f9d67a08ff728a5a585033b34d14d145b2bd70f79c02dc0950761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hw55b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:52Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.000879 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzzr5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76f90dd1-9706-47ef-b243-e24f185d0340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://490b3a9d324e3b07e4dd8f017414406c4a86d87092c9b931813d8b3c8f4586ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr2bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzzr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:52Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.012826 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9vpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e16b4fdfc2afd884bb10a8365b77cd655a1838988e4d1efd3db6582375a8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hklx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7acb4be352fbed65c91662337b76d78a598651bf312d91b40b1b40072ebeb926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hklx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w9vpq\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:53Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.082228 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.082274 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.082289 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.082310 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.082324 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:53Z","lastTransitionTime":"2026-01-26T15:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.184172 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.184412 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.184539 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.184667 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.184777 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:53Z","lastTransitionTime":"2026-01-26T15:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.286840 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.286869 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.286876 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.286888 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.286896 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:53Z","lastTransitionTime":"2026-01-26T15:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.390108 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.390160 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.390173 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.390191 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.390216 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:53Z","lastTransitionTime":"2026-01-26T15:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.496939 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.497016 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.497460 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.497501 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.497514 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:53Z","lastTransitionTime":"2026-01-26T15:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.599824 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.599860 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.599871 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.599885 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.599898 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:53Z","lastTransitionTime":"2026-01-26T15:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.690662 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 15:20:59.67095014 +0000 UTC Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.702337 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.702388 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.702411 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.702440 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.702462 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:53Z","lastTransitionTime":"2026-01-26T15:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.759249 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:53 crc kubenswrapper[4896]: E0126 15:34:53.759751 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.806698 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.806739 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.806749 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.806762 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.806774 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:53Z","lastTransitionTime":"2026-01-26T15:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.910051 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.910353 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.910548 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.910779 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:53 crc kubenswrapper[4896]: I0126 15:34:53.910986 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:53Z","lastTransitionTime":"2026-01-26T15:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.014756 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.015191 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.015655 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.016029 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.016403 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:54Z","lastTransitionTime":"2026-01-26T15:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.119743 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.119988 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.120047 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.120115 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.120169 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:54Z","lastTransitionTime":"2026-01-26T15:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.223336 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.223394 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.223411 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.223436 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.223453 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:54Z","lastTransitionTime":"2026-01-26T15:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.326232 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.326286 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.326296 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.326310 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.326318 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:54Z","lastTransitionTime":"2026-01-26T15:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.428481 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.428562 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.428620 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.428657 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.428680 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:54Z","lastTransitionTime":"2026-01-26T15:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.531320 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.531370 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.531390 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.531414 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.531432 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:54Z","lastTransitionTime":"2026-01-26T15:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.634410 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.634446 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.634479 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.634496 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.634509 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:54Z","lastTransitionTime":"2026-01-26T15:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.691901 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 13:06:28.286065862 +0000 UTC Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.737353 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.737477 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.737496 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.737519 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.737537 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:54Z","lastTransitionTime":"2026-01-26T15:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.759078 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.759168 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:54 crc kubenswrapper[4896]: E0126 15:34:54.759259 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:54 crc kubenswrapper[4896]: E0126 15:34:54.759349 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.759415 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:34:54 crc kubenswrapper[4896]: E0126 15:34:54.759506 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.840085 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.840141 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.840158 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.840181 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.840200 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:54Z","lastTransitionTime":"2026-01-26T15:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.943129 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.943241 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.943262 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.943285 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:54 crc kubenswrapper[4896]: I0126 15:34:54.943303 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:54Z","lastTransitionTime":"2026-01-26T15:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.047970 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.048017 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.048028 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.048044 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.048054 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:55Z","lastTransitionTime":"2026-01-26T15:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.150325 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.150424 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.150448 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.150523 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.150541 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:55Z","lastTransitionTime":"2026-01-26T15:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.254346 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.254434 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.254454 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.254479 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.254497 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:55Z","lastTransitionTime":"2026-01-26T15:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.289512 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fbeb890e-90af-4b15-a106-27b03465209f-metrics-certs\") pod \"network-metrics-daemon-klrrb\" (UID: \"fbeb890e-90af-4b15-a106-27b03465209f\") " pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:34:55 crc kubenswrapper[4896]: E0126 15:34:55.289791 4896 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 15:34:55 crc kubenswrapper[4896]: E0126 15:34:55.289900 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fbeb890e-90af-4b15-a106-27b03465209f-metrics-certs podName:fbeb890e-90af-4b15-a106-27b03465209f nodeName:}" failed. No retries permitted until 2026-01-26 15:35:11.289872915 +0000 UTC m=+69.071753338 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fbeb890e-90af-4b15-a106-27b03465209f-metrics-certs") pod "network-metrics-daemon-klrrb" (UID: "fbeb890e-90af-4b15-a106-27b03465209f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.357941 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.358002 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.358012 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.358028 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.358055 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:55Z","lastTransitionTime":"2026-01-26T15:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.390632 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:34:55 crc kubenswrapper[4896]: E0126 15:34:55.390803 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:35:27.390767732 +0000 UTC m=+85.172648125 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.460798 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.460852 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.460870 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.460893 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:55 crc kubenswrapper[4896]: 
I0126 15:34:55.460910 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:55Z","lastTransitionTime":"2026-01-26T15:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.491858 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.491965 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.492035 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.492098 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:55 crc kubenswrapper[4896]: E0126 15:34:55.492284 4896 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 15:34:55 crc kubenswrapper[4896]: E0126 15:34:55.492316 4896 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 15:34:55 crc kubenswrapper[4896]: E0126 15:34:55.492338 4896 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:55 crc kubenswrapper[4896]: E0126 15:34:55.492419 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 15:35:27.492392075 +0000 UTC m=+85.274272508 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:55 crc kubenswrapper[4896]: E0126 15:34:55.492891 4896 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 15:34:55 crc kubenswrapper[4896]: E0126 15:34:55.492973 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 15:35:27.49295241 +0000 UTC m=+85.274832833 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 15:34:55 crc kubenswrapper[4896]: E0126 15:34:55.493030 4896 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 15:34:55 crc kubenswrapper[4896]: E0126 15:34:55.493144 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 15:35:27.493115183 +0000 UTC m=+85.274995616 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 15:34:55 crc kubenswrapper[4896]: E0126 15:34:55.493439 4896 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 15:34:55 crc kubenswrapper[4896]: E0126 15:34:55.493654 4896 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 15:34:55 crc kubenswrapper[4896]: E0126 15:34:55.493802 4896 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:55 crc kubenswrapper[4896]: E0126 15:34:55.494230 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 15:35:27.49419281 +0000 UTC m=+85.276073293 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.496003 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.512821 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.514705 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42ec8793-6e16-4368-84e3-9c3007499c92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-
syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o
://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:55Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.536122 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14000ba2479d1ec77f9f59b70d6d25df8bceef937950e7402df8a276502e60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:55Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.548650 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-klrrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbeb890e-90af-4b15-a106-27b03465209f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-klrrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:55Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:55 crc 
kubenswrapper[4896]: I0126 15:34:55.564373 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.564431 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.564449 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.564471 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.564485 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:55Z","lastTransitionTime":"2026-01-26T15:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.566034 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a110465b-91d9-4e70-ac2f-7e804c58b445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07566f6d2a52a9395b03e0b759a1caccf5eaff6a1c17488e536ccbb81abdf683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0e5a1b182c
162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ef4ea94d232dd91ce5b11d7f70742155c2978217895faecdbd060d4eac503b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe26f12afeaf65aeadfc14051c732f0b408333e053d56510d2a5a64f4823bde1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:55Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.582682 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:55Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.598554 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:55Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.619957 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:55Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.634659 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6scjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9045598fc712efd551a21223c28ddfb8e1eec08598019d90140992164802d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6scjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:55Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.656874 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nd8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c4023ce-9d03-491a-bbc6-d5afffb92f34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d897bdfadb589d224a8832ee5e76309be4d623122e94eb88a240bfd2362bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nv4gq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nd8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:55Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.667690 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.667772 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.667797 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.667827 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.667847 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:55Z","lastTransitionTime":"2026-01-26T15:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.672266 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888e118ba95f9e18734df91b182870684554ae1e715e117eb3c12d2229a919ad\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:55Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.692289 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89be9b4e464bc55d82f3a1ad5911e48bafd6841c1919cb6c81a1a5758f43e8e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T15:34:55Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.692275 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 12:58:29.003170033 +0000 UTC Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.709134 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0eae0e2b-9d04-4999-b78c-c70aeee09235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28317b792a293f783a15979c5a9d6acd520f15b8796087a49b0ed98f69a8921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fed1d8bacfa3bfc8b5c910ea870d72978016ab308a31c95d7f0e6d92321c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nrqhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:55Z is after 
2025-08-24T17:21:41Z" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.733665 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406b020065f8bf0ba4a4cccd4acff46627b58f12033ca230665dbbf3a2a1e195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e5f096fb36bb92606a247123774c6155ae2811324579470faf1c04456da53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7bb5d0fd3d779d1861fdd69f46697e53173c508525fb96bb7c8825505e05e1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feca97cda454cd70acfad46a99dd5696618f8d1f1e3d887a0c32ae9b6a475f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a326550b388ea7e5eea65a62c945fe87ba4ee09b82f0ca590226d51db74a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957437952e418fe12314db00c66884b604eaf77dbee831de77ee2a4e085c803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5564bdc8e306cc8be9e13425383378713a5ee6c9c1bba7d8b893f3c07b451310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5564bdc8e306cc8be9e13425383378713a5ee6c9c1bba7d8b893f3c07b451310\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:38Z\\\",\\\"message\\\":\\\"rc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:38Z is after 
2025-08-24T17:21:41Z]\\\\nI0126 15:34:38.395551 6287 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gdszn_openshift-ovn-kubernetes(e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3b4d4d136ea02114fd816ba32cc0a4d38c1b2d8df7968e426c038ae37dbd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471
806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gdszn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:55Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.756969 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17760139-6c26-4a89-a7ab-4e6a3d2cc516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cfe145d703f9d67a08ff728a5a585033b34d14d145b2bd70f79c02dc0950761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d57c
97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:30Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hw55b\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:55Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.759182 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:55 crc kubenswrapper[4896]: E0126 15:34:55.759351 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.760525 4896 scope.go:117] "RemoveContainer" containerID="5564bdc8e306cc8be9e13425383378713a5ee6c9c1bba7d8b893f3c07b451310" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.770013 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.770076 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.770101 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.770132 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.770156 4896 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:55Z","lastTransitionTime":"2026-01-26T15:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.774632 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzzr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76f90dd1-9706-47ef-b243-e24f185d0340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://490b3a9d324e3b07e4dd8f017414406c4a86d87092c9b931813d8b3c8f4586ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr2bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzzr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:55Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.791638 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9vpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e16b4fdfc2afd884bb10a8365b77cd655a1838988e4d1efd3db6582375a8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hklx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7acb4be352fbed65c91662337b76d78a59865
1bf312d91b40b1b40072ebeb926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hklx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w9vpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:55Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.874816 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.875121 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.875139 4896 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.875162 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.875179 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:55Z","lastTransitionTime":"2026-01-26T15:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.982635 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.982700 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.982719 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.982747 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:55 crc kubenswrapper[4896]: I0126 15:34:55.982767 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:55Z","lastTransitionTime":"2026-01-26T15:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.085597 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.085661 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.085677 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.085700 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.085714 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:56Z","lastTransitionTime":"2026-01-26T15:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.127170 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gdszn_e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8/ovnkube-controller/1.log" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.130356 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" event={"ID":"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8","Type":"ContainerStarted","Data":"d5d1b6b278161e192ef9a209511841948188b9d6ff06f25e7a1e911f9aa882fc"} Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.131259 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.150654 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a110465b-91d9-4e70-ac2f-7e804c58b445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07566f6d2a52a9395b03e0b759a1caccf5eaff6a1c17488e536ccbb81abdf683\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ef4ea94d232dd91ce5b11d7f70742155c2978217895faecdbd060d4eac503b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert
-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe26f12afeaf65aeadfc14051c732f0b408333e053d56510d2a5a64f4823bde1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:56Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.170687 4896 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"006f90bb-2dfb-429d-922b-6c166bcd784c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1df0c37f97b6286fb28426cd8256db5ba87b97337962fa952ba3a5e8c9bf399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c061d1bfd5c72108933d5679a19f46b22ac255228f478eb91087c8dacf666cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6
b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b2b5ee1925b1757a952b907f462ef1a57ad4eb8d5c982cec773d9441734f14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bda36b1477e471a7ccf49ca2d8d6e8ae8b1248b9ca0c9ebfadeddfc8361ce99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-
host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bda36b1477e471a7ccf49ca2d8d6e8ae8b1248b9ca0c9ebfadeddfc8361ce99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:56Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.190245 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.190288 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.190301 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.190319 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.190333 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:56Z","lastTransitionTime":"2026-01-26T15:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.191786 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:56Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.203001 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:56Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.278211 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:56Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.331758 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.331820 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.331836 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.331852 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.331862 4896 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:56Z","lastTransitionTime":"2026-01-26T15:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.333812 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6scjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9045598fc712efd551a21223c28ddfb8e1eec08598019d90140992164802d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6scjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:56Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.347147 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nd8b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c4023ce-9d03-491a-bbc6-d5afffb92f34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d897bdfadb589d224a8832ee5e76309be4d623122e94eb88a240bfd2362bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nv4gq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nd8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:56Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.361937 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888e118ba95f9e18734df91b182870684554ae1e715e117eb3c12d2229a919ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:56Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.373689 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89be9b4e464bc55d82f3a1ad5911e48bafd6841c1919cb6c81a1a5758f43e8e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T15:34:56Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.385109 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0eae0e2b-9d04-4999-b78c-c70aeee09235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28317b792a293f783a15979c5a9d6acd520f15b8796087a49b0ed98f69a8921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fed1d8bacfa3bfc8b5c910ea870d72978016ab308a31c95d7f0e6d92321c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nrqhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:56Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.403063 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406b020065f8bf0ba4a4cccd4acff46627b58f12033ca230665dbbf3a2a1e195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e5f096fb36bb92606a247123774c6155ae2811324579470faf1c04456da53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7bb5d0fd3d779d1861fdd69f46697e53173c508525fb96bb7c8825505e05e1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feca97cda454cd70acfad46a99dd5696618f8d1f1e3d887a0c32ae9b6a475f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a326550b388ea7e5eea65a62c945fe87ba4ee09b82f0ca590226d51db74a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957437952e418fe12314db00c66884b604eaf77dbee831de77ee2a4e085c803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d1b6b278161e192ef9a209511841948188b9d6ff06f25e7a1e911f9aa882fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5564bdc8e306cc8be9e13425383378713a5ee6c9c1bba7d8b893f3c07b451310\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:38Z\\\",\\\"message\\\":\\\"rc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:38Z is after 
2025-08-24T17:21:41Z]\\\\nI0126 15:34:38.395551 6287 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{c02bd945-d57b-49ff-9cd3-202ed3574b\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/ru
n/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3b4d4d136ea02114fd816ba32cc0a4d38c1b2d8df7968e426c038ae37dbd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gdszn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:56Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.416700 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17760139-6c26-4a89-a7ab-4e6a3d2cc516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cfe145d703f9d67a08ff728a5a585033b34d14d145b2bd70f79c02dc0950761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d57c
97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:30Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hw55b\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:56Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.427179 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzzr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76f90dd1-9706-47ef-b243-e24f185d0340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://490b3a9d324e3b07e4dd8f017414406c4a86d87092c9b931813d8b3c8f4586ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr2bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzzr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:56Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.434353 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.434388 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.434397 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.434412 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.434422 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:56Z","lastTransitionTime":"2026-01-26T15:34:56Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.436209 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9vpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e16b4fdfc2afd884bb10a8365b77cd655a1838988e4d1efd3db6582375a8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:38Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hklx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7acb4be352fbed65c91662337b76d78a598651bf312d91b40b1b40072ebeb926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hklx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w9vpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:56Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:56 crc 
kubenswrapper[4896]: I0126 15:34:56.446923 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42ec8793-6e16-4368-84e3-9c3007499c92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9bee2388196
2f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/k
ube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:56Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.458507 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14000ba2479d1ec77f9f59b70d6d25df8bceef937950e7402df8a276502e60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:56Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.467446 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-klrrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbeb890e-90af-4b15-a106-27b03465209f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-klrrb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:56Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.537285 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.537354 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.537365 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.537403 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.537415 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:56Z","lastTransitionTime":"2026-01-26T15:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.640883 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.640914 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.640922 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.640936 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.640945 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:56Z","lastTransitionTime":"2026-01-26T15:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.693347 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 17:27:06.05287046 +0000 UTC Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.743218 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.743256 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.743264 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.743279 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.743289 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:56Z","lastTransitionTime":"2026-01-26T15:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.759070 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:56 crc kubenswrapper[4896]: E0126 15:34:56.759173 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.759190 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.759081 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:56 crc kubenswrapper[4896]: E0126 15:34:56.759272 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:34:56 crc kubenswrapper[4896]: E0126 15:34:56.759340 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.846400 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.846719 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.846799 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.846886 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.846972 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:56Z","lastTransitionTime":"2026-01-26T15:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.949714 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.949768 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.949782 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.949801 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:56 crc kubenswrapper[4896]: I0126 15:34:56.949817 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:56Z","lastTransitionTime":"2026-01-26T15:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.056563 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.056875 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.056969 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.057067 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.057139 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:57Z","lastTransitionTime":"2026-01-26T15:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.135382 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gdszn_e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8/ovnkube-controller/2.log" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.135882 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gdszn_e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8/ovnkube-controller/1.log" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.137726 4896 generic.go:334] "Generic (PLEG): container finished" podID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerID="d5d1b6b278161e192ef9a209511841948188b9d6ff06f25e7a1e911f9aa882fc" exitCode=1 Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.137769 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" event={"ID":"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8","Type":"ContainerDied","Data":"d5d1b6b278161e192ef9a209511841948188b9d6ff06f25e7a1e911f9aa882fc"} Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.137799 4896 scope.go:117] "RemoveContainer" containerID="5564bdc8e306cc8be9e13425383378713a5ee6c9c1bba7d8b893f3c07b451310" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.138472 4896 scope.go:117] "RemoveContainer" containerID="d5d1b6b278161e192ef9a209511841948188b9d6ff06f25e7a1e911f9aa882fc" Jan 26 15:34:57 crc kubenswrapper[4896]: E0126 15:34:57.138626 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-gdszn_openshift-ovn-kubernetes(e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.156342 4896 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-9nd8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c4023ce-9d03-491a-bbc6-d5afffb92f34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d897bdfadb589d224a8832ee5e76309be4d623122e94eb88a240bfd2362bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nv4gq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nd8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:57Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.160078 4896 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.160112 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.160123 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.160140 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.160151 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:57Z","lastTransitionTime":"2026-01-26T15:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.168111 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a110465b-91d9-4e70-ac2f-7e804c58b445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07566f6d2a52a9395b03e0b759a1caccf5eaff6a1c17488e536ccbb81abdf683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0e5a1b182c
162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ef4ea94d232dd91ce5b11d7f70742155c2978217895faecdbd060d4eac503b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe26f12afeaf65aeadfc14051c732f0b408333e053d56510d2a5a64f4823bde1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:57Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.180288 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"006f90bb-2dfb-429d-922b-6c166bcd784c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1df0c37f97b6286fb28426cd8256db5ba87b97337962fa952ba3a5e8c9bf399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c061d1bfd5c72108933d5679a19f46b22ac255228f478eb91087c8dacf666cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b2b5ee1925b1757a952b907f462ef1a57ad4eb8d5c982cec773d9441734f14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bda36b1477e471a7ccf49ca2d8d6e8ae8b1248b9ca0c9ebfadeddfc8361ce99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1bda36b1477e471a7ccf49ca2d8d6e8ae8b1248b9ca0c9ebfadeddfc8361ce99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:57Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.193419 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:57Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.208900 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:57Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.220718 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:57Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.231325 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6scjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9045598fc712efd551a21223c28ddfb8e1eec08598019d90140992164802d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6scjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:57Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.243268 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888e118ba95f9e18734df91b182870684554ae1e715e117eb3c12d2229a919ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:57Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.255450 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89be9b4e464bc55d82f3a1ad5911e48bafd6841c1919cb6c81a1a5758f43e8e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T15:34:57Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.263012 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.263096 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.263113 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.263139 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.263157 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:57Z","lastTransitionTime":"2026-01-26T15:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.267785 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0eae0e2b-9d04-4999-b78c-c70aeee09235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28317b792a293f783a15979c5a9d6acd520f15b8796087a49b0ed98f69a8921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fed1d8bacfa3bfc8b5c910ea870d72978016ab308a31c95d7f0e6d92321c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nrqhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:57Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.287303 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406b020065f8bf0ba4a4cccd4acff46627b58f12033ca230665dbbf3a2a1e195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e5f096fb36bb92606a247123774c6155ae2811324579470faf1c04456da53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7bb5d0fd3d779d1861fdd69f46697e53173c508525fb96bb7c8825505e05e1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feca97cda454cd70acfad46a99dd5696618f8d1f1e3d887a0c32ae9b6a475f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a326550b388ea7e5eea65a62c945fe87ba4ee09b82f0ca590226d51db74a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957437952e418fe12314db00c66884b604eaf77dbee831de77ee2a4e085c803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d1b6b278161e192ef9a209511841948188b9d6ff06f25e7a1e911f9aa882fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5564bdc8e306cc8be9e13425383378713a5ee6c9c1bba7d8b893f3c07b451310\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:38Z\\\",\\\"message\\\":\\\"rc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:38Z is after 
2025-08-24T17:21:41Z]\\\\nI0126 15:34:38.395551 6287 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5d1b6b278161e192ef9a209511841948188b9d6ff06f25e7a1e911f9aa882fc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:57Z\\\",\\\"message\\\":\\\"/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 15:34:57.050422 6501 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 15:34:57.050814 6501 reflector.go:311] Stopping 
reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:57.050820 6501 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:57.051128 6501 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 15:34:57.051212 6501 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 15:34:57.051246 6501 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 15:34:57.051310 6501 factory.go:656] Stopping watch factory\\\\nI0126 15:34:57.051308 6501 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 15:34:57.051328 6501 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 15:34:57.051684 6501 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 15:34:57.052029 6501 ovnkube.go:599] Stopped ovnkube\\\\nI0126 15:34:57.052137 6501 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 
15:34:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3b4d4d136ea02114fd816ba32cc0a4d38c1b2d8df7968e426c038ae37dbd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e75
3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gdszn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:57Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.302634 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17760139-6c26-4a89-a7ab-4e6a3d2cc516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cfe145d703f9d67a08ff728a5a585033b34d14d145b2bd70f79c02dc0950761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d57c
97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:30Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hw55b\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:57Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.313133 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzzr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76f90dd1-9706-47ef-b243-e24f185d0340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://490b3a9d324e3b07e4dd8f017414406c4a86d87092c9b931813d8b3c8f4586ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr2bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzzr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:57Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.324934 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9vpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e16b4fdfc2afd884bb10a8365b77cd655a1838988e4d1efd3db6582375a8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hklx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7acb4be352fbed65c91662337b76d78a59865
1bf312d91b40b1b40072ebeb926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hklx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w9vpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:57Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.338273 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42ec8793-6e16-4368-84e3-9c3007499c92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:0
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:57Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.351011 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14000ba2479d1ec77f9f59b70d6d25df8bceef937950e7402df8a276502e60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:57Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.361369 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-klrrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbeb890e-90af-4b15-a106-27b03465209f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-klrrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:57Z is after 
2025-08-24T17:21:41Z" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.364939 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.365066 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.365152 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.365241 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.365316 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:57Z","lastTransitionTime":"2026-01-26T15:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.468441 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.468497 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.468509 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.468528 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.468542 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:57Z","lastTransitionTime":"2026-01-26T15:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.570797 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.570830 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.570838 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.570852 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.570861 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:57Z","lastTransitionTime":"2026-01-26T15:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.674187 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.674231 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.674244 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.674262 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.674274 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:57Z","lastTransitionTime":"2026-01-26T15:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.693634 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 11:20:34.477631527 +0000 UTC Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.759237 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:57 crc kubenswrapper[4896]: E0126 15:34:57.759533 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.776904 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.776958 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.776971 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.776990 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.777006 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:57Z","lastTransitionTime":"2026-01-26T15:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.878867 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.878907 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.878920 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.878935 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.878947 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:57Z","lastTransitionTime":"2026-01-26T15:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.982216 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.982264 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.982279 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.982297 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:57 crc kubenswrapper[4896]: I0126 15:34:57.982308 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:57Z","lastTransitionTime":"2026-01-26T15:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.084697 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.084743 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.084754 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.084772 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.084785 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:58Z","lastTransitionTime":"2026-01-26T15:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.143772 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gdszn_e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8/ovnkube-controller/2.log" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.148834 4896 scope.go:117] "RemoveContainer" containerID="d5d1b6b278161e192ef9a209511841948188b9d6ff06f25e7a1e911f9aa882fc" Jan 26 15:34:58 crc kubenswrapper[4896]: E0126 15:34:58.149070 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-gdszn_openshift-ovn-kubernetes(e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.161519 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0eae0e2b-9d04-4999-b78c-c70aeee09235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28317b792a293f783a15979c5a9d6acd520f15b8796087a49b0ed98f69a8921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fed1d8bacfa3bfc8b5c910ea870d72978016ab3
08a31c95d7f0e6d92321c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nrqhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:58Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.182861 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406b020065f8bf0ba4a4cccd4acff46627b58f12033ca230665dbbf3a2a1e195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e5f096fb36bb92606a247123774c6155ae2811324579470faf1c04456da53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7bb5d0fd3d779d1861fdd69f46697e53173c508525fb96bb7c8825505e05e1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feca97cda454cd70acfad46a99dd5696618f8d1f1e3d887a0c32ae9b6a475f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a326550b388ea7e5eea65a62c945fe87ba4ee09b82f0ca590226d51db74a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957437952e418fe12314db00c66884b604eaf77dbee831de77ee2a4e085c803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d1b6b278161e192ef9a209511841948188b9d6ff06f25e7a1e911f9aa882fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5d1b6b278161e192ef9a209511841948188b9d6ff06f25e7a1e911f9aa882fc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:57Z\\\",\\\"message\\\":\\\"/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 15:34:57.050422 6501 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 
15:34:57.050814 6501 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:57.050820 6501 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:57.051128 6501 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 15:34:57.051212 6501 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 15:34:57.051246 6501 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 15:34:57.051310 6501 factory.go:656] Stopping watch factory\\\\nI0126 15:34:57.051308 6501 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 15:34:57.051328 6501 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 15:34:57.051684 6501 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 15:34:57.052029 6501 ovnkube.go:599] Stopped ovnkube\\\\nI0126 15:34:57.052137 6501 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 15:34:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gdszn_openshift-ovn-kubernetes(e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3b4d4d136ea02114fd816ba32cc0a4d38c1b2d8df7968e426c038ae37dbd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471
806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gdszn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:58Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.187983 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.188020 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.188029 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.188042 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.188054 4896 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:58Z","lastTransitionTime":"2026-01-26T15:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.200245 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888e118ba95f9e18734df91b182870684554ae1e715e117eb3c12d2229a919ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:58Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.213935 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89be9b4e464bc55d82f3a1ad5911e48bafd6841c1919cb6c81a1a5758f43e8e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T15:34:58Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.224622 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9vpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e16b4fdfc2afd884bb10a8365b77cd655a1838988e4d1efd3db6582375a8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hklx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7acb4be352fbed65c91662337b76d78a598651bf312d91b40b1b40072ebeb926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hklx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w9vpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:58Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.237680 4896 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-hw55b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17760139-6c26-4a89-a7ab-4e6a3d2cc516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cfe145d703f9d67a08ff728a5a585033b34d14d145b2bd70f79c02dc0950761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"contai
nerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:27Z\\\
",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\
\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-hw55b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:58Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.247961 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzzr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76f90dd1-9706-47ef-b243-e24f185d0340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://490b3a9d324e3b07e4dd8f017414406c4a86d87092c9b931813d8b3c8f4586ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr2bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzzr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:58Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.259221 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42ec8793-6e16-4368-84e3-9c3007499c92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:0
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:58Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.270516 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14000ba2479d1ec77f9f59b70d6d25df8bceef937950e7402df8a276502e60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:58Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.281332 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-klrrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbeb890e-90af-4b15-a106-27b03465209f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-klrrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:58Z is after 
2025-08-24T17:21:41Z" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.290934 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.290997 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.291009 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.291027 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.291038 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:58Z","lastTransitionTime":"2026-01-26T15:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.295253 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:58Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.307367 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:58Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.319555 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6scjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9045598fc712efd551a21223c28ddfb8e1eec08598019d90140992164802d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6scjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:58Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.333050 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nd8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c4023ce-9d03-491a-bbc6-d5afffb92f34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d897bdfadb589d224a8832ee5e76309be4d623122e94eb88a240bfd2362bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nv4gq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nd8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:58Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.345745 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a110465b-91d9-4e70-ac2f-7e804c58b445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07566f6d2a52a9395b03e0b759a1caccf5eaff6a1c17488e536ccbb81abdf683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-polic
y-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ef4ea94d232dd91ce5b11d7f70742155c2978217895faecdbd060d4eac503b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe26f12afeaf65aeadfc14051c732f0b408333e053d56510d2a5a64f4823bde1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:58Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.380325 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"006f90bb-2dfb-429d-922b-6c166bcd784c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1df0c37f97b6286fb28426cd8256db5ba87b97337962fa952ba3a5e8c9bf399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c061d1bfd5c72108933d5679a19f46b22ac255228f478eb91087c8dacf666cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b2b5ee1925b1757a952b907f462ef1a57ad4eb8d5c982cec773d9441734f14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bda36b1477e471a7ccf49ca2d8d6e8ae8b1248b9ca0c9ebfadeddfc8361ce99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1bda36b1477e471a7ccf49ca2d8d6e8ae8b1248b9ca0c9ebfadeddfc8361ce99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:58Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.392249 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:58Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.393850 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.393903 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.393915 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 
15:34:58.393932 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.393943 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:58Z","lastTransitionTime":"2026-01-26T15:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.496113 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.496150 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.496158 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.496189 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.496200 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:58Z","lastTransitionTime":"2026-01-26T15:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.599622 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.599659 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.599668 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.599682 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.599691 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:58Z","lastTransitionTime":"2026-01-26T15:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.694600 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 03:50:36.054790316 +0000 UTC Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.702405 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.702447 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.702456 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.702473 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.702483 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:58Z","lastTransitionTime":"2026-01-26T15:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.758975 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.758984 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.759143 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:34:58 crc kubenswrapper[4896]: E0126 15:34:58.759280 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:34:58 crc kubenswrapper[4896]: E0126 15:34:58.759463 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:34:58 crc kubenswrapper[4896]: E0126 15:34:58.759606 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.805251 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.805325 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.805354 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.805386 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.805406 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:58Z","lastTransitionTime":"2026-01-26T15:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.907600 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.907661 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.907681 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.907704 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:58 crc kubenswrapper[4896]: I0126 15:34:58.907722 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:58Z","lastTransitionTime":"2026-01-26T15:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.010135 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.010234 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.010267 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.010300 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.010324 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:59Z","lastTransitionTime":"2026-01-26T15:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.115867 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.115935 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.115953 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.115975 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.116000 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:59Z","lastTransitionTime":"2026-01-26T15:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.217984 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.218021 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.218032 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.218048 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.218060 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:59Z","lastTransitionTime":"2026-01-26T15:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.321103 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.321167 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.321180 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.321197 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.321209 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:59Z","lastTransitionTime":"2026-01-26T15:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.424724 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.424767 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.424783 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.424803 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.424817 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:59Z","lastTransitionTime":"2026-01-26T15:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.526886 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.527223 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.527238 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.527255 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.527267 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:59Z","lastTransitionTime":"2026-01-26T15:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.629739 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.629771 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.629779 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.629793 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.629802 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:59Z","lastTransitionTime":"2026-01-26T15:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.694788 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 15:41:53.781712666 +0000 UTC Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.731870 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.731897 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.731906 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.731919 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.731927 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:59Z","lastTransitionTime":"2026-01-26T15:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.758873 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:34:59 crc kubenswrapper[4896]: E0126 15:34:59.758985 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.801960 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.802013 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.802032 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.802058 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.802076 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:59Z","lastTransitionTime":"2026-01-26T15:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:59 crc kubenswrapper[4896]: E0126 15:34:59.825798 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"adc9c92c-63cf-439c-8587-8eafa1c0384d\\\",\\\"systemUUID\\\":\\\"6ce3bfcf-cf26-46a6-add0-2b999cc5fad1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:59Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.831703 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.831758 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.831774 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.831797 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.831815 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:59Z","lastTransitionTime":"2026-01-26T15:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:59 crc kubenswrapper[4896]: E0126 15:34:59.850855 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"adc9c92c-63cf-439c-8587-8eafa1c0384d\\\",\\\"systemUUID\\\":\\\"6ce3bfcf-cf26-46a6-add0-2b999cc5fad1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:59Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.856209 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.856283 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.856305 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.856334 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.856409 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:59Z","lastTransitionTime":"2026-01-26T15:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:59 crc kubenswrapper[4896]: E0126 15:34:59.878323 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"adc9c92c-63cf-439c-8587-8eafa1c0384d\\\",\\\"systemUUID\\\":\\\"6ce3bfcf-cf26-46a6-add0-2b999cc5fad1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:59Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.883099 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.883164 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.883186 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.883211 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.883232 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:59Z","lastTransitionTime":"2026-01-26T15:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:59 crc kubenswrapper[4896]: E0126 15:34:59.906182 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"adc9c92c-63cf-439c-8587-8eafa1c0384d\\\",\\\"systemUUID\\\":\\\"6ce3bfcf-cf26-46a6-add0-2b999cc5fad1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:59Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.911447 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.911566 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.911628 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.911660 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.911684 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:59Z","lastTransitionTime":"2026-01-26T15:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:34:59 crc kubenswrapper[4896]: E0126 15:34:59.933041 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:34:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"adc9c92c-63cf-439c-8587-8eafa1c0384d\\\",\\\"systemUUID\\\":\\\"6ce3bfcf-cf26-46a6-add0-2b999cc5fad1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:34:59Z is after 2025-08-24T17:21:41Z" Jan 26 15:34:59 crc kubenswrapper[4896]: E0126 15:34:59.933202 4896 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.935365 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.935406 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.935420 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.935439 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:34:59 crc kubenswrapper[4896]: I0126 15:34:59.935454 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:34:59Z","lastTransitionTime":"2026-01-26T15:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.039235 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.039295 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.039318 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.039346 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.039369 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:00Z","lastTransitionTime":"2026-01-26T15:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.142121 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.142181 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.142200 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.142228 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.142255 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:00Z","lastTransitionTime":"2026-01-26T15:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.245597 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.245641 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.245650 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.245666 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.245677 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:00Z","lastTransitionTime":"2026-01-26T15:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.348896 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.348965 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.348979 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.348996 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.349008 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:00Z","lastTransitionTime":"2026-01-26T15:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.451639 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.451678 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.451688 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.451704 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.451715 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:00Z","lastTransitionTime":"2026-01-26T15:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.553956 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.554051 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.554078 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.554114 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.554140 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:00Z","lastTransitionTime":"2026-01-26T15:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.657194 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.657247 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.657258 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.657291 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.657305 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:00Z","lastTransitionTime":"2026-01-26T15:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.695880 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 04:04:40.374391126 +0000 UTC Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.758689 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.758754 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.758790 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:00 crc kubenswrapper[4896]: E0126 15:35:00.758845 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:00 crc kubenswrapper[4896]: E0126 15:35:00.759027 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:00 crc kubenswrapper[4896]: E0126 15:35:00.759255 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.760763 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.760797 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.760808 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.760824 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.760836 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:00Z","lastTransitionTime":"2026-01-26T15:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.863296 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.863342 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.863355 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.863373 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.863386 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:00Z","lastTransitionTime":"2026-01-26T15:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.965458 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.965504 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.965515 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.965532 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:00 crc kubenswrapper[4896]: I0126 15:35:00.965543 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:00Z","lastTransitionTime":"2026-01-26T15:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.068743 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.068793 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.068804 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.068820 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.068831 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:01Z","lastTransitionTime":"2026-01-26T15:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.171144 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.171212 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.171223 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.171244 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.171257 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:01Z","lastTransitionTime":"2026-01-26T15:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.274271 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.274322 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.274340 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.274363 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.274380 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:01Z","lastTransitionTime":"2026-01-26T15:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.377701 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.377791 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.377803 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.377817 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.377827 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:01Z","lastTransitionTime":"2026-01-26T15:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.480861 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.480947 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.480966 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.480992 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.481009 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:01Z","lastTransitionTime":"2026-01-26T15:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.583400 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.583461 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.583480 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.583506 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.583523 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:01Z","lastTransitionTime":"2026-01-26T15:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.686102 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.686154 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.686170 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.686188 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.686201 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:01Z","lastTransitionTime":"2026-01-26T15:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.696567 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 21:34:01.780891387 +0000 UTC Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.758375 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:01 crc kubenswrapper[4896]: E0126 15:35:01.758716 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.788316 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.788390 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.788408 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.788439 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.788463 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:01Z","lastTransitionTime":"2026-01-26T15:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.891483 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.891532 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.891541 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.891556 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.891566 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:01Z","lastTransitionTime":"2026-01-26T15:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.994833 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.994902 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.994920 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.994943 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:01 crc kubenswrapper[4896]: I0126 15:35:01.994958 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:01Z","lastTransitionTime":"2026-01-26T15:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.097493 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.097542 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.097559 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.097613 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.097631 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:02Z","lastTransitionTime":"2026-01-26T15:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.201815 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.201889 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.201904 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.201929 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.201944 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:02Z","lastTransitionTime":"2026-01-26T15:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.305035 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.305102 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.305123 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.305152 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.305182 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:02Z","lastTransitionTime":"2026-01-26T15:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.410333 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.410430 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.410449 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.410536 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.410633 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:02Z","lastTransitionTime":"2026-01-26T15:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.513850 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.513911 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.513934 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.513961 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.513981 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:02Z","lastTransitionTime":"2026-01-26T15:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.618207 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.618255 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.618267 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.618283 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.618292 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:02Z","lastTransitionTime":"2026-01-26T15:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.696978 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 18:17:42.838684814 +0000 UTC Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.721092 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.721132 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.721142 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.721158 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.721168 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:02Z","lastTransitionTime":"2026-01-26T15:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.758327 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.758473 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:02 crc kubenswrapper[4896]: E0126 15:35:02.758505 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.759053 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:02 crc kubenswrapper[4896]: E0126 15:35:02.759267 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:02 crc kubenswrapper[4896]: E0126 15:35:02.760681 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.775615 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a110465b-91d9-4e70-ac2f-7e804c58b445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07566f6d2a52a9395b03e0b759a1caccf5eaff6a1c17488e536ccbb81abdf683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cer
ts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ef4ea94d232dd91ce5b11d7f70742155c2978217895faecdbd060d4eac503b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe26f12afeaf65aeadfc14051c732f0b408333e053d56510d2a5a64f4823bde1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":
\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:02Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.791125 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"006f90bb-2dfb-429d-922b-6c166bcd784c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1df0c37f97b6286fb28426cd8256db5ba87b97337962fa952ba3a5e8c9bf399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c061d1bfd5c72108933d5679a19f46b22ac255228f478eb91087c8dacf666cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b2b5ee1925b1757a952b907f462ef1a57ad4eb8d5c982cec773d9441734f14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bda36b1477e471a7ccf49ca2d8d6e8ae8b1248b9ca0c9ebfadeddfc8361ce99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1bda36b1477e471a7ccf49ca2d8d6e8ae8b1248b9ca0c9ebfadeddfc8361ce99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:02Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.804468 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:02Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.821908 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:02Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.824404 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.824565 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.824626 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.824663 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.824689 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:02Z","lastTransitionTime":"2026-01-26T15:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.843227 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:02Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.857519 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6scjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9045598fc712efd551a21223c28ddfb8e1eec08598019d90140992164802d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6scjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:02Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.876248 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nd8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c4023ce-9d03-491a-bbc6-d5afffb92f34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d897bdfadb589d224a8832ee5e76309be4d623122e94eb88a240bfd2362bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nv4gq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nd8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:02Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.895061 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name
\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888e118ba95f9e18734df91b182870684554ae1e715e117eb3c12d2229a919ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:02Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.911010 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89be9b4e464bc55d82f3a1ad5911e48bafd6841c1919cb6c81a1a5758f43e8e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T15:35:02Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.925044 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0eae0e2b-9d04-4999-b78c-c70aeee09235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28317b792a293f783a15979c5a9d6acd520f15b8796087a49b0ed98f69a8921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fed1d8bacfa3bfc8b5c910ea870d72978016ab308a31c95d7f0e6d92321c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nrqhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:02Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.927539 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 
15:35:02.927620 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.927643 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.927664 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.927679 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:02Z","lastTransitionTime":"2026-01-26T15:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.958442 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406b020065f8bf0ba4a4cccd4acff46627b58f12033ca230665dbbf3a2a1e195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e5f096fb36bb92606a247123774c6155ae2811324579470faf1c04456da53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7bb5d0fd3d779d1861fdd69f46697e53173c508525fb96bb7c8825505e05e1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feca97cda454cd70acfad46a99dd5696618f8d1f1e3d887a0c32ae9b6a475f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a326550b388ea7e5eea65a62c945fe87ba4ee09b82f0ca590226d51db74a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957437952e418fe12314db00c66884b604eaf77dbee831de77ee2a4e085c803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d1b6b278161e192ef9a209511841948188b9d6ff06f25e7a1e911f9aa882fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5d1b6b278161e192ef9a209511841948188b9d6ff06f25e7a1e911f9aa882fc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:57Z\\\",\\\"message\\\":\\\"/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversio
ns/factory.go:140\\\\nI0126 15:34:57.050422 6501 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 15:34:57.050814 6501 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:57.050820 6501 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:57.051128 6501 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 15:34:57.051212 6501 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 15:34:57.051246 6501 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 15:34:57.051310 6501 factory.go:656] Stopping watch factory\\\\nI0126 15:34:57.051308 6501 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 15:34:57.051328 6501 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 15:34:57.051684 6501 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 15:34:57.052029 6501 ovnkube.go:599] Stopped ovnkube\\\\nI0126 15:34:57.052137 6501 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 15:34:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gdszn_openshift-ovn-kubernetes(e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3b4d4d136ea02114fd816ba32cc0a4d38c1b2d8df7968e426c038ae37dbd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471
806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gdszn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:02Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.975123 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17760139-6c26-4a89-a7ab-4e6a3d2cc516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cfe145d703f9d67a08ff728a5a585033b34d14d145b2bd70f79c02dc0950761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d57c
97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:30Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hw55b\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:02Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.986030 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzzr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76f90dd1-9706-47ef-b243-e24f185d0340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://490b3a9d324e3b07e4dd8f017414406c4a86d87092c9b931813d8b3c8f4586ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr2bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzzr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:02Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:02 crc kubenswrapper[4896]: I0126 15:35:02.997387 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9vpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e16b4fdfc2afd884bb10a8365b77cd655a1838988e4d1efd3db6582375a8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hklx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7acb4be352fbed65c91662337b76d78a59865
1bf312d91b40b1b40072ebeb926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hklx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w9vpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:02Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.010907 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42ec8793-6e16-4368-84e3-9c3007499c92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:0
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:03Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.031159 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.031194 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.031203 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.031217 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.031226 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:03Z","lastTransitionTime":"2026-01-26T15:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.031276 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14000ba2479d1ec77f9f59b70d6d25df8bceef937950e7402df8a276502e60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:03Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.041828 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-klrrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbeb890e-90af-4b15-a106-27b03465209f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-klrrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:03Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:03 crc 
kubenswrapper[4896]: I0126 15:35:03.133698 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.133737 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.133748 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.133763 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.133775 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:03Z","lastTransitionTime":"2026-01-26T15:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.236632 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.236712 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.236734 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.236763 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.236784 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:03Z","lastTransitionTime":"2026-01-26T15:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.339596 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.339639 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.339648 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.339662 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.339670 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:03Z","lastTransitionTime":"2026-01-26T15:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.442430 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.442465 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.442474 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.442490 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.442499 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:03Z","lastTransitionTime":"2026-01-26T15:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.544263 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.544306 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.544319 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.544336 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.544348 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:03Z","lastTransitionTime":"2026-01-26T15:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.646837 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.646886 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.646903 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.646923 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.646941 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:03Z","lastTransitionTime":"2026-01-26T15:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.697930 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 16:38:37.721367039 +0000 UTC Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.750316 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.750371 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.750397 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.750413 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.750422 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:03Z","lastTransitionTime":"2026-01-26T15:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.758366 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:03 crc kubenswrapper[4896]: E0126 15:35:03.758616 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.852356 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.852415 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.852426 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.852443 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.852454 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:03Z","lastTransitionTime":"2026-01-26T15:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.954932 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.954967 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.954977 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.954993 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:03 crc kubenswrapper[4896]: I0126 15:35:03.955003 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:03Z","lastTransitionTime":"2026-01-26T15:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.059175 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.059267 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.059286 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.059315 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.059333 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:04Z","lastTransitionTime":"2026-01-26T15:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.161986 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.162034 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.162050 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.162074 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.162088 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:04Z","lastTransitionTime":"2026-01-26T15:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.278490 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.278549 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.278568 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.278626 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.278647 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:04Z","lastTransitionTime":"2026-01-26T15:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.380096 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.380132 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.380141 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.380153 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.380162 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:04Z","lastTransitionTime":"2026-01-26T15:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.482396 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.482439 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.482451 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.482466 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.482478 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:04Z","lastTransitionTime":"2026-01-26T15:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.585080 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.585117 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.585126 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.585139 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.585147 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:04Z","lastTransitionTime":"2026-01-26T15:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.687892 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.687990 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.688014 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.688034 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.688048 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:04Z","lastTransitionTime":"2026-01-26T15:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.698337 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 03:06:14.630762595 +0000 UTC Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.758405 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.758523 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:04 crc kubenswrapper[4896]: E0126 15:35:04.758636 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.758548 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:35:04 crc kubenswrapper[4896]: E0126 15:35:04.758758 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:04 crc kubenswrapper[4896]: E0126 15:35:04.759307 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.790788 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.790824 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.790834 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.790850 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.790861 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:04Z","lastTransitionTime":"2026-01-26T15:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.892797 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.892844 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.892856 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.892873 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.892885 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:04Z","lastTransitionTime":"2026-01-26T15:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.995646 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.995708 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.995725 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.995748 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:04 crc kubenswrapper[4896]: I0126 15:35:04.995768 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:04Z","lastTransitionTime":"2026-01-26T15:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.099163 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.099226 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.099246 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.099269 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.099291 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:05Z","lastTransitionTime":"2026-01-26T15:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.202085 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.202147 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.202166 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.202188 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.202205 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:05Z","lastTransitionTime":"2026-01-26T15:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.304812 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.304854 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.304866 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.304886 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.304897 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:05Z","lastTransitionTime":"2026-01-26T15:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.407455 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.407513 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.407543 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.407574 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.407631 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:05Z","lastTransitionTime":"2026-01-26T15:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.510831 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.510889 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.510900 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.510915 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.510927 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:05Z","lastTransitionTime":"2026-01-26T15:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.614161 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.614217 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.614230 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.614249 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.614261 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:05Z","lastTransitionTime":"2026-01-26T15:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.698666 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 12:00:21.730350551 +0000 UTC Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.717017 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.717066 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.717081 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.717101 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.717114 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:05Z","lastTransitionTime":"2026-01-26T15:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.758555 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:05 crc kubenswrapper[4896]: E0126 15:35:05.758774 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.820573 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.820692 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.820710 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.820738 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.820760 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:05Z","lastTransitionTime":"2026-01-26T15:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.923221 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.923268 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.923281 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.923300 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:05 crc kubenswrapper[4896]: I0126 15:35:05.923314 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:05Z","lastTransitionTime":"2026-01-26T15:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.025137 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.025183 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.025195 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.025214 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.025240 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:06Z","lastTransitionTime":"2026-01-26T15:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.129097 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.129142 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.129151 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.129173 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.129184 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:06Z","lastTransitionTime":"2026-01-26T15:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.231987 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.232038 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.232048 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.232061 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.232071 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:06Z","lastTransitionTime":"2026-01-26T15:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.334970 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.335010 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.335019 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.335034 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.335043 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:06Z","lastTransitionTime":"2026-01-26T15:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.437203 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.437251 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.437268 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.437290 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.437307 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:06Z","lastTransitionTime":"2026-01-26T15:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.539848 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.539931 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.539962 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.539996 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.540019 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:06Z","lastTransitionTime":"2026-01-26T15:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.642819 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.642869 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.642880 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.642932 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.642945 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:06Z","lastTransitionTime":"2026-01-26T15:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.699067 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 02:46:24.435200498 +0000 UTC Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.745521 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.745571 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.745740 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.745759 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.745772 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:06Z","lastTransitionTime":"2026-01-26T15:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.758774 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.758782 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.758867 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:35:06 crc kubenswrapper[4896]: E0126 15:35:06.758980 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:06 crc kubenswrapper[4896]: E0126 15:35:06.759077 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:06 crc kubenswrapper[4896]: E0126 15:35:06.759182 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.848607 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.848641 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.848661 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.848699 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.848710 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:06Z","lastTransitionTime":"2026-01-26T15:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.951332 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.951373 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.951385 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.951401 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:06 crc kubenswrapper[4896]: I0126 15:35:06.951412 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:06Z","lastTransitionTime":"2026-01-26T15:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.053836 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.053883 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.053895 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.053915 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.053926 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:07Z","lastTransitionTime":"2026-01-26T15:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.156715 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.156762 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.156774 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.156792 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.156804 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:07Z","lastTransitionTime":"2026-01-26T15:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.259434 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.259485 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.259502 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.259520 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.259532 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:07Z","lastTransitionTime":"2026-01-26T15:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.361743 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.361784 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.361796 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.361811 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.361820 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:07Z","lastTransitionTime":"2026-01-26T15:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.464165 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.464195 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.464203 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.464216 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.464225 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:07Z","lastTransitionTime":"2026-01-26T15:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.566809 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.566839 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.566848 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.566863 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.566875 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:07Z","lastTransitionTime":"2026-01-26T15:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.669299 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.669338 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.669352 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.669367 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.669377 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:07Z","lastTransitionTime":"2026-01-26T15:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.699972 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 08:47:46.047924709 +0000 UTC Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.759293 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:07 crc kubenswrapper[4896]: E0126 15:35:07.759439 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.771850 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.771906 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.771928 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.771949 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.771963 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:07Z","lastTransitionTime":"2026-01-26T15:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.874902 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.874951 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.874959 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.874971 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.874980 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:07Z","lastTransitionTime":"2026-01-26T15:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.977164 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.977207 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.977218 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.977234 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:07 crc kubenswrapper[4896]: I0126 15:35:07.977245 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:07Z","lastTransitionTime":"2026-01-26T15:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.079710 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.079746 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.079757 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.079774 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.079787 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:08Z","lastTransitionTime":"2026-01-26T15:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.181149 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.181176 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.181183 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.181196 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.181204 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:08Z","lastTransitionTime":"2026-01-26T15:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.283479 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.283506 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.283517 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.283534 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.283545 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:08Z","lastTransitionTime":"2026-01-26T15:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.386018 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.386076 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.386093 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.386118 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.386137 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:08Z","lastTransitionTime":"2026-01-26T15:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.488718 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.488777 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.488788 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.488804 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.488817 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:08Z","lastTransitionTime":"2026-01-26T15:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.591516 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.591561 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.591615 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.591632 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.591643 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:08Z","lastTransitionTime":"2026-01-26T15:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.694454 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.694490 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.694502 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.694528 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.694548 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:08Z","lastTransitionTime":"2026-01-26T15:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.700977 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 08:54:06.176590724 +0000 UTC Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.758504 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.758599 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.758628 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:35:08 crc kubenswrapper[4896]: E0126 15:35:08.758689 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:08 crc kubenswrapper[4896]: E0126 15:35:08.758771 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:08 crc kubenswrapper[4896]: E0126 15:35:08.758918 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.796834 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.796868 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.796877 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.796892 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.796901 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:08Z","lastTransitionTime":"2026-01-26T15:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.899343 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.899378 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.899387 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.899401 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:08 crc kubenswrapper[4896]: I0126 15:35:08.899410 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:08Z","lastTransitionTime":"2026-01-26T15:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.002047 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.002131 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.002145 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.002194 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.002208 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:09Z","lastTransitionTime":"2026-01-26T15:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.105114 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.105186 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.105206 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.105225 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.105239 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:09Z","lastTransitionTime":"2026-01-26T15:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.207355 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.207380 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.207388 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.207401 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.207411 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:09Z","lastTransitionTime":"2026-01-26T15:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.309253 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.309291 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.309300 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.309312 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.309320 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:09Z","lastTransitionTime":"2026-01-26T15:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.411729 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.411771 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.411789 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.411805 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.411815 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:09Z","lastTransitionTime":"2026-01-26T15:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.514044 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.514089 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.514101 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.514117 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.514128 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:09Z","lastTransitionTime":"2026-01-26T15:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.616830 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.616878 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.616889 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.616906 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.616917 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:09Z","lastTransitionTime":"2026-01-26T15:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.701822 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 23:33:42.367471985 +0000 UTC Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.719513 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.719559 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.719571 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.719606 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.719617 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:09Z","lastTransitionTime":"2026-01-26T15:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.758959 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:09 crc kubenswrapper[4896]: E0126 15:35:09.759108 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.759933 4896 scope.go:117] "RemoveContainer" containerID="d5d1b6b278161e192ef9a209511841948188b9d6ff06f25e7a1e911f9aa882fc" Jan 26 15:35:09 crc kubenswrapper[4896]: E0126 15:35:09.760224 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-gdszn_openshift-ovn-kubernetes(e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.822162 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.822200 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.822207 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.822219 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.822228 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:09Z","lastTransitionTime":"2026-01-26T15:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.925113 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.925169 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.925180 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.925197 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:09 crc kubenswrapper[4896]: I0126 15:35:09.925208 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:09Z","lastTransitionTime":"2026-01-26T15:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.027909 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.027945 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.027958 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.027973 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.027984 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:10Z","lastTransitionTime":"2026-01-26T15:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.049515 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.049558 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.049569 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.049604 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.049616 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:10Z","lastTransitionTime":"2026-01-26T15:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:10 crc kubenswrapper[4896]: E0126 15:35:10.062504 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"adc9c92c-63cf-439c-8587-8eafa1c0384d\\\",\\\"systemUUID\\\":\\\"6ce3bfcf-cf26-46a6-add0-2b999cc5fad1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:10Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.066108 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.066139 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.066147 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.066162 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.066170 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:10Z","lastTransitionTime":"2026-01-26T15:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.082651 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.082698 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.082709 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.082723 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.082733 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:10Z","lastTransitionTime":"2026-01-26T15:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.096802 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.096854 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.096867 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.096885 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.096896 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:10Z","lastTransitionTime":"2026-01-26T15:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:10 crc kubenswrapper[4896]: E0126 15:35:10.110037 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"adc9c92c-63cf-439c-8587-8eafa1c0384d\\\",\\\"systemUUID\\\":\\\"6ce3bfcf-cf26-46a6-add0-2b999cc5fad1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:10Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.113097 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.113134 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.113143 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.113157 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.113166 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:10Z","lastTransitionTime":"2026-01-26T15:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:10 crc kubenswrapper[4896]: E0126 15:35:10.125178 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"adc9c92c-63cf-439c-8587-8eafa1c0384d\\\",\\\"systemUUID\\\":\\\"6ce3bfcf-cf26-46a6-add0-2b999cc5fad1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:10Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:10 crc kubenswrapper[4896]: E0126 15:35:10.125329 4896 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.130866 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.130895 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.130903 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.130916 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.130925 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:10Z","lastTransitionTime":"2026-01-26T15:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.233876 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.233912 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.233923 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.233940 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.233949 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:10Z","lastTransitionTime":"2026-01-26T15:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.337130 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.337315 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.337337 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.337395 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.337425 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:10Z","lastTransitionTime":"2026-01-26T15:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.441590 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.441622 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.441631 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.441644 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.441653 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:10Z","lastTransitionTime":"2026-01-26T15:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.544728 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.544767 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.544778 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.544793 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.544804 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:10Z","lastTransitionTime":"2026-01-26T15:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.647028 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.647065 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.647077 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.647091 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.647102 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:10Z","lastTransitionTime":"2026-01-26T15:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.702834 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 08:07:43.146952489 +0000 UTC Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.749558 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.749627 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.749646 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.749664 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.749675 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:10Z","lastTransitionTime":"2026-01-26T15:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.758918 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:10 crc kubenswrapper[4896]: E0126 15:35:10.759027 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.758929 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.759137 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:35:10 crc kubenswrapper[4896]: E0126 15:35:10.759208 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:10 crc kubenswrapper[4896]: E0126 15:35:10.759273 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.852455 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.852572 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.852616 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.852640 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.852655 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:10Z","lastTransitionTime":"2026-01-26T15:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.955530 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.955572 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.955600 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.955616 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:10 crc kubenswrapper[4896]: I0126 15:35:10.955627 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:10Z","lastTransitionTime":"2026-01-26T15:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.058870 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.058925 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.058952 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.058980 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.059002 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:11Z","lastTransitionTime":"2026-01-26T15:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.161559 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.161621 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.161634 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.161651 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.161661 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:11Z","lastTransitionTime":"2026-01-26T15:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.264274 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.264312 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.264321 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.264335 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.264347 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:11Z","lastTransitionTime":"2026-01-26T15:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.366589 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.366626 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.366639 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.366655 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.366668 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:11Z","lastTransitionTime":"2026-01-26T15:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.380458 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fbeb890e-90af-4b15-a106-27b03465209f-metrics-certs\") pod \"network-metrics-daemon-klrrb\" (UID: \"fbeb890e-90af-4b15-a106-27b03465209f\") " pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:35:11 crc kubenswrapper[4896]: E0126 15:35:11.380724 4896 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 15:35:11 crc kubenswrapper[4896]: E0126 15:35:11.380815 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fbeb890e-90af-4b15-a106-27b03465209f-metrics-certs podName:fbeb890e-90af-4b15-a106-27b03465209f nodeName:}" failed. No retries permitted until 2026-01-26 15:35:43.380793692 +0000 UTC m=+101.162674165 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fbeb890e-90af-4b15-a106-27b03465209f-metrics-certs") pod "network-metrics-daemon-klrrb" (UID: "fbeb890e-90af-4b15-a106-27b03465209f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.468597 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.468643 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.468654 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.468679 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.468691 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:11Z","lastTransitionTime":"2026-01-26T15:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.572166 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.572203 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.572214 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.572226 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.572235 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:11Z","lastTransitionTime":"2026-01-26T15:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.675739 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.675776 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.675785 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.675799 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.675810 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:11Z","lastTransitionTime":"2026-01-26T15:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.703231 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 17:04:33.673691603 +0000 UTC Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.758734 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:11 crc kubenswrapper[4896]: E0126 15:35:11.758852 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.778612 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.778649 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.778661 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.778681 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.778692 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:11Z","lastTransitionTime":"2026-01-26T15:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.880927 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.880971 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.880982 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.880998 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.881009 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:11Z","lastTransitionTime":"2026-01-26T15:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.983392 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.983428 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.983436 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.983449 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:11 crc kubenswrapper[4896]: I0126 15:35:11.983458 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:11Z","lastTransitionTime":"2026-01-26T15:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.085779 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.085830 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.085844 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.085859 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.085870 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:12Z","lastTransitionTime":"2026-01-26T15:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.187811 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.187841 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.187850 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.187863 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.187871 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:12Z","lastTransitionTime":"2026-01-26T15:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.290496 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.290547 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.290571 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.290641 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.290659 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:12Z","lastTransitionTime":"2026-01-26T15:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.393723 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.393835 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.393856 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.393879 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.393892 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:12Z","lastTransitionTime":"2026-01-26T15:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.496953 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.497003 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.497014 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.497032 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.497045 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:12Z","lastTransitionTime":"2026-01-26T15:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.600108 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.600152 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.600161 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.600210 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.600228 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:12Z","lastTransitionTime":"2026-01-26T15:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.702474 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.702516 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.702524 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.702538 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.702549 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:12Z","lastTransitionTime":"2026-01-26T15:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.703545 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 02:21:40.594114515 +0000 UTC Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.758326 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.758384 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:12 crc kubenswrapper[4896]: E0126 15:35:12.758468 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:12 crc kubenswrapper[4896]: E0126 15:35:12.758546 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.758614 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:35:12 crc kubenswrapper[4896]: E0126 15:35:12.758666 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.775609 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17760139-6c26-4a89-a7ab-4e6a3d2cc516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cfe145d703f9d67a08ff728a5a585033b34d14d145b2bd70f79c02dc0950761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\
\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wherea
bouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hw55b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:12Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.788148 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzzr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76f90dd1-9706-47ef-b243-e24f185d0340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://490b3a9d324e3b07e4dd8f017414406c4a86d87092c9b931813d8b3c8f4586ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr2bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzzr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:12Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.800018 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9vpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e16b4fdfc2afd884bb10a8365b77cd655a1838988e4d1efd3db6582375a8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hklx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7acb4be352fbed65c91662337b76d78a59865
1bf312d91b40b1b40072ebeb926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hklx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w9vpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:12Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.806316 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.806389 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.806406 4896 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.806425 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.806442 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:12Z","lastTransitionTime":"2026-01-26T15:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.816547 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42ec8793-6e16-4368-84e3-9c3007499c92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35
825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593
745da31021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:12Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.833979 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14000ba2479d1ec77f9f59b70d6d25df8bceef937950e7402df8a276502e60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:12Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.845708 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-klrrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbeb890e-90af-4b15-a106-27b03465209f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-klrrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:12Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:12 crc 
kubenswrapper[4896]: I0126 15:35:12.862181 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a110465b-91d9-4e70-ac2f-7e804c58b445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07566f6d2a52a9395b03e0b759a1caccf5eaff6a1c17488e536ccbb81abdf683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ef4ea94d232dd91ce5b11d7f70742155c2978217895faecdbd060d4eac503b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe26f12afeaf65aeadfc14051c732f0b408333e053d56510d2a5a64f4823bde1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:12Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.877484 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"006f90bb-2dfb-429d-922b-6c166bcd784c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1df0c37f97b6286fb28426cd8256db5ba87b97337962fa952ba3a5e8c9bf399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c061d1bfd5c72108933d5679a19f46b22ac255228f478eb91087c8dacf666cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b2b5ee1925b1757a952b907f462ef1a57ad4eb8d5c982cec773d9441734f14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bda36b1477e471a7ccf49ca2d8d6e8ae8b1248b9ca0c9ebfadeddfc8361ce99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1bda36b1477e471a7ccf49ca2d8d6e8ae8b1248b9ca0c9ebfadeddfc8361ce99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:12Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.891228 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:12Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.904281 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:12Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.909020 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.909064 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.909077 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.909097 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.909108 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:12Z","lastTransitionTime":"2026-01-26T15:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.916940 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:12Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.928093 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6scjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9045598fc712efd551a21223c28ddfb8e1eec08598019d90140992164802d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6scjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:12Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.940320 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nd8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c4023ce-9d03-491a-bbc6-d5afffb92f34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d897bdfadb589d224a8832ee5e76309be4d623122e94eb88a240bfd2362bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nv4gq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nd8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:12Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.953130 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name
\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888e118ba95f9e18734df91b182870684554ae1e715e117eb3c12d2229a919ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:12Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.967550 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89be9b4e464bc55d82f3a1ad5911e48bafd6841c1919cb6c81a1a5758f43e8e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T15:35:12Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:12 crc kubenswrapper[4896]: I0126 15:35:12.982905 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0eae0e2b-9d04-4999-b78c-c70aeee09235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28317b792a293f783a15979c5a9d6acd520f15b8796087a49b0ed98f69a8921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fed1d8bacfa3bfc8b5c910ea870d72978016ab308a31c95d7f0e6d92321c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nrqhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:12Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.004492 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406b020065f8bf0ba4a4cccd4acff46627b58f12033ca230665dbbf3a2a1e195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e5f096fb36bb92606a247123774c6155ae2811324579470faf1c04456da53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7bb5d0fd3d779d1861fdd69f46697e53173c508525fb96bb7c8825505e05e1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feca97cda454cd70acfad46a99dd5696618f8d1f1e3d887a0c32ae9b6a475f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a326550b388ea7e5eea65a62c945fe87ba4ee09b82f0ca590226d51db74a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957437952e418fe12314db00c66884b604eaf77dbee831de77ee2a4e085c803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d1b6b278161e192ef9a209511841948188b9d6ff06f25e7a1e911f9aa882fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5d1b6b278161e192ef9a209511841948188b9d6ff06f25e7a1e911f9aa882fc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:57Z\\\",\\\"message\\\":\\\"/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 15:34:57.050422 6501 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 
15:34:57.050814 6501 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:57.050820 6501 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:57.051128 6501 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 15:34:57.051212 6501 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 15:34:57.051246 6501 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 15:34:57.051310 6501 factory.go:656] Stopping watch factory\\\\nI0126 15:34:57.051308 6501 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 15:34:57.051328 6501 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 15:34:57.051684 6501 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 15:34:57.052029 6501 ovnkube.go:599] Stopped ovnkube\\\\nI0126 15:34:57.052137 6501 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 15:34:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gdszn_openshift-ovn-kubernetes(e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3b4d4d136ea02114fd816ba32cc0a4d38c1b2d8df7968e426c038ae37dbd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471
806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gdszn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:13Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.011705 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.011793 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.011973 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.012014 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.012028 4896 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:13Z","lastTransitionTime":"2026-01-26T15:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.115011 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.115058 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.115071 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.115089 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.115101 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:13Z","lastTransitionTime":"2026-01-26T15:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.217555 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.217606 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.217616 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.217632 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.217641 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:13Z","lastTransitionTime":"2026-01-26T15:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.320279 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.320321 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.320331 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.320345 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.320354 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:13Z","lastTransitionTime":"2026-01-26T15:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.422895 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.422935 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.422944 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.422957 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.422967 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:13Z","lastTransitionTime":"2026-01-26T15:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.525771 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.525821 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.525837 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.525857 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.525873 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:13Z","lastTransitionTime":"2026-01-26T15:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.628828 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.628862 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.628872 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.629066 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.629076 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:13Z","lastTransitionTime":"2026-01-26T15:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.703949 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 23:52:57.894320402 +0000 UTC Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.732125 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.732165 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.732177 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.732196 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.732207 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:13Z","lastTransitionTime":"2026-01-26T15:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.758625 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:13 crc kubenswrapper[4896]: E0126 15:35:13.758752 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.833963 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.834021 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.834031 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.834046 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.834057 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:13Z","lastTransitionTime":"2026-01-26T15:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.936755 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.937065 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.937106 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.937133 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:13 crc kubenswrapper[4896]: I0126 15:35:13.937143 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:13Z","lastTransitionTime":"2026-01-26T15:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.040094 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.040147 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.040156 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.040168 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.040182 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:14Z","lastTransitionTime":"2026-01-26T15:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.143416 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.143459 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.143476 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.143496 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.143514 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:14Z","lastTransitionTime":"2026-01-26T15:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.196452 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9nd8b_8c4023ce-9d03-491a-bbc6-d5afffb92f34/kube-multus/0.log" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.196507 4896 generic.go:334] "Generic (PLEG): container finished" podID="8c4023ce-9d03-491a-bbc6-d5afffb92f34" containerID="b5d897bdfadb589d224a8832ee5e76309be4d623122e94eb88a240bfd2362bed" exitCode=1 Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.196540 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9nd8b" event={"ID":"8c4023ce-9d03-491a-bbc6-d5afffb92f34","Type":"ContainerDied","Data":"b5d897bdfadb589d224a8832ee5e76309be4d623122e94eb88a240bfd2362bed"} Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.196960 4896 scope.go:117] "RemoveContainer" containerID="b5d897bdfadb589d224a8832ee5e76309be4d623122e94eb88a240bfd2362bed" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.215721 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89be9b4e464bc55d82f3a1ad5911e48bafd6841c1919cb6c81a1a5758f43e8e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T15:35:14Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.226759 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0eae0e2b-9d04-4999-b78c-c70aeee09235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28317b792a293f783a15979c5a9d6acd520f15b8796087a49b0ed98f69a8921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fed1d8bacfa3bfc8b5c910ea870d72978016ab308a31c95d7f0e6d92321c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nrqhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:14Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.244498 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406b020065f8bf0ba4a4cccd4acff46627b58f12033ca230665dbbf3a2a1e195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e5f096fb36bb92606a247123774c6155ae2811324579470faf1c04456da53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7bb5d0fd3d779d1861fdd69f46697e53173c508525fb96bb7c8825505e05e1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feca97cda454cd70acfad46a99dd5696618f8d1f1e3d887a0c32ae9b6a475f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a326550b388ea7e5eea65a62c945fe87ba4ee09b82f0ca590226d51db74a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957437952e418fe12314db00c66884b604eaf77dbee831de77ee2a4e085c803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d1b6b278161e192ef9a209511841948188b9d6ff06f25e7a1e911f9aa882fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5d1b6b278161e192ef9a209511841948188b9d6ff06f25e7a1e911f9aa882fc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:57Z\\\",\\\"message\\\":\\\"/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 15:34:57.050422 6501 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 
15:34:57.050814 6501 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:57.050820 6501 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:57.051128 6501 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 15:34:57.051212 6501 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 15:34:57.051246 6501 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 15:34:57.051310 6501 factory.go:656] Stopping watch factory\\\\nI0126 15:34:57.051308 6501 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 15:34:57.051328 6501 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 15:34:57.051684 6501 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 15:34:57.052029 6501 ovnkube.go:599] Stopped ovnkube\\\\nI0126 15:34:57.052137 6501 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 15:34:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gdszn_openshift-ovn-kubernetes(e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3b4d4d136ea02114fd816ba32cc0a4d38c1b2d8df7968e426c038ae37dbd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471
806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gdszn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:14Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.246562 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.246612 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.246625 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.246641 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.246650 4896 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:14Z","lastTransitionTime":"2026-01-26T15:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.258986 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888e118ba95f9e18734df91b182870684554ae1e715e117eb3c12d2229a919ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:14Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.269509 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzzr5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76f90dd1-9706-47ef-b243-e24f185d0340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://490b3a9d324e3b07e4dd8f017414406c4a86d87092c9b931813d8b3c8f4586ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr2bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzzr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:14Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.284855 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9vpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e16b4fdfc2afd884bb10a8365b77cd655a1838988e4d1efd3db6582375a8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hklx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7acb4be352fbed65c91662337b76d78a598651bf312d91b40b1b40072ebeb926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hklx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w9vpq\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:14Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.300227 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17760139-6c26-4a89-a7ab-4e6a3d2cc516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cfe145d703f9d67a08ff728a5a585033b34d14d145b2bd70f79c02dc0950761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[
{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8c
cf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host
/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hw55b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:14Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.309461 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-klrrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbeb890e-90af-4b15-a106-27b03465209f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-klrrb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:14Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.321544 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42ec8793-6e16-4368-84e3-9c3007499c92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:14Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.332680 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14000ba2479d1ec77f9f59b70d6d25df8bceef937950e7402df8a276502e60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:14Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.344496 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:14Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.348115 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.348162 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.348170 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.348186 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.348197 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:14Z","lastTransitionTime":"2026-01-26T15:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.357365 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:14Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.372359 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:14Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.383212 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6scjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9045598fc712efd551a21223c28ddfb8e1eec08598019d90140992164802d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6scjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:14Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.396455 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nd8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c4023ce-9d03-491a-bbc6-d5afffb92f34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d897bdfadb589d224a8832ee5e76309be4d623122e94eb88a240bfd2362bed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5d897bdfadb589d224a8832ee5e76309be4d623122e94eb88a240bfd2362bed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"message\\\":\\\"2026-01-26T15:34:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_84cb1828-70be-4ccd-b3ac-1713179b6e32\\\\n2026-01-26T15:34:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_84cb1828-70be-4ccd-b3ac-1713179b6e32 to /host/opt/cni/bin/\\\\n2026-01-26T15:34:28Z [verbose] multus-daemon started\\\\n2026-01-26T15:34:28Z [verbose] Readiness Indicator file check\\\\n2026-01-26T15:35:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nv4gq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nd8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:14Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.409911 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a110465b-91d9-4e70-ac2f-7e804c58b445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07566f6d2a52a9395b03e0b759a1caccf5eaff6a1c17488e536ccbb81abdf683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ef4ea94d232dd91ce5b11d7f70742155c2978217895faecdbd060d4eac503b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\
"containerID\\\":\\\"cri-o://fe26f12afeaf65aeadfc14051c732f0b408333e053d56510d2a5a64f4823bde1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:14Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.420125 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"006f90bb-2dfb-429d-922b-6c166bcd784c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1df0c37f97b6286fb28426cd8256db5ba87b97337962fa952ba3a5e8c9bf399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c061d1bfd5c72108933d5679a19f46b22ac255228f478eb91087c8dacf666cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b2b5ee1925b1757a952b907f462ef1a57ad4eb8d5c982cec773d9441734f14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bda36b1477e471a7ccf49ca2d8d6e8ae8b1248b9ca0c9ebfadeddfc8361ce99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1bda36b1477e471a7ccf49ca2d8d6e8ae8b1248b9ca0c9ebfadeddfc8361ce99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:14Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.450256 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.450293 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.450301 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.450319 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.450332 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:14Z","lastTransitionTime":"2026-01-26T15:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.553088 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.553126 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.553135 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.553150 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.553162 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:14Z","lastTransitionTime":"2026-01-26T15:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.654854 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.654886 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.654895 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.654910 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.654919 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:14Z","lastTransitionTime":"2026-01-26T15:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.705108 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 20:17:34.469523042 +0000 UTC Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.757370 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.757414 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.757423 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.757437 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.757446 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:14Z","lastTransitionTime":"2026-01-26T15:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.758767 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:14 crc kubenswrapper[4896]: E0126 15:35:14.759416 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.758787 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:35:14 crc kubenswrapper[4896]: E0126 15:35:14.759534 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.758809 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:14 crc kubenswrapper[4896]: E0126 15:35:14.759617 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.860255 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.860292 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.860304 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.860323 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.860336 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:14Z","lastTransitionTime":"2026-01-26T15:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.962165 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.962198 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.962208 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.962221 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:14 crc kubenswrapper[4896]: I0126 15:35:14.962230 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:14Z","lastTransitionTime":"2026-01-26T15:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.064676 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.064708 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.064716 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.064728 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.064737 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:15Z","lastTransitionTime":"2026-01-26T15:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.166896 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.166976 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.166987 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.167005 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.167016 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:15Z","lastTransitionTime":"2026-01-26T15:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.204969 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9nd8b_8c4023ce-9d03-491a-bbc6-d5afffb92f34/kube-multus/0.log" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.205020 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9nd8b" event={"ID":"8c4023ce-9d03-491a-bbc6-d5afffb92f34","Type":"ContainerStarted","Data":"a96dbd35e9bd29cc89ad9d1102bb1649492ceb1f340573ebb153accc49bb967b"} Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.219281 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42ec8793-6e16-4368-84e3-9c3007499c92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8
b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.232142 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14000ba2479d1ec77f9f59b70d6d25df8bceef937950e7402df8a276502e60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.242267 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-klrrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbeb890e-90af-4b15-a106-27b03465209f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-klrrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:15 crc 
kubenswrapper[4896]: I0126 15:35:15.250478 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6scjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9045598fc712efd551a21223c28ddfb8e1eec08598019d90140992164802d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6scjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.260832 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nd8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c4023ce-9d03-491a-bbc6-d5afffb92f34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a96dbd35e9bd29cc89ad9d1102bb1649492ceb1f340573ebb153accc49bb967b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e1
2f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5d897bdfadb589d224a8832ee5e76309be4d623122e94eb88a240bfd2362bed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"message\\\":\\\"2026-01-26T15:34:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_84cb1828-70be-4ccd-b3ac-1713179b6e32\\\\n2026-01-26T15:34:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_84cb1828-70be-4ccd-b3ac-1713179b6e32 to /host/opt/cni/bin/\\\\n2026-01-26T15:34:28Z [verbose] multus-daemon started\\\\n2026-01-26T15:34:28Z [verbose] Readiness Indicator file check\\\\n2026-01-26T15:35:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nv4gq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nd8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.269347 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.269368 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.269375 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.269387 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.269397 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:15Z","lastTransitionTime":"2026-01-26T15:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.271356 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a110465b-91d9-4e70-ac2f-7e804c58b445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07566f6d2a52a9395b03e0b759a1caccf5eaff6a1c17488e536ccbb81abdf683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0e5a1b182c
162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ef4ea94d232dd91ce5b11d7f70742155c2978217895faecdbd060d4eac503b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe26f12afeaf65aeadfc14051c732f0b408333e053d56510d2a5a64f4823bde1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.281627 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"006f90bb-2dfb-429d-922b-6c166bcd784c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1df0c37f97b6286fb28426cd8256db5ba87b97337962fa952ba3a5e8c9bf399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c061d1bfd5c72108933d5679a19f46b22ac255228f478eb91087c8dacf666cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b2b5ee1925b1757a952b907f462ef1a57ad4eb8d5c982cec773d9441734f14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bda36b1477e471a7ccf49ca2d8d6e8ae8b1248b9ca0c9ebfadeddfc8361ce99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1bda36b1477e471a7ccf49ca2d8d6e8ae8b1248b9ca0c9ebfadeddfc8361ce99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.293812 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.305160 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.315294 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.327635 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888e118ba95f9e18734df91b182870684554ae1e715e117eb3c12d2229a919ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.338453 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89be9b4e464bc55d82f3a1ad5911e48bafd6841c1919cb6c81a1a5758f43e8e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T15:35:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.348800 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0eae0e2b-9d04-4999-b78c-c70aeee09235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28317b792a293f783a15979c5a9d6acd520f15b8796087a49b0ed98f69a8921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fed1d8bacfa3bfc8b5c910ea870d72978016ab308a31c95d7f0e6d92321c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nrqhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.367291 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406b020065f8bf0ba4a4cccd4acff46627b58f12033ca230665dbbf3a2a1e195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e5f096fb36bb92606a247123774c6155ae2811324579470faf1c04456da53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7bb5d0fd3d779d1861fdd69f46697e53173c508525fb96bb7c8825505e05e1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feca97cda454cd70acfad46a99dd5696618f8d1f1e3d887a0c32ae9b6a475f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a326550b388ea7e5eea65a62c945fe87ba4ee09b82f0ca590226d51db74a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957437952e418fe12314db00c66884b604eaf77dbee831de77ee2a4e085c803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d1b6b278161e192ef9a209511841948188b9d6ff06f25e7a1e911f9aa882fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5d1b6b278161e192ef9a209511841948188b9d6ff06f25e7a1e911f9aa882fc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:57Z\\\",\\\"message\\\":\\\"/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 15:34:57.050422 6501 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 
15:34:57.050814 6501 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:57.050820 6501 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:57.051128 6501 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 15:34:57.051212 6501 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 15:34:57.051246 6501 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 15:34:57.051310 6501 factory.go:656] Stopping watch factory\\\\nI0126 15:34:57.051308 6501 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 15:34:57.051328 6501 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 15:34:57.051684 6501 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 15:34:57.052029 6501 ovnkube.go:599] Stopped ovnkube\\\\nI0126 15:34:57.052137 6501 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 15:34:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gdszn_openshift-ovn-kubernetes(e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3b4d4d136ea02114fd816ba32cc0a4d38c1b2d8df7968e426c038ae37dbd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471
806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gdszn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.371275 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.371302 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.371313 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.371328 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.371340 4896 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:15Z","lastTransitionTime":"2026-01-26T15:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.383071 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17760139-6c26-4a89-a7ab-4e6a3d2cc516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cfe145d703f9d67a08ff728a5a585033b34d14d145b2bd70f79c02dc0950761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hw55b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.393942 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzzr5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76f90dd1-9706-47ef-b243-e24f185d0340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://490b3a9d324e3b07e4dd8f017414406c4a86d87092c9b931813d8b3c8f4586ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr2bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzzr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.404714 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9vpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e16b4fdfc2afd884bb10a8365b77cd655a1838988e4d1efd3db6582375a8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hklx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7acb4be352fbed65c91662337b76d78a598651bf312d91b40b1b40072ebeb926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hklx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w9vpq\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:15Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.473315 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.473355 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.473364 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.473378 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.473386 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:15Z","lastTransitionTime":"2026-01-26T15:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.575845 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.575887 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.575896 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.575910 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.575920 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:15Z","lastTransitionTime":"2026-01-26T15:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.678171 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.678238 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.678250 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.678267 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.678278 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:15Z","lastTransitionTime":"2026-01-26T15:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.705285 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 05:23:04.745160283 +0000 UTC Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.758467 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:15 crc kubenswrapper[4896]: E0126 15:35:15.758625 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.780162 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.780188 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.780198 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.780210 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.780219 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:15Z","lastTransitionTime":"2026-01-26T15:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.882670 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.882712 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.882721 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.882739 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:15 crc kubenswrapper[4896]: I0126 15:35:15.882747 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:15Z","lastTransitionTime":"2026-01-26T15:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.123160 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.123267 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.123282 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.123305 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.123318 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:16Z","lastTransitionTime":"2026-01-26T15:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.225928 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.225982 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.225995 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.226019 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.226035 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:16Z","lastTransitionTime":"2026-01-26T15:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.328396 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.328471 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.328487 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.328518 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.328542 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:16Z","lastTransitionTime":"2026-01-26T15:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.431009 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.431080 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.431096 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.431119 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.431136 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:16Z","lastTransitionTime":"2026-01-26T15:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.534183 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.534234 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.534246 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.534264 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.534280 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:16Z","lastTransitionTime":"2026-01-26T15:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.637062 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.637148 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.637173 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.637202 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.637224 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:16Z","lastTransitionTime":"2026-01-26T15:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.705617 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 11:17:14.742215753 +0000 UTC Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.738808 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.738845 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.738857 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.738873 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.738884 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:16Z","lastTransitionTime":"2026-01-26T15:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.759261 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.759333 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:35:16 crc kubenswrapper[4896]: E0126 15:35:16.759394 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:16 crc kubenswrapper[4896]: E0126 15:35:16.759519 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.759621 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:16 crc kubenswrapper[4896]: E0126 15:35:16.759787 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.841657 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.841743 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.841758 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.841774 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.841786 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:16Z","lastTransitionTime":"2026-01-26T15:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.944420 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.944495 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.944518 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.944548 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:16 crc kubenswrapper[4896]: I0126 15:35:16.944570 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:16Z","lastTransitionTime":"2026-01-26T15:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.047753 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.047804 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.047824 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.047850 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.047868 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:17Z","lastTransitionTime":"2026-01-26T15:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.150901 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.150990 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.151016 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.151047 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.151067 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:17Z","lastTransitionTime":"2026-01-26T15:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.254258 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.254300 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.254309 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.254322 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.254331 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:17Z","lastTransitionTime":"2026-01-26T15:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.356734 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.356818 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.356831 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.356851 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.356863 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:17Z","lastTransitionTime":"2026-01-26T15:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.458892 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.458958 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.458992 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.459035 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.459056 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:17Z","lastTransitionTime":"2026-01-26T15:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.561621 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.561681 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.561697 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.561723 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.561742 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:17Z","lastTransitionTime":"2026-01-26T15:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.664035 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.664090 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.664107 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.664130 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.664149 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:17Z","lastTransitionTime":"2026-01-26T15:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.705749 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 20:04:48.266129568 +0000 UTC Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.758410 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:17 crc kubenswrapper[4896]: E0126 15:35:17.758547 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.766189 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.766220 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.766230 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.766246 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.766257 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:17Z","lastTransitionTime":"2026-01-26T15:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.868973 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.869018 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.869031 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.869050 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.869065 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:17Z","lastTransitionTime":"2026-01-26T15:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.971146 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.971198 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.971206 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.971219 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:17 crc kubenswrapper[4896]: I0126 15:35:17.971230 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:17Z","lastTransitionTime":"2026-01-26T15:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.073328 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.073382 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.073398 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.073420 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.073436 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:18Z","lastTransitionTime":"2026-01-26T15:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.176714 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.176768 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.176780 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.176797 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.176808 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:18Z","lastTransitionTime":"2026-01-26T15:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.278772 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.278823 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.278836 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.278852 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.278863 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:18Z","lastTransitionTime":"2026-01-26T15:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.380727 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.380774 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.380786 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.380799 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.380808 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:18Z","lastTransitionTime":"2026-01-26T15:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.483357 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.483388 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.483399 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.483415 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.483429 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:18Z","lastTransitionTime":"2026-01-26T15:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.586074 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.586116 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.586129 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.586145 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.586155 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:18Z","lastTransitionTime":"2026-01-26T15:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.688955 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.689005 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.689017 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.689036 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.689050 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:18Z","lastTransitionTime":"2026-01-26T15:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.706675 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 05:34:47.646815725 +0000 UTC Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.759282 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.759381 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:35:18 crc kubenswrapper[4896]: E0126 15:35:18.759421 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.759382 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:18 crc kubenswrapper[4896]: E0126 15:35:18.759549 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:35:18 crc kubenswrapper[4896]: E0126 15:35:18.759678 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.791911 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.791964 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.791979 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.791997 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.792020 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:18Z","lastTransitionTime":"2026-01-26T15:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.894034 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.894097 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.894114 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.894136 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.894154 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:18Z","lastTransitionTime":"2026-01-26T15:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.996836 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.996881 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.996894 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.996915 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:18 crc kubenswrapper[4896]: I0126 15:35:18.996929 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:18Z","lastTransitionTime":"2026-01-26T15:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.099746 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.099813 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.099822 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.099837 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.099847 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:19Z","lastTransitionTime":"2026-01-26T15:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.203161 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.203325 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.203345 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.203371 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.203389 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:19Z","lastTransitionTime":"2026-01-26T15:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.306006 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.306046 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.306064 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.306080 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.306089 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:19Z","lastTransitionTime":"2026-01-26T15:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.409814 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.409871 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.409889 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.409913 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.409930 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:19Z","lastTransitionTime":"2026-01-26T15:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.513489 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.513555 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.513616 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.513647 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.513668 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:19Z","lastTransitionTime":"2026-01-26T15:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.621383 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.621437 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.621453 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.621477 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.621493 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:19Z","lastTransitionTime":"2026-01-26T15:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.707213 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 16:57:34.784550667 +0000 UTC Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.724536 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.724644 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.724673 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.724707 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.724730 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:19Z","lastTransitionTime":"2026-01-26T15:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.758961 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:19 crc kubenswrapper[4896]: E0126 15:35:19.759135 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.826702 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.826745 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.826756 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.826771 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.826782 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:19Z","lastTransitionTime":"2026-01-26T15:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.929687 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.929737 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.929749 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.929767 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:19 crc kubenswrapper[4896]: I0126 15:35:19.929780 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:19Z","lastTransitionTime":"2026-01-26T15:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.033012 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.033084 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.033099 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.033121 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.033136 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:20Z","lastTransitionTime":"2026-01-26T15:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.136185 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.136221 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.136231 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.136248 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.136259 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:20Z","lastTransitionTime":"2026-01-26T15:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.239518 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.239566 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.239613 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.239630 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.239643 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:20Z","lastTransitionTime":"2026-01-26T15:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.303795 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.303826 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.303834 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.303848 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.303856 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:20Z","lastTransitionTime":"2026-01-26T15:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 15:35:20 crc kubenswrapper[4896]: E0126 15:35:20.328732 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"adc9c92c-63cf-439c-8587-8eafa1c0384d\\\",\\\"systemUUID\\\":\\\"6ce3bfcf-cf26-46a6-add0-2b999cc5fad1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:20Z is after 2025-08-24T17:21:41Z"
Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.333145 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.333188 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.333197 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.333212 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.333223 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:20Z","lastTransitionTime":"2026-01-26T15:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 15:35:20 crc kubenswrapper[4896]: E0126 15:35:20.345965 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"adc9c92c-63cf-439c-8587-8eafa1c0384d\\\",\\\"systemUUID\\\":\\\"6ce3bfcf-cf26-46a6-add0-2b999cc5fad1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:20Z is after 2025-08-24T17:21:41Z"
Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.349962 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.349992 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.350001 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.350014 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.350022 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:20Z","lastTransitionTime":"2026-01-26T15:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 15:35:20 crc kubenswrapper[4896]: E0126 15:35:20.362536 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"adc9c92c-63cf-439c-8587-8eafa1c0384d\\\",\\\"systemUUID\\\":\\\"6ce3bfcf-cf26-46a6-add0-2b999cc5fad1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:20Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.366038 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.366085 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.366096 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.366112 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.366124 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:20Z","lastTransitionTime":"2026-01-26T15:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:20 crc kubenswrapper[4896]: E0126 15:35:20.376956 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"adc9c92c-63cf-439c-8587-8eafa1c0384d\\\",\\\"systemUUID\\\":\\\"6ce3bfcf-cf26-46a6-add0-2b999cc5fad1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:20Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.380482 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.380525 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.380541 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.380560 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.380573 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:20Z","lastTransitionTime":"2026-01-26T15:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:20 crc kubenswrapper[4896]: E0126 15:35:20.391901 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"adc9c92c-63cf-439c-8587-8eafa1c0384d\\\",\\\"systemUUID\\\":\\\"6ce3bfcf-cf26-46a6-add0-2b999cc5fad1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:20Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:20 crc kubenswrapper[4896]: E0126 15:35:20.392015 4896 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.393666 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.393692 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.393702 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.393716 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.393727 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:20Z","lastTransitionTime":"2026-01-26T15:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.495818 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.495859 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.495871 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.495889 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.495904 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:20Z","lastTransitionTime":"2026-01-26T15:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.602353 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.602392 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.602403 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.602418 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.602428 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:20Z","lastTransitionTime":"2026-01-26T15:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.704816 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.704878 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.704896 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.704920 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.704941 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:20Z","lastTransitionTime":"2026-01-26T15:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.708110 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 20:26:02.603034552 +0000 UTC Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.759002 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.759054 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.759120 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:20 crc kubenswrapper[4896]: E0126 15:35:20.759196 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:20 crc kubenswrapper[4896]: E0126 15:35:20.759293 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:35:20 crc kubenswrapper[4896]: E0126 15:35:20.759461 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.807046 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.807118 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.807141 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.807170 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.807193 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:20Z","lastTransitionTime":"2026-01-26T15:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.910234 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.910289 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.910303 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.910321 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:20 crc kubenswrapper[4896]: I0126 15:35:20.910334 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:20Z","lastTransitionTime":"2026-01-26T15:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.013659 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.013763 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.013785 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.013809 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.013827 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:21Z","lastTransitionTime":"2026-01-26T15:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.116681 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.116752 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.116792 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.116824 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.116847 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:21Z","lastTransitionTime":"2026-01-26T15:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.219169 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.219211 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.219222 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.219237 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.219249 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:21Z","lastTransitionTime":"2026-01-26T15:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.321372 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.321447 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.321459 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.321475 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.321486 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:21Z","lastTransitionTime":"2026-01-26T15:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.423545 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.423637 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.423658 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.423681 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.423693 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:21Z","lastTransitionTime":"2026-01-26T15:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.526023 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.526077 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.526094 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.526121 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.526136 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:21Z","lastTransitionTime":"2026-01-26T15:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.628858 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.628905 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.628917 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.628935 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.628948 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:21Z","lastTransitionTime":"2026-01-26T15:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.709244 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 07:42:20.274594084 +0000 UTC Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.730995 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.731034 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.731043 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.731056 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.731065 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:21Z","lastTransitionTime":"2026-01-26T15:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.758787 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:21 crc kubenswrapper[4896]: E0126 15:35:21.758966 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.833889 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.833933 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.833945 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.833962 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.833977 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:21Z","lastTransitionTime":"2026-01-26T15:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.936126 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.936163 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.936177 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.936192 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:21 crc kubenswrapper[4896]: I0126 15:35:21.936201 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:21Z","lastTransitionTime":"2026-01-26T15:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.038224 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.038270 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.038282 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.038298 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.038311 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:22Z","lastTransitionTime":"2026-01-26T15:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.140813 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.140854 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.140866 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.140881 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.140891 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:22Z","lastTransitionTime":"2026-01-26T15:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.243267 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.243325 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.243339 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.243356 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.243367 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:22Z","lastTransitionTime":"2026-01-26T15:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.346160 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.346199 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.346211 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.346228 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.346239 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:22Z","lastTransitionTime":"2026-01-26T15:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.449229 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.449268 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.449277 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.449291 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.449302 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:22Z","lastTransitionTime":"2026-01-26T15:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.551405 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.551447 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.551454 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.551470 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.551480 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:22Z","lastTransitionTime":"2026-01-26T15:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.654323 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.654373 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.654392 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.654415 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.654430 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:22Z","lastTransitionTime":"2026-01-26T15:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.710417 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 16:50:59.719315559 +0000 UTC Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.757547 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.757608 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.757621 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.757638 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.757648 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:22Z","lastTransitionTime":"2026-01-26T15:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.758323 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.758455 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:35:22 crc kubenswrapper[4896]: E0126 15:35:22.758516 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.758650 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:22 crc kubenswrapper[4896]: E0126 15:35:22.758655 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:35:22 crc kubenswrapper[4896]: E0126 15:35:22.758887 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.773762 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14000ba2479d1ec77f9f59b70d6d25df8bceef937950e7402df8a276502e60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\"
:true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.786030 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-klrrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbeb890e-90af-4b15-a106-27b03465209f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-klrrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:22 crc 
kubenswrapper[4896]: I0126 15:35:22.801546 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42ec8793-6e16-4368-84e3-9c3007499c92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9bee2388196
2f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/k
ube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.813858 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"006f90bb-2dfb-429d-922b-6c166bcd784c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1df0c37f97b6286fb28426cd8256db5ba87b97337962fa952ba3a5e8c9bf399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c061d1bfd5c72108933d5679a19f46b22ac255228f478eb91087c8dacf666cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b2b5ee1925b1757a952b907f462ef1a57ad4eb8d5c982cec773d9441734f14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bda36b1477e471a7ccf49ca2d8d6e8ae8b1248b9ca0c9ebfadeddfc8361ce99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bda36b1477e471a7ccf49ca2d8d6e8ae8b1248b9ca0c9ebfadeddfc8361ce99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.828762 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.842696 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.855232 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.860138 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.860172 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.860182 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.860198 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.860212 4896 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:22Z","lastTransitionTime":"2026-01-26T15:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.868339 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6scjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9045598fc712efd551a21223c28ddfb8e1eec08598019d90140992164802d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6scjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.886767 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nd8b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c4023ce-9d03-491a-bbc6-d5afffb92f34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a96dbd35e9bd29cc89ad9d1102bb1649492ceb1f340573ebb153accc49bb967b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5d897bdfadb589d224a8832ee5e76309be4d623122e94eb88a240bfd2362bed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"message\\\":\\\"2026-01-26T15:34:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_84cb1828-70be-4ccd-b3ac-1713179b6e32\\\\n2026-01-26T15:34:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_84cb1828-70be-4ccd-b3ac-1713179b6e32 to /host/opt/cni/bin/\\\\n2026-01-26T15:34:28Z [verbose] multus-daemon started\\\\n2026-01-26T15:34:28Z [verbose] 
Readiness Indicator file check\\\\n2026-01-26T15:35:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nv4gq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nd8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.900078 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a110465b-91d9-4e70-ac2f-7e804c58b445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07566f6d2a52a9395b03e0b759a1caccf5eaff6a1c17488e536ccbb81abdf683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ef4ea94d232dd91ce5b11d7f70742155c2978217895faecdbd060d4eac503b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe26f12afeaf65aeadfc14051c732f0b408333e053d56510d2a5a64f4823bde1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.913639 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888e118ba95f9e18734df91b182870684554ae1e715e117eb3c12d2229a919ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.926213 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89be9b4e464bc55d82f3a1ad5911e48bafd6841c1919cb6c81a1a5758f43e8e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T15:35:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.939058 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0eae0e2b-9d04-4999-b78c-c70aeee09235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28317b792a293f783a15979c5a9d6acd520f15b8796087a49b0ed98f69a8921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fed1d8bacfa3bfc8b5c910ea870d72978016ab308a31c95d7f0e6d92321c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nrqhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.961013 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406b020065f8bf0ba4a4cccd4acff46627b58f12033ca230665dbbf3a2a1e195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e5f096fb36bb92606a247123774c6155ae2811324579470faf1c04456da53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7bb5d0fd3d779d1861fdd69f46697e53173c508525fb96bb7c8825505e05e1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feca97cda454cd70acfad46a99dd5696618f8d1f1e3d887a0c32ae9b6a475f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a326550b388ea7e5eea65a62c945fe87ba4ee09b82f0ca590226d51db74a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957437952e418fe12314db00c66884b604eaf77dbee831de77ee2a4e085c803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d1b6b278161e192ef9a209511841948188b9d6ff06f25e7a1e911f9aa882fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5d1b6b278161e192ef9a209511841948188b9d6ff06f25e7a1e911f9aa882fc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:57Z\\\",\\\"message\\\":\\\"/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 15:34:57.050422 6501 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 
15:34:57.050814 6501 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:57.050820 6501 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:57.051128 6501 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 15:34:57.051212 6501 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 15:34:57.051246 6501 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 15:34:57.051310 6501 factory.go:656] Stopping watch factory\\\\nI0126 15:34:57.051308 6501 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 15:34:57.051328 6501 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 15:34:57.051684 6501 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 15:34:57.052029 6501 ovnkube.go:599] Stopped ovnkube\\\\nI0126 15:34:57.052137 6501 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 15:34:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gdszn_openshift-ovn-kubernetes(e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3b4d4d136ea02114fd816ba32cc0a4d38c1b2d8df7968e426c038ae37dbd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471
806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gdszn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.962971 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.963004 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.963014 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.963030 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.963040 4896 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:22Z","lastTransitionTime":"2026-01-26T15:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.977486 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17760139-6c26-4a89-a7ab-4e6a3d2cc516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cfe145d703f9d67a08ff728a5a585033b34d14d145b2bd70f79c02dc0950761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hw55b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:22 crc kubenswrapper[4896]: I0126 15:35:22.988196 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzzr5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76f90dd1-9706-47ef-b243-e24f185d0340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://490b3a9d324e3b07e4dd8f017414406c4a86d87092c9b931813d8b3c8f4586ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr2bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzzr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.001044 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9vpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e16b4fdfc2afd884bb10a8365b77cd655a1838988e4d1efd3db6582375a8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hklx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7acb4be352fbed65c91662337b76d78a598651bf312d91b40b1b40072ebeb926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hklx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w9vpq\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:22Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.065135 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.065182 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.065192 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.065207 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.065218 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:23Z","lastTransitionTime":"2026-01-26T15:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.167426 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.167467 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.167476 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.167490 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.167499 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:23Z","lastTransitionTime":"2026-01-26T15:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.269883 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.269935 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.269945 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.269961 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.269971 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:23Z","lastTransitionTime":"2026-01-26T15:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.372357 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.372406 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.372418 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.372436 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.372447 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:23Z","lastTransitionTime":"2026-01-26T15:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.475351 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.475481 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.475506 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.475532 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.475678 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:23Z","lastTransitionTime":"2026-01-26T15:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.578770 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.578819 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.578836 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.578868 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.578885 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:23Z","lastTransitionTime":"2026-01-26T15:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.681813 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.681854 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.681865 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.681881 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.681893 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:23Z","lastTransitionTime":"2026-01-26T15:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.710807 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 02:40:02.234417886 +0000 UTC Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.758790 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:23 crc kubenswrapper[4896]: E0126 15:35:23.759270 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.759717 4896 scope.go:117] "RemoveContainer" containerID="d5d1b6b278161e192ef9a209511841948188b9d6ff06f25e7a1e911f9aa882fc" Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.774384 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.785018 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.785329 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.785348 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.785371 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.785387 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:23Z","lastTransitionTime":"2026-01-26T15:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.887561 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.887618 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.887631 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.887645 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.887654 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:23Z","lastTransitionTime":"2026-01-26T15:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.989524 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.989573 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.989598 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.989612 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:23 crc kubenswrapper[4896]: I0126 15:35:23.989622 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:23Z","lastTransitionTime":"2026-01-26T15:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.094368 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.094397 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.094406 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.094421 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.094431 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:24Z","lastTransitionTime":"2026-01-26T15:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.197341 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.197382 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.197393 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.197412 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.197423 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:24Z","lastTransitionTime":"2026-01-26T15:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.231367 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gdszn_e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8/ovnkube-controller/2.log" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.233902 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" event={"ID":"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8","Type":"ContainerStarted","Data":"aaa886cbf9a7cfded4ea830a53ecfacb4587bab5647c878d5ba8047b69c9fbe9"} Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.234611 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.248001 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a1f66b-b867-40f1-9a95-85bfc3a9af0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8a1554a2edf53cb6ac26eb535f0ecf2557dfe251f6517f7aa8661283e6ad61\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1571b82bdb2146ea567601eba84a682772c095b380beb40b1692fc4aa54ba492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1571b82bdb2146ea567601eba84a682772c095b380beb40b1692fc4aa54ba492\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.266319 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17760139-6c26-4a89-a7ab-4e6a3d2cc516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cfe145d703f9d67a08ff728a5a585033b34d14d145b2bd70f79c02dc0950761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:32Z\\\"}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a
7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPat
h\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hw55b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.275411 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzzr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76f90dd1-9706-47ef-b243-e24f185d0340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://490b3a9d324e3b07e4dd8f017414406c4a86d87092c9b931813d8b3c8f4586ba\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr2bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzzr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.285798 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9vpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e16b4fdfc2afd884bb10a8365b77cd655a1838988e4d1efd3db6582375a8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hklx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7acb4be352fbed65c91662337b76d78a59865
1bf312d91b40b1b40072ebeb926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hklx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w9vpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.297009 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42ec8793-6e16-4368-84e3-9c3007499c92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:0
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.299455 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.299493 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.299505 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.299520 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.299531 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:24Z","lastTransitionTime":"2026-01-26T15:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.313339 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14000ba2479d1ec77f9f59b70d6d25df8bceef937950e7402df8a276502e60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.324116 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-klrrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbeb890e-90af-4b15-a106-27b03465209f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-klrrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:24 crc 
kubenswrapper[4896]: I0126 15:35:24.334469 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6scjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9045598fc712efd551a21223c28ddfb8e1eec08598019d90140992164802d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6scjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.347354 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nd8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c4023ce-9d03-491a-bbc6-d5afffb92f34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a96dbd35e9bd29cc89ad9d1102bb1649492ceb1f340573ebb153accc49bb967b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e1
2f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5d897bdfadb589d224a8832ee5e76309be4d623122e94eb88a240bfd2362bed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"message\\\":\\\"2026-01-26T15:34:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_84cb1828-70be-4ccd-b3ac-1713179b6e32\\\\n2026-01-26T15:34:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_84cb1828-70be-4ccd-b3ac-1713179b6e32 to /host/opt/cni/bin/\\\\n2026-01-26T15:34:28Z [verbose] multus-daemon started\\\\n2026-01-26T15:34:28Z [verbose] Readiness Indicator file check\\\\n2026-01-26T15:35:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nv4gq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nd8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.365927 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a110465b-91d9-4e70-ac2f-7e804c58b445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07566f6d2a52a9395b03e0b759a1caccf5eaff6a1c17488e536ccbb81abdf683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ef4ea94d232dd91ce5b11d7f70742155c2978217895faecdbd060d4eac503b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":
\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe26f12afeaf65aeadfc14051c732f0b408333e053d56510d2a5a64f4823bde1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.485932 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"006f90bb-2dfb-429d-922b-6c166bcd784c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1df0c37f97b6286fb28426cd8256db5ba87b97337962fa952ba3a5e8c9bf399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c061d1bfd5c72108933d5679a19f46b22ac255228f478eb91087c8dacf666cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b2b5ee1925b1757a952b907f462ef1a57ad4eb8d5c982cec773d9441734f14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bda36b1477e471a7ccf49ca2d8d6e8ae8b1248b9ca0c9ebfadeddfc8361ce99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1bda36b1477e471a7ccf49ca2d8d6e8ae8b1248b9ca0c9ebfadeddfc8361ce99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.487264 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.487309 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.487324 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.487343 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.487357 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:24Z","lastTransitionTime":"2026-01-26T15:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.498531 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.512142 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.528377 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.540658 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888e118ba95f9e18734df91b182870684554ae1e715e117eb3c12d2229a919ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.551520 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89be9b4e464bc55d82f3a1ad5911e48bafd6841c1919cb6c81a1a5758f43e8e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T15:35:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.595602 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.595643 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.595654 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.595670 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.595682 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:24Z","lastTransitionTime":"2026-01-26T15:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.596371 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0eae0e2b-9d04-4999-b78c-c70aeee09235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28317b792a293f783a15979c5a9d6acd520f15b8796087a49b0ed98f69a8921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fed1d8bacfa3bfc8b5c910ea870d72978016ab308a31c95d7f0e6d92321c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nrqhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.619270 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406b020065f8bf0ba4a4cccd4acff46627b58f12033ca230665dbbf3a2a1e195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e5f096fb36bb92606a247123774c6155ae2811324579470faf1c04456da53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7bb5d0fd3d779d1861fdd69f46697e53173c508525fb96bb7c8825505e05e1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feca97cda454cd70acfad46a99dd5696618f8d1f1e3d887a0c32ae9b6a475f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a326550b388ea7e5eea65a62c945fe87ba4ee09b82f0ca590226d51db74a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957437952e418fe12314db00c66884b604eaf77dbee831de77ee2a4e085c803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa886cbf9a7cfded4ea830a53ecfacb4587bab5647c878d5ba8047b69c9fbe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5d1b6b278161e192ef9a209511841948188b9d6ff06f25e7a1e911f9aa882fc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:57Z\\\",\\\"message\\\":\\\"/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 15:34:57.050422 6501 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 
15:34:57.050814 6501 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:57.050820 6501 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:57.051128 6501 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 15:34:57.051212 6501 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 15:34:57.051246 6501 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 15:34:57.051310 6501 factory.go:656] Stopping watch factory\\\\nI0126 15:34:57.051308 6501 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 15:34:57.051328 6501 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 15:34:57.051684 6501 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 15:34:57.052029 6501 ovnkube.go:599] Stopped ovnkube\\\\nI0126 15:34:57.052137 6501 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 
15:34:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:35:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3b4d4d136ea02114fd816ba32cc0a4d38c1b2d8df7968e426c038ae37dbd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gdszn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:24Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.697404 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.697434 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.697443 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.697480 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.697489 4896 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:24Z","lastTransitionTime":"2026-01-26T15:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.712015 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 06:43:26.176368956 +0000 UTC Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.759195 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.759264 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:24 crc kubenswrapper[4896]: E0126 15:35:24.759319 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:24 crc kubenswrapper[4896]: E0126 15:35:24.759407 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.759565 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:35:24 crc kubenswrapper[4896]: E0126 15:35:24.759665 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.799628 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.799688 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.799700 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.799716 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.799730 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:24Z","lastTransitionTime":"2026-01-26T15:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.902604 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.902648 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.902660 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.902677 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:24 crc kubenswrapper[4896]: I0126 15:35:24.902691 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:24Z","lastTransitionTime":"2026-01-26T15:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.005564 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.005671 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.005691 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.005714 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.005731 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:25Z","lastTransitionTime":"2026-01-26T15:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.109231 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.109286 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.109304 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.109329 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.109346 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:25Z","lastTransitionTime":"2026-01-26T15:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.212079 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.212123 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.212135 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.212155 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.212166 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:25Z","lastTransitionTime":"2026-01-26T15:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.314916 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.314944 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.314955 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.314970 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.314980 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:25Z","lastTransitionTime":"2026-01-26T15:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.420456 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.420518 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.420535 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.420556 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.420596 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:25Z","lastTransitionTime":"2026-01-26T15:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.523448 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.523490 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.523499 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.523514 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.523525 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:25Z","lastTransitionTime":"2026-01-26T15:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.625508 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.625555 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.625567 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.625600 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.625612 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:25Z","lastTransitionTime":"2026-01-26T15:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.713081 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 00:14:02.890277262 +0000 UTC Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.728742 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.728838 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.728858 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.728882 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.728901 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:25Z","lastTransitionTime":"2026-01-26T15:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.759063 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:25 crc kubenswrapper[4896]: E0126 15:35:25.759247 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.831434 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.831467 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.831477 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.831492 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.831503 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:25Z","lastTransitionTime":"2026-01-26T15:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.934497 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.934552 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.934570 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.934664 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:25 crc kubenswrapper[4896]: I0126 15:35:25.934683 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:25Z","lastTransitionTime":"2026-01-26T15:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.037913 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.037979 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.037997 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.038022 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.038041 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:26Z","lastTransitionTime":"2026-01-26T15:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.141543 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.141641 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.141655 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.141679 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.141696 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:26Z","lastTransitionTime":"2026-01-26T15:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.244945 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.245000 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.245019 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.245041 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.245058 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:26Z","lastTransitionTime":"2026-01-26T15:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.348662 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.348742 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.348757 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.348775 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.348788 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:26Z","lastTransitionTime":"2026-01-26T15:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.452292 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.452339 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.452350 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.452369 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.452381 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:26Z","lastTransitionTime":"2026-01-26T15:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.554941 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.555004 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.555027 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.555056 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.555079 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:26Z","lastTransitionTime":"2026-01-26T15:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.657411 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.657454 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.657466 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.657481 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.657492 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:26Z","lastTransitionTime":"2026-01-26T15:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.714177 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 16:23:21.569137779 +0000 UTC Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.758832 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.758850 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:26 crc kubenswrapper[4896]: E0126 15:35:26.758992 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.758856 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:35:26 crc kubenswrapper[4896]: E0126 15:35:26.759113 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:26 crc kubenswrapper[4896]: E0126 15:35:26.759238 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.760169 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.760203 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.760219 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.760236 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.760249 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:26Z","lastTransitionTime":"2026-01-26T15:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.862244 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.862303 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.862322 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.862344 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.862356 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:26Z","lastTransitionTime":"2026-01-26T15:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.964843 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.964888 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.964900 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.964916 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:26 crc kubenswrapper[4896]: I0126 15:35:26.964928 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:26Z","lastTransitionTime":"2026-01-26T15:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.068239 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.068297 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.068316 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.068340 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.068359 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:27Z","lastTransitionTime":"2026-01-26T15:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.172337 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.172432 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.172460 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.172493 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.172519 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:27Z","lastTransitionTime":"2026-01-26T15:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.247842 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gdszn_e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8/ovnkube-controller/3.log" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.248782 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gdszn_e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8/ovnkube-controller/2.log" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.252109 4896 generic.go:334] "Generic (PLEG): container finished" podID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerID="aaa886cbf9a7cfded4ea830a53ecfacb4587bab5647c878d5ba8047b69c9fbe9" exitCode=1 Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.252178 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" event={"ID":"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8","Type":"ContainerDied","Data":"aaa886cbf9a7cfded4ea830a53ecfacb4587bab5647c878d5ba8047b69c9fbe9"} Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.252230 4896 scope.go:117] "RemoveContainer" containerID="d5d1b6b278161e192ef9a209511841948188b9d6ff06f25e7a1e911f9aa882fc" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.253516 4896 scope.go:117] "RemoveContainer" containerID="aaa886cbf9a7cfded4ea830a53ecfacb4587bab5647c878d5ba8047b69c9fbe9" Jan 26 15:35:27 crc kubenswrapper[4896]: E0126 15:35:27.253881 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-gdszn_openshift-ovn-kubernetes(e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.274452 4896 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a1f66b-b867-40f1-9a95-85bfc3a9af0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8a1554a2edf53cb6ac26eb535f0ecf2557dfe251f6517f7aa8661283e6ad61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1571b82bdb2146ea567601eba84a682772c095b380beb40b1692fc4aa54ba492\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1571b82bdb2146ea567601eba84a682772c095b380beb40b1692fc4aa54ba492\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.275647 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.275690 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.275700 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.275718 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.275727 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:27Z","lastTransitionTime":"2026-01-26T15:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.298455 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17760139-6c26-4a89-a7ab-4e6a3d2cc516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cfe145d703f9d67a08ff728a5a585033b34d14d145b2bd70f79c02dc0950761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b6
8fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406
d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\
\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15
:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hw55b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.311481 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzzr5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76f90dd1-9706-47ef-b243-e24f185d0340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://490b3a9d324e3b07e4dd8f017414406c4a86d87092c9b931813d8b3c8f4586ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr2bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzzr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.325928 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9vpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e16b4fdfc2afd884bb10a8365b77cd655a1838988e4d1efd3db6582375a8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hklx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7acb4be352fbed65c91662337b76d78a598651bf312d91b40b1b40072ebeb926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hklx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w9vpq\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.345659 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42ec8793-6e16-4368-84e3-9c3007499c92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\\\",\\\"image\
\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\"
:{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.360324 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14000ba2479d1ec77f9f59b70d6d25df8bceef937950e7402df8a276502e60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.375294 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-klrrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbeb890e-90af-4b15-a106-27b03465209f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-klrrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:27 crc 
kubenswrapper[4896]: I0126 15:35:27.378711 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.378764 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.378777 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.378797 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.378810 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:27Z","lastTransitionTime":"2026-01-26T15:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.389999 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6scjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9045598fc712efd551a21223c28ddfb8e1eec08598019d90140992164802d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmth\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6scjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.409654 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nd8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c4023ce-9d03-491a-bbc6-d5afffb92f34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a96dbd35e9bd29cc89ad9d1102bb1649492ceb1f340573ebb153accc49bb967b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5d897bdfadb589d224a8832ee5e76309be4d623122e94eb88a240bfd2362bed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"message\\\":\\\"2026-01-26T15:34:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_84cb1828-70be-4ccd-b3ac-1713179b6e32\\\\n2026-01-26T15:34:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_84cb1828-70be-4ccd-b3ac-1713179b6e32 to /host/opt/cni/bin/\\\\n2026-01-26T15:34:28Z [verbose] multus-daemon started\\\\n2026-01-26T15:34:28Z [verbose] Readiness Indicator file check\\\\n2026-01-26T15:35:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nv4gq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nd8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.425550 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.425574 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a110465b-91d9-4e70-ac2f-7e804c58b445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07566f6d2a52a9395b03e0b759a1caccf5eaff6a1c17488e536ccbb81abdf683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ef4ea94d232dd91ce5b11d7f70742155c2978217895faecdbd060d4eac503b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe26f12afeaf65aeadfc14051c732f0b408333e053d56510d2a5a64f4823bde1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:27 crc kubenswrapper[4896]: E0126 15:35:27.425769 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:31.425739648 +0000 UTC m=+149.207620091 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.438920 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"006f90bb-2dfb-429d-922b-6c166bcd784c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1df0c37f97b6286fb28426cd8256db5ba87b97337962fa952ba3a5e8c9bf399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c061d1bfd5c72108933d5679a19f46b22ac255228f478eb91087c8dacf666cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b2b5ee1925b1757a952b907f462ef1a57ad4eb8d5c982cec773d9441734f14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\
\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bda36b1477e471a7ccf49ca2d8d6e8ae8b1248b9ca0c9ebfadeddfc8361ce99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bda36b1477e471a7ccf49ca2d8d6e8ae8b1248b9ca0c9ebfadeddfc8361ce99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.451308 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.465989 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.481084 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.483322 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.483352 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.483369 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.483386 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.483398 4896 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:27Z","lastTransitionTime":"2026-01-26T15:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.497705 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888e118ba95f9e18734df91b182870684554ae1e715e117eb3c12d2229a919ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.511659 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89be9b4e464bc55d82f3a1ad5911e48bafd6841c1919cb6c81a1a5758f43e8e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T15:35:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.523980 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0eae0e2b-9d04-4999-b78c-c70aeee09235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28317b792a293f783a15979c5a9d6acd520f15b8796087a49b0ed98f69a8921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fed1d8bacfa3bfc8b5c910ea870d72978016ab308a31c95d7f0e6d92321c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nrqhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.526832 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.526887 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.526918 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.527012 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:27 crc kubenswrapper[4896]: E0126 15:35:27.527151 4896 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 15:35:27 crc kubenswrapper[4896]: E0126 15:35:27.527178 4896 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 15:35:27 crc kubenswrapper[4896]: E0126 15:35:27.527191 4896 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:35:27 crc kubenswrapper[4896]: E0126 15:35:27.527238 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 15:36:31.527222282 +0000 UTC m=+149.309102685 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:35:27 crc kubenswrapper[4896]: E0126 15:35:27.527307 4896 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 15:35:27 crc kubenswrapper[4896]: E0126 15:35:27.527344 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 15:36:31.527334585 +0000 UTC m=+149.309214988 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 15:35:27 crc kubenswrapper[4896]: E0126 15:35:27.527400 4896 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 15:35:27 crc kubenswrapper[4896]: E0126 15:35:27.527411 4896 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 15:35:27 crc kubenswrapper[4896]: E0126 15:35:27.527421 4896 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:35:27 crc kubenswrapper[4896]: E0126 15:35:27.527458 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 15:36:31.527449628 +0000 UTC m=+149.309330031 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 15:35:27 crc kubenswrapper[4896]: E0126 15:35:27.527493 4896 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 15:35:27 crc kubenswrapper[4896]: E0126 15:35:27.527519 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 15:36:31.527510099 +0000 UTC m=+149.309390502 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.548936 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406b020065f8bf0ba4a4cccd4acff46627b58f12033ca230665dbbf3a2a1e195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e5f096fb36bb92606a247123774c6155ae2811324579470faf1c04456da53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7bb5d0fd3d779d1861fdd69f46697e53173c508525fb96bb7c8825505e05e1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feca97cda454cd70acfad46a99dd5696618f8d1f1e3d887a0c32ae9b6a475f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a326550b388ea7e5eea65a62c945fe87ba4ee09b82f0ca590226d51db74a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957437952e418fe12314db00c66884b604eaf77dbee831de77ee2a4e085c803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa886cbf9a7cfded4ea830a53ecfacb4587bab5647c878d5ba8047b69c9fbe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5d1b6b278161e192ef9a209511841948188b9d6ff06f25e7a1e911f9aa882fc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:57Z\\\",\\\"message\\\":\\\"/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 15:34:57.050422 6501 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 
15:34:57.050814 6501 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:57.050820 6501 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:57.051128 6501 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 15:34:57.051212 6501 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 15:34:57.051246 6501 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 15:34:57.051310 6501 factory.go:656] Stopping watch factory\\\\nI0126 15:34:57.051308 6501 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 15:34:57.051328 6501 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 15:34:57.051684 6501 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 15:34:57.052029 6501 ovnkube.go:599] Stopped ovnkube\\\\nI0126 15:34:57.052137 6501 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 15:34:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaa886cbf9a7cfded4ea830a53ecfacb4587bab5647c878d5ba8047b69c9fbe9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:35:26Z\\\",\\\"message\\\":\\\"ailed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:25Z is after 2025-08-24T17:21:41Z]\\\\nI0126 15:35:25.441568 6911 services_controller.go:434] Service openshift-marketplace/marketplace-operator-metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{marketplace-operator-metrics openshift-marketplace ee1b3a20-644f-4c69-a038-1d53fcace871 4537 0 2025-02-23 
05:12:24 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[name:marketplace-operator] map[capability.openshift.io/name:marketplace include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:marketplace-operator-metrics service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00767e4b7 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:8383,TargetPort:{0 8383 },NodePort:0,AppProtocol:nil,},ServicePort{\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:35:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/n
et.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3b4d4d136ea02114fd816ba32cc0a4d38c1b2d8df7968e426c038ae37dbd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubern
etes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gdszn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:27Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.586080 4896 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.586124 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.586137 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.586155 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.586166 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:27Z","lastTransitionTime":"2026-01-26T15:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.689076 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.689118 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.689130 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.689146 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.689158 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:27Z","lastTransitionTime":"2026-01-26T15:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.714696 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 09:26:21.77942619 +0000 UTC Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.758521 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:27 crc kubenswrapper[4896]: E0126 15:35:27.758737 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.791763 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.791837 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.791860 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.791890 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.791911 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:27Z","lastTransitionTime":"2026-01-26T15:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.894386 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.894440 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.894457 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.894478 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.894495 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:27Z","lastTransitionTime":"2026-01-26T15:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.997418 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.997460 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.997476 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.997499 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:27 crc kubenswrapper[4896]: I0126 15:35:27.997516 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:27Z","lastTransitionTime":"2026-01-26T15:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.100815 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.100874 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.100898 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.100925 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.100947 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:28Z","lastTransitionTime":"2026-01-26T15:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.204504 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.204554 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.204565 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.204607 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.204648 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:28Z","lastTransitionTime":"2026-01-26T15:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.257951 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gdszn_e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8/ovnkube-controller/3.log" Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.307817 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.307875 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.307887 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.307905 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.307917 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:28Z","lastTransitionTime":"2026-01-26T15:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.410514 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.410605 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.410627 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.410657 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.410680 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:28Z","lastTransitionTime":"2026-01-26T15:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.513956 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.514011 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.514024 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.514044 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.514056 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:28Z","lastTransitionTime":"2026-01-26T15:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.617477 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.617524 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.617536 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.617554 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.617566 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:28Z","lastTransitionTime":"2026-01-26T15:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.715160 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 05:00:14.997164174 +0000 UTC Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.720414 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.720485 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.720511 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.720543 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.720567 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:28Z","lastTransitionTime":"2026-01-26T15:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.758372 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.758387 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.758497 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:28 crc kubenswrapper[4896]: E0126 15:35:28.758752 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:35:28 crc kubenswrapper[4896]: E0126 15:35:28.758883 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:28 crc kubenswrapper[4896]: E0126 15:35:28.759105 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.825152 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.825206 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.825217 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.825231 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.825241 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:28Z","lastTransitionTime":"2026-01-26T15:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.927491 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.927525 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.927533 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.927547 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:28 crc kubenswrapper[4896]: I0126 15:35:28.927556 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:28Z","lastTransitionTime":"2026-01-26T15:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.029942 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.029985 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.029993 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.030007 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.030016 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:29Z","lastTransitionTime":"2026-01-26T15:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.132217 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.132279 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.132298 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.132324 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.132344 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:29Z","lastTransitionTime":"2026-01-26T15:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.234421 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.234481 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.234502 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.234526 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.234550 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:29Z","lastTransitionTime":"2026-01-26T15:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.338030 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.338089 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.338106 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.338130 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.338146 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:29Z","lastTransitionTime":"2026-01-26T15:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.440944 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.440985 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.440993 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.441007 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.441017 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:29Z","lastTransitionTime":"2026-01-26T15:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.543601 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.543659 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.543670 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.543689 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.543701 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:29Z","lastTransitionTime":"2026-01-26T15:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.646930 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.646967 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.646978 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.646994 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.647005 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:29Z","lastTransitionTime":"2026-01-26T15:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.716137 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 23:46:58.323181375 +0000 UTC Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.750136 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.750336 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.750344 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.750356 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.750365 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:29Z","lastTransitionTime":"2026-01-26T15:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.759054 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:29 crc kubenswrapper[4896]: E0126 15:35:29.759275 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.852962 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.853024 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.853041 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.853065 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.853082 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:29Z","lastTransitionTime":"2026-01-26T15:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.956234 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.956291 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.956307 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.956328 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:29 crc kubenswrapper[4896]: I0126 15:35:29.956345 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:29Z","lastTransitionTime":"2026-01-26T15:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.059378 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.059418 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.059429 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.059444 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.059456 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:30Z","lastTransitionTime":"2026-01-26T15:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.163221 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.163284 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.163301 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.163320 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.163335 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:30Z","lastTransitionTime":"2026-01-26T15:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.267110 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.267156 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.267167 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.267186 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.267201 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:30Z","lastTransitionTime":"2026-01-26T15:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:30 crc kubenswrapper[4896]: E0126 15:35:30.616726 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"adc9c92c-63cf-439c-8587-8eafa1c0384d\\\",\\\"systemUUID\\\":\\\"6ce3bfcf-cf26-46a6-add0-2b999cc5fad1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:30Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.621701 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.621738 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.621749 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.621766 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.621777 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:30Z","lastTransitionTime":"2026-01-26T15:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.637041 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.637289 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.637494 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.637749 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.637953 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:30Z","lastTransitionTime":"2026-01-26T15:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:30 crc kubenswrapper[4896]: E0126 15:35:30.657026 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"adc9c92c-63cf-439c-8587-8eafa1c0384d\\\",\\\"systemUUID\\\":\\\"6ce3bfcf-cf26-46a6-add0-2b999cc5fad1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:30Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.663014 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.663064 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.663077 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.663091 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.663102 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:30Z","lastTransitionTime":"2026-01-26T15:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:30 crc kubenswrapper[4896]: E0126 15:35:30.682672 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"adc9c92c-63cf-439c-8587-8eafa1c0384d\\\",\\\"systemUUID\\\":\\\"6ce3bfcf-cf26-46a6-add0-2b999cc5fad1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:30Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.688063 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.688145 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.688165 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.688191 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.688208 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:30Z","lastTransitionTime":"2026-01-26T15:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:30 crc kubenswrapper[4896]: E0126 15:35:30.707617 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"adc9c92c-63cf-439c-8587-8eafa1c0384d\\\",\\\"systemUUID\\\":\\\"6ce3bfcf-cf26-46a6-add0-2b999cc5fad1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:30Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:30 crc kubenswrapper[4896]: E0126 15:35:30.707865 4896 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.709803 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.709842 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.709856 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.709872 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.709885 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:30Z","lastTransitionTime":"2026-01-26T15:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.716812 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 11:54:49.828786245 +0000 UTC Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.758891 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.758933 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.758933 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:30 crc kubenswrapper[4896]: E0126 15:35:30.759101 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:30 crc kubenswrapper[4896]: E0126 15:35:30.759444 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:35:30 crc kubenswrapper[4896]: E0126 15:35:30.759572 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.777105 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.813119 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.813173 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.813186 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.813206 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.813220 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:30Z","lastTransitionTime":"2026-01-26T15:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.915883 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.915932 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.915942 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.915958 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:30 crc kubenswrapper[4896]: I0126 15:35:30.915968 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:30Z","lastTransitionTime":"2026-01-26T15:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.018952 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.019083 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.019101 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.019127 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.019141 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:31Z","lastTransitionTime":"2026-01-26T15:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.122606 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.122649 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.122659 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.122678 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.122692 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:31Z","lastTransitionTime":"2026-01-26T15:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.225688 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.225753 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.225766 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.225786 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.225798 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:31Z","lastTransitionTime":"2026-01-26T15:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.329528 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.329652 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.329665 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.329713 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.329730 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:31Z","lastTransitionTime":"2026-01-26T15:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.434064 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.434102 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.434115 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.434132 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.434143 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:31Z","lastTransitionTime":"2026-01-26T15:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.536932 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.536984 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.536995 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.537023 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.537037 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:31Z","lastTransitionTime":"2026-01-26T15:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.639014 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.639056 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.639066 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.639080 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.639091 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:31Z","lastTransitionTime":"2026-01-26T15:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.717947 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 14:01:06.551641112 +0000 UTC Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.741274 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.741313 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.741323 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.741338 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.741349 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:31Z","lastTransitionTime":"2026-01-26T15:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.758697 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:31 crc kubenswrapper[4896]: E0126 15:35:31.758895 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.844638 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.844698 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.844756 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.844781 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.844798 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:31Z","lastTransitionTime":"2026-01-26T15:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.947192 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.947257 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.947276 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.947303 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:31 crc kubenswrapper[4896]: I0126 15:35:31.947319 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:31Z","lastTransitionTime":"2026-01-26T15:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.049757 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.049803 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.049834 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.049848 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.049858 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:32Z","lastTransitionTime":"2026-01-26T15:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.153405 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.153488 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.153510 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.153538 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.153555 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:32Z","lastTransitionTime":"2026-01-26T15:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.255759 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.255848 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.255859 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.255894 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.255906 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:32Z","lastTransitionTime":"2026-01-26T15:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.358103 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.358183 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.358207 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.358233 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.358251 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:32Z","lastTransitionTime":"2026-01-26T15:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.460941 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.460977 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.460986 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.460999 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.461008 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:32Z","lastTransitionTime":"2026-01-26T15:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.563500 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.563537 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.563550 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.563567 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.563595 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:32Z","lastTransitionTime":"2026-01-26T15:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.666528 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.666639 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.666653 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.666675 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.666691 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:32Z","lastTransitionTime":"2026-01-26T15:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.718889 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 21:48:56.272411048 +0000 UTC Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.758445 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:32 crc kubenswrapper[4896]: E0126 15:35:32.758906 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.758782 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:32 crc kubenswrapper[4896]: E0126 15:35:32.759159 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.758621 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:35:32 crc kubenswrapper[4896]: E0126 15:35:32.759504 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.768807 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.768862 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.768877 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.768894 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.768907 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:32Z","lastTransitionTime":"2026-01-26T15:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.775113 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42ec8793-6e16-4368-84e3-9c3007499c92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.790627 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14000ba2479d1ec77f9f59b70d6d25df8bceef937950e7402df8a276502e60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.805453 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-klrrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbeb890e-90af-4b15-a106-27b03465209f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-klrrb\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.821853 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a110465b-91d9-4e70-ac2f-7e804c58b445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07566f6d2a52a9395b03e0b759a1caccf5eaff6a1c17488e536ccbb81abdf683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ef4ea94d232dd91ce5b11d7f70742155c2978217895faecdbd060d4eac503b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe26f12afeaf65aeadfc14051c732f0b408333e053d56510d2a5a64f4823bde1\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.834742 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"006f90bb-2dfb-429d-922b-6c166bcd784c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1df0c37f97b6286fb28426cd8256db5ba87b97337962fa952ba3a5e8c9bf399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c061d1bfd5c72108933d5679a19f46b22ac255228f478eb91087c8dacf666cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b2b5ee1925b1757a952b907f462ef1a57ad4eb8d5c982cec773d9441734f14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bda36b1477e471a7ccf49ca2d8d6e8ae8b1248b9ca0c9ebfadeddfc8361ce99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1bda36b1477e471a7ccf49ca2d8d6e8ae8b1248b9ca0c9ebfadeddfc8361ce99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.851448 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.865901 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.871306 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.871644 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.871805 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.871929 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.872053 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:32Z","lastTransitionTime":"2026-01-26T15:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.881732 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.896462 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6scjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9045598fc712efd551a21223c28ddfb8e1eec08598019d90140992164802d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6scjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.912321 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nd8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c4023ce-9d03-491a-bbc6-d5afffb92f34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a96dbd35e9bd29cc89ad9d1102bb1649492ceb1f340573ebb153accc49bb967b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5d897bdfadb589d224a8832ee5e76309be4d623122e94eb88a240bfd2362bed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"message\\\":\\\"2026-01-26T15:34:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_84cb1828-70be-4ccd-b3ac-1713179b6e32\\\\n2026-01-26T15:34:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_84cb1828-70be-4ccd-b3ac-1713179b6e32 to /host/opt/cni/bin/\\\\n2026-01-26T15:34:28Z [verbose] multus-daemon started\\\\n2026-01-26T15:34:28Z [verbose] Readiness Indicator file check\\\\n2026-01-26T15:35:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nv4gq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nd8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.932715 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c74d5aec-7734-46a5-b505-ced276677e9d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bb4b26156eebe104fa7d48c28ed4a08235b86559e08a00f0ad0309dbe50b33c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03cf245537deb1adf1d9428c2540f5d05fd11fc83b7bbb7e3d589ccbe72a403e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ad7e201f7fe5178266b227227936ded00706faac9aed3a761171442dde253a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a8a120178ee8138e55bda65d5961982be475b4869c84dd87ccbbb6323ce323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25fbe9a2849497daf60146732051caa58ee0bea6d8f1cc7c9997290c5e382c9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://532e14883a0e6336a0dec0763ce9a7346d0b1e164cf66eb49d5d6213ca6f7458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://532e14883a0e6336a0dec0763ce9a7346d0b1e164cf66eb49d5d6213ca6f7458\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a529aaa29ae21bdab2df567a1f2bff5e2e8273d5aa9c642907c999dcb077b1d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a529aaa29ae21bdab2df567a1f2bff5e2e8273d5aa9c642907c999dcb077b1d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://42feef0144651000175d410cbfa359bf193c633df74f174391b207e6f594ea9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42feef0144651000175d410cbfa359bf193c633df74f174391b207e6f594ea9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.953306 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888e118ba95f9e18734df91b182870684554ae1e715e117eb3c12d2229a919ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.967530 4896 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89be9b4e464bc55d82f3a1ad5911e48bafd6841c1919cb6c81a1a5758f43e8e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.977897 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.977945 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.977956 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.977976 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.977989 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:32Z","lastTransitionTime":"2026-01-26T15:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.980406 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0eae0e2b-9d04-4999-b78c-c70aeee09235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28317b792a293f783a15979c5a9d6acd520f15b8796087a49b0ed98f69a8921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fed1d8bacfa3bfc8b5c910ea870d72978016ab308a31c95d7f0e6d92321c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nrqhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:32 crc kubenswrapper[4896]: I0126 15:35:32.997952 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406b020065f8bf0ba4a4cccd4acff46627b58f12033ca230665dbbf3a2a1e195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e5f096fb36bb92606a247123774c6155ae2811324579470faf1c04456da53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7bb5d0fd3d779d1861fdd69f46697e53173c508525fb96bb7c8825505e05e1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feca97cda454cd70acfad46a99dd5696618f8d1f1e3d887a0c32ae9b6a475f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a326550b388ea7e5eea65a62c945fe87ba4ee09b82f0ca590226d51db74a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957437952e418fe12314db00c66884b604eaf77dbee831de77ee2a4e085c803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa886cbf9a7cfded4ea830a53ecfacb4587bab5647c878d5ba8047b69c9fbe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5d1b6b278161e192ef9a209511841948188b9d6ff06f25e7a1e911f9aa882fc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:34:57Z\\\",\\\"message\\\":\\\"/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 15:34:57.050422 6501 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 
15:34:57.050814 6501 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:57.050820 6501 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 15:34:57.051128 6501 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 15:34:57.051212 6501 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 15:34:57.051246 6501 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 15:34:57.051310 6501 factory.go:656] Stopping watch factory\\\\nI0126 15:34:57.051308 6501 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 15:34:57.051328 6501 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 15:34:57.051684 6501 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 15:34:57.052029 6501 ovnkube.go:599] Stopped ovnkube\\\\nI0126 15:34:57.052137 6501 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 15:34:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaa886cbf9a7cfded4ea830a53ecfacb4587bab5647c878d5ba8047b69c9fbe9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:35:26Z\\\",\\\"message\\\":\\\"ailed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:25Z is after 2025-08-24T17:21:41Z]\\\\nI0126 15:35:25.441568 6911 services_controller.go:434] Service openshift-marketplace/marketplace-operator-metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{marketplace-operator-metrics openshift-marketplace ee1b3a20-644f-4c69-a038-1d53fcace871 4537 0 2025-02-23 
05:12:24 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[name:marketplace-operator] map[capability.openshift.io/name:marketplace include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:marketplace-operator-metrics service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00767e4b7 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:8383,TargetPort:{0 8383 },NodePort:0,AppProtocol:nil,},ServicePort{\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:35:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/n
et.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3b4d4d136ea02114fd816ba32cc0a4d38c1b2d8df7968e426c038ae37dbd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubern
etes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gdszn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:32Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.009475 4896 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a1f66b-b867-40f1-9a95-85bfc3a9af0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8a1554a2edf53cb6ac26eb535f0ecf2557dfe251f6517f7aa8661283e6ad61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1571b82bdb2146ea567601eba84a682772c095b380beb40b1692fc4aa54ba492\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1571b82bdb2146ea567601eba84a682772c095b380beb40b1692fc4aa54ba492\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:33Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.027970 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17760139-6c26-4a89-a7ab-4e6a3d2cc516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cfe145d703f9d67a08ff728a5a585033b34d14d145b2bd70f79c02dc0950761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d57c
97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:30Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hw55b\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:33Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.041711 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzzr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76f90dd1-9706-47ef-b243-e24f185d0340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://490b3a9d324e3b07e4dd8f017414406c4a86d87092c9b931813d8b3c8f4586ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr2bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzzr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:33Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.054730 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9vpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e16b4fdfc2afd884bb10a8365b77cd655a1838988e4d1efd3db6582375a8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hklx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7acb4be352fbed65c91662337b76d78a59865
1bf312d91b40b1b40072ebeb926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hklx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w9vpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:33Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.080630 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.080710 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.080721 4896 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.080740 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.080754 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:33Z","lastTransitionTime":"2026-01-26T15:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.183507 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.183558 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.183569 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.183603 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.183617 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:33Z","lastTransitionTime":"2026-01-26T15:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.286330 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.286358 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.286366 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.286380 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.286388 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:33Z","lastTransitionTime":"2026-01-26T15:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.389245 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.389305 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.389317 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.389335 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.389349 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:33Z","lastTransitionTime":"2026-01-26T15:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.491572 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.491658 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.491675 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.491695 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.491712 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:33Z","lastTransitionTime":"2026-01-26T15:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.594745 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.594803 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.594820 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.594845 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.594862 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:33Z","lastTransitionTime":"2026-01-26T15:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.698370 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.698426 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.698438 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.698460 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.698476 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:33Z","lastTransitionTime":"2026-01-26T15:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.719811 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 03:13:17.859956757 +0000 UTC Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.759356 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:33 crc kubenswrapper[4896]: E0126 15:35:33.759640 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.802417 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.802477 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.802496 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.802519 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.802536 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:33Z","lastTransitionTime":"2026-01-26T15:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.904399 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.904468 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.904487 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.904513 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:33 crc kubenswrapper[4896]: I0126 15:35:33.904529 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:33Z","lastTransitionTime":"2026-01-26T15:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.007820 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.007894 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.007907 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.007931 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.007952 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:34Z","lastTransitionTime":"2026-01-26T15:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.111823 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.111898 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.111909 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.111937 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.111948 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:34Z","lastTransitionTime":"2026-01-26T15:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.214786 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.214843 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.214856 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.214876 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.214888 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:34Z","lastTransitionTime":"2026-01-26T15:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.317435 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.317514 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.317527 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.317548 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.317561 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:34Z","lastTransitionTime":"2026-01-26T15:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.421728 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.421819 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.421833 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.421862 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.421879 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:34Z","lastTransitionTime":"2026-01-26T15:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.525407 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.525457 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.525467 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.525483 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.525495 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:34Z","lastTransitionTime":"2026-01-26T15:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.627653 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.627709 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.627720 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.627743 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.627757 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:34Z","lastTransitionTime":"2026-01-26T15:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.720255 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 06:20:28.869593127 +0000 UTC Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.730330 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.730417 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.730431 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.730457 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.730472 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:34Z","lastTransitionTime":"2026-01-26T15:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.758834 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:34 crc kubenswrapper[4896]: E0126 15:35:34.758979 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.758837 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.759011 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:35:34 crc kubenswrapper[4896]: E0126 15:35:34.759108 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:34 crc kubenswrapper[4896]: E0126 15:35:34.759175 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.833021 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.833090 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.833101 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.833123 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.833136 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:34Z","lastTransitionTime":"2026-01-26T15:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.936404 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.936462 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.936471 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.936487 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:34 crc kubenswrapper[4896]: I0126 15:35:34.936497 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:34Z","lastTransitionTime":"2026-01-26T15:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.039151 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.039214 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.039231 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.039256 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.039275 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:35Z","lastTransitionTime":"2026-01-26T15:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.141464 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.141544 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.141616 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.141681 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.141705 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:35Z","lastTransitionTime":"2026-01-26T15:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.244980 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.245060 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.245086 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.245119 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.245144 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:35Z","lastTransitionTime":"2026-01-26T15:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.349406 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.349493 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.349523 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.349556 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.349639 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:35Z","lastTransitionTime":"2026-01-26T15:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.452295 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.452360 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.452383 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.452412 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.452430 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:35Z","lastTransitionTime":"2026-01-26T15:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.555137 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.555187 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.555198 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.555219 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.555233 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:35Z","lastTransitionTime":"2026-01-26T15:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.658306 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.658366 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.658375 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.658393 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.658405 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:35Z","lastTransitionTime":"2026-01-26T15:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.721184 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 04:37:45.428387597 +0000 UTC
Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.759170 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 15:35:35 crc kubenswrapper[4896]: E0126 15:35:35.759489 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.761260 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.761314 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.761330 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.761351 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.761364 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:35Z","lastTransitionTime":"2026-01-26T15:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.865230 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.865286 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.865296 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.865315 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.865327 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:35Z","lastTransitionTime":"2026-01-26T15:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.967833 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.967889 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.967902 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.967921 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:35 crc kubenswrapper[4896]: I0126 15:35:35.967932 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:35Z","lastTransitionTime":"2026-01-26T15:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.071876 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.071962 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.071987 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.072016 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.072035 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:36Z","lastTransitionTime":"2026-01-26T15:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.175382 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.175419 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.175431 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.175447 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.175458 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:36Z","lastTransitionTime":"2026-01-26T15:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.278928 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.278992 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.279002 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.279020 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.279033 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:36Z","lastTransitionTime":"2026-01-26T15:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.381275 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.381335 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.381346 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.381358 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.381367 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:36Z","lastTransitionTime":"2026-01-26T15:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.483851 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.483924 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.483937 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.483958 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.483969 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:36Z","lastTransitionTime":"2026-01-26T15:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.587271 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.587351 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.587366 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.587384 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.587398 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:36Z","lastTransitionTime":"2026-01-26T15:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.690135 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.690174 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.690183 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.690197 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.690207 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:36Z","lastTransitionTime":"2026-01-26T15:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.722155 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 12:39:01.675021551 +0000 UTC
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.759383 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.759421 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.759665 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb"
Jan 26 15:35:36 crc kubenswrapper[4896]: E0126 15:35:36.759803 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 15:35:36 crc kubenswrapper[4896]: E0126 15:35:36.759873 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f"
Jan 26 15:35:36 crc kubenswrapper[4896]: E0126 15:35:36.759980 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.792384 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.792679 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.792769 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.792867 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.792958 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:36Z","lastTransitionTime":"2026-01-26T15:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.895637 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.895920 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.896057 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.896155 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.896278 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:36Z","lastTransitionTime":"2026-01-26T15:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.998537 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.998605 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.998621 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.998641 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:36 crc kubenswrapper[4896]: I0126 15:35:36.998656 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:36Z","lastTransitionTime":"2026-01-26T15:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.101293 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.101328 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.101358 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.101375 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.101386 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:37Z","lastTransitionTime":"2026-01-26T15:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.204842 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.204902 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.204919 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.204940 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.204952 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:37Z","lastTransitionTime":"2026-01-26T15:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.307627 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.308064 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.308280 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.308507 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.308754 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:37Z","lastTransitionTime":"2026-01-26T15:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.411350 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.411614 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.411700 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.411794 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.411955 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:37Z","lastTransitionTime":"2026-01-26T15:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.514622 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.514675 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.514686 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.514699 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.514708 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:37Z","lastTransitionTime":"2026-01-26T15:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.617265 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.617310 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.617321 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.617343 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.617356 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:37Z","lastTransitionTime":"2026-01-26T15:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.719650 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.719698 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.719903 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.719916 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.719926 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:37Z","lastTransitionTime":"2026-01-26T15:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.722943 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 19:18:18.179331521 +0000 UTC
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.758444 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 15:35:37 crc kubenswrapper[4896]: E0126 15:35:37.758707 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.822700 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.822760 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.822782 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.822803 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.822818 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:37Z","lastTransitionTime":"2026-01-26T15:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.926230 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.926302 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.926324 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.926353 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:37 crc kubenswrapper[4896]: I0126 15:35:37.926374 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:37Z","lastTransitionTime":"2026-01-26T15:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.029798 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.029868 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.029886 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.029909 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.029927 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:38Z","lastTransitionTime":"2026-01-26T15:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.132134 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.132176 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.132188 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.132204 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.132216 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:38Z","lastTransitionTime":"2026-01-26T15:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.235654 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.235716 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.235731 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.235765 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.235782 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:38Z","lastTransitionTime":"2026-01-26T15:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.337763 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.337815 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.337826 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.337844 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.337856 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:38Z","lastTransitionTime":"2026-01-26T15:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.440296 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.440340 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.440352 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.440371 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.440382 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:38Z","lastTransitionTime":"2026-01-26T15:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.542958 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.543002 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.543013 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.543029 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.543040 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:38Z","lastTransitionTime":"2026-01-26T15:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.645967 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.646011 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.646025 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.646042 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.646053 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:38Z","lastTransitionTime":"2026-01-26T15:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.723905 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 23:15:39.660620451 +0000 UTC Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.748995 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.749049 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.749060 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.749078 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.749090 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:38Z","lastTransitionTime":"2026-01-26T15:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.758378 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.758381 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.758465 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:35:38 crc kubenswrapper[4896]: E0126 15:35:38.758572 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:38 crc kubenswrapper[4896]: E0126 15:35:38.758677 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:38 crc kubenswrapper[4896]: E0126 15:35:38.758785 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.851605 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.851657 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.851668 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.851687 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.851700 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:38Z","lastTransitionTime":"2026-01-26T15:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.954312 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.954346 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.954355 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.954370 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:38 crc kubenswrapper[4896]: I0126 15:35:38.954382 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:38Z","lastTransitionTime":"2026-01-26T15:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.057776 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.057857 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.057881 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.057910 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.057930 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:39Z","lastTransitionTime":"2026-01-26T15:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.160640 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.160686 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.160707 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.160724 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.160735 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:39Z","lastTransitionTime":"2026-01-26T15:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.263045 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.263080 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.263089 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.263104 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.263115 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:39Z","lastTransitionTime":"2026-01-26T15:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.365147 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.365192 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.365204 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.365223 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.365233 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:39Z","lastTransitionTime":"2026-01-26T15:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.468147 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.468215 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.468230 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.468253 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.468266 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:39Z","lastTransitionTime":"2026-01-26T15:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.572274 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.572316 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.572328 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.572344 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.572354 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:39Z","lastTransitionTime":"2026-01-26T15:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.674901 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.674940 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.674948 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.674962 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.674976 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:39Z","lastTransitionTime":"2026-01-26T15:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.724653 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 09:05:12.751130393 +0000 UTC Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.758283 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:39 crc kubenswrapper[4896]: E0126 15:35:39.758430 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.777292 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.777348 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.777363 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.777384 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.777399 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:39Z","lastTransitionTime":"2026-01-26T15:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.879899 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.879933 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.879942 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.879957 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.879967 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:39Z","lastTransitionTime":"2026-01-26T15:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.982650 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.982714 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.982731 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.982754 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:39 crc kubenswrapper[4896]: I0126 15:35:39.982771 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:39Z","lastTransitionTime":"2026-01-26T15:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.084555 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.084600 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.084609 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.084623 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.084632 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:40Z","lastTransitionTime":"2026-01-26T15:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.187150 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.187222 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.187240 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.187260 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.187275 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:40Z","lastTransitionTime":"2026-01-26T15:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.290369 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.290407 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.290425 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.290444 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.290456 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:40Z","lastTransitionTime":"2026-01-26T15:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.393388 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.393436 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.393456 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.393478 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.393498 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:40Z","lastTransitionTime":"2026-01-26T15:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.496187 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.496219 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.496229 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.496245 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.496256 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:40Z","lastTransitionTime":"2026-01-26T15:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.599565 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.599638 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.599650 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.599666 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.599678 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:40Z","lastTransitionTime":"2026-01-26T15:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.702442 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.702497 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.702510 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.702528 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.702539 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:40Z","lastTransitionTime":"2026-01-26T15:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.725204 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 08:08:37.143108401 +0000 UTC Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.758806 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.758841 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.758912 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:40 crc kubenswrapper[4896]: E0126 15:35:40.758972 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:35:40 crc kubenswrapper[4896]: E0126 15:35:40.759056 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:40 crc kubenswrapper[4896]: E0126 15:35:40.759407 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.759960 4896 scope.go:117] "RemoveContainer" containerID="aaa886cbf9a7cfded4ea830a53ecfacb4587bab5647c878d5ba8047b69c9fbe9" Jan 26 15:35:40 crc kubenswrapper[4896]: E0126 15:35:40.760138 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-gdszn_openshift-ovn-kubernetes(e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.774600 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6scjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9045598fc712efd551a21223c28ddfb8e1eec08598019d90140992164
802d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6scjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.790139 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nd8b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c4023ce-9d03-491a-bbc6-d5afffb92f34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a96dbd35e9bd29cc89ad9d1102bb1649492ceb1f340573ebb153accc49bb967b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5d897bdfadb589d224a8832ee5e76309be4d623122e94eb88a240bfd2362bed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"message\\\":\\\"2026-01-26T15:34:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_84cb1828-70be-4ccd-b3ac-1713179b6e32\\\\n2026-01-26T15:34:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_84cb1828-70be-4ccd-b3ac-1713179b6e32 to /host/opt/cni/bin/\\\\n2026-01-26T15:34:28Z [verbose] multus-daemon started\\\\n2026-01-26T15:34:28Z [verbose] 
Readiness Indicator file check\\\\n2026-01-26T15:35:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nv4gq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nd8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.803011 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a110465b-91d9-4e70-ac2f-7e804c58b445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07566f6d2a52a9395b03e0b759a1caccf5eaff6a1c17488e536ccbb81abdf683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ef4ea94d232dd91ce5b11d7f70742155c2978217895faecdbd060d4eac503b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe26f12afeaf65aeadfc14051c732f0b408333e053d56510d2a5a64f4823bde1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.805030 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.805377 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.805414 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.805504 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.805542 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:40Z","lastTransitionTime":"2026-01-26T15:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.816988 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"006f90bb-2dfb-429d-922b-6c166bcd784c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1df0c37f97b6286fb28426cd8256db5ba87b97337962fa952ba3a5e8c9bf399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://4c061d1bfd5c72108933d5679a19f46b22ac255228f478eb91087c8dacf666cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b2b5ee1925b1757a952b907f462ef1a57ad4eb8d5c982cec773d9441734f14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bda36b1477e471a7ccf49ca2d8d6e8ae8b1248b9ca0c9ebfadeddfc8361ce99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bda36b1477e471a7ccf49ca2d8d6e8ae8b1248b9ca0c9ebfadeddfc8361ce99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.830160 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.845647 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.860742 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.888645 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.888712 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.888725 4896 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.888744 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.888756 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:40Z","lastTransitionTime":"2026-01-26T15:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.889887 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c74d5aec-7734-46a5-b505-ced276677e9d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bb4b26156eebe104fa7d48c28ed4a08235b86559e08a00f0ad0309dbe50b33c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03cf245537deb1adf1d9428c2540f5d05fd11fc83b7bbb7e3d589ccbe72a403e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ad7e201f7fe5178266b227227936ded00706faac9aed3a761171442dde253a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a8a120178ee8138e55bda65d5961982be475b4869c84dd87ccbbb6323ce323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25fbe9a2849497daf60146732051caa58ee0bea6d8f1cc7c9997290c5e382c9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://532e14883a0e6336a0dec0763ce9a7346d0b1e164cf66eb49d5d6213ca6f7458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://532e14883a0e6336a0dec0763ce9a7346d0b1e164cf66eb49d5d6213ca6f7458\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a529aaa29ae21bdab2df567a1f2bff5e2e8273d5aa9c642907c999dcb077b1d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a529aaa29ae21bdab2df567a1f2bff5e2e8273d5aa9c642907c999dcb077b1d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://42fee
f0144651000175d410cbfa359bf193c633df74f174391b207e6f594ea9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42feef0144651000175d410cbfa359bf193c633df74f174391b207e6f594ea9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:40 crc kubenswrapper[4896]: E0126 15:35:40.918099 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"adc9c92c-63cf-439c-8587-8eafa1c0384d\\\",\\\"systemUUID\\\":\\\"6ce3bfcf-cf26-46a6-add0-2b999cc5fad1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.925923 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.926285 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.926523 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.926683 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.926768 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:40Z","lastTransitionTime":"2026-01-26T15:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.928087 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888e118ba95f9e18734df91b182870684554ae1e715e117eb3c12d2229a919ad\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.945399 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89be9b4e464bc55d82f3a1ad5911e48bafd6841c1919cb6c81a1a5758f43e8e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T15:35:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:40 crc kubenswrapper[4896]: E0126 15:35:40.946267 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"adc9c92c-63cf-439c-8587-8eafa1c0384d\\\",\\\"systemUUID\\\":\\\"6ce3bfcf-cf26-46a6-add0-2b999cc5fad1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.949719 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.949769 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.949781 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.949798 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.949810 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:40Z","lastTransitionTime":"2026-01-26T15:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.958088 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0eae0e2b-9d04-4999-b78c-c70aeee09235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28317b792a293f783a15979c5a9d6acd520f15b8796087a49b0ed98f69a8921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fed1d8bacfa3bfc8b5c910ea870d72978016ab308a31c95d7f0e6d92321c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nrqhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:40 crc kubenswrapper[4896]: E0126 15:35:40.962648 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"adc9c92c-63cf-439c-8587-8eafa1c0384d\\\",\\\"systemUUID\\\":\\\"6ce3bfcf-cf26-46a6-add0-2b999cc5fad1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.965427 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.965464 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.965472 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.965489 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.965498 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:40Z","lastTransitionTime":"2026-01-26T15:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:40 crc kubenswrapper[4896]: E0126 15:35:40.977145 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"adc9c92c-63cf-439c-8587-8eafa1c0384d\\\",\\\"systemUUID\\\":\\\"6ce3bfcf-cf26-46a6-add0-2b999cc5fad1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.979509 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406b020065f8bf0ba4a4cccd4acff46627b58f12033ca230665dbbf3a2a1e195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e5f096fb36bb92606a247123774c6155ae2811324579470faf1c04456da53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7bb5d0fd3d779d1861fdd69f46697e53173c508525fb96bb7c8825505e05e1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feca97cda454cd70acfad46a99dd5696618f8d1f1e3d887a0c32ae9b6a475f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a326550b388ea7e5eea65a62c945fe87ba4ee09b82f0ca590226d51db74a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957437952e418fe12314db00c66884b604eaf77dbee831de77ee2a4e085c803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa886cbf9a7cfded4ea830a53ecfacb4587bab5647c878d5ba8047b69c9fbe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaa886cbf9a7cfded4ea830a53ecfacb4587bab5647c878d5ba8047b69c9fbe9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:35:26Z\\\",\\\"message\\\":\\\"ailed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:25Z is after 2025-08-24T17:21:41Z]\\\\nI0126 15:35:25.441568 6911 services_controller.go:434] Service 
openshift-marketplace/marketplace-operator-metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{marketplace-operator-metrics openshift-marketplace ee1b3a20-644f-4c69-a038-1d53fcace871 4537 0 2025-02-23 05:12:24 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[name:marketplace-operator] map[capability.openshift.io/name:marketplace include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:marketplace-operator-metrics service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00767e4b7 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:8383,TargetPort:{0 8383 },NodePort:0,AppProtocol:nil,},ServicePort{\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:35:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gdszn_openshift-ovn-kubernetes(e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3b4d4d136ea02114fd816ba32cc0a4d38c1b2d8df7968e426c038ae37dbd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471
806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gdszn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.980913 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.980934 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.980943 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.980954 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.980963 4896 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:40Z","lastTransitionTime":"2026-01-26T15:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.989867 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a1f66b-b867-40f1-9a95-85bfc3a9af0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8a1554a2edf53cb6ac26eb535f0ecf2557dfe251f6517f7aa8661283e6ad61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1571b82bdb2146ea567601eba84a682772c095b380beb40b1692fc4aa54ba492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1571b82bdb2146ea567601eba84a682772c095b380beb40b1692fc4aa54ba492\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:40 crc kubenswrapper[4896]: E0126 15:35:40.992984 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"adc9c92c-63cf-439c-8587-8eafa1c0384d\\\",\\\"systemUUID\\\":\\\"6ce3bfcf-cf26-46a6-add0-2b999cc5fad1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:40Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:40 crc kubenswrapper[4896]: E0126 15:35:40.993107 4896 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.994379 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.994514 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.994629 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.994727 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:40 crc kubenswrapper[4896]: I0126 15:35:40.994820 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:40Z","lastTransitionTime":"2026-01-26T15:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.004480 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17760139-6c26-4a89-a7ab-4e6a3d2cc516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cfe145d703f9d67a08ff728a5a585033b34d14d145b2bd70f79c02dc0950761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hw55b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:41Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.015649 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzzr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76f90dd1-9706-47ef-b243-e24f185d0340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://490b3a9d324e3b07e4dd8f017414406c4a86d87092c9b931813d8b3c8f4586ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr2bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzzr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:41Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.027340 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9vpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e16b4fdfc2afd884bb10a8365b77cd655a1838988e4d1efd3db6582375a8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hklx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7acb4be352fbed65c91662337b76d78a59865
1bf312d91b40b1b40072ebeb926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hklx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w9vpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:41Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.044398 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42ec8793-6e16-4368-84e3-9c3007499c92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:0
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:41Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.059810 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14000ba2479d1ec77f9f59b70d6d25df8bceef937950e7402df8a276502e60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:41Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.073561 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-klrrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbeb890e-90af-4b15-a106-27b03465209f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-klrrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:41Z is after 
2025-08-24T17:21:41Z" Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.097656 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.097706 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.097718 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.097735 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.097746 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:41Z","lastTransitionTime":"2026-01-26T15:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.201042 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.201089 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.201100 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.201120 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.201132 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:41Z","lastTransitionTime":"2026-01-26T15:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.304222 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.304278 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.304295 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.304320 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.304337 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:41Z","lastTransitionTime":"2026-01-26T15:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.407464 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.407532 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.407541 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.407610 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.407629 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:41Z","lastTransitionTime":"2026-01-26T15:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.511358 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.511439 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.511457 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.511485 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.511505 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:41Z","lastTransitionTime":"2026-01-26T15:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.614891 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.615185 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.615263 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.615350 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.615458 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:41Z","lastTransitionTime":"2026-01-26T15:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.719004 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.719099 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.719119 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.719142 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.719161 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:41Z","lastTransitionTime":"2026-01-26T15:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.726182 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 23:47:35.341461287 +0000 UTC Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.758738 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:41 crc kubenswrapper[4896]: E0126 15:35:41.758958 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.822129 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.822188 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.822207 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.822232 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.822250 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:41Z","lastTransitionTime":"2026-01-26T15:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.925291 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.925368 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.925390 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.925423 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:41 crc kubenswrapper[4896]: I0126 15:35:41.925446 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:41Z","lastTransitionTime":"2026-01-26T15:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.028250 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.028302 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.028313 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.028332 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.028343 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:42Z","lastTransitionTime":"2026-01-26T15:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.131106 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.131177 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.131190 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.131206 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.131217 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:42Z","lastTransitionTime":"2026-01-26T15:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.233873 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.233922 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.233934 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.233946 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.233955 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:42Z","lastTransitionTime":"2026-01-26T15:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.336284 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.336335 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.336348 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.336366 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.336378 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:42Z","lastTransitionTime":"2026-01-26T15:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.439614 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.439667 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.439684 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.439707 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.439725 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:42Z","lastTransitionTime":"2026-01-26T15:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.542792 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.542836 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.542847 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.542864 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.542875 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:42Z","lastTransitionTime":"2026-01-26T15:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.645670 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.645707 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.645718 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.645732 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.645742 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:42Z","lastTransitionTime":"2026-01-26T15:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.727362 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 06:03:04.67990133 +0000 UTC Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.749023 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.749065 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.749074 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.749089 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.749097 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:42Z","lastTransitionTime":"2026-01-26T15:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.758459 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.758531 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:35:42 crc kubenswrapper[4896]: E0126 15:35:42.758672 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.758687 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:42 crc kubenswrapper[4896]: E0126 15:35:42.758848 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:35:42 crc kubenswrapper[4896]: E0126 15:35:42.758927 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.777901 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406b020065f8bf0ba4a4cccd4acff46627b58f12033ca230665dbbf3a2a1e195\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e5f096fb36bb92606a247123774c6155ae2811324579470faf1c04456da53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7bb5d0fd3d779d1861fdd69f46697e53173c508525fb96bb7c8825505e05e1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feca97cda454cd70acfad46a99dd5696618f8d1f1e3d887a0c32ae9b6a475f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a326550b388ea7e5eea65a62c945fe87ba4ee09b82f0ca590226d51db74a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957437952e418fe12314db00c66884b604eaf77dbee831de77ee2a4e085c803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa886cbf9a7cfded4ea830a53ecfacb4587bab5647c878d5ba8047b69c9fbe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaa886cbf9a7cfded4ea830a53ecfacb4587bab5647c878d5ba8047b69c9fbe9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:35:26Z\\\",\\\"message\\\":\\\"ailed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:25Z is after 2025-08-24T17:21:41Z]\\\\nI0126 15:35:25.441568 6911 services_controller.go:434] Service 
openshift-marketplace/marketplace-operator-metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{marketplace-operator-metrics openshift-marketplace ee1b3a20-644f-4c69-a038-1d53fcace871 4537 0 2025-02-23 05:12:24 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[name:marketplace-operator] map[capability.openshift.io/name:marketplace include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:marketplace-operator-metrics service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00767e4b7 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:8383,TargetPort:{0 8383 },NodePort:0,AppProtocol:nil,},ServicePort{\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:35:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gdszn_openshift-ovn-kubernetes(e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d3b4d4d136ea02114fd816ba32cc0a4d38c1b2d8df7968e426c038ae37dbd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb07b1cf9f2952471
806ce850eb52887d8e91fd418efb8de8aad1f617e753a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5jvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gdszn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.807744 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c74d5aec-7734-46a5-b505-ced276677e9d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bb4b26156eebe104fa7d48c28ed4a08235b86559e08a00f0ad0309dbe50b33c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03cf245537deb1adf1d9428c2540f5d05fd11fc83b7bbb7e3d589ccbe72a403e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ad7e201f7fe5178266b227227936ded00706faac9aed3a761171442dde253a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a8a120178ee8138e55bda65d5961982be475b4869c84dd87ccbbb6323ce323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25fbe9a2849497daf60146732051caa58ee0bea6d8f1cc7c9997290c5e382c9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://532e14883a0e6336a0dec0763ce9a7346d0b1e164cf66eb49d5d6213ca6f7458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://532e14883a0e6336a0dec0763ce9a7346d0b1e164cf66eb49d5d6213ca6f7458\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a529aaa29ae21bdab2df567a1f2bff5e2e8273d5aa9c642907c999dcb077b1d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a529aaa29ae21bdab2df567a1f2bff5e2e8273d5aa9c642907c999dcb077b1d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://42feef0144651000175d410cbfa359bf193c633df74f174391b207e6f594ea9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42feef0144651000175d410cbfa359bf193c633df74f174391b207e6f594ea9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.824681 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://888e118ba95f9e18734df91b182870684554ae1e715e117eb3c12d2229a919ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.838176 4896 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89be9b4e464bc55d82f3a1ad5911e48bafd6841c1919cb6c81a1a5758f43e8e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.849917 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0eae0e2b-9d04-4999-b78c-c70aeee09235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28317b792a293f783a15979c5a9d6acd520f15b8796087a49b0ed98f69a8921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"sta
rtedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fed1d8bacfa3bfc8b5c910ea870d72978016ab308a31c95d7f0e6d92321c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlc2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nrqhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 
15:35:42.852261 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.852287 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.852297 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.852312 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.852323 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:42Z","lastTransitionTime":"2026-01-26T15:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.860603 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a1f66b-b867-40f1-9a95-85bfc3a9af0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8a1554a2edf53cb6ac26eb535f0ecf2557dfe251f6517f7aa8661283e6ad61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1571b82bdb2146ea567601eba84a682772c095b380beb40b1692fc4aa54ba492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1571b82bdb2146ea567601eba84a682772c095b380beb40b1692fc4aa54ba492\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.874501 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hw55b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17760139-6c26-4a89-a7ab-4e6a3d2cc516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cfe145d703f9d67a08ff728a5a585033b34d14d145b2bd70f79c02dc0950761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17e0d1135805dfda383be0dfda8e156ca0d1f49bb6d803dc6638a83f3eb5f22b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe31fd7b3d0feb7bc8b68fc4a534e516923d3e188c2a43c21ba8a1e8b8c153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a06c39f6f8348c607406d101afad346b3f73e321e500461c05ed84eb120bb8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d57c
97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d57c97d6f83e825996c6104fd0c2de6a2689e86d7cb65cc4c21318e38e0def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88c6a3a142bce6be6b809c3a4ae9a40f62fe0ef04eccd467b8730a699e04d11c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:30Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c680b2dfea72c258958e5d94ca7c2b4fac0b1a73fc3364e1cfdb31aa279f99e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9qnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hw55b\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.884946 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bzzr5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76f90dd1-9706-47ef-b243-e24f185d0340\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://490b3a9d324e3b07e4dd8f017414406c4a86d87092c9b931813d8b3c8f4586ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-26T15:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hr2bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bzzr5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.894384 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9vpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2fb40b0-5e6b-4d5d-b001-d5fde7ccf7f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66e16b4fdfc2afd884bb10a8365b77cd655a1838988e4d1efd3db6582375a8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hklx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7acb4be352fbed65c91662337b76d78a59865
1bf312d91b40b1b40072ebeb926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hklx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w9vpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.906319 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42ec8793-6e16-4368-84e3-9c3007499c92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:0
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.918504 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14000ba2479d1ec77f9f59b70d6d25df8bceef937950e7402df8a276502e60cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.927549 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-klrrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbeb890e-90af-4b15-a106-27b03465209f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:39Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-klrrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:42Z is after 
2025-08-24T17:21:41Z" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.937853 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.947123 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6scjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbe83ed-0fcd-48ca-b184-7c0fb7fda819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9045598fc712efd551a21223c28ddfb8e1eec08598019d90140992164802d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hmth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6scjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.955740 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.955790 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.955805 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.955824 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.955835 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:42Z","lastTransitionTime":"2026-01-26T15:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.958937 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nd8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c4023ce-9d03-491a-bbc6-d5afffb92f34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a96dbd35e9bd29cc89ad9d1102bb1649492ceb1f340573ebb153accc49bb967b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5d897bdfadb589d224a8832ee5e76309be4d623122e94eb88a240bfd2362bed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T15:35:13Z\\\",\\\"message\\\":\\\"2026-01-26T15:34:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_84cb1828-70be-4ccd-b3ac-1713179b6e32\\\\n2026-01-26T15:34:28+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_84cb1828-70be-4ccd-b3ac-1713179b6e32 to /host/opt/cni/bin/\\\\n2026-01-26T15:34:28Z [verbose] multus-daemon started\\\\n2026-01-26T15:34:28Z [verbose] Readiness Indicator file check\\\\n2026-01-26T15:35:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nv4gq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nd8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.976108 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a110465b-91d9-4e70-ac2f-7e804c58b445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07566f6d2a52a9395b03e0b759a1caccf5eaff6a1c17488e536ccbb81abdf683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ef4ea94d232dd91ce5b11d7f70742155c2978217895faecdbd060d4eac503b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe26f12afeaf65aeadfc14051c732f0b408333e053d56510d2a5a64f4823bde1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:42 crc kubenswrapper[4896]: I0126 15:35:42.989868 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"006f90bb-2dfb-429d-922b-6c166bcd784c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1df0c37f97b6286fb28426cd8256db5ba87b97337962fa952ba3a5e8c9bf399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c061d1bfd5c72108933d5679a19f46b22ac255228f478eb91087c8dacf666cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b2b5ee1925b1757a952b907f462ef1a57ad4eb8d5c982cec773d9441734f14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T15:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bda36b1477e471a7ccf49ca2d8d6e8ae8b1248b9ca0c9ebfadeddfc8361ce99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1bda36b1477e471a7ccf49ca2d8d6e8ae8b1248b9ca0c9ebfadeddfc8361ce99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T15:34:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T15:34:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T15:34:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:42Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.003994 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:43Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.020066 4896 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T15:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T15:35:43Z is after 2025-08-24T17:21:41Z" Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.058110 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.058146 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.058157 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.058174 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.058186 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:43Z","lastTransitionTime":"2026-01-26T15:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.160874 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.160925 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.160937 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.160955 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.160966 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:43Z","lastTransitionTime":"2026-01-26T15:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.264054 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.264114 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.264136 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.264165 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.264187 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:43Z","lastTransitionTime":"2026-01-26T15:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.367026 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.367089 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.367104 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.367126 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.367142 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:43Z","lastTransitionTime":"2026-01-26T15:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.420661 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fbeb890e-90af-4b15-a106-27b03465209f-metrics-certs\") pod \"network-metrics-daemon-klrrb\" (UID: \"fbeb890e-90af-4b15-a106-27b03465209f\") " pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:35:43 crc kubenswrapper[4896]: E0126 15:35:43.421212 4896 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 15:35:43 crc kubenswrapper[4896]: E0126 15:35:43.421341 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fbeb890e-90af-4b15-a106-27b03465209f-metrics-certs podName:fbeb890e-90af-4b15-a106-27b03465209f nodeName:}" failed. No retries permitted until 2026-01-26 15:36:47.42131251 +0000 UTC m=+165.203192923 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fbeb890e-90af-4b15-a106-27b03465209f-metrics-certs") pod "network-metrics-daemon-klrrb" (UID: "fbeb890e-90af-4b15-a106-27b03465209f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.469541 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.469611 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.469625 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.469642 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.469655 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:43Z","lastTransitionTime":"2026-01-26T15:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.572150 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.572199 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.572212 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.572230 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.572242 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:43Z","lastTransitionTime":"2026-01-26T15:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.674813 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.674874 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.674885 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.674899 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.674910 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:43Z","lastTransitionTime":"2026-01-26T15:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.727495 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 04:00:42.40056297 +0000 UTC Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.758912 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:43 crc kubenswrapper[4896]: E0126 15:35:43.759103 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.777566 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.777642 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.777653 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.777667 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.777676 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:43Z","lastTransitionTime":"2026-01-26T15:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.881022 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.881058 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.881070 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.881087 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.881100 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:43Z","lastTransitionTime":"2026-01-26T15:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.982954 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.982989 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.983000 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.983015 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:43 crc kubenswrapper[4896]: I0126 15:35:43.983028 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:43Z","lastTransitionTime":"2026-01-26T15:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.084968 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.085013 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.085025 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.085039 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.085050 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:44Z","lastTransitionTime":"2026-01-26T15:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.188072 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.188155 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.188179 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.188209 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.188233 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:44Z","lastTransitionTime":"2026-01-26T15:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.291757 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.291807 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.291816 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.291832 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.291846 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:44Z","lastTransitionTime":"2026-01-26T15:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.394511 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.394564 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.394599 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.394621 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.394633 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:44Z","lastTransitionTime":"2026-01-26T15:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.497295 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.497351 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.497368 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.497391 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.497407 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:44Z","lastTransitionTime":"2026-01-26T15:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.600837 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.600909 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.600926 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.600953 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.600975 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:44Z","lastTransitionTime":"2026-01-26T15:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.703366 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.703433 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.703450 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.703471 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.703488 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:44Z","lastTransitionTime":"2026-01-26T15:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.727899 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 12:25:37.813294472 +0000 UTC Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.758628 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.758724 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:35:44 crc kubenswrapper[4896]: E0126 15:35:44.758795 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.758834 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:44 crc kubenswrapper[4896]: E0126 15:35:44.759038 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:35:44 crc kubenswrapper[4896]: E0126 15:35:44.759111 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.806750 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.806817 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.806834 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.806858 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.806875 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:44Z","lastTransitionTime":"2026-01-26T15:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.909428 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.909494 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.909677 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.909721 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:44 crc kubenswrapper[4896]: I0126 15:35:44.909737 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:44Z","lastTransitionTime":"2026-01-26T15:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.012539 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.012660 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.012688 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.012723 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.012747 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:45Z","lastTransitionTime":"2026-01-26T15:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.115928 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.116021 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.116040 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.116063 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.116083 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:45Z","lastTransitionTime":"2026-01-26T15:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.218437 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.218506 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.218528 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.218558 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.218620 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:45Z","lastTransitionTime":"2026-01-26T15:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.321367 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.321451 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.321475 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.321506 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.321541 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:45Z","lastTransitionTime":"2026-01-26T15:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.424638 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.424720 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.424741 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.424770 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.424798 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:45Z","lastTransitionTime":"2026-01-26T15:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.527513 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.527558 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.527569 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.527606 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.527617 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:45Z","lastTransitionTime":"2026-01-26T15:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.630038 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.630088 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.630098 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.630139 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.630151 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:45Z","lastTransitionTime":"2026-01-26T15:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.728227 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 16:26:44.68578939 +0000 UTC Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.732408 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.732453 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.732470 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.732491 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.732506 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:45Z","lastTransitionTime":"2026-01-26T15:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.759400 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:45 crc kubenswrapper[4896]: E0126 15:35:45.759829 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.834986 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.835031 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.835042 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.835056 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.835066 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:45Z","lastTransitionTime":"2026-01-26T15:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.937859 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.937907 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.937920 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.937938 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:45 crc kubenswrapper[4896]: I0126 15:35:45.937950 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:45Z","lastTransitionTime":"2026-01-26T15:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.040155 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.040199 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.040211 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.040227 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.040238 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:46Z","lastTransitionTime":"2026-01-26T15:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.142779 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.142839 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.142850 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.142865 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.142880 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:46Z","lastTransitionTime":"2026-01-26T15:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.245210 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.245251 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.245261 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.245277 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.245288 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:46Z","lastTransitionTime":"2026-01-26T15:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.347890 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.347955 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.347976 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.348033 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.348049 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:46Z","lastTransitionTime":"2026-01-26T15:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.450665 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.450721 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.450733 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.450750 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.450763 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:46Z","lastTransitionTime":"2026-01-26T15:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.555886 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.555932 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.555946 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.555965 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.555981 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:46Z","lastTransitionTime":"2026-01-26T15:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.658927 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.658973 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.658983 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.658996 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.659005 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:46Z","lastTransitionTime":"2026-01-26T15:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.729330 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 14:06:34.217944836 +0000 UTC
Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.759208 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb"
Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.759335 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.759358 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 15:35:46 crc kubenswrapper[4896]: E0126 15:35:46.759518 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f"
Jan 26 15:35:46 crc kubenswrapper[4896]: E0126 15:35:46.759789 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 15:35:46 crc kubenswrapper[4896]: E0126 15:35:46.759892 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.760877 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.760916 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.760928 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.760944 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.760960 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:46Z","lastTransitionTime":"2026-01-26T15:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.863562 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.863634 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.863650 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.863671 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.863686 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:46Z","lastTransitionTime":"2026-01-26T15:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.966732 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.966804 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.966825 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.966849 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:46 crc kubenswrapper[4896]: I0126 15:35:46.966867 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:46Z","lastTransitionTime":"2026-01-26T15:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.069456 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.069499 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.069512 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.069529 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.069542 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:47Z","lastTransitionTime":"2026-01-26T15:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.173459 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.173520 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.173536 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.173560 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.173604 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:47Z","lastTransitionTime":"2026-01-26T15:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.277050 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.277096 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.277106 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.277125 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.277137 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:47Z","lastTransitionTime":"2026-01-26T15:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.380095 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.380148 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.380160 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.380182 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.380196 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:47Z","lastTransitionTime":"2026-01-26T15:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.483308 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.483375 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.483393 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.483419 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.483441 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:47Z","lastTransitionTime":"2026-01-26T15:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.586352 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.586405 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.586420 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.586438 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.586453 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:47Z","lastTransitionTime":"2026-01-26T15:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.689744 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.689814 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.689823 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.689840 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.689850 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:47Z","lastTransitionTime":"2026-01-26T15:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.730205 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 01:23:39.43399108 +0000 UTC
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.758691 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 15:35:47 crc kubenswrapper[4896]: E0126 15:35:47.758896 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.791768 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.791809 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.791826 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.791850 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.791867 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:47Z","lastTransitionTime":"2026-01-26T15:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.896008 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.896053 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.896070 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.896090 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.896103 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:47Z","lastTransitionTime":"2026-01-26T15:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.999446 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.999495 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:47 crc kubenswrapper[4896]: I0126 15:35:47.999510 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:47.999530 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:47.999543 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:47Z","lastTransitionTime":"2026-01-26T15:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.101853 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.101933 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.101973 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.102006 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.102031 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:48Z","lastTransitionTime":"2026-01-26T15:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.204175 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.204220 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.204239 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.204270 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.204286 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:48Z","lastTransitionTime":"2026-01-26T15:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.306827 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.306920 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.306938 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.306960 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.306980 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:48Z","lastTransitionTime":"2026-01-26T15:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.409484 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.409561 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.409616 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.409646 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.409669 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:48Z","lastTransitionTime":"2026-01-26T15:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.512878 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.512927 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.512938 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.512953 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.512961 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:48Z","lastTransitionTime":"2026-01-26T15:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.615096 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.615144 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.615156 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.615176 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.615190 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:48Z","lastTransitionTime":"2026-01-26T15:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.718351 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.718394 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.718476 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.718527 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.718549 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:48Z","lastTransitionTime":"2026-01-26T15:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.730910 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 10:31:23.56018446 +0000 UTC
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.758601 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.758647 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb"
Jan 26 15:35:48 crc kubenswrapper[4896]: E0126 15:35:48.758753 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.758838 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 15:35:48 crc kubenswrapper[4896]: E0126 15:35:48.758855 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f"
Jan 26 15:35:48 crc kubenswrapper[4896]: E0126 15:35:48.759008 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.821780 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.822384 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.822645 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.822880 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.823095 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:48Z","lastTransitionTime":"2026-01-26T15:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.926982 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.927045 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.927085 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.927116 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:48 crc kubenswrapper[4896]: I0126 15:35:48.927136 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:48Z","lastTransitionTime":"2026-01-26T15:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.030102 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.030485 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.030503 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.030522 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.030535 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:49Z","lastTransitionTime":"2026-01-26T15:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.133941 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.133994 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.134010 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.134034 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.134051 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:49Z","lastTransitionTime":"2026-01-26T15:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.237183 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.237217 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.237227 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.237242 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.237253 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:49Z","lastTransitionTime":"2026-01-26T15:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.339873 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.339915 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.339927 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.339942 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.339952 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:49Z","lastTransitionTime":"2026-01-26T15:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.442728 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.442788 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.442799 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.442825 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.442837 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:49Z","lastTransitionTime":"2026-01-26T15:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.545714 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.545761 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.545770 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.545785 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.545795 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:49Z","lastTransitionTime":"2026-01-26T15:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.648188 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.648239 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.648256 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.648275 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.648287 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:49Z","lastTransitionTime":"2026-01-26T15:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.731640 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 01:48:10.350854065 +0000 UTC Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.752108 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.752179 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.752197 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.752248 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.752268 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:49Z","lastTransitionTime":"2026-01-26T15:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.758353 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:49 crc kubenswrapper[4896]: E0126 15:35:49.758462 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.855349 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.855390 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.855399 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.855412 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.855420 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:49Z","lastTransitionTime":"2026-01-26T15:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.957772 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.957817 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.957827 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.957845 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:49 crc kubenswrapper[4896]: I0126 15:35:49.957857 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:49Z","lastTransitionTime":"2026-01-26T15:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.060708 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.060766 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.060781 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.060804 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.060817 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:50Z","lastTransitionTime":"2026-01-26T15:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.163860 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.163909 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.163922 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.163940 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.163954 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:50Z","lastTransitionTime":"2026-01-26T15:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.266435 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.266494 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.266511 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.266538 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.266556 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:50Z","lastTransitionTime":"2026-01-26T15:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.369461 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.369530 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.369551 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.369619 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.369642 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:50Z","lastTransitionTime":"2026-01-26T15:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.472000 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.472088 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.472110 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.472138 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.472163 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:50Z","lastTransitionTime":"2026-01-26T15:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.575442 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.575481 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.575490 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.575504 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.575513 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:50Z","lastTransitionTime":"2026-01-26T15:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.678222 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.678275 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.678286 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.678300 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.678310 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:50Z","lastTransitionTime":"2026-01-26T15:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.732321 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 06:42:01.338233157 +0000 UTC Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.758888 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.759015 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:50 crc kubenswrapper[4896]: E0126 15:35:50.759169 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:50 crc kubenswrapper[4896]: E0126 15:35:50.759783 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.760005 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:35:50 crc kubenswrapper[4896]: E0126 15:35:50.760114 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.780665 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.780697 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.780706 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.780722 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.780732 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:50Z","lastTransitionTime":"2026-01-26T15:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.884086 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.884142 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.884156 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.884174 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.884186 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:50Z","lastTransitionTime":"2026-01-26T15:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.986846 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.986881 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.986890 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.986907 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:50 crc kubenswrapper[4896]: I0126 15:35:50.986920 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:50Z","lastTransitionTime":"2026-01-26T15:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.089522 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.089555 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.089568 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.089601 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.089612 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:51Z","lastTransitionTime":"2026-01-26T15:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.134890 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.134960 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.134984 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.135017 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.135040 4896 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T15:35:51Z","lastTransitionTime":"2026-01-26T15:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.201443 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-pcdvk"] Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.202066 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pcdvk" Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.206232 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.207989 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.208272 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.208326 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.250699 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=88.250667434 podStartE2EDuration="1m28.250667434s" podCreationTimestamp="2026-01-26 15:34:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:35:51.231097879 +0000 UTC m=+109.012978312" watchObservedRunningTime="2026-01-26 15:35:51.250667434 +0000 UTC m=+109.032547877" Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.251050 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=56.251040903 podStartE2EDuration="56.251040903s" podCreationTimestamp="2026-01-26 15:34:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:35:51.248385737 +0000 UTC m=+109.030266170" watchObservedRunningTime="2026-01-26 
15:35:51.251040903 +0000 UTC m=+109.032921336" Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.303148 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c3fb918e-5ddb-400c-81f5-0339c56980b9-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-pcdvk\" (UID: \"c3fb918e-5ddb-400c-81f5-0339c56980b9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pcdvk" Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.303199 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c3fb918e-5ddb-400c-81f5-0339c56980b9-service-ca\") pod \"cluster-version-operator-5c965bbfc6-pcdvk\" (UID: \"c3fb918e-5ddb-400c-81f5-0339c56980b9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pcdvk" Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.303225 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c3fb918e-5ddb-400c-81f5-0339c56980b9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-pcdvk\" (UID: \"c3fb918e-5ddb-400c-81f5-0339c56980b9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pcdvk" Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.303252 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3fb918e-5ddb-400c-81f5-0339c56980b9-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-pcdvk\" (UID: \"c3fb918e-5ddb-400c-81f5-0339c56980b9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pcdvk" Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.303292 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3fb918e-5ddb-400c-81f5-0339c56980b9-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-pcdvk\" (UID: \"c3fb918e-5ddb-400c-81f5-0339c56980b9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pcdvk" Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.328159 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-6scjz" podStartSLOduration=87.328138163 podStartE2EDuration="1m27.328138163s" podCreationTimestamp="2026-01-26 15:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:35:51.313622184 +0000 UTC m=+109.095502577" watchObservedRunningTime="2026-01-26 15:35:51.328138163 +0000 UTC m=+109.110018556" Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.336828 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-9nd8b" podStartSLOduration=87.336807978 podStartE2EDuration="1m27.336807978s" podCreationTimestamp="2026-01-26 15:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:35:51.327872447 +0000 UTC m=+109.109752840" watchObservedRunningTime="2026-01-26 15:35:51.336807978 +0000 UTC m=+109.118688391" Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.365679 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=21.365661563 podStartE2EDuration="21.365661563s" podCreationTimestamp="2026-01-26 15:35:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:35:51.364792051 +0000 UTC m=+109.146672474" watchObservedRunningTime="2026-01-26 15:35:51.365661563 +0000 UTC m=+109.147541946" Jan 26 15:35:51 
crc kubenswrapper[4896]: I0126 15:35:51.401514 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podStartSLOduration=87.401498031 podStartE2EDuration="1m27.401498031s" podCreationTimestamp="2026-01-26 15:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:35:51.400516817 +0000 UTC m=+109.182397230" watchObservedRunningTime="2026-01-26 15:35:51.401498031 +0000 UTC m=+109.183378424" Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.403853 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c3fb918e-5ddb-400c-81f5-0339c56980b9-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-pcdvk\" (UID: \"c3fb918e-5ddb-400c-81f5-0339c56980b9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pcdvk" Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.403912 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c3fb918e-5ddb-400c-81f5-0339c56980b9-service-ca\") pod \"cluster-version-operator-5c965bbfc6-pcdvk\" (UID: \"c3fb918e-5ddb-400c-81f5-0339c56980b9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pcdvk" Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.403981 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c3fb918e-5ddb-400c-81f5-0339c56980b9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-pcdvk\" (UID: \"c3fb918e-5ddb-400c-81f5-0339c56980b9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pcdvk" Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.404007 4896 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c3fb918e-5ddb-400c-81f5-0339c56980b9-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-pcdvk\" (UID: \"c3fb918e-5ddb-400c-81f5-0339c56980b9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pcdvk" Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.404037 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3fb918e-5ddb-400c-81f5-0339c56980b9-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-pcdvk\" (UID: \"c3fb918e-5ddb-400c-81f5-0339c56980b9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pcdvk" Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.404075 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c3fb918e-5ddb-400c-81f5-0339c56980b9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-pcdvk\" (UID: \"c3fb918e-5ddb-400c-81f5-0339c56980b9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pcdvk" Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.404162 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3fb918e-5ddb-400c-81f5-0339c56980b9-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-pcdvk\" (UID: \"c3fb918e-5ddb-400c-81f5-0339c56980b9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pcdvk" Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.405026 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c3fb918e-5ddb-400c-81f5-0339c56980b9-service-ca\") pod \"cluster-version-operator-5c965bbfc6-pcdvk\" (UID: \"c3fb918e-5ddb-400c-81f5-0339c56980b9\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pcdvk" Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.417911 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3fb918e-5ddb-400c-81f5-0339c56980b9-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-pcdvk\" (UID: \"c3fb918e-5ddb-400c-81f5-0339c56980b9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pcdvk" Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.427898 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3fb918e-5ddb-400c-81f5-0339c56980b9-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-pcdvk\" (UID: \"c3fb918e-5ddb-400c-81f5-0339c56980b9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pcdvk" Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.461965 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=28.461946769 podStartE2EDuration="28.461946769s" podCreationTimestamp="2026-01-26 15:35:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:35:51.438211361 +0000 UTC m=+109.220091764" watchObservedRunningTime="2026-01-26 15:35:51.461946769 +0000 UTC m=+109.243827162" Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.462146 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-hw55b" podStartSLOduration=87.462140764 podStartE2EDuration="1m27.462140764s" podCreationTimestamp="2026-01-26 15:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:35:51.461438406 +0000 UTC 
m=+109.243318819" watchObservedRunningTime="2026-01-26 15:35:51.462140764 +0000 UTC m=+109.244021157" Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.472193 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-bzzr5" podStartSLOduration=87.472171712 podStartE2EDuration="1m27.472171712s" podCreationTimestamp="2026-01-26 15:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:35:51.471808654 +0000 UTC m=+109.253689057" watchObservedRunningTime="2026-01-26 15:35:51.472171712 +0000 UTC m=+109.254052105" Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.485136 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9vpq" podStartSLOduration=86.485119253 podStartE2EDuration="1m26.485119253s" podCreationTimestamp="2026-01-26 15:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:35:51.484130958 +0000 UTC m=+109.266011361" watchObservedRunningTime="2026-01-26 15:35:51.485119253 +0000 UTC m=+109.266999656" Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.500059 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=89.500039063 podStartE2EDuration="1m29.500039063s" podCreationTimestamp="2026-01-26 15:34:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:35:51.499225742 +0000 UTC m=+109.281106145" watchObservedRunningTime="2026-01-26 15:35:51.500039063 +0000 UTC m=+109.281919456" Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.531668 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pcdvk" Jan 26 15:35:51 crc kubenswrapper[4896]: W0126 15:35:51.545943 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3fb918e_5ddb_400c_81f5_0339c56980b9.slice/crio-5b9e612b0f9d6959f7844aaddd1f79310e7e74f6d8a44855446764ef26296eca WatchSource:0}: Error finding container 5b9e612b0f9d6959f7844aaddd1f79310e7e74f6d8a44855446764ef26296eca: Status 404 returned error can't find the container with id 5b9e612b0f9d6959f7844aaddd1f79310e7e74f6d8a44855446764ef26296eca Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.732987 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 07:05:08.827066773 +0000 UTC Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.733060 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.740462 4896 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 26 15:35:51 crc kubenswrapper[4896]: I0126 15:35:51.758605 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:51 crc kubenswrapper[4896]: E0126 15:35:51.758716 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:52 crc kubenswrapper[4896]: I0126 15:35:52.346549 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pcdvk" event={"ID":"c3fb918e-5ddb-400c-81f5-0339c56980b9","Type":"ContainerStarted","Data":"ca7c52d2aa5f4d4acd4e1578e9321814acf52e2d1eef6251537bbd4a96716af7"} Jan 26 15:35:52 crc kubenswrapper[4896]: I0126 15:35:52.346651 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pcdvk" event={"ID":"c3fb918e-5ddb-400c-81f5-0339c56980b9","Type":"ContainerStarted","Data":"5b9e612b0f9d6959f7844aaddd1f79310e7e74f6d8a44855446764ef26296eca"} Jan 26 15:35:52 crc kubenswrapper[4896]: I0126 15:35:52.362551 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pcdvk" podStartSLOduration=88.362528784 podStartE2EDuration="1m28.362528784s" podCreationTimestamp="2026-01-26 15:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:35:52.361481997 +0000 UTC m=+110.143362430" watchObservedRunningTime="2026-01-26 15:35:52.362528784 +0000 UTC m=+110.144409187" Jan 26 15:35:52 crc kubenswrapper[4896]: I0126 15:35:52.759206 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:52 crc kubenswrapper[4896]: I0126 15:35:52.759174 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:52 crc kubenswrapper[4896]: I0126 15:35:52.759260 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:35:52 crc kubenswrapper[4896]: E0126 15:35:52.760461 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:52 crc kubenswrapper[4896]: E0126 15:35:52.760642 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:52 crc kubenswrapper[4896]: E0126 15:35:52.760795 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:35:53 crc kubenswrapper[4896]: I0126 15:35:53.759328 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:53 crc kubenswrapper[4896]: E0126 15:35:53.759622 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:54 crc kubenswrapper[4896]: I0126 15:35:54.758819 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:54 crc kubenswrapper[4896]: I0126 15:35:54.758866 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:35:54 crc kubenswrapper[4896]: E0126 15:35:54.759040 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:54 crc kubenswrapper[4896]: I0126 15:35:54.759138 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:54 crc kubenswrapper[4896]: E0126 15:35:54.759352 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:35:54 crc kubenswrapper[4896]: E0126 15:35:54.759515 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:55 crc kubenswrapper[4896]: I0126 15:35:55.758719 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:55 crc kubenswrapper[4896]: E0126 15:35:55.759882 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:55 crc kubenswrapper[4896]: I0126 15:35:55.760368 4896 scope.go:117] "RemoveContainer" containerID="aaa886cbf9a7cfded4ea830a53ecfacb4587bab5647c878d5ba8047b69c9fbe9" Jan 26 15:35:55 crc kubenswrapper[4896]: E0126 15:35:55.760649 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-gdszn_openshift-ovn-kubernetes(e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" Jan 26 15:35:56 crc kubenswrapper[4896]: I0126 15:35:56.758590 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:56 crc kubenswrapper[4896]: I0126 15:35:56.758639 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:56 crc kubenswrapper[4896]: E0126 15:35:56.758722 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:56 crc kubenswrapper[4896]: I0126 15:35:56.758775 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:35:56 crc kubenswrapper[4896]: E0126 15:35:56.758928 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:56 crc kubenswrapper[4896]: E0126 15:35:56.759026 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:35:57 crc kubenswrapper[4896]: I0126 15:35:57.758630 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:57 crc kubenswrapper[4896]: E0126 15:35:57.758781 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:35:58 crc kubenswrapper[4896]: I0126 15:35:58.759045 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:35:58 crc kubenswrapper[4896]: I0126 15:35:58.759049 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:35:58 crc kubenswrapper[4896]: E0126 15:35:58.759497 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:35:58 crc kubenswrapper[4896]: E0126 15:35:58.759633 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:35:58 crc kubenswrapper[4896]: I0126 15:35:58.759055 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:35:58 crc kubenswrapper[4896]: E0126 15:35:58.759741 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:35:59 crc kubenswrapper[4896]: I0126 15:35:59.759223 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:35:59 crc kubenswrapper[4896]: E0126 15:35:59.759446 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:36:00 crc kubenswrapper[4896]: I0126 15:36:00.758307 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:36:00 crc kubenswrapper[4896]: I0126 15:36:00.758372 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:36:00 crc kubenswrapper[4896]: I0126 15:36:00.758749 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:36:00 crc kubenswrapper[4896]: E0126 15:36:00.758740 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:36:00 crc kubenswrapper[4896]: E0126 15:36:00.758857 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:36:00 crc kubenswrapper[4896]: E0126 15:36:00.758977 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:36:01 crc kubenswrapper[4896]: I0126 15:36:01.758511 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:36:01 crc kubenswrapper[4896]: E0126 15:36:01.758734 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:36:02 crc kubenswrapper[4896]: I0126 15:36:02.379712 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9nd8b_8c4023ce-9d03-491a-bbc6-d5afffb92f34/kube-multus/1.log" Jan 26 15:36:02 crc kubenswrapper[4896]: I0126 15:36:02.380437 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9nd8b_8c4023ce-9d03-491a-bbc6-d5afffb92f34/kube-multus/0.log" Jan 26 15:36:02 crc kubenswrapper[4896]: I0126 15:36:02.380496 4896 generic.go:334] "Generic (PLEG): container finished" podID="8c4023ce-9d03-491a-bbc6-d5afffb92f34" containerID="a96dbd35e9bd29cc89ad9d1102bb1649492ceb1f340573ebb153accc49bb967b" exitCode=1 Jan 26 15:36:02 crc kubenswrapper[4896]: I0126 15:36:02.380530 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9nd8b" event={"ID":"8c4023ce-9d03-491a-bbc6-d5afffb92f34","Type":"ContainerDied","Data":"a96dbd35e9bd29cc89ad9d1102bb1649492ceb1f340573ebb153accc49bb967b"} Jan 26 15:36:02 crc kubenswrapper[4896]: I0126 15:36:02.380564 4896 scope.go:117] "RemoveContainer" containerID="b5d897bdfadb589d224a8832ee5e76309be4d623122e94eb88a240bfd2362bed" Jan 26 15:36:02 crc kubenswrapper[4896]: I0126 15:36:02.381218 4896 scope.go:117] "RemoveContainer" containerID="a96dbd35e9bd29cc89ad9d1102bb1649492ceb1f340573ebb153accc49bb967b" Jan 26 15:36:02 crc kubenswrapper[4896]: E0126 15:36:02.381520 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-9nd8b_openshift-multus(8c4023ce-9d03-491a-bbc6-d5afffb92f34)\"" pod="openshift-multus/multus-9nd8b" podUID="8c4023ce-9d03-491a-bbc6-d5afffb92f34" Jan 26 15:36:02 crc kubenswrapper[4896]: E0126 15:36:02.631073 4896 kubelet_node_status.go:497] "Node not becoming ready in time after 
startup" Jan 26 15:36:02 crc kubenswrapper[4896]: I0126 15:36:02.759180 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:36:02 crc kubenswrapper[4896]: I0126 15:36:02.759178 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:36:02 crc kubenswrapper[4896]: E0126 15:36:02.761447 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:36:02 crc kubenswrapper[4896]: I0126 15:36:02.761494 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:36:02 crc kubenswrapper[4896]: E0126 15:36:02.761781 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:36:02 crc kubenswrapper[4896]: E0126 15:36:02.761949 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:36:03 crc kubenswrapper[4896]: I0126 15:36:03.386465 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9nd8b_8c4023ce-9d03-491a-bbc6-d5afffb92f34/kube-multus/1.log" Jan 26 15:36:03 crc kubenswrapper[4896]: E0126 15:36:03.540813 4896 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 15:36:03 crc kubenswrapper[4896]: I0126 15:36:03.758831 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:36:03 crc kubenswrapper[4896]: E0126 15:36:03.759098 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:36:04 crc kubenswrapper[4896]: I0126 15:36:04.758692 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:36:04 crc kubenswrapper[4896]: I0126 15:36:04.758787 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:36:04 crc kubenswrapper[4896]: I0126 15:36:04.758791 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:36:04 crc kubenswrapper[4896]: E0126 15:36:04.758895 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:36:04 crc kubenswrapper[4896]: E0126 15:36:04.759058 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:36:04 crc kubenswrapper[4896]: E0126 15:36:04.759163 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:36:05 crc kubenswrapper[4896]: I0126 15:36:05.759138 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:36:05 crc kubenswrapper[4896]: E0126 15:36:05.759334 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:36:06 crc kubenswrapper[4896]: I0126 15:36:06.759155 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:36:06 crc kubenswrapper[4896]: I0126 15:36:06.768062 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:36:06 crc kubenswrapper[4896]: E0126 15:36:06.768247 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:36:06 crc kubenswrapper[4896]: I0126 15:36:06.768351 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:36:06 crc kubenswrapper[4896]: E0126 15:36:06.768960 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:36:06 crc kubenswrapper[4896]: E0126 15:36:06.768767 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:36:07 crc kubenswrapper[4896]: I0126 15:36:07.758995 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:36:07 crc kubenswrapper[4896]: E0126 15:36:07.759571 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:36:07 crc kubenswrapper[4896]: I0126 15:36:07.760053 4896 scope.go:117] "RemoveContainer" containerID="aaa886cbf9a7cfded4ea830a53ecfacb4587bab5647c878d5ba8047b69c9fbe9" Jan 26 15:36:08 crc kubenswrapper[4896]: I0126 15:36:08.409624 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gdszn_e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8/ovnkube-controller/3.log" Jan 26 15:36:08 crc kubenswrapper[4896]: I0126 15:36:08.414508 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" event={"ID":"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8","Type":"ContainerStarted","Data":"e44e909f11df7d386f4426e644f72e40396ab0c1f0135682fa60da8c9dc8468f"} Jan 26 15:36:08 crc kubenswrapper[4896]: I0126 15:36:08.414977 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:36:08 crc kubenswrapper[4896]: E0126 15:36:08.542962 4896 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 15:36:08 crc kubenswrapper[4896]: I0126 15:36:08.758821 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:36:08 crc kubenswrapper[4896]: I0126 15:36:08.758976 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:36:08 crc kubenswrapper[4896]: I0126 15:36:08.759025 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:36:08 crc kubenswrapper[4896]: E0126 15:36:08.759373 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:36:08 crc kubenswrapper[4896]: E0126 15:36:08.759435 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:36:08 crc kubenswrapper[4896]: E0126 15:36:08.759957 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:36:09 crc kubenswrapper[4896]: I0126 15:36:09.758768 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:36:09 crc kubenswrapper[4896]: E0126 15:36:09.758837 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:36:09 crc kubenswrapper[4896]: I0126 15:36:09.759123 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" podStartSLOduration=105.759111009 podStartE2EDuration="1m45.759111009s" podCreationTimestamp="2026-01-26 15:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:08.443431769 +0000 UTC m=+126.225312202" watchObservedRunningTime="2026-01-26 15:36:09.759111009 +0000 UTC m=+127.540991392" Jan 26 15:36:09 crc kubenswrapper[4896]: I0126 15:36:09.759787 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-klrrb"] Jan 26 15:36:09 crc kubenswrapper[4896]: I0126 15:36:09.759843 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:36:09 crc kubenswrapper[4896]: E0126 15:36:09.759910 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:36:10 crc kubenswrapper[4896]: I0126 15:36:10.759142 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:36:10 crc kubenswrapper[4896]: I0126 15:36:10.759233 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:36:10 crc kubenswrapper[4896]: E0126 15:36:10.759287 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:36:10 crc kubenswrapper[4896]: E0126 15:36:10.759400 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:36:11 crc kubenswrapper[4896]: I0126 15:36:11.758466 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:36:11 crc kubenswrapper[4896]: I0126 15:36:11.758466 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:36:11 crc kubenswrapper[4896]: E0126 15:36:11.758662 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:36:11 crc kubenswrapper[4896]: E0126 15:36:11.758804 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:36:12 crc kubenswrapper[4896]: I0126 15:36:12.758494 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:36:12 crc kubenswrapper[4896]: I0126 15:36:12.758572 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:36:12 crc kubenswrapper[4896]: E0126 15:36:12.760009 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:36:12 crc kubenswrapper[4896]: E0126 15:36:12.760235 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:36:13 crc kubenswrapper[4896]: E0126 15:36:13.543625 4896 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 15:36:13 crc kubenswrapper[4896]: I0126 15:36:13.759001 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:36:13 crc kubenswrapper[4896]: I0126 15:36:13.759033 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:36:13 crc kubenswrapper[4896]: E0126 15:36:13.759251 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:36:13 crc kubenswrapper[4896]: E0126 15:36:13.759340 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:36:14 crc kubenswrapper[4896]: I0126 15:36:14.758818 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:36:14 crc kubenswrapper[4896]: I0126 15:36:14.758846 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:36:14 crc kubenswrapper[4896]: E0126 15:36:14.759005 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:36:14 crc kubenswrapper[4896]: E0126 15:36:14.759110 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:36:15 crc kubenswrapper[4896]: I0126 15:36:15.759191 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:36:15 crc kubenswrapper[4896]: I0126 15:36:15.759208 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:36:15 crc kubenswrapper[4896]: E0126 15:36:15.759536 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:36:15 crc kubenswrapper[4896]: E0126 15:36:15.759807 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:36:15 crc kubenswrapper[4896]: I0126 15:36:15.759828 4896 scope.go:117] "RemoveContainer" containerID="a96dbd35e9bd29cc89ad9d1102bb1649492ceb1f340573ebb153accc49bb967b" Jan 26 15:36:16 crc kubenswrapper[4896]: I0126 15:36:16.444655 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9nd8b_8c4023ce-9d03-491a-bbc6-d5afffb92f34/kube-multus/1.log" Jan 26 15:36:16 crc kubenswrapper[4896]: I0126 15:36:16.445047 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9nd8b" event={"ID":"8c4023ce-9d03-491a-bbc6-d5afffb92f34","Type":"ContainerStarted","Data":"a2cf36ac3c72179e799a5212ae24d33ce99cd4f0f8a6e255eabc6bb2e8182ab6"} Jan 26 15:36:16 crc kubenswrapper[4896]: I0126 15:36:16.759237 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:36:16 crc kubenswrapper[4896]: I0126 15:36:16.759335 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:36:16 crc kubenswrapper[4896]: E0126 15:36:16.759387 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:36:16 crc kubenswrapper[4896]: E0126 15:36:16.759484 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:36:17 crc kubenswrapper[4896]: I0126 15:36:17.758694 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:36:17 crc kubenswrapper[4896]: I0126 15:36:17.758747 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:36:17 crc kubenswrapper[4896]: E0126 15:36:17.758813 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:36:17 crc kubenswrapper[4896]: E0126 15:36:17.758902 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:36:18 crc kubenswrapper[4896]: E0126 15:36:18.545448 4896 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 15:36:18 crc kubenswrapper[4896]: I0126 15:36:18.759153 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:36:18 crc kubenswrapper[4896]: I0126 15:36:18.759215 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:36:18 crc kubenswrapper[4896]: E0126 15:36:18.759311 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:36:18 crc kubenswrapper[4896]: E0126 15:36:18.759424 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:36:19 crc kubenswrapper[4896]: I0126 15:36:19.758790 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:36:19 crc kubenswrapper[4896]: E0126 15:36:19.758969 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:36:19 crc kubenswrapper[4896]: I0126 15:36:19.758810 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:36:19 crc kubenswrapper[4896]: E0126 15:36:19.759301 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:36:20 crc kubenswrapper[4896]: I0126 15:36:20.758677 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:36:20 crc kubenswrapper[4896]: E0126 15:36:20.758853 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:36:20 crc kubenswrapper[4896]: I0126 15:36:20.758926 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:36:20 crc kubenswrapper[4896]: E0126 15:36:20.759118 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:36:21 crc kubenswrapper[4896]: I0126 15:36:21.758821 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:36:21 crc kubenswrapper[4896]: I0126 15:36:21.758857 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:36:21 crc kubenswrapper[4896]: E0126 15:36:21.758945 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 15:36:21 crc kubenswrapper[4896]: E0126 15:36:21.759061 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-klrrb" podUID="fbeb890e-90af-4b15-a106-27b03465209f" Jan 26 15:36:22 crc kubenswrapper[4896]: I0126 15:36:22.758467 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:36:22 crc kubenswrapper[4896]: E0126 15:36:22.760504 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 15:36:22 crc kubenswrapper[4896]: I0126 15:36:22.760545 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 15:36:22 crc kubenswrapper[4896]: E0126 15:36:22.760801 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 15:36:23 crc kubenswrapper[4896]: I0126 15:36:23.758551 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:36:23 crc kubenswrapper[4896]: I0126 15:36:23.758561 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:36:23 crc kubenswrapper[4896]: I0126 15:36:23.760773 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 26 15:36:23 crc kubenswrapper[4896]: I0126 15:36:23.760891 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 26 15:36:23 crc kubenswrapper[4896]: I0126 15:36:23.761322 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 26 15:36:23 crc kubenswrapper[4896]: I0126 15:36:23.761696 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 26 15:36:24 crc kubenswrapper[4896]: I0126 15:36:24.758886 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 15:36:24 crc kubenswrapper[4896]: I0126 15:36:24.758971 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 15:36:24 crc kubenswrapper[4896]: I0126 15:36:24.761865 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 26 15:36:24 crc kubenswrapper[4896]: I0126 15:36:24.762397 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 26 15:36:31 crc kubenswrapper[4896]: I0126 15:36:31.446857 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 15:36:31 crc kubenswrapper[4896]: E0126 15:36:31.447059 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:38:33.447039383 +0000 UTC m=+271.228919776 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:31 crc kubenswrapper[4896]: I0126 15:36:31.548283 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 15:36:31 crc kubenswrapper[4896]: I0126 15:36:31.548330 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 15:36:31 crc kubenswrapper[4896]: I0126 15:36:31.548350 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 15:36:31 crc kubenswrapper[4896]: I0126 15:36:31.548379 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 15:36:31 crc kubenswrapper[4896]: I0126 15:36:31.549641 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 15:36:31 crc kubenswrapper[4896]: I0126 15:36:31.556068 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 15:36:31 crc kubenswrapper[4896]: I0126 15:36:31.556171 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 15:36:31 crc kubenswrapper[4896]: I0126 15:36:31.556987 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 15:36:31 crc kubenswrapper[4896]: I0126 15:36:31.579765 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 15:36:31 crc kubenswrapper[4896]: I0126 15:36:31.681604 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 15:36:31 crc kubenswrapper[4896]: I0126 15:36:31.681885 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 15:36:31 crc kubenswrapper[4896]: W0126 15:36:31.889591 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-f86d20e357c0e99de3f99a56348abed527163b57dd86318f431897991eb719b8 WatchSource:0}: Error finding container f86d20e357c0e99de3f99a56348abed527163b57dd86318f431897991eb719b8: Status 404 returned error can't find the container with id f86d20e357c0e99de3f99a56348abed527163b57dd86318f431897991eb719b8
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.091793 4896 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.137348 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5nvtk"]
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.138013 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-gb6wx"]
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.138223 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5nvtk"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.138671 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-gb6wx"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.139165 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-66hlb"]
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.139960 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-66hlb"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.140157 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ckpmm"]
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.140780 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ckpmm"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.141218 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-gx9b8"]
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.142076 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gx9b8"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.142300 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-h47xb"]
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.142792 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-h47xb"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.149146 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.149209 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.150248 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5tvfv"]
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.151685 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5tvfv"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.151908 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-czxnh"]
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.152728 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-czxnh"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.153327 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-p8n8h"]
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.154351 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-p8n8h"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.162481 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.162918 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n9sc6"]
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.163831 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.164025 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h4hvh"]
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.164572 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h4hvh"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.165808 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j67k8"]
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.166241 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j67k8"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.172498 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.174289 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.175627 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.183800 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.184092 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.184205 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.184414 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.184461 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.185810 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.199558 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.199850 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.201270 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.201782 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.201857 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.202182 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.202405 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.202553 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.202704 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.202848 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.202911 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.202867 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.203259 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.203457 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.203672 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.203786 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.203939 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.204039 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.204179 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.203981 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.204456 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.203685 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.204735 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.204881 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.205075 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.205404 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.205591 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.205761 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.205972 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.206455 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2p7md"]
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.206939 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ln65f"]
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.207257 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-rbmml"]
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.207548 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-rbmml"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.207653 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ln65f"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.207550 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2p7md"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.209641 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.209859 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bnntw"]
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.210426 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bnntw"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.211051 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.211328 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.211524 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.211732 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.211903 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.212078 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.212247 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.212385 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.212521 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.212673 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.212901 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.213033 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.213149 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.213242 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.213497 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.213660 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.213788 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.213815 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.213900 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.213994 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.214041 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.213170 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.214214 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.220816 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.221003 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.226480 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.228730 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.232632 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-z6479"]
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.233350 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-z6479"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.236625 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.237055 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.237268 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.237443 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.237628 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.237812 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.237956 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.238072 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.238130 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-6ldjd"]
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.238664 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-6ldjd"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.240005 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.240178 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.240375 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.252841 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.252951 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.253161 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.253172 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.253455 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.253568 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.253639 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.253225 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.253742 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.253767 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.253819 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.253862 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.254063 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.263718 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.264875 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.270775 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-rhdrn"]
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.272274 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-rhdrn"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.367288 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/551d129e-dcc5-4e55-89d1-68607191e923-service-ca-bundle\") pod \"authentication-operator-69f744f599-h47xb\" (UID: \"551d129e-dcc5-4e55-89d1-68607191e923\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-h47xb"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.369669 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.369972 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.371111 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.371179 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.373634 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7df72ba3-ab2b-4500-a44f-cc77c9771c33-config\") pod \"openshift-apiserver-operator-796bbdcf4f-ckpmm\" (UID: \"7df72ba3-ab2b-4500-a44f-cc77c9771c33\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ckpmm"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.373692 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1399e6aa-14ba-40e6-aec2-b3268e5b7102-etcd-serving-ca\") pod \"apiserver-76f77b778f-gb6wx\" (UID: \"1399e6aa-14ba-40e6-aec2-b3268e5b7102\") " pod="openshift-apiserver/apiserver-76f77b778f-gb6wx"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.372155 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.381603 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcmp7\" (UniqueName: \"kubernetes.io/projected/0752f58c-f532-48fb-b192-30c2f8614059-kube-api-access-rcmp7\") pod \"machine-api-operator-5694c8668f-66hlb\" (UID: \"0752f58c-f532-48fb-b192-30c2f8614059\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-66hlb"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.381688 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0752f58c-f532-48fb-b192-30c2f8614059-images\") pod \"machine-api-operator-5694c8668f-66hlb\" (UID: \"0752f58c-f532-48fb-b192-30c2f8614059\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-66hlb"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.381903 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d634623-470e-42c0-b550-fac7a770530d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-5nvtk\" (UID: \"0d634623-470e-42c0-b550-fac7a770530d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5nvtk"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.381963 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0ae7fad5-49a0-4745-9f42-9e47aa5614b7-machine-approver-tls\") pod \"machine-approver-56656f9798-gx9b8\" (UID: \"0ae7fad5-49a0-4745-9f42-9e47aa5614b7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gx9b8"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.381987 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/930ffea7-a937-407b-ae73-9b22885a6aad-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5tvfv\" (UID: \"930ffea7-a937-407b-ae73-9b22885a6aad\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5tvfv"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.382011 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/551d129e-dcc5-4e55-89d1-68607191e923-config\") pod \"authentication-operator-69f744f599-h47xb\" (UID: \"551d129e-dcc5-4e55-89d1-68607191e923\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-h47xb"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.382032 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4cb0cc2a-b5c6-4599-bfea-59703789fb7b-metrics-tls\") pod \"dns-operator-744455d44c-p8n8h\" (UID: \"4cb0cc2a-b5c6-4599-bfea-59703789fb7b\") " pod="openshift-dns-operator/dns-operator-744455d44c-p8n8h"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.382054 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7df72ba3-ab2b-4500-a44f-cc77c9771c33-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-ckpmm\" (UID: \"7df72ba3-ab2b-4500-a44f-cc77c9771c33\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ckpmm"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.382078 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ae7fad5-49a0-4745-9f42-9e47aa5614b7-config\") pod \"machine-approver-56656f9798-gx9b8\" (UID: \"0ae7fad5-49a0-4745-9f42-9e47aa5614b7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gx9b8"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.382102 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/551d129e-dcc5-4e55-89d1-68607191e923-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-h47xb\" (UID: \"551d129e-dcc5-4e55-89d1-68607191e923\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-h47xb"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.382129 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njkp7\" (UniqueName: \"kubernetes.io/projected/0d634623-470e-42c0-b550-fac7a770530d-kube-api-access-njkp7\") pod \"controller-manager-879f6c89f-5nvtk\" (UID: \"0d634623-470e-42c0-b550-fac7a770530d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5nvtk"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.382155 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/0752f58c-f532-48fb-b192-30c2f8614059-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-66hlb\" (UID: \"0752f58c-f532-48fb-b192-30c2f8614059\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-66hlb"
Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.382181 4896 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x224r\" (UniqueName: \"kubernetes.io/projected/7df72ba3-ab2b-4500-a44f-cc77c9771c33-kube-api-access-x224r\") pod \"openshift-apiserver-operator-796bbdcf4f-ckpmm\" (UID: \"7df72ba3-ab2b-4500-a44f-cc77c9771c33\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ckpmm" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.382203 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1399e6aa-14ba-40e6-aec2-b3268e5b7102-audit-dir\") pod \"apiserver-76f77b778f-gb6wx\" (UID: \"1399e6aa-14ba-40e6-aec2-b3268e5b7102\") " pod="openshift-apiserver/apiserver-76f77b778f-gb6wx" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.382224 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8757f648-a97f-4590-a332-6ee3bb30fa52-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-czxnh\" (UID: \"8757f648-a97f-4590-a332-6ee3bb30fa52\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-czxnh" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.382255 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtlbr\" (UniqueName: \"kubernetes.io/projected/0ae7fad5-49a0-4745-9f42-9e47aa5614b7-kube-api-access-gtlbr\") pod \"machine-approver-56656f9798-gx9b8\" (UID: \"0ae7fad5-49a0-4745-9f42-9e47aa5614b7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gx9b8" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.382278 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1399e6aa-14ba-40e6-aec2-b3268e5b7102-etcd-client\") pod 
\"apiserver-76f77b778f-gb6wx\" (UID: \"1399e6aa-14ba-40e6-aec2-b3268e5b7102\") " pod="openshift-apiserver/apiserver-76f77b778f-gb6wx" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.382298 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1399e6aa-14ba-40e6-aec2-b3268e5b7102-trusted-ca-bundle\") pod \"apiserver-76f77b778f-gb6wx\" (UID: \"1399e6aa-14ba-40e6-aec2-b3268e5b7102\") " pod="openshift-apiserver/apiserver-76f77b778f-gb6wx" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.382326 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8757f648-a97f-4590-a332-6ee3bb30fa52-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-czxnh\" (UID: \"8757f648-a97f-4590-a332-6ee3bb30fa52\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-czxnh" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.382351 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1399e6aa-14ba-40e6-aec2-b3268e5b7102-image-import-ca\") pod \"apiserver-76f77b778f-gb6wx\" (UID: \"1399e6aa-14ba-40e6-aec2-b3268e5b7102\") " pod="openshift-apiserver/apiserver-76f77b778f-gb6wx" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.382376 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp57h\" (UniqueName: \"kubernetes.io/projected/1399e6aa-14ba-40e6-aec2-b3268e5b7102-kube-api-access-wp57h\") pod \"apiserver-76f77b778f-gb6wx\" (UID: \"1399e6aa-14ba-40e6-aec2-b3268e5b7102\") " pod="openshift-apiserver/apiserver-76f77b778f-gb6wx" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.382401 4896 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1399e6aa-14ba-40e6-aec2-b3268e5b7102-encryption-config\") pod \"apiserver-76f77b778f-gb6wx\" (UID: \"1399e6aa-14ba-40e6-aec2-b3268e5b7102\") " pod="openshift-apiserver/apiserver-76f77b778f-gb6wx" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.382423 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d634623-470e-42c0-b550-fac7a770530d-config\") pod \"controller-manager-879f6c89f-5nvtk\" (UID: \"0d634623-470e-42c0-b550-fac7a770530d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5nvtk" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.382443 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d634623-470e-42c0-b550-fac7a770530d-serving-cert\") pod \"controller-manager-879f6c89f-5nvtk\" (UID: \"0d634623-470e-42c0-b550-fac7a770530d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5nvtk" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.382527 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k82nh\" (UniqueName: \"kubernetes.io/projected/930ffea7-a937-407b-ae73-9b22885a6aad-kube-api-access-k82nh\") pod \"openshift-config-operator-7777fb866f-5tvfv\" (UID: \"930ffea7-a937-407b-ae73-9b22885a6aad\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5tvfv" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.382625 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/551d129e-dcc5-4e55-89d1-68607191e923-serving-cert\") pod \"authentication-operator-69f744f599-h47xb\" (UID: 
\"551d129e-dcc5-4e55-89d1-68607191e923\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-h47xb" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.382655 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/930ffea7-a937-407b-ae73-9b22885a6aad-serving-cert\") pod \"openshift-config-operator-7777fb866f-5tvfv\" (UID: \"930ffea7-a937-407b-ae73-9b22885a6aad\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5tvfv" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.382707 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsdl9\" (UniqueName: \"kubernetes.io/projected/8757f648-a97f-4590-a332-6ee3bb30fa52-kube-api-access-vsdl9\") pod \"openshift-controller-manager-operator-756b6f6bc6-czxnh\" (UID: \"8757f648-a97f-4590-a332-6ee3bb30fa52\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-czxnh" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.382733 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d634623-470e-42c0-b550-fac7a770530d-client-ca\") pod \"controller-manager-879f6c89f-5nvtk\" (UID: \"0d634623-470e-42c0-b550-fac7a770530d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5nvtk" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.382771 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v5hj\" (UniqueName: \"kubernetes.io/projected/4cb0cc2a-b5c6-4599-bfea-59703789fb7b-kube-api-access-5v5hj\") pod \"dns-operator-744455d44c-p8n8h\" (UID: \"4cb0cc2a-b5c6-4599-bfea-59703789fb7b\") " pod="openshift-dns-operator/dns-operator-744455d44c-p8n8h" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 
15:36:32.382796 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1399e6aa-14ba-40e6-aec2-b3268e5b7102-audit\") pod \"apiserver-76f77b778f-gb6wx\" (UID: \"1399e6aa-14ba-40e6-aec2-b3268e5b7102\") " pod="openshift-apiserver/apiserver-76f77b778f-gb6wx" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.382849 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1399e6aa-14ba-40e6-aec2-b3268e5b7102-node-pullsecrets\") pod \"apiserver-76f77b778f-gb6wx\" (UID: \"1399e6aa-14ba-40e6-aec2-b3268e5b7102\") " pod="openshift-apiserver/apiserver-76f77b778f-gb6wx" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.382876 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1399e6aa-14ba-40e6-aec2-b3268e5b7102-serving-cert\") pod \"apiserver-76f77b778f-gb6wx\" (UID: \"1399e6aa-14ba-40e6-aec2-b3268e5b7102\") " pod="openshift-apiserver/apiserver-76f77b778f-gb6wx" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.382899 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0752f58c-f532-48fb-b192-30c2f8614059-config\") pod \"machine-api-operator-5694c8668f-66hlb\" (UID: \"0752f58c-f532-48fb-b192-30c2f8614059\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-66hlb" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.382946 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29vl8\" (UniqueName: \"kubernetes.io/projected/551d129e-dcc5-4e55-89d1-68607191e923-kube-api-access-29vl8\") pod \"authentication-operator-69f744f599-h47xb\" (UID: \"551d129e-dcc5-4e55-89d1-68607191e923\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-h47xb" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.382983 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0ae7fad5-49a0-4745-9f42-9e47aa5614b7-auth-proxy-config\") pod \"machine-approver-56656f9798-gx9b8\" (UID: \"0ae7fad5-49a0-4745-9f42-9e47aa5614b7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gx9b8" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.383008 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1399e6aa-14ba-40e6-aec2-b3268e5b7102-config\") pod \"apiserver-76f77b778f-gb6wx\" (UID: \"1399e6aa-14ba-40e6-aec2-b3268e5b7102\") " pod="openshift-apiserver/apiserver-76f77b778f-gb6wx" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.383060 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-wwnjn"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.383760 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wwnjn" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.383908 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-k45bj"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.384512 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.385897 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-hwqlq"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.386542 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hwqlq" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.386726 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.388972 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.389126 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.389265 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.389401 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.389600 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.393385 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.397963 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw2tr"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.400121 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5wn9k"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.400533 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7gnv9"] Jan 26 15:36:32 crc kubenswrapper[4896]: 
I0126 15:36:32.400915 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5wn9k" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.401223 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7gnv9" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.401260 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw2tr" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.402236 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-772nw"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.402763 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-772nw" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.404766 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.412355 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w6jkc"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.413096 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w6jkc" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.414696 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-sw5mq"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.415220 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-sw5mq" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.415796 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-ms78m"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.416104 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-ms78m" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.418486 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x7ndr"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.418905 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x7ndr" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.420093 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.421970 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sk9lz"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.422292 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sk9lz" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.424030 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cdvp4"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.426411 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cdvp4" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.428388 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490690-2zp5m"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.430825 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5fzf2"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.430931 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-2zp5m" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.431644 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-w474z"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.431765 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5fzf2" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.432171 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-w474z" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.432991 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-p79qr"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.433674 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-p79qr" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.437565 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-gl252"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.438428 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-gl252" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.438707 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5nvtk"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.439963 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.445043 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-wg2jv"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.445590 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wg2jv" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.446388 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-h47xb"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.457065 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ln65f"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.466067 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-66hlb"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.469306 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j67k8"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.471024 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h4hvh"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.475535 4896 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.477490 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ckpmm"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.482661 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.483081 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5tvfv"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.484098 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/09601473-06d9-4938-876d-ea6e1b9ffc91-service-ca\") pod \"console-f9d7485db-z6479\" (UID: \"09601473-06d9-4938-876d-ea6e1b9ffc91\") " pod="openshift-console/console-f9d7485db-z6479" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.484158 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0ae7fad5-49a0-4745-9f42-9e47aa5614b7-machine-approver-tls\") pod \"machine-approver-56656f9798-gx9b8\" (UID: \"0ae7fad5-49a0-4745-9f42-9e47aa5614b7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gx9b8" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.484195 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/930ffea7-a937-407b-ae73-9b22885a6aad-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5tvfv\" (UID: \"930ffea7-a937-407b-ae73-9b22885a6aad\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5tvfv" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 
15:36:32.484224 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09601473-06d9-4938-876d-ea6e1b9ffc91-trusted-ca-bundle\") pod \"console-f9d7485db-z6479\" (UID: \"09601473-06d9-4938-876d-ea6e1b9ffc91\") " pod="openshift-console/console-f9d7485db-z6479" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.484250 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/551d129e-dcc5-4e55-89d1-68607191e923-config\") pod \"authentication-operator-69f744f599-h47xb\" (UID: \"551d129e-dcc5-4e55-89d1-68607191e923\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-h47xb" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.484279 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/51353d35-53fc-4769-9870-6598a2df021c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ln65f\" (UID: \"51353d35-53fc-4769-9870-6598a2df021c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ln65f" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.484306 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4cb0cc2a-b5c6-4599-bfea-59703789fb7b-metrics-tls\") pod \"dns-operator-744455d44c-p8n8h\" (UID: \"4cb0cc2a-b5c6-4599-bfea-59703789fb7b\") " pod="openshift-dns-operator/dns-operator-744455d44c-p8n8h" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.484332 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6793088e-6a2c-4abf-be95-b686e3904d1c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-h4hvh\" (UID: 
\"6793088e-6a2c-4abf-be95-b686e3904d1c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h4hvh" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.484360 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7df72ba3-ab2b-4500-a44f-cc77c9771c33-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-ckpmm\" (UID: \"7df72ba3-ab2b-4500-a44f-cc77c9771c33\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ckpmm" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.484388 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ae7fad5-49a0-4745-9f42-9e47aa5614b7-config\") pod \"machine-approver-56656f9798-gx9b8\" (UID: \"0ae7fad5-49a0-4745-9f42-9e47aa5614b7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gx9b8" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.484417 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/551d129e-dcc5-4e55-89d1-68607191e923-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-h47xb\" (UID: \"551d129e-dcc5-4e55-89d1-68607191e923\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-h47xb" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.484441 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njkp7\" (UniqueName: \"kubernetes.io/projected/0d634623-470e-42c0-b550-fac7a770530d-kube-api-access-njkp7\") pod \"controller-manager-879f6c89f-5nvtk\" (UID: \"0d634623-470e-42c0-b550-fac7a770530d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5nvtk" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.484466 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/0752f58c-f532-48fb-b192-30c2f8614059-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-66hlb\" (UID: \"0752f58c-f532-48fb-b192-30c2f8614059\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-66hlb" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.484489 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/51353d35-53fc-4769-9870-6598a2df021c-images\") pod \"machine-config-operator-74547568cd-ln65f\" (UID: \"51353d35-53fc-4769-9870-6598a2df021c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ln65f" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.484517 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x224r\" (UniqueName: \"kubernetes.io/projected/7df72ba3-ab2b-4500-a44f-cc77c9771c33-kube-api-access-x224r\") pod \"openshift-apiserver-operator-796bbdcf4f-ckpmm\" (UID: \"7df72ba3-ab2b-4500-a44f-cc77c9771c33\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ckpmm" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.484540 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1399e6aa-14ba-40e6-aec2-b3268e5b7102-audit-dir\") pod \"apiserver-76f77b778f-gb6wx\" (UID: \"1399e6aa-14ba-40e6-aec2-b3268e5b7102\") " pod="openshift-apiserver/apiserver-76f77b778f-gb6wx" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.484567 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/09601473-06d9-4938-876d-ea6e1b9ffc91-console-config\") pod \"console-f9d7485db-z6479\" (UID: \"09601473-06d9-4938-876d-ea6e1b9ffc91\") " pod="openshift-console/console-f9d7485db-z6479" Jan 26 15:36:32 crc 
kubenswrapper[4896]: I0126 15:36:32.484621 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8757f648-a97f-4590-a332-6ee3bb30fa52-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-czxnh\" (UID: \"8757f648-a97f-4590-a332-6ee3bb30fa52\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-czxnh" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.484648 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtlbr\" (UniqueName: \"kubernetes.io/projected/0ae7fad5-49a0-4745-9f42-9e47aa5614b7-kube-api-access-gtlbr\") pod \"machine-approver-56656f9798-gx9b8\" (UID: \"0ae7fad5-49a0-4745-9f42-9e47aa5614b7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gx9b8" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.484674 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1399e6aa-14ba-40e6-aec2-b3268e5b7102-etcd-client\") pod \"apiserver-76f77b778f-gb6wx\" (UID: \"1399e6aa-14ba-40e6-aec2-b3268e5b7102\") " pod="openshift-apiserver/apiserver-76f77b778f-gb6wx" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.484697 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1399e6aa-14ba-40e6-aec2-b3268e5b7102-trusted-ca-bundle\") pod \"apiserver-76f77b778f-gb6wx\" (UID: \"1399e6aa-14ba-40e6-aec2-b3268e5b7102\") " pod="openshift-apiserver/apiserver-76f77b778f-gb6wx" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.484725 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1399e6aa-14ba-40e6-aec2-b3268e5b7102-image-import-ca\") pod \"apiserver-76f77b778f-gb6wx\" (UID: 
\"1399e6aa-14ba-40e6-aec2-b3268e5b7102\") " pod="openshift-apiserver/apiserver-76f77b778f-gb6wx" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.484749 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8757f648-a97f-4590-a332-6ee3bb30fa52-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-czxnh\" (UID: \"8757f648-a97f-4590-a332-6ee3bb30fa52\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-czxnh" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.484772 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp57h\" (UniqueName: \"kubernetes.io/projected/1399e6aa-14ba-40e6-aec2-b3268e5b7102-kube-api-access-wp57h\") pod \"apiserver-76f77b778f-gb6wx\" (UID: \"1399e6aa-14ba-40e6-aec2-b3268e5b7102\") " pod="openshift-apiserver/apiserver-76f77b778f-gb6wx" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.484801 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m2zl\" (UniqueName: \"kubernetes.io/projected/51353d35-53fc-4769-9870-6598a2df021c-kube-api-access-7m2zl\") pod \"machine-config-operator-74547568cd-ln65f\" (UID: \"51353d35-53fc-4769-9870-6598a2df021c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ln65f" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.484828 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d634623-470e-42c0-b550-fac7a770530d-config\") pod \"controller-manager-879f6c89f-5nvtk\" (UID: \"0d634623-470e-42c0-b550-fac7a770530d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5nvtk" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.484853 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/0d634623-470e-42c0-b550-fac7a770530d-serving-cert\") pod \"controller-manager-879f6c89f-5nvtk\" (UID: \"0d634623-470e-42c0-b550-fac7a770530d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5nvtk" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.484879 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k82nh\" (UniqueName: \"kubernetes.io/projected/930ffea7-a937-407b-ae73-9b22885a6aad-kube-api-access-k82nh\") pod \"openshift-config-operator-7777fb866f-5tvfv\" (UID: \"930ffea7-a937-407b-ae73-9b22885a6aad\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5tvfv" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.484903 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1399e6aa-14ba-40e6-aec2-b3268e5b7102-encryption-config\") pod \"apiserver-76f77b778f-gb6wx\" (UID: \"1399e6aa-14ba-40e6-aec2-b3268e5b7102\") " pod="openshift-apiserver/apiserver-76f77b778f-gb6wx" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.484937 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/551d129e-dcc5-4e55-89d1-68607191e923-serving-cert\") pod \"authentication-operator-69f744f599-h47xb\" (UID: \"551d129e-dcc5-4e55-89d1-68607191e923\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-h47xb" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.484961 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/930ffea7-a937-407b-ae73-9b22885a6aad-serving-cert\") pod \"openshift-config-operator-7777fb866f-5tvfv\" (UID: \"930ffea7-a937-407b-ae73-9b22885a6aad\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5tvfv" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 
15:36:32.484988 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6793088e-6a2c-4abf-be95-b686e3904d1c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-h4hvh\" (UID: \"6793088e-6a2c-4abf-be95-b686e3904d1c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h4hvh" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.485017 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d634623-470e-42c0-b550-fac7a770530d-client-ca\") pod \"controller-manager-879f6c89f-5nvtk\" (UID: \"0d634623-470e-42c0-b550-fac7a770530d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5nvtk" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.485046 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsdl9\" (UniqueName: \"kubernetes.io/projected/8757f648-a97f-4590-a332-6ee3bb30fa52-kube-api-access-vsdl9\") pod \"openshift-controller-manager-operator-756b6f6bc6-czxnh\" (UID: \"8757f648-a97f-4590-a332-6ee3bb30fa52\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-czxnh" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.485074 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v5hj\" (UniqueName: \"kubernetes.io/projected/4cb0cc2a-b5c6-4599-bfea-59703789fb7b-kube-api-access-5v5hj\") pod \"dns-operator-744455d44c-p8n8h\" (UID: \"4cb0cc2a-b5c6-4599-bfea-59703789fb7b\") " pod="openshift-dns-operator/dns-operator-744455d44c-p8n8h" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.485098 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1399e6aa-14ba-40e6-aec2-b3268e5b7102-audit\") pod 
\"apiserver-76f77b778f-gb6wx\" (UID: \"1399e6aa-14ba-40e6-aec2-b3268e5b7102\") " pod="openshift-apiserver/apiserver-76f77b778f-gb6wx" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.485123 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/09601473-06d9-4938-876d-ea6e1b9ffc91-console-serving-cert\") pod \"console-f9d7485db-z6479\" (UID: \"09601473-06d9-4938-876d-ea6e1b9ffc91\") " pod="openshift-console/console-f9d7485db-z6479" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.485152 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1399e6aa-14ba-40e6-aec2-b3268e5b7102-node-pullsecrets\") pod \"apiserver-76f77b778f-gb6wx\" (UID: \"1399e6aa-14ba-40e6-aec2-b3268e5b7102\") " pod="openshift-apiserver/apiserver-76f77b778f-gb6wx" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.485179 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1399e6aa-14ba-40e6-aec2-b3268e5b7102-serving-cert\") pod \"apiserver-76f77b778f-gb6wx\" (UID: \"1399e6aa-14ba-40e6-aec2-b3268e5b7102\") " pod="openshift-apiserver/apiserver-76f77b778f-gb6wx" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.485208 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0752f58c-f532-48fb-b192-30c2f8614059-config\") pod \"machine-api-operator-5694c8668f-66hlb\" (UID: \"0752f58c-f532-48fb-b192-30c2f8614059\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-66hlb" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.485239 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29vl8\" (UniqueName: 
\"kubernetes.io/projected/551d129e-dcc5-4e55-89d1-68607191e923-kube-api-access-29vl8\") pod \"authentication-operator-69f744f599-h47xb\" (UID: \"551d129e-dcc5-4e55-89d1-68607191e923\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-h47xb" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.485269 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0ae7fad5-49a0-4745-9f42-9e47aa5614b7-auth-proxy-config\") pod \"machine-approver-56656f9798-gx9b8\" (UID: \"0ae7fad5-49a0-4745-9f42-9e47aa5614b7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gx9b8" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.485292 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1399e6aa-14ba-40e6-aec2-b3268e5b7102-config\") pod \"apiserver-76f77b778f-gb6wx\" (UID: \"1399e6aa-14ba-40e6-aec2-b3268e5b7102\") " pod="openshift-apiserver/apiserver-76f77b778f-gb6wx" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.485317 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjrj6\" (UniqueName: \"kubernetes.io/projected/09601473-06d9-4938-876d-ea6e1b9ffc91-kube-api-access-bjrj6\") pod \"console-f9d7485db-z6479\" (UID: \"09601473-06d9-4938-876d-ea6e1b9ffc91\") " pod="openshift-console/console-f9d7485db-z6479" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.485345 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/09601473-06d9-4938-876d-ea6e1b9ffc91-console-oauth-config\") pod \"console-f9d7485db-z6479\" (UID: \"09601473-06d9-4938-876d-ea6e1b9ffc91\") " pod="openshift-console/console-f9d7485db-z6479" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.485370 4896 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/551d129e-dcc5-4e55-89d1-68607191e923-service-ca-bundle\") pod \"authentication-operator-69f744f599-h47xb\" (UID: \"551d129e-dcc5-4e55-89d1-68607191e923\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-h47xb" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.485395 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7df72ba3-ab2b-4500-a44f-cc77c9771c33-config\") pod \"openshift-apiserver-operator-796bbdcf4f-ckpmm\" (UID: \"7df72ba3-ab2b-4500-a44f-cc77c9771c33\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ckpmm" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.485421 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1399e6aa-14ba-40e6-aec2-b3268e5b7102-etcd-serving-ca\") pod \"apiserver-76f77b778f-gb6wx\" (UID: \"1399e6aa-14ba-40e6-aec2-b3268e5b7102\") " pod="openshift-apiserver/apiserver-76f77b778f-gb6wx" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.485452 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcmp7\" (UniqueName: \"kubernetes.io/projected/0752f58c-f532-48fb-b192-30c2f8614059-kube-api-access-rcmp7\") pod \"machine-api-operator-5694c8668f-66hlb\" (UID: \"0752f58c-f532-48fb-b192-30c2f8614059\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-66hlb" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.485481 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhjrh\" (UniqueName: \"kubernetes.io/projected/f98717ba-8501-4233-9959-efbd73aabbd9-kube-api-access-bhjrh\") pod \"migrator-59844c95c7-bnntw\" (UID: 
\"f98717ba-8501-4233-9959-efbd73aabbd9\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bnntw" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.485505 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6793088e-6a2c-4abf-be95-b686e3904d1c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-h4hvh\" (UID: \"6793088e-6a2c-4abf-be95-b686e3904d1c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h4hvh" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.485533 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/51353d35-53fc-4769-9870-6598a2df021c-proxy-tls\") pod \"machine-config-operator-74547568cd-ln65f\" (UID: \"51353d35-53fc-4769-9870-6598a2df021c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ln65f" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.485561 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/09601473-06d9-4938-876d-ea6e1b9ffc91-oauth-serving-cert\") pod \"console-f9d7485db-z6479\" (UID: \"09601473-06d9-4938-876d-ea6e1b9ffc91\") " pod="openshift-console/console-f9d7485db-z6479" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.485601 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d634623-470e-42c0-b550-fac7a770530d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-5nvtk\" (UID: \"0d634623-470e-42c0-b550-fac7a770530d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5nvtk" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.485627 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"images\" (UniqueName: \"kubernetes.io/configmap/0752f58c-f532-48fb-b192-30c2f8614059-images\") pod \"machine-api-operator-5694c8668f-66hlb\" (UID: \"0752f58c-f532-48fb-b192-30c2f8614059\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-66hlb" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.486898 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0752f58c-f532-48fb-b192-30c2f8614059-images\") pod \"machine-api-operator-5694c8668f-66hlb\" (UID: \"0752f58c-f532-48fb-b192-30c2f8614059\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-66hlb" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.487705 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-gb6wx"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.487961 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1399e6aa-14ba-40e6-aec2-b3268e5b7102-node-pullsecrets\") pod \"apiserver-76f77b778f-gb6wx\" (UID: \"1399e6aa-14ba-40e6-aec2-b3268e5b7102\") " pod="openshift-apiserver/apiserver-76f77b778f-gb6wx" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.488551 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/551d129e-dcc5-4e55-89d1-68607191e923-service-ca-bundle\") pod \"authentication-operator-69f744f599-h47xb\" (UID: \"551d129e-dcc5-4e55-89d1-68607191e923\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-h47xb" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.489131 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7df72ba3-ab2b-4500-a44f-cc77c9771c33-config\") pod \"openshift-apiserver-operator-796bbdcf4f-ckpmm\" (UID: \"7df72ba3-ab2b-4500-a44f-cc77c9771c33\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ckpmm" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.489179 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7df72ba3-ab2b-4500-a44f-cc77c9771c33-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-ckpmm\" (UID: \"7df72ba3-ab2b-4500-a44f-cc77c9771c33\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ckpmm" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.489243 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1399e6aa-14ba-40e6-aec2-b3268e5b7102-etcd-serving-ca\") pod \"apiserver-76f77b778f-gb6wx\" (UID: \"1399e6aa-14ba-40e6-aec2-b3268e5b7102\") " pod="openshift-apiserver/apiserver-76f77b778f-gb6wx" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.489746 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ae7fad5-49a0-4745-9f42-9e47aa5614b7-config\") pod \"machine-approver-56656f9798-gx9b8\" (UID: \"0ae7fad5-49a0-4745-9f42-9e47aa5614b7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gx9b8" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.490183 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1399e6aa-14ba-40e6-aec2-b3268e5b7102-trusted-ca-bundle\") pod \"apiserver-76f77b778f-gb6wx\" (UID: \"1399e6aa-14ba-40e6-aec2-b3268e5b7102\") " pod="openshift-apiserver/apiserver-76f77b778f-gb6wx" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.489915 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1399e6aa-14ba-40e6-aec2-b3268e5b7102-audit-dir\") pod \"apiserver-76f77b778f-gb6wx\" (UID: 
\"1399e6aa-14ba-40e6-aec2-b3268e5b7102\") " pod="openshift-apiserver/apiserver-76f77b778f-gb6wx" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.490345 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0ae7fad5-49a0-4745-9f42-9e47aa5614b7-auth-proxy-config\") pod \"machine-approver-56656f9798-gx9b8\" (UID: \"0ae7fad5-49a0-4745-9f42-9e47aa5614b7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gx9b8" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.490509 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1399e6aa-14ba-40e6-aec2-b3268e5b7102-config\") pod \"apiserver-76f77b778f-gb6wx\" (UID: \"1399e6aa-14ba-40e6-aec2-b3268e5b7102\") " pod="openshift-apiserver/apiserver-76f77b778f-gb6wx" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.490541 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/551d129e-dcc5-4e55-89d1-68607191e923-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-h47xb\" (UID: \"551d129e-dcc5-4e55-89d1-68607191e923\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-h47xb" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.491276 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/930ffea7-a937-407b-ae73-9b22885a6aad-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5tvfv\" (UID: \"930ffea7-a937-407b-ae73-9b22885a6aad\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5tvfv" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.491757 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0752f58c-f532-48fb-b192-30c2f8614059-config\") 
pod \"machine-api-operator-5694c8668f-66hlb\" (UID: \"0752f58c-f532-48fb-b192-30c2f8614059\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-66hlb" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.492413 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-n6zp9"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.492549 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4cb0cc2a-b5c6-4599-bfea-59703789fb7b-metrics-tls\") pod \"dns-operator-744455d44c-p8n8h\" (UID: \"4cb0cc2a-b5c6-4599-bfea-59703789fb7b\") " pod="openshift-dns-operator/dns-operator-744455d44c-p8n8h" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.493182 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8757f648-a97f-4590-a332-6ee3bb30fa52-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-czxnh\" (UID: \"8757f648-a97f-4590-a332-6ee3bb30fa52\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-czxnh" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.493276 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-wwnjn"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.493310 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bnntw"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.493417 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-n6zp9" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.494065 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1399e6aa-14ba-40e6-aec2-b3268e5b7102-encryption-config\") pod \"apiserver-76f77b778f-gb6wx\" (UID: \"1399e6aa-14ba-40e6-aec2-b3268e5b7102\") " pod="openshift-apiserver/apiserver-76f77b778f-gb6wx" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.494347 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/551d129e-dcc5-4e55-89d1-68607191e923-config\") pod \"authentication-operator-69f744f599-h47xb\" (UID: \"551d129e-dcc5-4e55-89d1-68607191e923\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-h47xb" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.494771 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1399e6aa-14ba-40e6-aec2-b3268e5b7102-image-import-ca\") pod \"apiserver-76f77b778f-gb6wx\" (UID: \"1399e6aa-14ba-40e6-aec2-b3268e5b7102\") " pod="openshift-apiserver/apiserver-76f77b778f-gb6wx" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.495551 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1399e6aa-14ba-40e6-aec2-b3268e5b7102-audit\") pod \"apiserver-76f77b778f-gb6wx\" (UID: \"1399e6aa-14ba-40e6-aec2-b3268e5b7102\") " pod="openshift-apiserver/apiserver-76f77b778f-gb6wx" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.495620 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8757f648-a97f-4590-a332-6ee3bb30fa52-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-czxnh\" (UID: \"8757f648-a97f-4590-a332-6ee3bb30fa52\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-czxnh" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.496129 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d634623-470e-42c0-b550-fac7a770530d-client-ca\") pod \"controller-manager-879f6c89f-5nvtk\" (UID: \"0d634623-470e-42c0-b550-fac7a770530d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5nvtk" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.496621 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-hwqlq"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.497131 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d634623-470e-42c0-b550-fac7a770530d-config\") pod \"controller-manager-879f6c89f-5nvtk\" (UID: \"0d634623-470e-42c0-b550-fac7a770530d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5nvtk" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.497492 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/551d129e-dcc5-4e55-89d1-68607191e923-serving-cert\") pod \"authentication-operator-69f744f599-h47xb\" (UID: \"551d129e-dcc5-4e55-89d1-68607191e923\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-h47xb" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.497637 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1399e6aa-14ba-40e6-aec2-b3268e5b7102-serving-cert\") pod \"apiserver-76f77b778f-gb6wx\" (UID: \"1399e6aa-14ba-40e6-aec2-b3268e5b7102\") " pod="openshift-apiserver/apiserver-76f77b778f-gb6wx" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.499293 4896 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/0752f58c-f532-48fb-b192-30c2f8614059-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-66hlb\" (UID: \"0752f58c-f532-48fb-b192-30c2f8614059\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-66hlb" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.500343 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0ae7fad5-49a0-4745-9f42-9e47aa5614b7-machine-approver-tls\") pod \"machine-approver-56656f9798-gx9b8\" (UID: \"0ae7fad5-49a0-4745-9f42-9e47aa5614b7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gx9b8" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.500491 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-czxnh"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.502840 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-p8n8h"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.504031 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"896eedca37ef8f7c9a687c7726dab1aed61dac7ca70ded614dcb36cb46a6cccc"} Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.504089 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f86d20e357c0e99de3f99a56348abed527163b57dd86318f431897991eb719b8"} Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.505055 4896 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.505674 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-gl252"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.506150 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d634623-470e-42c0-b550-fac7a770530d-serving-cert\") pod \"controller-manager-879f6c89f-5nvtk\" (UID: \"0d634623-470e-42c0-b550-fac7a770530d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5nvtk" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.506488 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"61f597574428167a1e76ac9620807c5d9a2b8645f9faecc9f65bd3b0321f1f0a"} Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.506525 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"295a7664f4b8916508e1bf84124af7a000626f6844eff72f55bc506e0cee34f3"} Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.506894 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/930ffea7-a937-407b-ae73-9b22885a6aad-serving-cert\") pod \"openshift-config-operator-7777fb866f-5tvfv\" (UID: \"930ffea7-a937-407b-ae73-9b22885a6aad\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5tvfv" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.509448 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"5f2d9f69250652b62d93a5dfe7c69d62495f3bed08679fb34004bbc33d885c9b"} Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.509510 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"3d2fce5480f0e33847c04b8c73409cb85261694b2d7de609d9463436c124a069"} Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.510296 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.517044 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d634623-470e-42c0-b550-fac7a770530d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-5nvtk\" (UID: \"0d634623-470e-42c0-b550-fac7a770530d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5nvtk" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.568358 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1399e6aa-14ba-40e6-aec2-b3268e5b7102-etcd-client\") pod \"apiserver-76f77b778f-gb6wx\" (UID: \"1399e6aa-14ba-40e6-aec2-b3268e5b7102\") " pod="openshift-apiserver/apiserver-76f77b778f-gb6wx" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.568723 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x7ndr"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.569758 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-6ldjd"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.570748 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-multus/multus-admission-controller-857f4d67dd-sw5mq"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.571815 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-rbmml"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.572911 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n9sc6"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.573887 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5wn9k"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.574866 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-rhdrn"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.575841 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cdvp4"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.576809 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-k45bj"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.577845 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2p7md"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.580117 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.580305 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7gnv9"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.580324 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 26 15:36:32 
crc kubenswrapper[4896]: I0126 15:36:32.580790 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.580990 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-z6479"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.582183 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5fzf2"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.583008 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sk9lz"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.587596 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-bhztt"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.587666 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7m2zl\" (UniqueName: \"kubernetes.io/projected/51353d35-53fc-4769-9870-6598a2df021c-kube-api-access-7m2zl\") pod \"machine-config-operator-74547568cd-ln65f\" (UID: \"51353d35-53fc-4769-9870-6598a2df021c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ln65f" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.587756 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6793088e-6a2c-4abf-be95-b686e3904d1c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-h4hvh\" (UID: \"6793088e-6a2c-4abf-be95-b686e3904d1c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h4hvh" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.587880 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/09601473-06d9-4938-876d-ea6e1b9ffc91-console-serving-cert\") pod \"console-f9d7485db-z6479\" (UID: \"09601473-06d9-4938-876d-ea6e1b9ffc91\") " pod="openshift-console/console-f9d7485db-z6479" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.588249 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjrj6\" (UniqueName: \"kubernetes.io/projected/09601473-06d9-4938-876d-ea6e1b9ffc91-kube-api-access-bjrj6\") pod \"console-f9d7485db-z6479\" (UID: \"09601473-06d9-4938-876d-ea6e1b9ffc91\") " pod="openshift-console/console-f9d7485db-z6479" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.588298 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/09601473-06d9-4938-876d-ea6e1b9ffc91-console-oauth-config\") pod \"console-f9d7485db-z6479\" (UID: \"09601473-06d9-4938-876d-ea6e1b9ffc91\") " pod="openshift-console/console-f9d7485db-z6479" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.588369 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhjrh\" (UniqueName: \"kubernetes.io/projected/f98717ba-8501-4233-9959-efbd73aabbd9-kube-api-access-bhjrh\") pod \"migrator-59844c95c7-bnntw\" (UID: \"f98717ba-8501-4233-9959-efbd73aabbd9\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bnntw" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.588429 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6793088e-6a2c-4abf-be95-b686e3904d1c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-h4hvh\" (UID: \"6793088e-6a2c-4abf-be95-b686e3904d1c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h4hvh" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.588463 4896 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/51353d35-53fc-4769-9870-6598a2df021c-proxy-tls\") pod \"machine-config-operator-74547568cd-ln65f\" (UID: \"51353d35-53fc-4769-9870-6598a2df021c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ln65f" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.588517 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/09601473-06d9-4938-876d-ea6e1b9ffc91-oauth-serving-cert\") pod \"console-f9d7485db-z6479\" (UID: \"09601473-06d9-4938-876d-ea6e1b9ffc91\") " pod="openshift-console/console-f9d7485db-z6479" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.588769 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/09601473-06d9-4938-876d-ea6e1b9ffc91-service-ca\") pod \"console-f9d7485db-z6479\" (UID: \"09601473-06d9-4938-876d-ea6e1b9ffc91\") " pod="openshift-console/console-f9d7485db-z6479" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.588838 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09601473-06d9-4938-876d-ea6e1b9ffc91-trusted-ca-bundle\") pod \"console-f9d7485db-z6479\" (UID: \"09601473-06d9-4938-876d-ea6e1b9ffc91\") " pod="openshift-console/console-f9d7485db-z6479" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.588873 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/51353d35-53fc-4769-9870-6598a2df021c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ln65f\" (UID: \"51353d35-53fc-4769-9870-6598a2df021c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ln65f" Jan 26 15:36:32 crc 
kubenswrapper[4896]: I0126 15:36:32.588941 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6793088e-6a2c-4abf-be95-b686e3904d1c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-h4hvh\" (UID: \"6793088e-6a2c-4abf-be95-b686e3904d1c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h4hvh" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.589022 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/51353d35-53fc-4769-9870-6598a2df021c-images\") pod \"machine-config-operator-74547568cd-ln65f\" (UID: \"51353d35-53fc-4769-9870-6598a2df021c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ln65f" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.589107 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/09601473-06d9-4938-876d-ea6e1b9ffc91-console-config\") pod \"console-f9d7485db-z6479\" (UID: \"09601473-06d9-4938-876d-ea6e1b9ffc91\") " pod="openshift-console/console-f9d7485db-z6479" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.590593 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/09601473-06d9-4938-876d-ea6e1b9ffc91-console-config\") pod \"console-f9d7485db-z6479\" (UID: \"09601473-06d9-4938-876d-ea6e1b9ffc91\") " pod="openshift-console/console-f9d7485db-z6479" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.591255 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/09601473-06d9-4938-876d-ea6e1b9ffc91-oauth-serving-cert\") pod \"console-f9d7485db-z6479\" (UID: \"09601473-06d9-4938-876d-ea6e1b9ffc91\") " 
pod="openshift-console/console-f9d7485db-z6479" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.591425 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/51353d35-53fc-4769-9870-6598a2df021c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ln65f\" (UID: \"51353d35-53fc-4769-9870-6598a2df021c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ln65f" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.591393 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6793088e-6a2c-4abf-be95-b686e3904d1c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-h4hvh\" (UID: \"6793088e-6a2c-4abf-be95-b686e3904d1c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h4hvh" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.592197 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/51353d35-53fc-4769-9870-6598a2df021c-images\") pod \"machine-config-operator-74547568cd-ln65f\" (UID: \"51353d35-53fc-4769-9870-6598a2df021c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ln65f" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.593614 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/09601473-06d9-4938-876d-ea6e1b9ffc91-service-ca\") pod \"console-f9d7485db-z6479\" (UID: \"09601473-06d9-4938-876d-ea6e1b9ffc91\") " pod="openshift-console/console-f9d7485db-z6479" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.593927 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/09601473-06d9-4938-876d-ea6e1b9ffc91-console-serving-cert\") pod \"console-f9d7485db-z6479\" (UID: 
\"09601473-06d9-4938-876d-ea6e1b9ffc91\") " pod="openshift-console/console-f9d7485db-z6479" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.594288 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09601473-06d9-4938-876d-ea6e1b9ffc91-trusted-ca-bundle\") pod \"console-f9d7485db-z6479\" (UID: \"09601473-06d9-4938-876d-ea6e1b9ffc91\") " pod="openshift-console/console-f9d7485db-z6479" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.600007 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490690-2zp5m"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.600432 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.601352 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.601625 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-n6zp9"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.601719 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-bhztt" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.601803 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6793088e-6a2c-4abf-be95-b686e3904d1c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-h4hvh\" (UID: \"6793088e-6a2c-4abf-be95-b686e3904d1c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h4hvh" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.602489 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/51353d35-53fc-4769-9870-6598a2df021c-proxy-tls\") pod \"machine-config-operator-74547568cd-ln65f\" (UID: \"51353d35-53fc-4769-9870-6598a2df021c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ln65f" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.602618 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/09601473-06d9-4938-876d-ea6e1b9ffc91-console-oauth-config\") pod \"console-f9d7485db-z6479\" (UID: \"09601473-06d9-4938-876d-ea6e1b9ffc91\") " pod="openshift-console/console-f9d7485db-z6479" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.603251 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-wg2jv"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.606085 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-772nw"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.607388 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-p79qr"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.608451 4896 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w6jkc"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.609664 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-ld6jq"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.610910 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-ld6jq" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.612416 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-w474z"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.613559 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw2tr"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.614886 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-bhztt"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.616179 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-7n4vw"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.617079 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-7n4vw" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.617273 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-7n4vw"] Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.621100 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.645921 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.660056 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.690929 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5a7680b4-5a24-4bda-a8af-6d4d3949b969-etcd-service-ca\") pod \"etcd-operator-b45778765-6ldjd\" (UID: \"5a7680b4-5a24-4bda-a8af-6d4d3949b969\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6ldjd" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.691048 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8428c0c6-79c5-46d3-a6eb-5126303dfd60-registry-tls\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.691094 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7032c8a7-9079-4063-bc82-a621052567ba-config\") pod \"kube-apiserver-operator-766d6c64bb-j67k8\" (UID: \"7032c8a7-9079-4063-bc82-a621052567ba\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j67k8" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.691161 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de631e52-dcb0-49c6-8bd9-df4085c7ffcc-config\") pod \"console-operator-58897d9998-rhdrn\" (UID: \"de631e52-dcb0-49c6-8bd9-df4085c7ffcc\") " pod="openshift-console-operator/console-operator-58897d9998-rhdrn" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.691188 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de631e52-dcb0-49c6-8bd9-df4085c7ffcc-trusted-ca\") pod \"console-operator-58897d9998-rhdrn\" (UID: \"de631e52-dcb0-49c6-8bd9-df4085c7ffcc\") " pod="openshift-console-operator/console-operator-58897d9998-rhdrn" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.691216 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.691243 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8428c0c6-79c5-46d3-a6eb-5126303dfd60-installation-pull-secrets\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.691282 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a7680b4-5a24-4bda-a8af-6d4d3949b969-serving-cert\") pod \"etcd-operator-b45778765-6ldjd\" (UID: \"5a7680b4-5a24-4bda-a8af-6d4d3949b969\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6ldjd" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.691315 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2eced78-200a-47c3-b4d3-ea5be6867022-config\") pod \"kube-controller-manager-operator-78b949d7b-2p7md\" (UID: \"f2eced78-200a-47c3-b4d3-ea5be6867022\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2p7md" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.691342 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de631e52-dcb0-49c6-8bd9-df4085c7ffcc-serving-cert\") pod \"console-operator-58897d9998-rhdrn\" (UID: \"de631e52-dcb0-49c6-8bd9-df4085c7ffcc\") " pod="openshift-console-operator/console-operator-58897d9998-rhdrn" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.691364 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8428c0c6-79c5-46d3-a6eb-5126303dfd60-bound-sa-token\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.691380 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2eced78-200a-47c3-b4d3-ea5be6867022-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-2p7md\" (UID: \"f2eced78-200a-47c3-b4d3-ea5be6867022\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2p7md" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.691408 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8428c0c6-79c5-46d3-a6eb-5126303dfd60-ca-trust-extracted\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.691450 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8428c0c6-79c5-46d3-a6eb-5126303dfd60-trusted-ca\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.691481 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7032c8a7-9079-4063-bc82-a621052567ba-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-j67k8\" (UID: \"7032c8a7-9079-4063-bc82-a621052567ba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j67k8" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.691587 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s88f9\" (UniqueName: \"kubernetes.io/projected/de631e52-dcb0-49c6-8bd9-df4085c7ffcc-kube-api-access-s88f9\") pod \"console-operator-58897d9998-rhdrn\" (UID: \"de631e52-dcb0-49c6-8bd9-df4085c7ffcc\") " pod="openshift-console-operator/console-operator-58897d9998-rhdrn" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.691626 4896 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfnvg\" (UniqueName: \"kubernetes.io/projected/8428c0c6-79c5-46d3-a6eb-5126303dfd60-kube-api-access-pfnvg\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.691644 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7032c8a7-9079-4063-bc82-a621052567ba-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-j67k8\" (UID: \"7032c8a7-9079-4063-bc82-a621052567ba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j67k8" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.691667 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a7680b4-5a24-4bda-a8af-6d4d3949b969-config\") pod \"etcd-operator-b45778765-6ldjd\" (UID: \"5a7680b4-5a24-4bda-a8af-6d4d3949b969\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6ldjd" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.691706 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8428c0c6-79c5-46d3-a6eb-5126303dfd60-registry-certificates\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.691723 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb876\" (UniqueName: \"kubernetes.io/projected/a005fba8-0843-41a6-90eb-67a2aa6d0580-kube-api-access-wb876\") pod \"downloads-7954f5f757-rbmml\" (UID: 
\"a005fba8-0843-41a6-90eb-67a2aa6d0580\") " pod="openshift-console/downloads-7954f5f757-rbmml" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.691747 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5a7680b4-5a24-4bda-a8af-6d4d3949b969-etcd-client\") pod \"etcd-operator-b45778765-6ldjd\" (UID: \"5a7680b4-5a24-4bda-a8af-6d4d3949b969\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6ldjd" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.691793 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2eced78-200a-47c3-b4d3-ea5be6867022-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-2p7md\" (UID: \"f2eced78-200a-47c3-b4d3-ea5be6867022\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2p7md" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.691833 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7f4k\" (UniqueName: \"kubernetes.io/projected/5a7680b4-5a24-4bda-a8af-6d4d3949b969-kube-api-access-n7f4k\") pod \"etcd-operator-b45778765-6ldjd\" (UID: \"5a7680b4-5a24-4bda-a8af-6d4d3949b969\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6ldjd" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.691870 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5a7680b4-5a24-4bda-a8af-6d4d3949b969-etcd-ca\") pod \"etcd-operator-b45778765-6ldjd\" (UID: \"5a7680b4-5a24-4bda-a8af-6d4d3949b969\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6ldjd" Jan 26 15:36:32 crc kubenswrapper[4896]: E0126 15:36:32.694633 4896 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:33.194620296 +0000 UTC m=+150.976500689 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.700921 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.721315 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.741640 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.760439 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.780173 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.792188 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 
15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.792374 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/69ebaa33-5170-43dd-b2fb-9d77f487c938-profile-collector-cert\") pod \"catalog-operator-68c6474976-cdvp4\" (UID: \"69ebaa33-5170-43dd-b2fb-9d77f487c938\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cdvp4" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.792426 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c0c7df3f-b79d-4bd5-b67d-79f785c8ffee-webhook-cert\") pod \"packageserver-d55dfcdfc-x7ndr\" (UID: \"c0c7df3f-b79d-4bd5-b67d-79f785c8ffee\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x7ndr" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.792499 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/cc2e0e26-cabe-4cca-97f5-941022cff800-node-bootstrap-token\") pod \"machine-config-server-ld6jq\" (UID: \"cc2e0e26-cabe-4cca-97f5-941022cff800\") " pod="openshift-machine-config-operator/machine-config-server-ld6jq" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.792557 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37cb3473-29d9-40ae-be5a-5ee548397d58-client-ca\") pod \"route-controller-manager-6576b87f9c-772nw\" (UID: \"37cb3473-29d9-40ae-be5a-5ee548397d58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-772nw" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.792607 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/c733e876-d72a-4f1c-be58-d529f9807e61-bound-sa-token\") pod \"ingress-operator-5b745b69d9-wwnjn\" (UID: \"c733e876-d72a-4f1c-be58-d529f9807e61\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wwnjn" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.792627 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7p2c\" (UniqueName: \"kubernetes.io/projected/dff44ba1-22c6-42ed-9fc2-984240a9515e-kube-api-access-m7p2c\") pod \"cluster-image-registry-operator-dc59b4c8b-5wn9k\" (UID: \"dff44ba1-22c6-42ed-9fc2-984240a9515e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5wn9k" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.792652 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c0c7df3f-b79d-4bd5-b67d-79f785c8ffee-tmpfs\") pod \"packageserver-d55dfcdfc-x7ndr\" (UID: \"c0c7df3f-b79d-4bd5-b67d-79f785c8ffee\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x7ndr" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.792676 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8428c0c6-79c5-46d3-a6eb-5126303dfd60-installation-pull-secrets\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.792695 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2eced78-200a-47c3-b4d3-ea5be6867022-config\") pod \"kube-controller-manager-operator-78b949d7b-2p7md\" (UID: \"f2eced78-200a-47c3-b4d3-ea5be6867022\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2p7md" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.792714 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2496d14f-9aa6-4ee7-9db9-5bca63fa5a54-plugins-dir\") pod \"csi-hostpathplugin-bhztt\" (UID: \"2496d14f-9aa6-4ee7-9db9-5bca63fa5a54\") " pod="hostpath-provisioner/csi-hostpathplugin-bhztt" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.792730 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn6g2\" (UniqueName: \"kubernetes.io/projected/5aecf14a-cf97-41d8-b037-58f39a0a19bf-kube-api-access-kn6g2\") pod \"router-default-5444994796-ms78m\" (UID: \"5aecf14a-cf97-41d8-b037-58f39a0a19bf\") " pod="openshift-ingress/router-default-5444994796-ms78m" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.792747 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd8s6\" (UniqueName: \"kubernetes.io/projected/7ce1a770-33b9-4639-bc56-b93bf8627884-kube-api-access-fd8s6\") pod \"ingress-canary-n6zp9\" (UID: \"7ce1a770-33b9-4639-bc56-b93bf8627884\") " pod="openshift-ingress-canary/ingress-canary-n6zp9" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.792767 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8428c0c6-79c5-46d3-a6eb-5126303dfd60-bound-sa-token\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.792786 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/e1cbe94d-b2c9-4632-8a2b-1066967ed241-audit-dir\") pod \"oauth-openshift-558db77b4-k45bj\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:32 crc kubenswrapper[4896]: E0126 15:36:32.792815 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:33.292787594 +0000 UTC m=+151.074668077 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.792872 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2496d14f-9aa6-4ee7-9db9-5bca63fa5a54-registration-dir\") pod \"csi-hostpathplugin-bhztt\" (UID: \"2496d14f-9aa6-4ee7-9db9-5bca63fa5a54\") " pod="hostpath-provisioner/csi-hostpathplugin-bhztt" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.792924 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eaabb754-7520-4b59-97bf-2d7ae577191c-config-volume\") pod \"dns-default-7n4vw\" (UID: \"eaabb754-7520-4b59-97bf-2d7ae577191c\") " pod="openshift-dns/dns-default-7n4vw" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.792988 4896 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8428c0c6-79c5-46d3-a6eb-5126303dfd60-trusted-ca\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.793008 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlzz7\" (UniqueName: \"kubernetes.io/projected/2496d14f-9aa6-4ee7-9db9-5bca63fa5a54-kube-api-access-mlzz7\") pod \"csi-hostpathplugin-bhztt\" (UID: \"2496d14f-9aa6-4ee7-9db9-5bca63fa5a54\") " pod="hostpath-provisioner/csi-hostpathplugin-bhztt" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.793049 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7032c8a7-9079-4063-bc82-a621052567ba-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-j67k8\" (UID: \"7032c8a7-9079-4063-bc82-a621052567ba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j67k8" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.793072 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2496d14f-9aa6-4ee7-9db9-5bca63fa5a54-csi-data-dir\") pod \"csi-hostpathplugin-bhztt\" (UID: \"2496d14f-9aa6-4ee7-9db9-5bca63fa5a54\") " pod="hostpath-provisioner/csi-hostpathplugin-bhztt" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.793100 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c733e876-d72a-4f1c-be58-d529f9807e61-metrics-tls\") pod \"ingress-operator-5b745b69d9-wwnjn\" (UID: \"c733e876-d72a-4f1c-be58-d529f9807e61\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wwnjn" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.793140 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3a8b421e-f755-4bc6-89f7-03aa4a309a87-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lw2tr\" (UID: \"3a8b421e-f755-4bc6-89f7-03aa4a309a87\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw2tr" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.793157 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr6cd\" (UniqueName: \"kubernetes.io/projected/37cb3473-29d9-40ae-be5a-5ee548397d58-kube-api-access-rr6cd\") pod \"route-controller-manager-6576b87f9c-772nw\" (UID: \"37cb3473-29d9-40ae-be5a-5ee548397d58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-772nw" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.793174 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpj9l\" (UniqueName: \"kubernetes.io/projected/65d79adb-6464-4157-924d-ffadb4ed5d16-kube-api-access-gpj9l\") pod \"control-plane-machine-set-operator-78cbb6b69f-7gnv9\" (UID: \"65d79adb-6464-4157-924d-ffadb4ed5d16\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7gnv9" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.793190 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/404d67ad-059b-4858-a767-2716ca48dfbc-serving-cert\") pod \"service-ca-operator-777779d784-w474z\" (UID: \"404d67ad-059b-4858-a767-2716ca48dfbc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w474z" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.793217 4896 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2496d14f-9aa6-4ee7-9db9-5bca63fa5a54-mountpoint-dir\") pod \"csi-hostpathplugin-bhztt\" (UID: \"2496d14f-9aa6-4ee7-9db9-5bca63fa5a54\") " pod="hostpath-provisioner/csi-hostpathplugin-bhztt" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.793235 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-k45bj\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.793251 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-k45bj\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.793437 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8428c0c6-79c5-46d3-a6eb-5126303dfd60-registry-certificates\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.793477 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7032c8a7-9079-4063-bc82-a621052567ba-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-j67k8\" (UID: 
\"7032c8a7-9079-4063-bc82-a621052567ba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j67k8" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.793499 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c290f80c-4e19-4618-9ec2-2bc47df395fd-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-p79qr\" (UID: \"c290f80c-4e19-4618-9ec2-2bc47df395fd\") " pod="openshift-marketplace/marketplace-operator-79b997595-p79qr" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.793535 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfvpm\" (UniqueName: \"kubernetes.io/projected/c733e876-d72a-4f1c-be58-d529f9807e61-kube-api-access-rfvpm\") pod \"ingress-operator-5b745b69d9-wwnjn\" (UID: \"c733e876-d72a-4f1c-be58-d529f9807e61\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wwnjn" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.793562 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/61b9dfca-9718-4ee7-bd12-efd6ab5ca9b5-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5fzf2\" (UID: \"61b9dfca-9718-4ee7-bd12-efd6ab5ca9b5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5fzf2" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.793612 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-hwqlq\" (UID: \"317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hwqlq" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 
15:36:32.793629 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-k45bj\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.793648 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/fe23c111-8e9a-4456-8973-6aa7a78c52e6-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-w6jkc\" (UID: \"fe23c111-8e9a-4456-8973-6aa7a78c52e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w6jkc" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.793707 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5a7680b4-5a24-4bda-a8af-6d4d3949b969-etcd-client\") pod \"etcd-operator-b45778765-6ldjd\" (UID: \"5a7680b4-5a24-4bda-a8af-6d4d3949b969\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6ldjd" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.793725 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2eced78-200a-47c3-b4d3-ea5be6867022-config\") pod \"kube-controller-manager-operator-78b949d7b-2p7md\" (UID: \"f2eced78-200a-47c3-b4d3-ea5be6867022\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2p7md" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.793750 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2496d14f-9aa6-4ee7-9db9-5bca63fa5a54-socket-dir\") pod 
\"csi-hostpathplugin-bhztt\" (UID: \"2496d14f-9aa6-4ee7-9db9-5bca63fa5a54\") " pod="hostpath-provisioner/csi-hostpathplugin-bhztt" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.793787 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0-serving-cert\") pod \"apiserver-7bbb656c7d-hwqlq\" (UID: \"317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hwqlq" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.793823 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2eced78-200a-47c3-b4d3-ea5be6867022-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-2p7md\" (UID: \"f2eced78-200a-47c3-b4d3-ea5be6867022\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2p7md" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.793853 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/10c61de8-b81d-44ba-a406-d4a5c453a464-signing-cabundle\") pod \"service-ca-9c57cc56f-gl252\" (UID: \"10c61de8-b81d-44ba-a406-d4a5c453a464\") " pod="openshift-service-ca/service-ca-9c57cc56f-gl252" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.793903 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c0c7df3f-b79d-4bd5-b67d-79f785c8ffee-apiservice-cert\") pod \"packageserver-d55dfcdfc-x7ndr\" (UID: \"c0c7df3f-b79d-4bd5-b67d-79f785c8ffee\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x7ndr" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.793921 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-nvjrr\" (UniqueName: \"kubernetes.io/projected/298f103b-bf7b-40db-ace2-2780e91fde2c-kube-api-access-nvjrr\") pod \"collect-profiles-29490690-2zp5m\" (UID: \"298f103b-bf7b-40db-ace2-2780e91fde2c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-2zp5m" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.793939 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404d67ad-059b-4858-a767-2716ca48dfbc-config\") pod \"service-ca-operator-777779d784-w474z\" (UID: \"404d67ad-059b-4858-a767-2716ca48dfbc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w474z" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.793959 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-k45bj\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.794601 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8428c0c6-79c5-46d3-a6eb-5126303dfd60-trusted-ca\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.794741 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8428c0c6-79c5-46d3-a6eb-5126303dfd60-registry-certificates\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.794676 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-k45bj\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.794806 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hbvt\" (UniqueName: \"kubernetes.io/projected/f6d49f0f-270b-4a18-827f-e3cb5a9fb202-kube-api-access-4hbvt\") pod \"machine-config-controller-84d6567774-wg2jv\" (UID: \"f6d49f0f-270b-4a18-827f-e3cb5a9fb202\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wg2jv" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.794836 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dff44ba1-22c6-42ed-9fc2-984240a9515e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-5wn9k\" (UID: \"dff44ba1-22c6-42ed-9fc2-984240a9515e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5wn9k" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.794858 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/10c61de8-b81d-44ba-a406-d4a5c453a464-signing-key\") pod \"service-ca-9c57cc56f-gl252\" (UID: \"10c61de8-b81d-44ba-a406-d4a5c453a464\") " pod="openshift-service-ca/service-ca-9c57cc56f-gl252" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.794880 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e1cbe94d-b2c9-4632-8a2b-1066967ed241-audit-policies\") pod \"oauth-openshift-558db77b4-k45bj\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.794898 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c733e876-d72a-4f1c-be58-d529f9807e61-trusted-ca\") pod \"ingress-operator-5b745b69d9-wwnjn\" (UID: \"c733e876-d72a-4f1c-be58-d529f9807e61\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wwnjn" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.794918 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-k45bj\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.794941 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5hww\" (UniqueName: \"kubernetes.io/projected/f6347e55-b670-46f1-aff6-f30e10c492f4-kube-api-access-v5hww\") pod \"kube-storage-version-migrator-operator-b67b599dd-sk9lz\" (UID: \"f6347e55-b670-46f1-aff6-f30e10c492f4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sk9lz" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.794970 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9zlh\" (UniqueName: 
\"kubernetes.io/projected/cc2e0e26-cabe-4cca-97f5-941022cff800-kube-api-access-w9zlh\") pod \"machine-config-server-ld6jq\" (UID: \"cc2e0e26-cabe-4cca-97f5-941022cff800\") " pod="openshift-machine-config-operator/machine-config-server-ld6jq" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.795013 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fthr6\" (UniqueName: \"kubernetes.io/projected/eaabb754-7520-4b59-97bf-2d7ae577191c-kube-api-access-fthr6\") pod \"dns-default-7n4vw\" (UID: \"eaabb754-7520-4b59-97bf-2d7ae577191c\") " pod="openshift-dns/dns-default-7n4vw" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.795036 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f6d49f0f-270b-4a18-827f-e3cb5a9fb202-proxy-tls\") pod \"machine-config-controller-84d6567774-wg2jv\" (UID: \"f6d49f0f-270b-4a18-827f-e3cb5a9fb202\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wg2jv" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.795059 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/dff44ba1-22c6-42ed-9fc2-984240a9515e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-5wn9k\" (UID: \"dff44ba1-22c6-42ed-9fc2-984240a9515e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5wn9k" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.795102 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csrz7\" (UniqueName: \"kubernetes.io/projected/18973342-45ec-44b6-8456-c813d08240a3-kube-api-access-csrz7\") pod \"multus-admission-controller-857f4d67dd-sw5mq\" (UID: \"18973342-45ec-44b6-8456-c813d08240a3\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-sw5mq" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.795123 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/298f103b-bf7b-40db-ace2-2780e91fde2c-config-volume\") pod \"collect-profiles-29490690-2zp5m\" (UID: \"298f103b-bf7b-40db-ace2-2780e91fde2c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-2zp5m" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.795140 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0-encryption-config\") pod \"apiserver-7bbb656c7d-hwqlq\" (UID: \"317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hwqlq" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.795161 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf9gw\" (UniqueName: \"kubernetes.io/projected/fe23c111-8e9a-4456-8973-6aa7a78c52e6-kube-api-access-bf9gw\") pod \"cluster-samples-operator-665b6dd947-w6jkc\" (UID: \"fe23c111-8e9a-4456-8973-6aa7a78c52e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w6jkc" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.795194 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8428c0c6-79c5-46d3-a6eb-5126303dfd60-registry-tls\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.795214 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7bf7\" 
(UniqueName: \"kubernetes.io/projected/c0c7df3f-b79d-4bd5-b67d-79f785c8ffee-kube-api-access-t7bf7\") pod \"packageserver-d55dfcdfc-x7ndr\" (UID: \"c0c7df3f-b79d-4bd5-b67d-79f785c8ffee\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x7ndr" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.795235 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7032c8a7-9079-4063-bc82-a621052567ba-config\") pod \"kube-apiserver-operator-766d6c64bb-j67k8\" (UID: \"7032c8a7-9079-4063-bc82-a621052567ba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j67k8" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.795256 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de631e52-dcb0-49c6-8bd9-df4085c7ffcc-config\") pod \"console-operator-58897d9998-rhdrn\" (UID: \"de631e52-dcb0-49c6-8bd9-df4085c7ffcc\") " pod="openshift-console-operator/console-operator-58897d9998-rhdrn" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.795275 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de631e52-dcb0-49c6-8bd9-df4085c7ffcc-trusted-ca\") pod \"console-operator-58897d9998-rhdrn\" (UID: \"de631e52-dcb0-49c6-8bd9-df4085c7ffcc\") " pod="openshift-console-operator/console-operator-58897d9998-rhdrn" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.795294 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/69ebaa33-5170-43dd-b2fb-9d77f487c938-srv-cert\") pod \"catalog-operator-68c6474976-cdvp4\" (UID: \"69ebaa33-5170-43dd-b2fb-9d77f487c938\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cdvp4" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.795332 4896 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-k45bj\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.795358 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.795378 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6347e55-b670-46f1-aff6-f30e10c492f4-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-sk9lz\" (UID: \"f6347e55-b670-46f1-aff6-f30e10c492f4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sk9lz" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.795398 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a7680b4-5a24-4bda-a8af-6d4d3949b969-serving-cert\") pod \"etcd-operator-b45778765-6ldjd\" (UID: \"5a7680b4-5a24-4bda-a8af-6d4d3949b969\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6ldjd" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.795428 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de631e52-dcb0-49c6-8bd9-df4085c7ffcc-serving-cert\") pod 
\"console-operator-58897d9998-rhdrn\" (UID: \"de631e52-dcb0-49c6-8bd9-df4085c7ffcc\") " pod="openshift-console-operator/console-operator-58897d9998-rhdrn" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.795449 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5aecf14a-cf97-41d8-b037-58f39a0a19bf-default-certificate\") pod \"router-default-5444994796-ms78m\" (UID: \"5aecf14a-cf97-41d8-b037-58f39a0a19bf\") " pod="openshift-ingress/router-default-5444994796-ms78m" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.795466 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37cb3473-29d9-40ae-be5a-5ee548397d58-config\") pod \"route-controller-manager-6576b87f9c-772nw\" (UID: \"37cb3473-29d9-40ae-be5a-5ee548397d58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-772nw" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.795481 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0-audit-dir\") pod \"apiserver-7bbb656c7d-hwqlq\" (UID: \"317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hwqlq" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.795509 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2eced78-200a-47c3-b4d3-ea5be6867022-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-2p7md\" (UID: \"f2eced78-200a-47c3-b4d3-ea5be6867022\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2p7md" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.795565 4896 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8428c0c6-79c5-46d3-a6eb-5126303dfd60-ca-trust-extracted\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.795603 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3a8b421e-f755-4bc6-89f7-03aa4a309a87-srv-cert\") pod \"olm-operator-6b444d44fb-lw2tr\" (UID: \"3a8b421e-f755-4bc6-89f7-03aa4a309a87\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw2tr" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.795629 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/298f103b-bf7b-40db-ace2-2780e91fde2c-secret-volume\") pod \"collect-profiles-29490690-2zp5m\" (UID: \"298f103b-bf7b-40db-ace2-2780e91fde2c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-2zp5m" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.795650 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-k45bj\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.795693 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hplm8\" (UniqueName: \"kubernetes.io/projected/3a8b421e-f755-4bc6-89f7-03aa4a309a87-kube-api-access-hplm8\") pod \"olm-operator-6b444d44fb-lw2tr\" (UID: 
\"3a8b421e-f755-4bc6-89f7-03aa4a309a87\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw2tr" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.795718 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7xnn\" (UniqueName: \"kubernetes.io/projected/e1cbe94d-b2c9-4632-8a2b-1066967ed241-kube-api-access-v7xnn\") pod \"oauth-openshift-558db77b4-k45bj\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.795743 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s88f9\" (UniqueName: \"kubernetes.io/projected/de631e52-dcb0-49c6-8bd9-df4085c7ffcc-kube-api-access-s88f9\") pod \"console-operator-58897d9998-rhdrn\" (UID: \"de631e52-dcb0-49c6-8bd9-df4085c7ffcc\") " pod="openshift-console-operator/console-operator-58897d9998-rhdrn" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.795775 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2ppd\" (UniqueName: \"kubernetes.io/projected/c290f80c-4e19-4618-9ec2-2bc47df395fd-kube-api-access-c2ppd\") pod \"marketplace-operator-79b997595-p79qr\" (UID: \"c290f80c-4e19-4618-9ec2-2bc47df395fd\") " pod="openshift-marketplace/marketplace-operator-79b997595-p79qr" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.795799 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh9c4\" (UniqueName: \"kubernetes.io/projected/61b9dfca-9718-4ee7-bd12-efd6ab5ca9b5-kube-api-access-hh9c4\") pod \"package-server-manager-789f6589d5-5fzf2\" (UID: \"61b9dfca-9718-4ee7-bd12-efd6ab5ca9b5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5fzf2" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.795823 4896 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-k45bj\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.795861 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dff44ba1-22c6-42ed-9fc2-984240a9515e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-5wn9k\" (UID: \"dff44ba1-22c6-42ed-9fc2-984240a9515e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5wn9k" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.795882 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/cc2e0e26-cabe-4cca-97f5-941022cff800-certs\") pod \"machine-config-server-ld6jq\" (UID: \"cc2e0e26-cabe-4cca-97f5-941022cff800\") " pod="openshift-machine-config-operator/machine-config-server-ld6jq" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.795904 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-486dk\" (UniqueName: \"kubernetes.io/projected/10c61de8-b81d-44ba-a406-d4a5c453a464-kube-api-access-486dk\") pod \"service-ca-9c57cc56f-gl252\" (UID: \"10c61de8-b81d-44ba-a406-d4a5c453a464\") " pod="openshift-service-ca/service-ca-9c57cc56f-gl252" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.795923 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/eaabb754-7520-4b59-97bf-2d7ae577191c-metrics-tls\") pod \"dns-default-7n4vw\" (UID: 
\"eaabb754-7520-4b59-97bf-2d7ae577191c\") " pod="openshift-dns/dns-default-7n4vw" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.795961 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37cb3473-29d9-40ae-be5a-5ee548397d58-serving-cert\") pod \"route-controller-manager-6576b87f9c-772nw\" (UID: \"37cb3473-29d9-40ae-be5a-5ee548397d58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-772nw" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.795987 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfnvg\" (UniqueName: \"kubernetes.io/projected/8428c0c6-79c5-46d3-a6eb-5126303dfd60-kube-api-access-pfnvg\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.796020 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0-audit-policies\") pod \"apiserver-7bbb656c7d-hwqlq\" (UID: \"317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hwqlq" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.796044 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a7680b4-5a24-4bda-a8af-6d4d3949b969-config\") pod \"etcd-operator-b45778765-6ldjd\" (UID: \"5a7680b4-5a24-4bda-a8af-6d4d3949b969\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6ldjd" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.796064 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/5aecf14a-cf97-41d8-b037-58f39a0a19bf-service-ca-bundle\") pod \"router-default-5444994796-ms78m\" (UID: \"5aecf14a-cf97-41d8-b037-58f39a0a19bf\") " pod="openshift-ingress/router-default-5444994796-ms78m" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.796089 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb876\" (UniqueName: \"kubernetes.io/projected/a005fba8-0843-41a6-90eb-67a2aa6d0580-kube-api-access-wb876\") pod \"downloads-7954f5f757-rbmml\" (UID: \"a005fba8-0843-41a6-90eb-67a2aa6d0580\") " pod="openshift-console/downloads-7954f5f757-rbmml" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.796110 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0-etcd-client\") pod \"apiserver-7bbb656c7d-hwqlq\" (UID: \"317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hwqlq" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.796138 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/65d79adb-6464-4157-924d-ffadb4ed5d16-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-7gnv9\" (UID: \"65d79adb-6464-4157-924d-ffadb4ed5d16\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7gnv9" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.796177 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wpg6\" (UniqueName: \"kubernetes.io/projected/317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0-kube-api-access-8wpg6\") pod \"apiserver-7bbb656c7d-hwqlq\" (UID: \"317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hwqlq" Jan 26 
15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.796199 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-k45bj\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.796222 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7ce1a770-33b9-4639-bc56-b93bf8627884-cert\") pod \"ingress-canary-n6zp9\" (UID: \"7ce1a770-33b9-4639-bc56-b93bf8627884\") " pod="openshift-ingress-canary/ingress-canary-n6zp9" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.796244 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c290f80c-4e19-4618-9ec2-2bc47df395fd-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-p79qr\" (UID: \"c290f80c-4e19-4618-9ec2-2bc47df395fd\") " pod="openshift-marketplace/marketplace-operator-79b997595-p79qr" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.796265 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/18973342-45ec-44b6-8456-c813d08240a3-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-sw5mq\" (UID: \"18973342-45ec-44b6-8456-c813d08240a3\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-sw5mq" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.796288 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxsfk\" (UniqueName: 
\"kubernetes.io/projected/69ebaa33-5170-43dd-b2fb-9d77f487c938-kube-api-access-bxsfk\") pod \"catalog-operator-68c6474976-cdvp4\" (UID: \"69ebaa33-5170-43dd-b2fb-9d77f487c938\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cdvp4" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.796317 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7f4k\" (UniqueName: \"kubernetes.io/projected/5a7680b4-5a24-4bda-a8af-6d4d3949b969-kube-api-access-n7f4k\") pod \"etcd-operator-b45778765-6ldjd\" (UID: \"5a7680b4-5a24-4bda-a8af-6d4d3949b969\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6ldjd" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.796339 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5aecf14a-cf97-41d8-b037-58f39a0a19bf-stats-auth\") pod \"router-default-5444994796-ms78m\" (UID: \"5aecf14a-cf97-41d8-b037-58f39a0a19bf\") " pod="openshift-ingress/router-default-5444994796-ms78m" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.796363 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5aecf14a-cf97-41d8-b037-58f39a0a19bf-metrics-certs\") pod \"router-default-5444994796-ms78m\" (UID: \"5aecf14a-cf97-41d8-b037-58f39a0a19bf\") " pod="openshift-ingress/router-default-5444994796-ms78m" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.796411 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5a7680b4-5a24-4bda-a8af-6d4d3949b969-etcd-ca\") pod \"etcd-operator-b45778765-6ldjd\" (UID: \"5a7680b4-5a24-4bda-a8af-6d4d3949b969\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6ldjd" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.796431 4896 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-hwqlq\" (UID: \"317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hwqlq" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.796469 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-k45bj\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.796496 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6347e55-b670-46f1-aff6-f30e10c492f4-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-sk9lz\" (UID: \"f6347e55-b670-46f1-aff6-f30e10c492f4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sk9lz" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.796518 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg2z2\" (UniqueName: \"kubernetes.io/projected/404d67ad-059b-4858-a767-2716ca48dfbc-kube-api-access-jg2z2\") pod \"service-ca-operator-777779d784-w474z\" (UID: \"404d67ad-059b-4858-a767-2716ca48dfbc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w474z" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.796544 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/5a7680b4-5a24-4bda-a8af-6d4d3949b969-etcd-service-ca\") pod \"etcd-operator-b45778765-6ldjd\" (UID: \"5a7680b4-5a24-4bda-a8af-6d4d3949b969\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6ldjd" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.796545 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de631e52-dcb0-49c6-8bd9-df4085c7ffcc-config\") pod \"console-operator-58897d9998-rhdrn\" (UID: \"de631e52-dcb0-49c6-8bd9-df4085c7ffcc\") " pod="openshift-console-operator/console-operator-58897d9998-rhdrn" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.796567 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f6d49f0f-270b-4a18-827f-e3cb5a9fb202-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-wg2jv\" (UID: \"f6d49f0f-270b-4a18-827f-e3cb5a9fb202\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wg2jv" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.796574 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7032c8a7-9079-4063-bc82-a621052567ba-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-j67k8\" (UID: \"7032c8a7-9079-4063-bc82-a621052567ba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j67k8" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.796880 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2eced78-200a-47c3-b4d3-ea5be6867022-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-2p7md\" (UID: \"f2eced78-200a-47c3-b4d3-ea5be6867022\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2p7md" Jan 26 15:36:32 crc 
kubenswrapper[4896]: E0126 15:36:32.797085 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:33.297069888 +0000 UTC m=+151.078950291 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.798271 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de631e52-dcb0-49c6-8bd9-df4085c7ffcc-trusted-ca\") pod \"console-operator-58897d9998-rhdrn\" (UID: \"de631e52-dcb0-49c6-8bd9-df4085c7ffcc\") " pod="openshift-console-operator/console-operator-58897d9998-rhdrn" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.798373 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5a7680b4-5a24-4bda-a8af-6d4d3949b969-etcd-ca\") pod \"etcd-operator-b45778765-6ldjd\" (UID: \"5a7680b4-5a24-4bda-a8af-6d4d3949b969\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6ldjd" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.798699 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5a7680b4-5a24-4bda-a8af-6d4d3949b969-etcd-client\") pod \"etcd-operator-b45778765-6ldjd\" (UID: \"5a7680b4-5a24-4bda-a8af-6d4d3949b969\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6ldjd" Jan 26 15:36:32 crc 
kubenswrapper[4896]: I0126 15:36:32.798978 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5a7680b4-5a24-4bda-a8af-6d4d3949b969-etcd-service-ca\") pod \"etcd-operator-b45778765-6ldjd\" (UID: \"5a7680b4-5a24-4bda-a8af-6d4d3949b969\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6ldjd" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.799357 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8428c0c6-79c5-46d3-a6eb-5126303dfd60-ca-trust-extracted\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.799427 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8428c0c6-79c5-46d3-a6eb-5126303dfd60-registry-tls\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.799461 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7032c8a7-9079-4063-bc82-a621052567ba-config\") pod \"kube-apiserver-operator-766d6c64bb-j67k8\" (UID: \"7032c8a7-9079-4063-bc82-a621052567ba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j67k8" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.799915 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a7680b4-5a24-4bda-a8af-6d4d3949b969-config\") pod \"etcd-operator-b45778765-6ldjd\" (UID: \"5a7680b4-5a24-4bda-a8af-6d4d3949b969\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6ldjd" Jan 26 15:36:32 
crc kubenswrapper[4896]: I0126 15:36:32.801224 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8428c0c6-79c5-46d3-a6eb-5126303dfd60-installation-pull-secrets\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.801790 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a7680b4-5a24-4bda-a8af-6d4d3949b969-serving-cert\") pod \"etcd-operator-b45778765-6ldjd\" (UID: \"5a7680b4-5a24-4bda-a8af-6d4d3949b969\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6ldjd" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.801843 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.806311 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de631e52-dcb0-49c6-8bd9-df4085c7ffcc-serving-cert\") pod \"console-operator-58897d9998-rhdrn\" (UID: \"de631e52-dcb0-49c6-8bd9-df4085c7ffcc\") " pod="openshift-console-operator/console-operator-58897d9998-rhdrn" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.820777 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.858641 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.863232 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.897172 4896 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:32 crc kubenswrapper[4896]: E0126 15:36:32.897833 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:33.397746123 +0000 UTC m=+151.179626526 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.901144 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/18973342-45ec-44b6-8456-c813d08240a3-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-sw5mq\" (UID: \"18973342-45ec-44b6-8456-c813d08240a3\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-sw5mq" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.901212 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c290f80c-4e19-4618-9ec2-2bc47df395fd-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-p79qr\" (UID: \"c290f80c-4e19-4618-9ec2-2bc47df395fd\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-p79qr" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.901250 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxsfk\" (UniqueName: \"kubernetes.io/projected/69ebaa33-5170-43dd-b2fb-9d77f487c938-kube-api-access-bxsfk\") pod \"catalog-operator-68c6474976-cdvp4\" (UID: \"69ebaa33-5170-43dd-b2fb-9d77f487c938\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cdvp4" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.901289 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5aecf14a-cf97-41d8-b037-58f39a0a19bf-stats-auth\") pod \"router-default-5444994796-ms78m\" (UID: \"5aecf14a-cf97-41d8-b037-58f39a0a19bf\") " pod="openshift-ingress/router-default-5444994796-ms78m" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.901320 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5aecf14a-cf97-41d8-b037-58f39a0a19bf-metrics-certs\") pod \"router-default-5444994796-ms78m\" (UID: \"5aecf14a-cf97-41d8-b037-58f39a0a19bf\") " pod="openshift-ingress/router-default-5444994796-ms78m" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.901352 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-hwqlq\" (UID: \"317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hwqlq" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.901398 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-k45bj\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.901435 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6347e55-b670-46f1-aff6-f30e10c492f4-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-sk9lz\" (UID: \"f6347e55-b670-46f1-aff6-f30e10c492f4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sk9lz" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.901463 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg2z2\" (UniqueName: \"kubernetes.io/projected/404d67ad-059b-4858-a767-2716ca48dfbc-kube-api-access-jg2z2\") pod \"service-ca-operator-777779d784-w474z\" (UID: \"404d67ad-059b-4858-a767-2716ca48dfbc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w474z" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.901501 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f6d49f0f-270b-4a18-827f-e3cb5a9fb202-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-wg2jv\" (UID: \"f6d49f0f-270b-4a18-827f-e3cb5a9fb202\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wg2jv" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.901544 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/69ebaa33-5170-43dd-b2fb-9d77f487c938-profile-collector-cert\") pod \"catalog-operator-68c6474976-cdvp4\" (UID: \"69ebaa33-5170-43dd-b2fb-9d77f487c938\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cdvp4" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.901600 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c0c7df3f-b79d-4bd5-b67d-79f785c8ffee-webhook-cert\") pod \"packageserver-d55dfcdfc-x7ndr\" (UID: \"c0c7df3f-b79d-4bd5-b67d-79f785c8ffee\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x7ndr" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.901646 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/cc2e0e26-cabe-4cca-97f5-941022cff800-node-bootstrap-token\") pod \"machine-config-server-ld6jq\" (UID: \"cc2e0e26-cabe-4cca-97f5-941022cff800\") " pod="openshift-machine-config-operator/machine-config-server-ld6jq" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.901680 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37cb3473-29d9-40ae-be5a-5ee548397d58-client-ca\") pod \"route-controller-manager-6576b87f9c-772nw\" (UID: \"37cb3473-29d9-40ae-be5a-5ee548397d58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-772nw" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.901717 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c733e876-d72a-4f1c-be58-d529f9807e61-bound-sa-token\") pod \"ingress-operator-5b745b69d9-wwnjn\" (UID: \"c733e876-d72a-4f1c-be58-d529f9807e61\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wwnjn" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.901751 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7p2c\" (UniqueName: 
\"kubernetes.io/projected/dff44ba1-22c6-42ed-9fc2-984240a9515e-kube-api-access-m7p2c\") pod \"cluster-image-registry-operator-dc59b4c8b-5wn9k\" (UID: \"dff44ba1-22c6-42ed-9fc2-984240a9515e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5wn9k" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.903462 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-hwqlq\" (UID: \"317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hwqlq" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.903485 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f6d49f0f-270b-4a18-827f-e3cb5a9fb202-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-wg2jv\" (UID: \"f6d49f0f-270b-4a18-827f-e3cb5a9fb202\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wg2jv" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.903628 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-k45bj\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.904143 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c0c7df3f-b79d-4bd5-b67d-79f785c8ffee-tmpfs\") pod \"packageserver-d55dfcdfc-x7ndr\" (UID: \"c0c7df3f-b79d-4bd5-b67d-79f785c8ffee\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x7ndr" Jan 26 15:36:32 crc 
kubenswrapper[4896]: I0126 15:36:32.908899 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn6g2\" (UniqueName: \"kubernetes.io/projected/5aecf14a-cf97-41d8-b037-58f39a0a19bf-kube-api-access-kn6g2\") pod \"router-default-5444994796-ms78m\" (UID: \"5aecf14a-cf97-41d8-b037-58f39a0a19bf\") " pod="openshift-ingress/router-default-5444994796-ms78m" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.908942 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd8s6\" (UniqueName: \"kubernetes.io/projected/7ce1a770-33b9-4639-bc56-b93bf8627884-kube-api-access-fd8s6\") pod \"ingress-canary-n6zp9\" (UID: \"7ce1a770-33b9-4639-bc56-b93bf8627884\") " pod="openshift-ingress-canary/ingress-canary-n6zp9" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.905412 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.908979 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2496d14f-9aa6-4ee7-9db9-5bca63fa5a54-plugins-dir\") pod \"csi-hostpathplugin-bhztt\" (UID: \"2496d14f-9aa6-4ee7-9db9-5bca63fa5a54\") " pod="hostpath-provisioner/csi-hostpathplugin-bhztt" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.904543 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c0c7df3f-b79d-4bd5-b67d-79f785c8ffee-tmpfs\") pod \"packageserver-d55dfcdfc-x7ndr\" (UID: \"c0c7df3f-b79d-4bd5-b67d-79f785c8ffee\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x7ndr" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.909095 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/e1cbe94d-b2c9-4632-8a2b-1066967ed241-audit-dir\") pod \"oauth-openshift-558db77b4-k45bj\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.909131 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eaabb754-7520-4b59-97bf-2d7ae577191c-config-volume\") pod \"dns-default-7n4vw\" (UID: \"eaabb754-7520-4b59-97bf-2d7ae577191c\") " pod="openshift-dns/dns-default-7n4vw" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.909151 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2496d14f-9aa6-4ee7-9db9-5bca63fa5a54-registration-dir\") pod \"csi-hostpathplugin-bhztt\" (UID: \"2496d14f-9aa6-4ee7-9db9-5bca63fa5a54\") " pod="hostpath-provisioner/csi-hostpathplugin-bhztt" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.909186 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlzz7\" (UniqueName: \"kubernetes.io/projected/2496d14f-9aa6-4ee7-9db9-5bca63fa5a54-kube-api-access-mlzz7\") pod \"csi-hostpathplugin-bhztt\" (UID: \"2496d14f-9aa6-4ee7-9db9-5bca63fa5a54\") " pod="hostpath-provisioner/csi-hostpathplugin-bhztt" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.909203 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e1cbe94d-b2c9-4632-8a2b-1066967ed241-audit-dir\") pod \"oauth-openshift-558db77b4-k45bj\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.909218 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: 
\"kubernetes.io/host-path/2496d14f-9aa6-4ee7-9db9-5bca63fa5a54-csi-data-dir\") pod \"csi-hostpathplugin-bhztt\" (UID: \"2496d14f-9aa6-4ee7-9db9-5bca63fa5a54\") " pod="hostpath-provisioner/csi-hostpathplugin-bhztt" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.909239 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c733e876-d72a-4f1c-be58-d529f9807e61-metrics-tls\") pod \"ingress-operator-5b745b69d9-wwnjn\" (UID: \"c733e876-d72a-4f1c-be58-d529f9807e61\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wwnjn" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.909265 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3a8b421e-f755-4bc6-89f7-03aa4a309a87-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lw2tr\" (UID: \"3a8b421e-f755-4bc6-89f7-03aa4a309a87\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw2tr" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.909273 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2496d14f-9aa6-4ee7-9db9-5bca63fa5a54-plugins-dir\") pod \"csi-hostpathplugin-bhztt\" (UID: \"2496d14f-9aa6-4ee7-9db9-5bca63fa5a54\") " pod="hostpath-provisioner/csi-hostpathplugin-bhztt" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.909288 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr6cd\" (UniqueName: \"kubernetes.io/projected/37cb3473-29d9-40ae-be5a-5ee548397d58-kube-api-access-rr6cd\") pod \"route-controller-manager-6576b87f9c-772nw\" (UID: \"37cb3473-29d9-40ae-be5a-5ee548397d58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-772nw" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.909331 4896 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gpj9l\" (UniqueName: \"kubernetes.io/projected/65d79adb-6464-4157-924d-ffadb4ed5d16-kube-api-access-gpj9l\") pod \"control-plane-machine-set-operator-78cbb6b69f-7gnv9\" (UID: \"65d79adb-6464-4157-924d-ffadb4ed5d16\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7gnv9" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.909341 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2496d14f-9aa6-4ee7-9db9-5bca63fa5a54-registration-dir\") pod \"csi-hostpathplugin-bhztt\" (UID: \"2496d14f-9aa6-4ee7-9db9-5bca63fa5a54\") " pod="hostpath-provisioner/csi-hostpathplugin-bhztt" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.909361 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/404d67ad-059b-4858-a767-2716ca48dfbc-serving-cert\") pod \"service-ca-operator-777779d784-w474z\" (UID: \"404d67ad-059b-4858-a767-2716ca48dfbc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w474z" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.909407 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2496d14f-9aa6-4ee7-9db9-5bca63fa5a54-mountpoint-dir\") pod \"csi-hostpathplugin-bhztt\" (UID: \"2496d14f-9aa6-4ee7-9db9-5bca63fa5a54\") " pod="hostpath-provisioner/csi-hostpathplugin-bhztt" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.909437 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-k45bj\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.909441 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2496d14f-9aa6-4ee7-9db9-5bca63fa5a54-csi-data-dir\") pod \"csi-hostpathplugin-bhztt\" (UID: \"2496d14f-9aa6-4ee7-9db9-5bca63fa5a54\") " pod="hostpath-provisioner/csi-hostpathplugin-bhztt" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.909463 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-k45bj\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.909480 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2496d14f-9aa6-4ee7-9db9-5bca63fa5a54-mountpoint-dir\") pod \"csi-hostpathplugin-bhztt\" (UID: \"2496d14f-9aa6-4ee7-9db9-5bca63fa5a54\") " pod="hostpath-provisioner/csi-hostpathplugin-bhztt" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.909516 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c290f80c-4e19-4618-9ec2-2bc47df395fd-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-p79qr\" (UID: \"c290f80c-4e19-4618-9ec2-2bc47df395fd\") " pod="openshift-marketplace/marketplace-operator-79b997595-p79qr" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.909545 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/61b9dfca-9718-4ee7-bd12-efd6ab5ca9b5-package-server-manager-serving-cert\") 
pod \"package-server-manager-789f6589d5-5fzf2\" (UID: \"61b9dfca-9718-4ee7-bd12-efd6ab5ca9b5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5fzf2" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.909573 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-hwqlq\" (UID: \"317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hwqlq" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.911149 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-k45bj\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.911177 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/fe23c111-8e9a-4456-8973-6aa7a78c52e6-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-w6jkc\" (UID: \"fe23c111-8e9a-4456-8973-6aa7a78c52e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w6jkc" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.911199 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfvpm\" (UniqueName: \"kubernetes.io/projected/c733e876-d72a-4f1c-be58-d529f9807e61-kube-api-access-rfvpm\") pod \"ingress-operator-5b745b69d9-wwnjn\" (UID: \"c733e876-d72a-4f1c-be58-d529f9807e61\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wwnjn" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.911218 4896 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2496d14f-9aa6-4ee7-9db9-5bca63fa5a54-socket-dir\") pod \"csi-hostpathplugin-bhztt\" (UID: \"2496d14f-9aa6-4ee7-9db9-5bca63fa5a54\") " pod="hostpath-provisioner/csi-hostpathplugin-bhztt" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.911246 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0-serving-cert\") pod \"apiserver-7bbb656c7d-hwqlq\" (UID: \"317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hwqlq" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.911269 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/10c61de8-b81d-44ba-a406-d4a5c453a464-signing-cabundle\") pod \"service-ca-9c57cc56f-gl252\" (UID: \"10c61de8-b81d-44ba-a406-d4a5c453a464\") " pod="openshift-service-ca/service-ca-9c57cc56f-gl252" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.911286 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c0c7df3f-b79d-4bd5-b67d-79f785c8ffee-apiservice-cert\") pod \"packageserver-d55dfcdfc-x7ndr\" (UID: \"c0c7df3f-b79d-4bd5-b67d-79f785c8ffee\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x7ndr" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.911303 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvjrr\" (UniqueName: \"kubernetes.io/projected/298f103b-bf7b-40db-ace2-2780e91fde2c-kube-api-access-nvjrr\") pod \"collect-profiles-29490690-2zp5m\" (UID: \"298f103b-bf7b-40db-ace2-2780e91fde2c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-2zp5m" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.911322 
4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404d67ad-059b-4858-a767-2716ca48dfbc-config\") pod \"service-ca-operator-777779d784-w474z\" (UID: \"404d67ad-059b-4858-a767-2716ca48dfbc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w474z" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.911337 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-k45bj\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.911362 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-k45bj\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.911381 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hbvt\" (UniqueName: \"kubernetes.io/projected/f6d49f0f-270b-4a18-827f-e3cb5a9fb202-kube-api-access-4hbvt\") pod \"machine-config-controller-84d6567774-wg2jv\" (UID: \"f6d49f0f-270b-4a18-827f-e3cb5a9fb202\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wg2jv" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.911402 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dff44ba1-22c6-42ed-9fc2-984240a9515e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-5wn9k\" (UID: 
\"dff44ba1-22c6-42ed-9fc2-984240a9515e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5wn9k" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.911417 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/10c61de8-b81d-44ba-a406-d4a5c453a464-signing-key\") pod \"service-ca-9c57cc56f-gl252\" (UID: \"10c61de8-b81d-44ba-a406-d4a5c453a464\") " pod="openshift-service-ca/service-ca-9c57cc56f-gl252" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.911433 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e1cbe94d-b2c9-4632-8a2b-1066967ed241-audit-policies\") pod \"oauth-openshift-558db77b4-k45bj\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.911452 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c733e876-d72a-4f1c-be58-d529f9807e61-trusted-ca\") pod \"ingress-operator-5b745b69d9-wwnjn\" (UID: \"c733e876-d72a-4f1c-be58-d529f9807e61\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wwnjn" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.911472 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-k45bj\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.911494 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5hww\" (UniqueName: 
\"kubernetes.io/projected/f6347e55-b670-46f1-aff6-f30e10c492f4-kube-api-access-v5hww\") pod \"kube-storage-version-migrator-operator-b67b599dd-sk9lz\" (UID: \"f6347e55-b670-46f1-aff6-f30e10c492f4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sk9lz" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.911510 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9zlh\" (UniqueName: \"kubernetes.io/projected/cc2e0e26-cabe-4cca-97f5-941022cff800-kube-api-access-w9zlh\") pod \"machine-config-server-ld6jq\" (UID: \"cc2e0e26-cabe-4cca-97f5-941022cff800\") " pod="openshift-machine-config-operator/machine-config-server-ld6jq" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.911525 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fthr6\" (UniqueName: \"kubernetes.io/projected/eaabb754-7520-4b59-97bf-2d7ae577191c-kube-api-access-fthr6\") pod \"dns-default-7n4vw\" (UID: \"eaabb754-7520-4b59-97bf-2d7ae577191c\") " pod="openshift-dns/dns-default-7n4vw" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.911532 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-hwqlq\" (UID: \"317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hwqlq" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.911542 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f6d49f0f-270b-4a18-827f-e3cb5a9fb202-proxy-tls\") pod \"machine-config-controller-84d6567774-wg2jv\" (UID: \"f6d49f0f-270b-4a18-827f-e3cb5a9fb202\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wg2jv" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 
15:36:32.911607 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/dff44ba1-22c6-42ed-9fc2-984240a9515e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-5wn9k\" (UID: \"dff44ba1-22c6-42ed-9fc2-984240a9515e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5wn9k" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.911629 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2496d14f-9aa6-4ee7-9db9-5bca63fa5a54-socket-dir\") pod \"csi-hostpathplugin-bhztt\" (UID: \"2496d14f-9aa6-4ee7-9db9-5bca63fa5a54\") " pod="hostpath-provisioner/csi-hostpathplugin-bhztt" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.911645 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csrz7\" (UniqueName: \"kubernetes.io/projected/18973342-45ec-44b6-8456-c813d08240a3-kube-api-access-csrz7\") pod \"multus-admission-controller-857f4d67dd-sw5mq\" (UID: \"18973342-45ec-44b6-8456-c813d08240a3\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-sw5mq" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.911669 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/298f103b-bf7b-40db-ace2-2780e91fde2c-config-volume\") pod \"collect-profiles-29490690-2zp5m\" (UID: \"298f103b-bf7b-40db-ace2-2780e91fde2c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-2zp5m" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.911687 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0-encryption-config\") pod \"apiserver-7bbb656c7d-hwqlq\" (UID: \"317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0\") 
" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hwqlq" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.911708 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf9gw\" (UniqueName: \"kubernetes.io/projected/fe23c111-8e9a-4456-8973-6aa7a78c52e6-kube-api-access-bf9gw\") pod \"cluster-samples-operator-665b6dd947-w6jkc\" (UID: \"fe23c111-8e9a-4456-8973-6aa7a78c52e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w6jkc" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.911737 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7bf7\" (UniqueName: \"kubernetes.io/projected/c0c7df3f-b79d-4bd5-b67d-79f785c8ffee-kube-api-access-t7bf7\") pod \"packageserver-d55dfcdfc-x7ndr\" (UID: \"c0c7df3f-b79d-4bd5-b67d-79f785c8ffee\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x7ndr" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.911768 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/69ebaa33-5170-43dd-b2fb-9d77f487c938-srv-cert\") pod \"catalog-operator-68c6474976-cdvp4\" (UID: \"69ebaa33-5170-43dd-b2fb-9d77f487c938\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cdvp4" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.911790 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-k45bj\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.911821 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.911847 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6347e55-b670-46f1-aff6-f30e10c492f4-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-sk9lz\" (UID: \"f6347e55-b670-46f1-aff6-f30e10c492f4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sk9lz" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.911887 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5aecf14a-cf97-41d8-b037-58f39a0a19bf-default-certificate\") pod \"router-default-5444994796-ms78m\" (UID: \"5aecf14a-cf97-41d8-b037-58f39a0a19bf\") " pod="openshift-ingress/router-default-5444994796-ms78m" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.911908 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37cb3473-29d9-40ae-be5a-5ee548397d58-config\") pod \"route-controller-manager-6576b87f9c-772nw\" (UID: \"37cb3473-29d9-40ae-be5a-5ee548397d58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-772nw" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.911913 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-k45bj\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.911928 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0-audit-dir\") pod \"apiserver-7bbb656c7d-hwqlq\" (UID: \"317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hwqlq" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.911966 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3a8b421e-f755-4bc6-89f7-03aa4a309a87-srv-cert\") pod \"olm-operator-6b444d44fb-lw2tr\" (UID: \"3a8b421e-f755-4bc6-89f7-03aa4a309a87\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw2tr" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.911988 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/298f103b-bf7b-40db-ace2-2780e91fde2c-secret-volume\") pod \"collect-profiles-29490690-2zp5m\" (UID: \"298f103b-bf7b-40db-ace2-2780e91fde2c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-2zp5m" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.912008 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-k45bj\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.912029 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hplm8\" (UniqueName: \"kubernetes.io/projected/3a8b421e-f755-4bc6-89f7-03aa4a309a87-kube-api-access-hplm8\") pod 
\"olm-operator-6b444d44fb-lw2tr\" (UID: \"3a8b421e-f755-4bc6-89f7-03aa4a309a87\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw2tr" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.912047 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7xnn\" (UniqueName: \"kubernetes.io/projected/e1cbe94d-b2c9-4632-8a2b-1066967ed241-kube-api-access-v7xnn\") pod \"oauth-openshift-558db77b4-k45bj\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.912071 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh9c4\" (UniqueName: \"kubernetes.io/projected/61b9dfca-9718-4ee7-bd12-efd6ab5ca9b5-kube-api-access-hh9c4\") pod \"package-server-manager-789f6589d5-5fzf2\" (UID: \"61b9dfca-9718-4ee7-bd12-efd6ab5ca9b5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5fzf2" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.912095 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2ppd\" (UniqueName: \"kubernetes.io/projected/c290f80c-4e19-4618-9ec2-2bc47df395fd-kube-api-access-c2ppd\") pod \"marketplace-operator-79b997595-p79qr\" (UID: \"c290f80c-4e19-4618-9ec2-2bc47df395fd\") " pod="openshift-marketplace/marketplace-operator-79b997595-p79qr" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.912123 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-k45bj\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.912143 4896 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dff44ba1-22c6-42ed-9fc2-984240a9515e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-5wn9k\" (UID: \"dff44ba1-22c6-42ed-9fc2-984240a9515e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5wn9k" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.912162 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/cc2e0e26-cabe-4cca-97f5-941022cff800-certs\") pod \"machine-config-server-ld6jq\" (UID: \"cc2e0e26-cabe-4cca-97f5-941022cff800\") " pod="openshift-machine-config-operator/machine-config-server-ld6jq" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.912180 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-486dk\" (UniqueName: \"kubernetes.io/projected/10c61de8-b81d-44ba-a406-d4a5c453a464-kube-api-access-486dk\") pod \"service-ca-9c57cc56f-gl252\" (UID: \"10c61de8-b81d-44ba-a406-d4a5c453a464\") " pod="openshift-service-ca/service-ca-9c57cc56f-gl252" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.912199 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/eaabb754-7520-4b59-97bf-2d7ae577191c-metrics-tls\") pod \"dns-default-7n4vw\" (UID: \"eaabb754-7520-4b59-97bf-2d7ae577191c\") " pod="openshift-dns/dns-default-7n4vw" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.912275 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37cb3473-29d9-40ae-be5a-5ee548397d58-serving-cert\") pod \"route-controller-manager-6576b87f9c-772nw\" (UID: \"37cb3473-29d9-40ae-be5a-5ee548397d58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-772nw" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 
15:36:32.912363 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0-audit-policies\") pod \"apiserver-7bbb656c7d-hwqlq\" (UID: \"317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hwqlq" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.912385 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5aecf14a-cf97-41d8-b037-58f39a0a19bf-service-ca-bundle\") pod \"router-default-5444994796-ms78m\" (UID: \"5aecf14a-cf97-41d8-b037-58f39a0a19bf\") " pod="openshift-ingress/router-default-5444994796-ms78m" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.912430 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0-etcd-client\") pod \"apiserver-7bbb656c7d-hwqlq\" (UID: \"317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hwqlq" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.912452 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/65d79adb-6464-4157-924d-ffadb4ed5d16-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-7gnv9\" (UID: \"65d79adb-6464-4157-924d-ffadb4ed5d16\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7gnv9" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.912501 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wpg6\" (UniqueName: \"kubernetes.io/projected/317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0-kube-api-access-8wpg6\") pod \"apiserver-7bbb656c7d-hwqlq\" (UID: 
\"317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hwqlq" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.912519 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-k45bj\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.912540 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7ce1a770-33b9-4639-bc56-b93bf8627884-cert\") pod \"ingress-canary-n6zp9\" (UID: \"7ce1a770-33b9-4639-bc56-b93bf8627884\") " pod="openshift-ingress-canary/ingress-canary-n6zp9" Jan 26 15:36:32 crc kubenswrapper[4896]: E0126 15:36:32.912858 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:33.412843836 +0000 UTC m=+151.194724229 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.912974 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0-audit-dir\") pod \"apiserver-7bbb656c7d-hwqlq\" (UID: \"317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hwqlq" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.914018 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c733e876-d72a-4f1c-be58-d529f9807e61-metrics-tls\") pod \"ingress-operator-5b745b69d9-wwnjn\" (UID: \"c733e876-d72a-4f1c-be58-d529f9807e61\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wwnjn" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.914766 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0-audit-policies\") pod \"apiserver-7bbb656c7d-hwqlq\" (UID: \"317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hwqlq" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.915002 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-k45bj\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.915245 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-k45bj\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.915399 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dff44ba1-22c6-42ed-9fc2-984240a9515e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-5wn9k\" (UID: \"dff44ba1-22c6-42ed-9fc2-984240a9515e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5wn9k" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.915489 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c733e876-d72a-4f1c-be58-d529f9807e61-trusted-ca\") pod \"ingress-operator-5b745b69d9-wwnjn\" (UID: \"c733e876-d72a-4f1c-be58-d529f9807e61\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wwnjn" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.916026 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0-encryption-config\") pod \"apiserver-7bbb656c7d-hwqlq\" (UID: \"317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hwqlq" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.917149 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0-serving-cert\") pod \"apiserver-7bbb656c7d-hwqlq\" 
(UID: \"317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hwqlq" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.917627 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0-etcd-client\") pod \"apiserver-7bbb656c7d-hwqlq\" (UID: \"317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hwqlq" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.918188 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/dff44ba1-22c6-42ed-9fc2-984240a9515e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-5wn9k\" (UID: \"dff44ba1-22c6-42ed-9fc2-984240a9515e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5wn9k" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.918994 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e1cbe94d-b2c9-4632-8a2b-1066967ed241-audit-policies\") pod \"oauth-openshift-558db77b4-k45bj\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.919567 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-k45bj\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.919624 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-k45bj\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.920109 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.920752 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-k45bj\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.920775 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-k45bj\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.921979 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-k45bj\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.922166 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-k45bj\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.922449 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-k45bj\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.939911 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.960425 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.967927 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/65d79adb-6464-4157-924d-ffadb4ed5d16-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-7gnv9\" (UID: \"65d79adb-6464-4157-924d-ffadb4ed5d16\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7gnv9" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.983968 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 26 15:36:32 crc kubenswrapper[4896]: I0126 15:36:32.997069 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/3a8b421e-f755-4bc6-89f7-03aa4a309a87-srv-cert\") pod \"olm-operator-6b444d44fb-lw2tr\" (UID: \"3a8b421e-f755-4bc6-89f7-03aa4a309a87\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw2tr" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.000766 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.013885 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:33 crc kubenswrapper[4896]: E0126 15:36:33.014020 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:33.513994075 +0000 UTC m=+151.295874458 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.014559 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:33 crc kubenswrapper[4896]: E0126 15:36:33.015050 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:33.515025691 +0000 UTC m=+151.296906084 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.020299 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.025620 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/69ebaa33-5170-43dd-b2fb-9d77f487c938-profile-collector-cert\") pod \"catalog-operator-68c6474976-cdvp4\" (UID: \"69ebaa33-5170-43dd-b2fb-9d77f487c938\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cdvp4" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.027290 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/298f103b-bf7b-40db-ace2-2780e91fde2c-secret-volume\") pod \"collect-profiles-29490690-2zp5m\" (UID: \"298f103b-bf7b-40db-ace2-2780e91fde2c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-2zp5m" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.032779 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3a8b421e-f755-4bc6-89f7-03aa4a309a87-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lw2tr\" (UID: \"3a8b421e-f755-4bc6-89f7-03aa4a309a87\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw2tr" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.039958 4896 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.060391 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.080293 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.085133 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37cb3473-29d9-40ae-be5a-5ee548397d58-config\") pod \"route-controller-manager-6576b87f9c-772nw\" (UID: \"37cb3473-29d9-40ae-be5a-5ee548397d58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-772nw" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.100868 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.108526 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37cb3473-29d9-40ae-be5a-5ee548397d58-serving-cert\") pod \"route-controller-manager-6576b87f9c-772nw\" (UID: \"37cb3473-29d9-40ae-be5a-5ee548397d58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-772nw" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.116367 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:33 crc kubenswrapper[4896]: E0126 15:36:33.116518 4896 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:33.616498518 +0000 UTC m=+151.398378921 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.116669 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:33 crc kubenswrapper[4896]: E0126 15:36:33.117262 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:33.617243207 +0000 UTC m=+151.399123640 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.120666 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.144868 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.146298 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37cb3473-29d9-40ae-be5a-5ee548397d58-client-ca\") pod \"route-controller-manager-6576b87f9c-772nw\" (UID: \"37cb3473-29d9-40ae-be5a-5ee548397d58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-772nw" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.160636 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.181232 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.200497 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.218787 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:33 crc kubenswrapper[4896]: E0126 15:36:33.219817 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:33.719796233 +0000 UTC m=+151.501676626 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.220681 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.241194 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.244748 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/fe23c111-8e9a-4456-8973-6aa7a78c52e6-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-w6jkc\" (UID: \"fe23c111-8e9a-4456-8973-6aa7a78c52e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w6jkc" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.261024 
4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.281109 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.300314 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.306299 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/18973342-45ec-44b6-8456-c813d08240a3-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-sw5mq\" (UID: \"18973342-45ec-44b6-8456-c813d08240a3\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-sw5mq" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.320775 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.321092 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:33 crc kubenswrapper[4896]: E0126 15:36:33.321716 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:33.82169914 +0000 UTC m=+151.603579533 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.340106 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.350090 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5aecf14a-cf97-41d8-b037-58f39a0a19bf-metrics-certs\") pod \"router-default-5444994796-ms78m\" (UID: \"5aecf14a-cf97-41d8-b037-58f39a0a19bf\") " pod="openshift-ingress/router-default-5444994796-ms78m" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.360105 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.381055 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.385477 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5aecf14a-cf97-41d8-b037-58f39a0a19bf-default-certificate\") pod \"router-default-5444994796-ms78m\" (UID: \"5aecf14a-cf97-41d8-b037-58f39a0a19bf\") " pod="openshift-ingress/router-default-5444994796-ms78m" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.400596 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 
15:36:33.406615 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5aecf14a-cf97-41d8-b037-58f39a0a19bf-stats-auth\") pod \"router-default-5444994796-ms78m\" (UID: \"5aecf14a-cf97-41d8-b037-58f39a0a19bf\") " pod="openshift-ingress/router-default-5444994796-ms78m" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.419280 4896 request.go:700] Waited for 1.002871483s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&limit=500&resourceVersion=0 Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.421382 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.422045 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:33 crc kubenswrapper[4896]: E0126 15:36:33.422253 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:33.922226641 +0000 UTC m=+151.704107034 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.422652 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:33 crc kubenswrapper[4896]: E0126 15:36:33.423083 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:33.923074194 +0000 UTC m=+151.704954587 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.425148 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5aecf14a-cf97-41d8-b037-58f39a0a19bf-service-ca-bundle\") pod \"router-default-5444994796-ms78m\" (UID: \"5aecf14a-cf97-41d8-b037-58f39a0a19bf\") " pod="openshift-ingress/router-default-5444994796-ms78m" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.440615 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.461540 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.465842 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c0c7df3f-b79d-4bd5-b67d-79f785c8ffee-apiservice-cert\") pod \"packageserver-d55dfcdfc-x7ndr\" (UID: \"c0c7df3f-b79d-4bd5-b67d-79f785c8ffee\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x7ndr" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.468520 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c0c7df3f-b79d-4bd5-b67d-79f785c8ffee-webhook-cert\") pod \"packageserver-d55dfcdfc-x7ndr\" (UID: \"c0c7df3f-b79d-4bd5-b67d-79f785c8ffee\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x7ndr" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.481691 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.501355 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.506467 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6347e55-b670-46f1-aff6-f30e10c492f4-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-sk9lz\" (UID: \"f6347e55-b670-46f1-aff6-f30e10c492f4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sk9lz" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.522146 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.523652 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:33 crc kubenswrapper[4896]: E0126 15:36:33.523824 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:34.02379811 +0000 UTC m=+151.805678503 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.523940 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6347e55-b670-46f1-aff6-f30e10c492f4-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-sk9lz\" (UID: \"f6347e55-b670-46f1-aff6-f30e10c492f4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sk9lz" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.524336 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:33 crc kubenswrapper[4896]: E0126 15:36:33.524639 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:34.024625201 +0000 UTC m=+151.806505594 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.540521 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.560979 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.581963 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.586665 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/69ebaa33-5170-43dd-b2fb-9d77f487c938-srv-cert\") pod \"catalog-operator-68c6474976-cdvp4\" (UID: \"69ebaa33-5170-43dd-b2fb-9d77f487c938\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cdvp4" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.601791 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.621912 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.624314 4896 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/298f103b-bf7b-40db-ace2-2780e91fde2c-config-volume\") pod \"collect-profiles-29490690-2zp5m\" (UID: \"298f103b-bf7b-40db-ace2-2780e91fde2c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-2zp5m" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.626549 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:33 crc kubenswrapper[4896]: E0126 15:36:33.626687 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:34.126659793 +0000 UTC m=+151.908540186 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.626998 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:33 crc kubenswrapper[4896]: E0126 15:36:33.627344 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:34.127334031 +0000 UTC m=+151.909214424 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.640457 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.652481 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/61b9dfca-9718-4ee7-bd12-efd6ab5ca9b5-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5fzf2\" (UID: \"61b9dfca-9718-4ee7-bd12-efd6ab5ca9b5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5fzf2" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.660041 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.680208 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.684145 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404d67ad-059b-4858-a767-2716ca48dfbc-config\") pod \"service-ca-operator-777779d784-w474z\" (UID: \"404d67ad-059b-4858-a767-2716ca48dfbc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w474z" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.701217 4896 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.715015 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/404d67ad-059b-4858-a767-2716ca48dfbc-serving-cert\") pod \"service-ca-operator-777779d784-w474z\" (UID: \"404d67ad-059b-4858-a767-2716ca48dfbc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w474z" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.720929 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.728968 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:33 crc kubenswrapper[4896]: E0126 15:36:33.729196 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:34.229150556 +0000 UTC m=+152.011030949 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.729602 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:33 crc kubenswrapper[4896]: E0126 15:36:33.729965 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:34.229956978 +0000 UTC m=+152.011837371 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.741252 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.760975 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.781041 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.801353 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.806558 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c290f80c-4e19-4618-9ec2-2bc47df395fd-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-p79qr\" (UID: \"c290f80c-4e19-4618-9ec2-2bc47df395fd\") " pod="openshift-marketplace/marketplace-operator-79b997595-p79qr" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.831676 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:33 crc kubenswrapper[4896]: E0126 15:36:33.831975 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:34.331905846 +0000 UTC m=+152.113786309 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.833324 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:33 crc kubenswrapper[4896]: E0126 15:36:33.833701 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:34.333683924 +0000 UTC m=+152.115564317 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.835074 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.840541 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.841813 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c290f80c-4e19-4618-9ec2-2bc47df395fd-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-p79qr\" (UID: \"c290f80c-4e19-4618-9ec2-2bc47df395fd\") " pod="openshift-marketplace/marketplace-operator-79b997595-p79qr" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.861378 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.881836 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.885210 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/10c61de8-b81d-44ba-a406-d4a5c453a464-signing-cabundle\") pod \"service-ca-9c57cc56f-gl252\" (UID: \"10c61de8-b81d-44ba-a406-d4a5c453a464\") " pod="openshift-service-ca/service-ca-9c57cc56f-gl252" Jan 26 
15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.900475 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 26 15:36:33 crc kubenswrapper[4896]: E0126 15:36:33.903665 4896 secret.go:188] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Jan 26 15:36:33 crc kubenswrapper[4896]: E0126 15:36:33.903738 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc2e0e26-cabe-4cca-97f5-941022cff800-node-bootstrap-token podName:cc2e0e26-cabe-4cca-97f5-941022cff800 nodeName:}" failed. No retries permitted until 2026-01-26 15:36:34.403718419 +0000 UTC m=+152.185598812 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/cc2e0e26-cabe-4cca-97f5-941022cff800-node-bootstrap-token") pod "machine-config-server-ld6jq" (UID: "cc2e0e26-cabe-4cca-97f5-941022cff800") : failed to sync secret cache: timed out waiting for the condition Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.908285 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/10c61de8-b81d-44ba-a406-d4a5c453a464-signing-key\") pod \"service-ca-9c57cc56f-gl252\" (UID: \"10c61de8-b81d-44ba-a406-d4a5c453a464\") " pod="openshift-service-ca/service-ca-9c57cc56f-gl252" Jan 26 15:36:33 crc kubenswrapper[4896]: E0126 15:36:33.909350 4896 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Jan 26 15:36:33 crc kubenswrapper[4896]: E0126 15:36:33.909419 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eaabb754-7520-4b59-97bf-2d7ae577191c-config-volume podName:eaabb754-7520-4b59-97bf-2d7ae577191c nodeName:}" failed. 
No retries permitted until 2026-01-26 15:36:34.409399561 +0000 UTC m=+152.191279954 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/eaabb754-7520-4b59-97bf-2d7ae577191c-config-volume") pod "dns-default-7n4vw" (UID: "eaabb754-7520-4b59-97bf-2d7ae577191c") : failed to sync configmap cache: timed out waiting for the condition Jan 26 15:36:33 crc kubenswrapper[4896]: E0126 15:36:33.913487 4896 secret.go:188] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 26 15:36:33 crc kubenswrapper[4896]: E0126 15:36:33.913532 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ce1a770-33b9-4639-bc56-b93bf8627884-cert podName:7ce1a770-33b9-4639-bc56-b93bf8627884 nodeName:}" failed. No retries permitted until 2026-01-26 15:36:34.413521971 +0000 UTC m=+152.195402364 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7ce1a770-33b9-4639-bc56-b93bf8627884-cert") pod "ingress-canary-n6zp9" (UID: "7ce1a770-33b9-4639-bc56-b93bf8627884") : failed to sync secret cache: timed out waiting for the condition Jan 26 15:36:33 crc kubenswrapper[4896]: E0126 15:36:33.914985 4896 secret.go:188] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Jan 26 15:36:33 crc kubenswrapper[4896]: E0126 15:36:33.915045 4896 secret.go:188] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition Jan 26 15:36:33 crc kubenswrapper[4896]: E0126 15:36:33.915086 4896 secret.go:188] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Jan 26 15:36:33 crc kubenswrapper[4896]: E0126 15:36:33.915105 4896 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eaabb754-7520-4b59-97bf-2d7ae577191c-metrics-tls podName:eaabb754-7520-4b59-97bf-2d7ae577191c nodeName:}" failed. No retries permitted until 2026-01-26 15:36:34.415092073 +0000 UTC m=+152.196972466 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/eaabb754-7520-4b59-97bf-2d7ae577191c-metrics-tls") pod "dns-default-7n4vw" (UID: "eaabb754-7520-4b59-97bf-2d7ae577191c") : failed to sync secret cache: timed out waiting for the condition Jan 26 15:36:33 crc kubenswrapper[4896]: E0126 15:36:33.915123 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6d49f0f-270b-4a18-827f-e3cb5a9fb202-proxy-tls podName:f6d49f0f-270b-4a18-827f-e3cb5a9fb202 nodeName:}" failed. No retries permitted until 2026-01-26 15:36:34.415116573 +0000 UTC m=+152.196996966 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/f6d49f0f-270b-4a18-827f-e3cb5a9fb202-proxy-tls") pod "machine-config-controller-84d6567774-wg2jv" (UID: "f6d49f0f-270b-4a18-827f-e3cb5a9fb202") : failed to sync secret cache: timed out waiting for the condition Jan 26 15:36:33 crc kubenswrapper[4896]: E0126 15:36:33.915159 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc2e0e26-cabe-4cca-97f5-941022cff800-certs podName:cc2e0e26-cabe-4cca-97f5-941022cff800 nodeName:}" failed. No retries permitted until 2026-01-26 15:36:34.415143414 +0000 UTC m=+152.197023807 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/cc2e0e26-cabe-4cca-97f5-941022cff800-certs") pod "machine-config-server-ld6jq" (UID: "cc2e0e26-cabe-4cca-97f5-941022cff800") : failed to sync secret cache: timed out waiting for the condition Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.919704 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.934493 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:33 crc kubenswrapper[4896]: E0126 15:36:33.935165 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:34.435116749 +0000 UTC m=+152.216997142 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.935791 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:33 crc kubenswrapper[4896]: E0126 15:36:33.936148 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:34.436132886 +0000 UTC m=+152.218013279 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.940712 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.959952 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 26 15:36:33 crc kubenswrapper[4896]: I0126 15:36:33.980633 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.016281 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtlbr\" (UniqueName: \"kubernetes.io/projected/0ae7fad5-49a0-4745-9f42-9e47aa5614b7-kube-api-access-gtlbr\") pod \"machine-approver-56656f9798-gx9b8\" (UID: \"0ae7fad5-49a0-4745-9f42-9e47aa5614b7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gx9b8" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.036755 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:34 crc kubenswrapper[4896]: E0126 15:36:34.036860 4896 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:34.536835342 +0000 UTC m=+152.318715745 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.037621 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:34 crc kubenswrapper[4896]: E0126 15:36:34.037883 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:34.53787518 +0000 UTC m=+152.319755573 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.055027 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x224r\" (UniqueName: \"kubernetes.io/projected/7df72ba3-ab2b-4500-a44f-cc77c9771c33-kube-api-access-x224r\") pod \"openshift-apiserver-operator-796bbdcf4f-ckpmm\" (UID: \"7df72ba3-ab2b-4500-a44f-cc77c9771c33\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ckpmm" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.056316 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcmp7\" (UniqueName: \"kubernetes.io/projected/0752f58c-f532-48fb-b192-30c2f8614059-kube-api-access-rcmp7\") pod \"machine-api-operator-5694c8668f-66hlb\" (UID: \"0752f58c-f532-48fb-b192-30c2f8614059\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-66hlb" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.069735 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gx9b8" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.073305 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29vl8\" (UniqueName: \"kubernetes.io/projected/551d129e-dcc5-4e55-89d1-68607191e923-kube-api-access-29vl8\") pod \"authentication-operator-69f744f599-h47xb\" (UID: \"551d129e-dcc5-4e55-89d1-68607191e923\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-h47xb" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.096982 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njkp7\" (UniqueName: \"kubernetes.io/projected/0d634623-470e-42c0-b550-fac7a770530d-kube-api-access-njkp7\") pod \"controller-manager-879f6c89f-5nvtk\" (UID: \"0d634623-470e-42c0-b550-fac7a770530d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5nvtk" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.120930 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.121100 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsdl9\" (UniqueName: \"kubernetes.io/projected/8757f648-a97f-4590-a332-6ee3bb30fa52-kube-api-access-vsdl9\") pod \"openshift-controller-manager-operator-756b6f6bc6-czxnh\" (UID: \"8757f648-a97f-4590-a332-6ee3bb30fa52\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-czxnh" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.139027 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:34 crc kubenswrapper[4896]: E0126 15:36:34.139714 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:34.639699785 +0000 UTC m=+152.421580168 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.140092 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.160188 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.179939 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.189616 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-h47xb" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.209539 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-czxnh" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.217314 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp57h\" (UniqueName: \"kubernetes.io/projected/1399e6aa-14ba-40e6-aec2-b3268e5b7102-kube-api-access-wp57h\") pod \"apiserver-76f77b778f-gb6wx\" (UID: \"1399e6aa-14ba-40e6-aec2-b3268e5b7102\") " pod="openshift-apiserver/apiserver-76f77b778f-gb6wx" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.236863 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k82nh\" (UniqueName: \"kubernetes.io/projected/930ffea7-a937-407b-ae73-9b22885a6aad-kube-api-access-k82nh\") pod \"openshift-config-operator-7777fb866f-5tvfv\" (UID: \"930ffea7-a937-407b-ae73-9b22885a6aad\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5tvfv" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.240332 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:34 crc kubenswrapper[4896]: E0126 15:36:34.240835 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:34.740824152 +0000 UTC m=+152.522704545 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.263632 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5nvtk" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.303272 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-gb6wx" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.306689 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-66hlb" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.332741 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ckpmm" Jan 26 15:36:34 crc kubenswrapper[4896]: E0126 15:36:34.342976 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:34.842938465 +0000 UTC m=+152.624818868 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.342704 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.343350 4896 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.343515 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:34 crc kubenswrapper[4896]: E0126 15:36:34.343925 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:34.84390846 +0000 UTC m=+152.625788853 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.374488 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.381009 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.386969 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhjrh\" (UniqueName: \"kubernetes.io/projected/f98717ba-8501-4233-9959-efbd73aabbd9-kube-api-access-bhjrh\") pod \"migrator-59844c95c7-bnntw\" (UID: \"f98717ba-8501-4233-9959-efbd73aabbd9\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bnntw" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.390401 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m2zl\" (UniqueName: \"kubernetes.io/projected/51353d35-53fc-4769-9870-6598a2df021c-kube-api-access-7m2zl\") pod \"machine-config-operator-74547568cd-ln65f\" (UID: \"51353d35-53fc-4769-9870-6598a2df021c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ln65f" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.398026 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v5hj\" (UniqueName: \"kubernetes.io/projected/4cb0cc2a-b5c6-4599-bfea-59703789fb7b-kube-api-access-5v5hj\") pod \"dns-operator-744455d44c-p8n8h\" (UID: 
\"4cb0cc2a-b5c6-4599-bfea-59703789fb7b\") " pod="openshift-dns-operator/dns-operator-744455d44c-p8n8h" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.399800 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjrj6\" (UniqueName: \"kubernetes.io/projected/09601473-06d9-4938-876d-ea6e1b9ffc91-kube-api-access-bjrj6\") pod \"console-f9d7485db-z6479\" (UID: \"09601473-06d9-4938-876d-ea6e1b9ffc91\") " pod="openshift-console/console-f9d7485db-z6479" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.400799 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.403273 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6793088e-6a2c-4abf-be95-b686e3904d1c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-h4hvh\" (UID: \"6793088e-6a2c-4abf-be95-b686e3904d1c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h4hvh" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.424925 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-z6479" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.429715 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.438987 4896 request.go:700] Waited for 1.827748861s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dnode-bootstrapper-token&limit=500&resourceVersion=0 Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.440882 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.444699 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.444982 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/cc2e0e26-cabe-4cca-97f5-941022cff800-certs\") pod \"machine-config-server-ld6jq\" (UID: \"cc2e0e26-cabe-4cca-97f5-941022cff800\") " pod="openshift-machine-config-operator/machine-config-server-ld6jq" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.445020 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/eaabb754-7520-4b59-97bf-2d7ae577191c-metrics-tls\") pod \"dns-default-7n4vw\" (UID: \"eaabb754-7520-4b59-97bf-2d7ae577191c\") " pod="openshift-dns/dns-default-7n4vw" Jan 26 15:36:34 crc 
kubenswrapper[4896]: I0126 15:36:34.445075 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7ce1a770-33b9-4639-bc56-b93bf8627884-cert\") pod \"ingress-canary-n6zp9\" (UID: \"7ce1a770-33b9-4639-bc56-b93bf8627884\") " pod="openshift-ingress-canary/ingress-canary-n6zp9" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.445156 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/cc2e0e26-cabe-4cca-97f5-941022cff800-node-bootstrap-token\") pod \"machine-config-server-ld6jq\" (UID: \"cc2e0e26-cabe-4cca-97f5-941022cff800\") " pod="openshift-machine-config-operator/machine-config-server-ld6jq" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.445243 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eaabb754-7520-4b59-97bf-2d7ae577191c-config-volume\") pod \"dns-default-7n4vw\" (UID: \"eaabb754-7520-4b59-97bf-2d7ae577191c\") " pod="openshift-dns/dns-default-7n4vw" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.445368 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f6d49f0f-270b-4a18-827f-e3cb5a9fb202-proxy-tls\") pod \"machine-config-controller-84d6567774-wg2jv\" (UID: \"f6d49f0f-270b-4a18-827f-e3cb5a9fb202\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wg2jv" Jan 26 15:36:34 crc kubenswrapper[4896]: E0126 15:36:34.446039 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:34.946019253 +0000 UTC m=+152.727899656 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.469186 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/cc2e0e26-cabe-4cca-97f5-941022cff800-certs\") pod \"machine-config-server-ld6jq\" (UID: \"cc2e0e26-cabe-4cca-97f5-941022cff800\") " pod="openshift-machine-config-operator/machine-config-server-ld6jq" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.584018 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h4hvh" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.584548 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-p8n8h" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.585530 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ln65f" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.585707 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:34 crc kubenswrapper[4896]: E0126 15:36:34.586501 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:35.086475533 +0000 UTC m=+152.868355916 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.586558 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/cc2e0e26-cabe-4cca-97f5-941022cff800-node-bootstrap-token\") pod \"machine-config-server-ld6jq\" (UID: \"cc2e0e26-cabe-4cca-97f5-941022cff800\") " pod="openshift-machine-config-operator/machine-config-server-ld6jq" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.587280 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5tvfv" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.588067 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f6d49f0f-270b-4a18-827f-e3cb5a9fb202-proxy-tls\") pod \"machine-config-controller-84d6567774-wg2jv\" (UID: \"f6d49f0f-270b-4a18-827f-e3cb5a9fb202\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wg2jv" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.591616 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7ce1a770-33b9-4639-bc56-b93bf8627884-cert\") pod \"ingress-canary-n6zp9\" (UID: \"7ce1a770-33b9-4639-bc56-b93bf8627884\") " pod="openshift-ingress-canary/ingress-canary-n6zp9" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.607193 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7032c8a7-9079-4063-bc82-a621052567ba-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-j67k8\" (UID: \"7032c8a7-9079-4063-bc82-a621052567ba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j67k8" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.607341 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.608742 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.608875 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.609266 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bnntw" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.609522 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eaabb754-7520-4b59-97bf-2d7ae577191c-config-volume\") pod \"dns-default-7n4vw\" (UID: \"eaabb754-7520-4b59-97bf-2d7ae577191c\") " pod="openshift-dns/dns-default-7n4vw" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.613668 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8428c0c6-79c5-46d3-a6eb-5126303dfd60-bound-sa-token\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.614148 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/eaabb754-7520-4b59-97bf-2d7ae577191c-metrics-tls\") pod \"dns-default-7n4vw\" (UID: \"eaabb754-7520-4b59-97bf-2d7ae577191c\") " pod="openshift-dns/dns-default-7n4vw" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.616290 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb876\" (UniqueName: \"kubernetes.io/projected/a005fba8-0843-41a6-90eb-67a2aa6d0580-kube-api-access-wb876\") pod \"downloads-7954f5f757-rbmml\" (UID: \"a005fba8-0843-41a6-90eb-67a2aa6d0580\") " pod="openshift-console/downloads-7954f5f757-rbmml" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.616384 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gx9b8" event={"ID":"0ae7fad5-49a0-4745-9f42-9e47aa5614b7","Type":"ContainerStarted","Data":"2da99ed028f3663da9816aec375c40f6abc6e7162a1b42d10bde7f38f44d035d"} Jan 26 15:36:34 crc kubenswrapper[4896]: 
I0126 15:36:34.625283 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7f4k\" (UniqueName: \"kubernetes.io/projected/5a7680b4-5a24-4bda-a8af-6d4d3949b969-kube-api-access-n7f4k\") pod \"etcd-operator-b45778765-6ldjd\" (UID: \"5a7680b4-5a24-4bda-a8af-6d4d3949b969\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6ldjd" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.625567 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfnvg\" (UniqueName: \"kubernetes.io/projected/8428c0c6-79c5-46d3-a6eb-5126303dfd60-kube-api-access-pfnvg\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.691901 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:34 crc kubenswrapper[4896]: E0126 15:36:34.692106 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:35.19208024 +0000 UTC m=+152.973960633 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.692396 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:34 crc kubenswrapper[4896]: E0126 15:36:34.692764 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:35.192747138 +0000 UTC m=+152.974627531 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.695017 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s88f9\" (UniqueName: \"kubernetes.io/projected/de631e52-dcb0-49c6-8bd9-df4085c7ffcc-kube-api-access-s88f9\") pod \"console-operator-58897d9998-rhdrn\" (UID: \"de631e52-dcb0-49c6-8bd9-df4085c7ffcc\") " pod="openshift-console-operator/console-operator-58897d9998-rhdrn" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.704544 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxsfk\" (UniqueName: \"kubernetes.io/projected/69ebaa33-5170-43dd-b2fb-9d77f487c938-kube-api-access-bxsfk\") pod \"catalog-operator-68c6474976-cdvp4\" (UID: \"69ebaa33-5170-43dd-b2fb-9d77f487c938\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cdvp4" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.708724 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2eced78-200a-47c3-b4d3-ea5be6867022-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-2p7md\" (UID: \"f2eced78-200a-47c3-b4d3-ea5be6867022\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2p7md" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.717258 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg2z2\" (UniqueName: 
\"kubernetes.io/projected/404d67ad-059b-4858-a767-2716ca48dfbc-kube-api-access-jg2z2\") pod \"service-ca-operator-777779d784-w474z\" (UID: \"404d67ad-059b-4858-a767-2716ca48dfbc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w474z" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.730087 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-6ldjd" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.735021 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c733e876-d72a-4f1c-be58-d529f9807e61-bound-sa-token\") pod \"ingress-operator-5b745b69d9-wwnjn\" (UID: \"c733e876-d72a-4f1c-be58-d529f9807e61\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wwnjn" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.738883 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-rhdrn" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.793026 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:34 crc kubenswrapper[4896]: E0126 15:36:34.793997 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:35.293981297 +0000 UTC m=+153.075861690 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.905593 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:34 crc kubenswrapper[4896]: E0126 15:36:34.905956 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:35.405945764 +0000 UTC m=+153.187826157 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.906289 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-rbmml" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.907264 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cdvp4" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.907731 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-w474z" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.907974 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2p7md" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.908250 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j67k8" Jan 26 15:36:34 crc kubenswrapper[4896]: I0126 15:36:34.964447 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn6g2\" (UniqueName: \"kubernetes.io/projected/5aecf14a-cf97-41d8-b037-58f39a0a19bf-kube-api-access-kn6g2\") pod \"router-default-5444994796-ms78m\" (UID: \"5aecf14a-cf97-41d8-b037-58f39a0a19bf\") " pod="openshift-ingress/router-default-5444994796-ms78m" Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.005026 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hplm8\" (UniqueName: \"kubernetes.io/projected/3a8b421e-f755-4bc6-89f7-03aa4a309a87-kube-api-access-hplm8\") pod \"olm-operator-6b444d44fb-lw2tr\" (UID: \"3a8b421e-f755-4bc6-89f7-03aa4a309a87\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw2tr" Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.010167 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:35 crc kubenswrapper[4896]: E0126 15:36:35.010867 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:35.510842931 +0000 UTC m=+153.292723324 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.017081 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf9gw\" (UniqueName: \"kubernetes.io/projected/fe23c111-8e9a-4456-8973-6aa7a78c52e6-kube-api-access-bf9gw\") pod \"cluster-samples-operator-665b6dd947-w6jkc\" (UID: \"fe23c111-8e9a-4456-8973-6aa7a78c52e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w6jkc" Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.020498 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7bf7\" (UniqueName: \"kubernetes.io/projected/c0c7df3f-b79d-4bd5-b67d-79f785c8ffee-kube-api-access-t7bf7\") pod \"packageserver-d55dfcdfc-x7ndr\" (UID: \"c0c7df3f-b79d-4bd5-b67d-79f785c8ffee\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x7ndr" Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.020718 4896 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7xnn\" (UniqueName: \"kubernetes.io/projected/e1cbe94d-b2c9-4632-8a2b-1066967ed241-kube-api-access-v7xnn\") pod \"oauth-openshift-558db77b4-k45bj\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.049567 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh9c4\" (UniqueName: \"kubernetes.io/projected/61b9dfca-9718-4ee7-bd12-efd6ab5ca9b5-kube-api-access-hh9c4\") pod \"package-server-manager-789f6589d5-5fzf2\" (UID: \"61b9dfca-9718-4ee7-bd12-efd6ab5ca9b5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5fzf2" Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.051456 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlzz7\" (UniqueName: \"kubernetes.io/projected/2496d14f-9aa6-4ee7-9db9-5bca63fa5a54-kube-api-access-mlzz7\") pod \"csi-hostpathplugin-bhztt\" (UID: \"2496d14f-9aa6-4ee7-9db9-5bca63fa5a54\") " pod="hostpath-provisioner/csi-hostpathplugin-bhztt" Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.052540 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd8s6\" (UniqueName: \"kubernetes.io/projected/7ce1a770-33b9-4639-bc56-b93bf8627884-kube-api-access-fd8s6\") pod \"ingress-canary-n6zp9\" (UID: \"7ce1a770-33b9-4639-bc56-b93bf8627884\") " pod="openshift-ingress-canary/ingress-canary-n6zp9" Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.054722 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2ppd\" (UniqueName: \"kubernetes.io/projected/c290f80c-4e19-4618-9ec2-2bc47df395fd-kube-api-access-c2ppd\") pod \"marketplace-operator-79b997595-p79qr\" (UID: \"c290f80c-4e19-4618-9ec2-2bc47df395fd\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-p79qr" Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.056122 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfvpm\" (UniqueName: \"kubernetes.io/projected/c733e876-d72a-4f1c-be58-d529f9807e61-kube-api-access-rfvpm\") pod \"ingress-operator-5b745b69d9-wwnjn\" (UID: \"c733e876-d72a-4f1c-be58-d529f9807e61\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wwnjn" Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.056432 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.057153 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvjrr\" (UniqueName: \"kubernetes.io/projected/298f103b-bf7b-40db-ace2-2780e91fde2c-kube-api-access-nvjrr\") pod \"collect-profiles-29490690-2zp5m\" (UID: \"298f103b-bf7b-40db-ace2-2780e91fde2c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-2zp5m" Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.057381 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr6cd\" (UniqueName: \"kubernetes.io/projected/37cb3473-29d9-40ae-be5a-5ee548397d58-kube-api-access-rr6cd\") pod \"route-controller-manager-6576b87f9c-772nw\" (UID: \"37cb3473-29d9-40ae-be5a-5ee548397d58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-772nw" Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.065352 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpj9l\" (UniqueName: \"kubernetes.io/projected/65d79adb-6464-4157-924d-ffadb4ed5d16-kube-api-access-gpj9l\") pod \"control-plane-machine-set-operator-78cbb6b69f-7gnv9\" (UID: \"65d79adb-6464-4157-924d-ffadb4ed5d16\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7gnv9" Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.090445 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csrz7\" (UniqueName: \"kubernetes.io/projected/18973342-45ec-44b6-8456-c813d08240a3-kube-api-access-csrz7\") pod \"multus-admission-controller-857f4d67dd-sw5mq\" (UID: \"18973342-45ec-44b6-8456-c813d08240a3\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-sw5mq" Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.090740 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7gnv9" Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.104148 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw2tr" Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.106095 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-772nw" Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.111436 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:35 crc kubenswrapper[4896]: E0126 15:36:35.111858 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:35.611840864 +0000 UTC m=+153.393721257 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.113860 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-486dk\" (UniqueName: \"kubernetes.io/projected/10c61de8-b81d-44ba-a406-d4a5c453a464-kube-api-access-486dk\") pod \"service-ca-9c57cc56f-gl252\" (UID: \"10c61de8-b81d-44ba-a406-d4a5c453a464\") " pod="openshift-service-ca/service-ca-9c57cc56f-gl252" Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.130398 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w6jkc" Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.130428 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-sw5mq" Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.133845 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-ms78m" Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.134350 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fthr6\" (UniqueName: \"kubernetes.io/projected/eaabb754-7520-4b59-97bf-2d7ae577191c-kube-api-access-fthr6\") pod \"dns-default-7n4vw\" (UID: \"eaabb754-7520-4b59-97bf-2d7ae577191c\") " pod="openshift-dns/dns-default-7n4vw" Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.134472 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dff44ba1-22c6-42ed-9fc2-984240a9515e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-5wn9k\" (UID: \"dff44ba1-22c6-42ed-9fc2-984240a9515e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5wn9k" Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.142618 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x7ndr" Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.143354 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7p2c\" (UniqueName: \"kubernetes.io/projected/dff44ba1-22c6-42ed-9fc2-984240a9515e-kube-api-access-m7p2c\") pod \"cluster-image-registry-operator-dc59b4c8b-5wn9k\" (UID: \"dff44ba1-22c6-42ed-9fc2-984240a9515e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5wn9k" Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.145284 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9zlh\" (UniqueName: \"kubernetes.io/projected/cc2e0e26-cabe-4cca-97f5-941022cff800-kube-api-access-w9zlh\") pod \"machine-config-server-ld6jq\" (UID: \"cc2e0e26-cabe-4cca-97f5-941022cff800\") " pod="openshift-machine-config-operator/machine-config-server-ld6jq" Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.153446 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5hww\" (UniqueName: \"kubernetes.io/projected/f6347e55-b670-46f1-aff6-f30e10c492f4-kube-api-access-v5hww\") pod \"kube-storage-version-migrator-operator-b67b599dd-sk9lz\" (UID: \"f6347e55-b670-46f1-aff6-f30e10c492f4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sk9lz" Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.168527 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-2zp5m" Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.172037 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hbvt\" (UniqueName: \"kubernetes.io/projected/f6d49f0f-270b-4a18-827f-e3cb5a9fb202-kube-api-access-4hbvt\") pod \"machine-config-controller-84d6567774-wg2jv\" (UID: \"f6d49f0f-270b-4a18-827f-e3cb5a9fb202\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wg2jv" Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.175924 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5fzf2" Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.177996 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wpg6\" (UniqueName: \"kubernetes.io/projected/317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0-kube-api-access-8wpg6\") pod \"apiserver-7bbb656c7d-hwqlq\" (UID: \"317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hwqlq" Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.192635 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-p79qr" Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.212286 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:35 crc kubenswrapper[4896]: E0126 15:36:35.212567 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:35.712543099 +0000 UTC m=+153.494423492 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.212767 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:35 crc kubenswrapper[4896]: E0126 15:36:35.213138 4896 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:35.713127155 +0000 UTC m=+153.495007548 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.224339 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-gl252" Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.258152 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wg2jv" Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.266949 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-n6zp9" Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.291957 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-bhztt" Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.302119 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-ld6jq" Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.315402 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:35 crc kubenswrapper[4896]: E0126 15:36:35.315551 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:35.815524465 +0000 UTC m=+153.597404858 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.315705 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:35 crc kubenswrapper[4896]: E0126 15:36:35.316033 4896 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:35.816020899 +0000 UTC m=+153.597901292 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.322394 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-7n4vw" Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.346096 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wwnjn" Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.365331 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hwqlq" Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.373832 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5wn9k" Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.417549 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:35 crc kubenswrapper[4896]: E0126 15:36:35.419346 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:35.919328284 +0000 UTC m=+153.701208677 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.452292 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sk9lz" Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.525739 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:35 crc kubenswrapper[4896]: E0126 15:36:35.526048 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:36.02602892 +0000 UTC m=+153.807909313 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.629573 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:35 crc kubenswrapper[4896]: E0126 15:36:35.629884 4896 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:36.129859629 +0000 UTC m=+153.911740022 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.638616 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gx9b8" event={"ID":"0ae7fad5-49a0-4745-9f42-9e47aa5614b7","Type":"ContainerStarted","Data":"0ef793a6e055db7c9e1867395b89a57efdb43d9f41be8e405bcdcb4a9b721efd"} Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.647660 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-ms78m" event={"ID":"5aecf14a-cf97-41d8-b037-58f39a0a19bf","Type":"ContainerStarted","Data":"251c46962f6b3f9908f6cfdf2f6d4344e8b403ecb95338aa3e906d6d148b8bd9"} Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.648633 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-ld6jq" event={"ID":"cc2e0e26-cabe-4cca-97f5-941022cff800","Type":"ContainerStarted","Data":"0caca6c7a96f416d0d369381e6260dc552215e478a4cb054e819301e598910c2"} Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.730903 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:35 crc kubenswrapper[4896]: E0126 15:36:35.731298 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:36.231281303 +0000 UTC m=+154.013161706 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.831631 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:35 crc kubenswrapper[4896]: E0126 15:36:35.832060 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:36.332044771 +0000 UTC m=+154.113925164 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:35 crc kubenswrapper[4896]: I0126 15:36:35.933105 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:35 crc kubenswrapper[4896]: E0126 15:36:35.933464 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:36.433449645 +0000 UTC m=+154.215330038 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:36 crc kubenswrapper[4896]: I0126 15:36:36.033801 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:36 crc kubenswrapper[4896]: E0126 15:36:36.034153 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:36.53412298 +0000 UTC m=+154.316003383 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:36 crc kubenswrapper[4896]: I0126 15:36:36.136277 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:36 crc kubenswrapper[4896]: E0126 15:36:36.136651 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:36.636637064 +0000 UTC m=+154.418517457 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:36 crc kubenswrapper[4896]: I0126 15:36:36.237108 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:36 crc kubenswrapper[4896]: E0126 15:36:36.237342 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:36.737308448 +0000 UTC m=+154.519188851 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:36 crc kubenswrapper[4896]: I0126 15:36:36.237511 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:36 crc kubenswrapper[4896]: E0126 15:36:36.237921 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:36.737911544 +0000 UTC m=+154.519791937 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:36 crc kubenswrapper[4896]: I0126 15:36:36.407024 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:36 crc kubenswrapper[4896]: E0126 15:36:36.407300 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:36.907278117 +0000 UTC m=+154.689158510 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:36 crc kubenswrapper[4896]: I0126 15:36:36.407393 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:36 crc kubenswrapper[4896]: E0126 15:36:36.407977 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:36.907963126 +0000 UTC m=+154.689843519 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:36 crc kubenswrapper[4896]: I0126 15:36:36.508761 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:36 crc kubenswrapper[4896]: E0126 15:36:36.509153 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:37.009133624 +0000 UTC m=+154.791014017 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:36 crc kubenswrapper[4896]: I0126 15:36:36.639359 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:36 crc kubenswrapper[4896]: E0126 15:36:36.639724 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:37.139707848 +0000 UTC m=+154.921588261 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:36 crc kubenswrapper[4896]: I0126 15:36:36.769876 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-ld6jq" event={"ID":"cc2e0e26-cabe-4cca-97f5-941022cff800","Type":"ContainerStarted","Data":"e0411cf00be95d211b04472fe33af043f4fe324e9b435e39174c49755ac228b6"} Jan 26 15:36:36 crc kubenswrapper[4896]: I0126 15:36:36.787221 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:36 crc kubenswrapper[4896]: E0126 15:36:36.787843 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:37.287816973 +0000 UTC m=+155.069697366 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:36 crc kubenswrapper[4896]: I0126 15:36:36.788190 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:36 crc kubenswrapper[4896]: E0126 15:36:36.788762 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:37.288732978 +0000 UTC m=+155.070613371 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:36 crc kubenswrapper[4896]: I0126 15:36:36.889467 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:36 crc kubenswrapper[4896]: E0126 15:36:36.890416 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:37.390385338 +0000 UTC m=+155.172265721 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:36 crc kubenswrapper[4896]: I0126 15:36:36.891606 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gx9b8" event={"ID":"0ae7fad5-49a0-4745-9f42-9e47aa5614b7","Type":"ContainerStarted","Data":"989daa88141f904570d19da46684751876583544b9a0debf0f15cbe9c0d20004"} Jan 26 15:36:36 crc kubenswrapper[4896]: I0126 15:36:36.891650 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-ms78m" event={"ID":"5aecf14a-cf97-41d8-b037-58f39a0a19bf","Type":"ContainerStarted","Data":"4c7a38e72ce4d1c179a7df8327a67fdd476f3e743f656b928697cac90fac6b47"} Jan 26 15:36:36 crc kubenswrapper[4896]: I0126 15:36:36.991724 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:36 crc kubenswrapper[4896]: E0126 15:36:36.993378 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:37.493362645 +0000 UTC m=+155.275243028 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.082677 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gx9b8" podStartSLOduration=133.082658815 podStartE2EDuration="2m13.082658815s" podCreationTimestamp="2026-01-26 15:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:37.079483899 +0000 UTC m=+154.861364292" watchObservedRunningTime="2026-01-26 15:36:37.082658815 +0000 UTC m=+154.864539218" Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.095421 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:37 crc kubenswrapper[4896]: E0126 15:36:37.095752 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:37.595732935 +0000 UTC m=+155.377613328 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.103383 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-ld6jq" podStartSLOduration=5.103366959 podStartE2EDuration="5.103366959s" podCreationTimestamp="2026-01-26 15:36:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:37.100358268 +0000 UTC m=+154.882238671" watchObservedRunningTime="2026-01-26 15:36:37.103366959 +0000 UTC m=+154.885247352" Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.132402 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-ms78m" podStartSLOduration=132.132372255 podStartE2EDuration="2m12.132372255s" podCreationTimestamp="2026-01-26 15:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:37.132192691 +0000 UTC m=+154.914073174" watchObservedRunningTime="2026-01-26 15:36:37.132372255 +0000 UTC m=+154.914252648" Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.136684 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-ms78m" Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.144947 4896 patch_prober.go:28] interesting pod/router-default-5444994796-ms78m container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 15:36:37 crc kubenswrapper[4896]: [-]has-synced failed: reason withheld Jan 26 15:36:37 crc kubenswrapper[4896]: [+]process-running ok Jan 26 15:36:37 crc kubenswrapper[4896]: healthz check failed Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.144994 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ms78m" podUID="5aecf14a-cf97-41d8-b037-58f39a0a19bf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.196876 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:37 crc kubenswrapper[4896]: E0126 15:36:37.197250 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:37.697236581 +0000 UTC m=+155.479116984 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.302198 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:37 crc kubenswrapper[4896]: E0126 15:36:37.302552 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:37.80253509 +0000 UTC m=+155.584415483 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.356396 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-p8n8h"] Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.371207 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5nvtk"] Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.376884 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-h47xb"] Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.378750 4896 csr.go:261] certificate signing request csr-r4vqn is approved, waiting to be issued Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.384971 4896 csr.go:257] certificate signing request csr-r4vqn is issued Jan 26 15:36:37 crc kubenswrapper[4896]: W0126 15:36:37.399261 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cb0cc2a_b5c6_4599_bfea_59703789fb7b.slice/crio-92a9104c4fef1455c92d85a3ed61ceb4a3d8c12fc49be2a23bae996938aaf266 WatchSource:0}: Error finding container 92a9104c4fef1455c92d85a3ed61ceb4a3d8c12fc49be2a23bae996938aaf266: Status 404 returned error can't find the container with id 92a9104c4fef1455c92d85a3ed61ceb4a3d8c12fc49be2a23bae996938aaf266 Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.404141 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:37 crc kubenswrapper[4896]: E0126 15:36:37.404607 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:37.904569601 +0000 UTC m=+155.686449994 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.429014 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-66hlb"] Jan 26 15:36:37 crc kubenswrapper[4896]: W0126 15:36:37.452406 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0752f58c_f532_48fb_b192_30c2f8614059.slice/crio-41db1629ba2ba43d025129bd7eb312d7be61147b2e60e45b2d8d0134cf79f8de WatchSource:0}: Error finding container 41db1629ba2ba43d025129bd7eb312d7be61147b2e60e45b2d8d0134cf79f8de: Status 404 returned error can't find the container with id 41db1629ba2ba43d025129bd7eb312d7be61147b2e60e45b2d8d0134cf79f8de Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.488864 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-console/console-f9d7485db-z6479"] Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.505033 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:37 crc kubenswrapper[4896]: E0126 15:36:37.505266 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:38.005242775 +0000 UTC m=+155.787123168 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.505310 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:37 crc kubenswrapper[4896]: E0126 15:36:37.505695 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 15:36:38.005685588 +0000 UTC m=+155.787565981 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.520607 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ckpmm"] Jan 26 15:36:37 crc kubenswrapper[4896]: W0126 15:36:37.521782 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09601473_06d9_4938_876d_ea6e1b9ffc91.slice/crio-3b3b9071820a43c5fc8026b13cf30b8c5c5e36a98bff55d6071d1d2eb574b967 WatchSource:0}: Error finding container 3b3b9071820a43c5fc8026b13cf30b8c5c5e36a98bff55d6071d1d2eb574b967: Status 404 returned error can't find the container with id 3b3b9071820a43c5fc8026b13cf30b8c5c5e36a98bff55d6071d1d2eb574b967 Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.522061 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-czxnh"] Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.606264 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:37 crc kubenswrapper[4896]: E0126 15:36:37.606637 4896 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:38.106610379 +0000 UTC m=+155.888490802 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.645877 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5tvfv"] Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.657038 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ln65f"] Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.660879 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-6ldjd"] Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.671689 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bnntw"] Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.678968 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-gb6wx"] Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.705527 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h4hvh"] Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.707758 4896 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:37 crc kubenswrapper[4896]: E0126 15:36:37.708177 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:38.208162957 +0000 UTC m=+155.990043350 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.738646 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-sw5mq"] Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.745651 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw2tr"] Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.747780 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-w474z"] Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.750498 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-bhztt"] Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 
15:36:37.767355 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j67k8"]
Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.772690 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-k45bj"]
Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.784250 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7gnv9"]
Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.787044 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-rbmml"]
Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.788690 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-rhdrn"]
Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.798574 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5nvtk" event={"ID":"0d634623-470e-42c0-b550-fac7a770530d","Type":"ContainerStarted","Data":"fb03cda4dbb4a6da9f0c9b7fbb7e779cb45a27447d81ddfdce6d47f761b36ab3"}
Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.798630 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5nvtk" event={"ID":"0d634623-470e-42c0-b550-fac7a770530d","Type":"ContainerStarted","Data":"f37da726568c42ca632af6d32d9e3d9611b9023722816b2e8519ce2aee1e843b"}
Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.799481 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-czxnh" event={"ID":"8757f648-a97f-4590-a332-6ee3bb30fa52","Type":"ContainerStarted","Data":"9022694762d9d9035c3eb910224cbba2e258dbbdbe71b88a695587fe9e7a8c79"}
Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.800270 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-p8n8h" event={"ID":"4cb0cc2a-b5c6-4599-bfea-59703789fb7b","Type":"ContainerStarted","Data":"92a9104c4fef1455c92d85a3ed61ceb4a3d8c12fc49be2a23bae996938aaf266"}
Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.801188 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-z6479" event={"ID":"09601473-06d9-4938-876d-ea6e1b9ffc91","Type":"ContainerStarted","Data":"3b3b9071820a43c5fc8026b13cf30b8c5c5e36a98bff55d6071d1d2eb574b967"}
Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.802389 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-h47xb" event={"ID":"551d129e-dcc5-4e55-89d1-68607191e923","Type":"ContainerStarted","Data":"917f389d52d50e93438a94b94c4efd9d55e8b2b18502e17df1e4bf96dc0f6e29"}
Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.802426 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-h47xb" event={"ID":"551d129e-dcc5-4e55-89d1-68607191e923","Type":"ContainerStarted","Data":"53dc41a641d83f5b0eda938e3835b52eadbcbdad0d4b7e0d7d1752ca35571149"}
Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.803347 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ckpmm" event={"ID":"7df72ba3-ab2b-4500-a44f-cc77c9771c33","Type":"ContainerStarted","Data":"56b4d35fe507a629dd8e1a5ef91afb63d55653b21121ada924c16f73b733d677"}
Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.804560 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-66hlb" event={"ID":"0752f58c-f532-48fb-b192-30c2f8614059","Type":"ContainerStarted","Data":"07dc636e16e1f674023f141c98009f360e592b148f585d5d71b6f7718be90598"}
Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.804639 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-66hlb" event={"ID":"0752f58c-f532-48fb-b192-30c2f8614059","Type":"ContainerStarted","Data":"41db1629ba2ba43d025129bd7eb312d7be61147b2e60e45b2d8d0134cf79f8de"}
Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.808755 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 15:36:37 crc kubenswrapper[4896]: E0126 15:36:37.808916 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:38.308879732 +0000 UTC m=+156.090760125 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.809056 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6"
Jan 26 15:36:37 crc kubenswrapper[4896]: E0126 15:36:37.809378 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:38.309365916 +0000 UTC m=+156.091246309 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.910546 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 15:36:37 crc kubenswrapper[4896]: E0126 15:36:37.912746 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:38.412460575 +0000 UTC m=+156.194341008 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:37 crc kubenswrapper[4896]: I0126 15:36:37.933269 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-772nw"]
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.013054 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6"
Jan 26 15:36:38 crc kubenswrapper[4896]: E0126 15:36:38.013480 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:38.513464108 +0000 UTC m=+156.295344501 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.039470 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-wg2jv"]
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.046521 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5fzf2"]
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.047842 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-p79qr"]
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.053301 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-wwnjn"]
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.055184 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sk9lz"]
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.077147 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x7ndr"]
Jan 26 15:36:38 crc kubenswrapper[4896]: W0126 15:36:38.094097 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a7680b4_5a24_4bda_a8af_6d4d3949b969.slice/crio-0096af4683a6687fa3ff787013cd0c3603769499150876d4801f83365da35dbc WatchSource:0}: Error finding container 0096af4683a6687fa3ff787013cd0c3603769499150876d4801f83365da35dbc: Status 404 returned error can't find the container with id 0096af4683a6687fa3ff787013cd0c3603769499150876d4801f83365da35dbc
Jan 26 15:36:38 crc kubenswrapper[4896]: W0126 15:36:38.097821 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf98717ba_8501_4233_9959_efbd73aabbd9.slice/crio-9a056ed853fb189ecb162b5dc7594512f1ecce5e62f5e16590d3423248053f66 WatchSource:0}: Error finding container 9a056ed853fb189ecb162b5dc7594512f1ecce5e62f5e16590d3423248053f66: Status 404 returned error can't find the container with id 9a056ed853fb189ecb162b5dc7594512f1ecce5e62f5e16590d3423248053f66
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.099463 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490690-2zp5m"]
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.115110 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 15:36:38 crc kubenswrapper[4896]: E0126 15:36:38.115252 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:38.615230462 +0000 UTC m=+156.397110865 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.115286 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6"
Jan 26 15:36:38 crc kubenswrapper[4896]: E0126 15:36:38.115817 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:38.615804797 +0000 UTC m=+156.397685190 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.116766 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5wn9k"]
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.139987 4896 patch_prober.go:28] interesting pod/router-default-5444994796-ms78m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 15:36:38 crc kubenswrapper[4896]: [-]has-synced failed: reason withheld
Jan 26 15:36:38 crc kubenswrapper[4896]: [+]process-running ok
Jan 26 15:36:38 crc kubenswrapper[4896]: healthz check failed
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.140371 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ms78m" podUID="5aecf14a-cf97-41d8-b037-58f39a0a19bf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.170274 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn"
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.216165 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 15:36:38 crc kubenswrapper[4896]: E0126 15:36:38.216372 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:38.716345769 +0000 UTC m=+156.498226162 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.217144 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6"
Jan 26 15:36:38 crc kubenswrapper[4896]: E0126 15:36:38.218250 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:38.71823139 +0000 UTC m=+156.500111783 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.274109 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-7n4vw"]
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.295891 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cdvp4"]
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.314896 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w6jkc"]
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.317906 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 15:36:38 crc kubenswrapper[4896]: E0126 15:36:38.318034 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:38.8180162 +0000 UTC m=+156.599896593 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.318129 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6"
Jan 26 15:36:38 crc kubenswrapper[4896]: E0126 15:36:38.318499 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:38.818483912 +0000 UTC m=+156.600364305 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.354601 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2p7md"]
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.379233 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-gl252"]
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.392351 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-26 15:31:37 +0000 UTC, rotation deadline is 2026-10-13 20:49:26.226778676 +0000 UTC
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.392396 4896 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6245h12m47.834386415s for next certificate rotation
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.402160 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-hwqlq"]
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.428457 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 15:36:38 crc kubenswrapper[4896]: E0126 15:36:38.429123 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:38.929099433 +0000 UTC m=+156.710979846 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.454510 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-n6zp9"]
Jan 26 15:36:38 crc kubenswrapper[4896]: W0126 15:36:38.492779 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10c61de8_b81d_44ba_a406_d4a5c453a464.slice/crio-563b77d473dd1b1937acc5c46b65255790fd0f0fbfb957a84e44fd0f57149606 WatchSource:0}: Error finding container 563b77d473dd1b1937acc5c46b65255790fd0f0fbfb957a84e44fd0f57149606: Status 404 returned error can't find the container with id 563b77d473dd1b1937acc5c46b65255790fd0f0fbfb957a84e44fd0f57149606
Jan 26 15:36:38 crc kubenswrapper[4896]: W0126 15:36:38.494898 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod317cbfb5_64c8_49fa_8a4b_7cf84cc61ab0.slice/crio-ba689c818debcbf658bb7028681f9304c08f5c533aac87d9b56877ec6fee1d04 WatchSource:0}: Error finding container ba689c818debcbf658bb7028681f9304c08f5c533aac87d9b56877ec6fee1d04: Status 404 returned error can't find the container with id ba689c818debcbf658bb7028681f9304c08f5c533aac87d9b56877ec6fee1d04
Jan 26 15:36:38 crc kubenswrapper[4896]: W0126 15:36:38.500611 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ce1a770_33b9_4639_bc56_b93bf8627884.slice/crio-8a614585d2f670e248c5f9f07b290f4cfbfd39a76e85c6c3d6c103e3c42a860d WatchSource:0}: Error finding container 8a614585d2f670e248c5f9f07b290f4cfbfd39a76e85c6c3d6c103e3c42a860d: Status 404 returned error can't find the container with id 8a614585d2f670e248c5f9f07b290f4cfbfd39a76e85c6c3d6c103e3c42a860d
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.529620 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6"
Jan 26 15:36:38 crc kubenswrapper[4896]: E0126 15:36:38.530888 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:39.030869347 +0000 UTC m=+156.812749740 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.630442 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 15:36:38 crc kubenswrapper[4896]: E0126 15:36:38.630749 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:39.13073087 +0000 UTC m=+156.912611263 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.731184 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6"
Jan 26 15:36:38 crc kubenswrapper[4896]: E0126 15:36:38.731667 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:39.231655141 +0000 UTC m=+157.013535534 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.810712 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw2tr" event={"ID":"3a8b421e-f755-4bc6-89f7-03aa4a309a87","Type":"ContainerStarted","Data":"a78395613663cabb3f82a542681f7e0669c63eed5044e035c770b5e856a1c607"}
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.812749 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bnntw" event={"ID":"f98717ba-8501-4233-9959-efbd73aabbd9","Type":"ContainerStarted","Data":"9a056ed853fb189ecb162b5dc7594512f1ecce5e62f5e16590d3423248053f66"}
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.820202 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-n6zp9" event={"ID":"7ce1a770-33b9-4639-bc56-b93bf8627884","Type":"ContainerStarted","Data":"8a614585d2f670e248c5f9f07b290f4cfbfd39a76e85c6c3d6c103e3c42a860d"}
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.832139 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 15:36:38 crc kubenswrapper[4896]: E0126 15:36:38.832370 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:39.332339816 +0000 UTC m=+157.114220209 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.832412 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6"
Jan 26 15:36:38 crc kubenswrapper[4896]: E0126 15:36:38.832882 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:39.332867161 +0000 UTC m=+157.114747554 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.833871 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cdvp4" event={"ID":"69ebaa33-5170-43dd-b2fb-9d77f487c938","Type":"ContainerStarted","Data":"090503abaf70ed3204ca8f5b56b8a446b6f27f5308e2f96fe8fca3c0d967b8c7"}
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.837127 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ckpmm" event={"ID":"7df72ba3-ab2b-4500-a44f-cc77c9771c33","Type":"ContainerStarted","Data":"56dfd0deb94d71ff5e2f21b8d8a2839fb33e1bd0c9d5775cade52ec72780d497"}
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.840478 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hwqlq" event={"ID":"317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0","Type":"ContainerStarted","Data":"ba689c818debcbf658bb7028681f9304c08f5c533aac87d9b56877ec6fee1d04"}
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.843718 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5tvfv" event={"ID":"930ffea7-a937-407b-ae73-9b22885a6aad","Type":"ContainerStarted","Data":"18c386798aff858d6630842b234f2f16d284b7bb403bb3caaeb547a99f908d98"}
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.843748 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5tvfv" event={"ID":"930ffea7-a937-407b-ae73-9b22885a6aad","Type":"ContainerStarted","Data":"d86b79ddd43c0b91827050dbc4e247387c81bf258d2c78ceea2c1162cf7e523f"}
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.844782 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wg2jv" event={"ID":"f6d49f0f-270b-4a18-827f-e3cb5a9fb202","Type":"ContainerStarted","Data":"38a902bc26a47d7cc71602f9b84a68f64b98ef170a78f65999a9ee6c2be09ed7"}
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.845669 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-gl252" event={"ID":"10c61de8-b81d-44ba-a406-d4a5c453a464","Type":"ContainerStarted","Data":"563b77d473dd1b1937acc5c46b65255790fd0f0fbfb957a84e44fd0f57149606"}
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.847795 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sk9lz" event={"ID":"f6347e55-b670-46f1-aff6-f30e10c492f4","Type":"ContainerStarted","Data":"faf57e1c14e0824dabd35c61159d3c3857a90363221bca77f6059fafbf324826"}
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.849100 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5fzf2" event={"ID":"61b9dfca-9718-4ee7-bd12-efd6ab5ca9b5","Type":"ContainerStarted","Data":"c312bbd37228f1fa114a5287f204c65f7fbb98e7d262a2ab70b9a1caf75f1136"}
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.855946 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-p79qr" event={"ID":"c290f80c-4e19-4618-9ec2-2bc47df395fd","Type":"ContainerStarted","Data":"6ef9d08c636adde6e294d934947d95445b21ae1bb3883adf532fbf03cd6689cd"}
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.855995 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-p79qr" event={"ID":"c290f80c-4e19-4618-9ec2-2bc47df395fd","Type":"ContainerStarted","Data":"eae2350dc2dd1bf8c4e00894a01c161d275c9a66f4cf9939a3c4524db0a88fbb"}
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.859564 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bhztt" event={"ID":"2496d14f-9aa6-4ee7-9db9-5bca63fa5a54","Type":"ContainerStarted","Data":"05a3ffc6aa470cf60cef6d92f9b4c7e68031f6eac801051719e85ff0a9edc6fb"}
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.893469 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" event={"ID":"e1cbe94d-b2c9-4632-8a2b-1066967ed241","Type":"ContainerStarted","Data":"f5d368244ffab2bba7435cd270edc83f51340eb7e9056d8bec0bc9f4ac70272b"}
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.917778 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-6ldjd" event={"ID":"5a7680b4-5a24-4bda-a8af-6d4d3949b969","Type":"ContainerStarted","Data":"2e67e9955d9ecdf6a790b8cb4419aef48160e0e7799c0706a0f2b8ffe0393f27"}
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.917829 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-6ldjd" event={"ID":"5a7680b4-5a24-4bda-a8af-6d4d3949b969","Type":"ContainerStarted","Data":"0096af4683a6687fa3ff787013cd0c3603769499150876d4801f83365da35dbc"}
Jan 26 15:36:38 crc kubenswrapper[4896]: I0126 15:36:38.933789 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 15:36:38 crc kubenswrapper[4896]: E0126 15:36:38.934190 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:39.434168942 +0000 UTC m=+157.216049335 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.034859 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6"
Jan 26 15:36:39 crc kubenswrapper[4896]: E0126 15:36:39.035255 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:39.535231677 +0000 UTC m=+157.317112070 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.119284 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-rhdrn" event={"ID":"de631e52-dcb0-49c6-8bd9-df4085c7ffcc","Type":"ContainerStarted","Data":"a5e8eba645a7cef2dac9e78434d9fb8c01f4a4e3f6cc5b132ccddfbd88ce56d4"} Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.119330 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-rhdrn" event={"ID":"de631e52-dcb0-49c6-8bd9-df4085c7ffcc","Type":"ContainerStarted","Data":"8e7c5c267214bd2966cddd7b7feab154c42a88fbacec0f7ae0f17ae518884fab"} Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.131191 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ln65f" event={"ID":"51353d35-53fc-4769-9870-6598a2df021c","Type":"ContainerStarted","Data":"2b2063a10148d405a5671a0c7376098d66f0ed11170163990e205ac9fee03cec"} Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.136631 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:39 crc kubenswrapper[4896]: E0126 15:36:39.137220 4896 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:39.637204247 +0000 UTC m=+157.419084640 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.137942 4896 patch_prober.go:28] interesting pod/router-default-5444994796-ms78m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 15:36:39 crc kubenswrapper[4896]: [-]has-synced failed: reason withheld Jan 26 15:36:39 crc kubenswrapper[4896]: [+]process-running ok Jan 26 15:36:39 crc kubenswrapper[4896]: healthz check failed Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.137980 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ms78m" podUID="5aecf14a-cf97-41d8-b037-58f39a0a19bf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.141984 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-rbmml" event={"ID":"a005fba8-0843-41a6-90eb-67a2aa6d0580","Type":"ContainerStarted","Data":"b9b3623cad3e97078af060a7c1e3a95bf1b92547236d86957f958252f74fd884"} Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.142030 4896 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-console/downloads-7954f5f757-rbmml" event={"ID":"a005fba8-0843-41a6-90eb-67a2aa6d0580","Type":"ContainerStarted","Data":"cd7b86c420fad49a753b0158dd2d3c1f7ece396263c2b09c4f98255c9c6b5cd1"} Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.143768 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7gnv9" event={"ID":"65d79adb-6464-4157-924d-ffadb4ed5d16","Type":"ContainerStarted","Data":"131fd89ae26e21f012a18824928eff43098cfe77a5065e7707d3a6a1e904edcf"} Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.146122 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w6jkc" event={"ID":"fe23c111-8e9a-4456-8973-6aa7a78c52e6","Type":"ContainerStarted","Data":"d092bbce88cfb6f2e93af5ea38ca2425f77b5af2e1e8fee63bc36a11d38ec27a"} Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.155866 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5wn9k" event={"ID":"dff44ba1-22c6-42ed-9fc2-984240a9515e","Type":"ContainerStarted","Data":"a80ac1042a45d57db4241b97fe9b010aba118f8f1974f656f5b323009a537099"} Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.162765 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-p8n8h" event={"ID":"4cb0cc2a-b5c6-4599-bfea-59703789fb7b","Type":"ContainerStarted","Data":"33f0d7b7970425a9e186db30f2af004fb53e9367a6d1b87233a4eafc3c1c5de2"} Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.165601 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-7n4vw" event={"ID":"eaabb754-7520-4b59-97bf-2d7ae577191c","Type":"ContainerStarted","Data":"db2f86af002d9908e8e883f437dc03c30564c9aa1ca1fec43ede82cef62e0319"} Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.167170 4896 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x7ndr" event={"ID":"c0c7df3f-b79d-4bd5-b67d-79f785c8ffee","Type":"ContainerStarted","Data":"28a2cdbbe4d78ed5c4c3d8df6e1f7742716fddb06956157787811cdaf46bdbde"} Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.172093 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j67k8" event={"ID":"7032c8a7-9079-4063-bc82-a621052567ba","Type":"ContainerStarted","Data":"6a4632a35e44d0a5a63f038dfc26eb36c1b4083cb94a4b105db9552669f91051"} Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.189231 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-772nw" event={"ID":"37cb3473-29d9-40ae-be5a-5ee548397d58","Type":"ContainerStarted","Data":"cfbc4677becaeefe6879f65ce98e70ec51e5f999f5123d166a5329415a2d4c77"} Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.189560 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-772nw" event={"ID":"37cb3473-29d9-40ae-be5a-5ee548397d58","Type":"ContainerStarted","Data":"88707bc9ea71b576c224171e5a7f689b0910b7612a85b4dbd1ee93fc73506b4c"} Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.190796 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-772nw" Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.192597 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2p7md" event={"ID":"f2eced78-200a-47c3-b4d3-ea5be6867022","Type":"ContainerStarted","Data":"8791dd0b6592e0b60123923a68018bbaa774685bbd7c233b2f48202abc55a15e"} Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.201284 4896 patch_prober.go:28] interesting 
pod/route-controller-manager-6576b87f9c-772nw container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.201336 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-772nw" podUID="37cb3473-29d9-40ae-be5a-5ee548397d58" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.215346 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-772nw" podStartSLOduration=134.215327558 podStartE2EDuration="2m14.215327558s" podCreationTimestamp="2026-01-26 15:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:39.214761032 +0000 UTC m=+156.996641425" watchObservedRunningTime="2026-01-26 15:36:39.215327558 +0000 UTC m=+156.997207951" Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.218855 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-z6479" event={"ID":"09601473-06d9-4938-876d-ea6e1b9ffc91","Type":"ContainerStarted","Data":"5324e5495ab2ca9a1b1918eed3dadf9fea6b440e68e439fe02b49850a9b1baa2"} Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.222132 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-2zp5m" event={"ID":"298f103b-bf7b-40db-ace2-2780e91fde2c","Type":"ContainerStarted","Data":"b53ac3cb2c4c0841717d5f43ccc359a74ea433ded3e27de7cae0ce18f5aaa2b9"} Jan 26 15:36:39 crc 
kubenswrapper[4896]: I0126 15:36:39.237763 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gb6wx" event={"ID":"1399e6aa-14ba-40e6-aec2-b3268e5b7102","Type":"ContainerStarted","Data":"17c87728829900d8b31f58c2d01d9718054bb3bba3eda4068cd19ea8de02af8e"} Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.237972 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:39 crc kubenswrapper[4896]: E0126 15:36:39.239126 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:39.739105684 +0000 UTC m=+157.520986077 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.244304 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-czxnh" event={"ID":"8757f648-a97f-4590-a332-6ee3bb30fa52","Type":"ContainerStarted","Data":"c113fdaf3169f08c9f5574f93d8106f176ed70c600af620791d47f02288cf88f"} Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.263642 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-z6479" podStartSLOduration=135.26362389 podStartE2EDuration="2m15.26362389s" podCreationTimestamp="2026-01-26 15:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:39.252722588 +0000 UTC m=+157.034602981" watchObservedRunningTime="2026-01-26 15:36:39.26362389 +0000 UTC m=+157.045504283" Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.269818 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-czxnh" podStartSLOduration=135.269801986 podStartE2EDuration="2m15.269801986s" podCreationTimestamp="2026-01-26 15:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:39.269667213 +0000 UTC m=+157.051547606" watchObservedRunningTime="2026-01-26 15:36:39.269801986 
+0000 UTC m=+157.051682379" Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.291065 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-sw5mq" event={"ID":"18973342-45ec-44b6-8456-c813d08240a3","Type":"ContainerStarted","Data":"204e3af090ab6256528e78b21a1aec1965be6c2377b65130c9793ed746c58aff"} Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.315874 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h4hvh" event={"ID":"6793088e-6a2c-4abf-be95-b686e3904d1c","Type":"ContainerStarted","Data":"61813d9950ba6f186112051a191af34b42bae846f51b5aa76c7e7aeb5942da07"} Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.324370 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wwnjn" event={"ID":"c733e876-d72a-4f1c-be58-d529f9807e61","Type":"ContainerStarted","Data":"3c677481f89571a9aa9ce02a66a693c8199c5de469dce2b85e3f99ad06ef9467"} Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.324414 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wwnjn" event={"ID":"c733e876-d72a-4f1c-be58-d529f9807e61","Type":"ContainerStarted","Data":"6983e84a1342d51c86e8e253bcab0ad7e4f1c49cbd3614831b5fdc9448a41cdc"} Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.348528 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:39 crc kubenswrapper[4896]: E0126 15:36:39.348680 4896 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:39.848658587 +0000 UTC m=+157.630538980 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.348774 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:39 crc kubenswrapper[4896]: E0126 15:36:39.349833 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:39.849824228 +0000 UTC m=+157.631704621 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.368101 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-w474z" event={"ID":"404d67ad-059b-4858-a767-2716ca48dfbc","Type":"ContainerStarted","Data":"8c59c9d13aede01d87af25659200df20280d4619bfafb9f48dc296c9b9aaa4ae"} Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.368152 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-w474z" event={"ID":"404d67ad-059b-4858-a767-2716ca48dfbc","Type":"ContainerStarted","Data":"aafc812bc7b945acee9db6630291b746724c1955c436a39fa735bd01eca95969"} Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.369235 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-5nvtk" Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.402371 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-h47xb" podStartSLOduration=135.402351073 podStartE2EDuration="2m15.402351073s" podCreationTimestamp="2026-01-26 15:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:39.400980557 +0000 UTC m=+157.182860940" watchObservedRunningTime="2026-01-26 15:36:39.402351073 +0000 UTC m=+157.184231466" Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 
15:36:39.404902 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-5nvtk" Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.450353 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:39 crc kubenswrapper[4896]: E0126 15:36:39.450502 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:39.950485612 +0000 UTC m=+157.732366005 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.451099 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.451212 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-w474z" podStartSLOduration=134.451194041 podStartE2EDuration="2m14.451194041s" podCreationTimestamp="2026-01-26 15:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:39.450727369 +0000 UTC m=+157.232607762" watchObservedRunningTime="2026-01-26 15:36:39.451194041 +0000 UTC m=+157.233074454" Jan 26 15:36:39 crc kubenswrapper[4896]: E0126 15:36:39.455857 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:39.955842295 +0000 UTC m=+157.737722688 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.532149 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-5nvtk" podStartSLOduration=135.532127198 podStartE2EDuration="2m15.532127198s" podCreationTimestamp="2026-01-26 15:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:39.478902103 +0000 UTC m=+157.260782496" watchObservedRunningTime="2026-01-26 15:36:39.532127198 +0000 UTC m=+157.314007591" Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.556935 4896 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:39 crc kubenswrapper[4896]: E0126 15:36:39.558101 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:40.058070591 +0000 UTC m=+157.839950984 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.673649 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:39 crc kubenswrapper[4896]: E0126 15:36:39.674538 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:40.174519599 +0000 UTC m=+157.956399992 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.776887 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:39 crc kubenswrapper[4896]: E0126 15:36:39.777105 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:40.277091754 +0000 UTC m=+158.058972147 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.878409 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:39 crc kubenswrapper[4896]: E0126 15:36:39.879125 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:40.379110635 +0000 UTC m=+158.160991028 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:39 crc kubenswrapper[4896]: I0126 15:36:39.979863 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:39 crc kubenswrapper[4896]: E0126 15:36:39.981760 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:40.481727261 +0000 UTC m=+158.263607654 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:40 crc kubenswrapper[4896]: I0126 15:36:40.083240 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:40 crc kubenswrapper[4896]: E0126 15:36:40.094073 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:40.594052247 +0000 UTC m=+158.375932650 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:40 crc kubenswrapper[4896]: I0126 15:36:40.149682 4896 patch_prober.go:28] interesting pod/router-default-5444994796-ms78m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 15:36:40 crc kubenswrapper[4896]: [-]has-synced failed: reason withheld Jan 26 15:36:40 crc kubenswrapper[4896]: [+]process-running ok Jan 26 15:36:40 crc kubenswrapper[4896]: healthz check failed Jan 26 15:36:40 crc kubenswrapper[4896]: I0126 15:36:40.150028 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ms78m" podUID="5aecf14a-cf97-41d8-b037-58f39a0a19bf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 15:36:40 crc kubenswrapper[4896]: I0126 15:36:40.184197 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:40 crc kubenswrapper[4896]: E0126 15:36:40.184908 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 15:36:40.684872258 +0000 UTC m=+158.466752651 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:40 crc kubenswrapper[4896]: I0126 15:36:40.285440 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:40 crc kubenswrapper[4896]: E0126 15:36:40.286078 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:40.786057406 +0000 UTC m=+158.567937799 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:40 crc kubenswrapper[4896]: I0126 15:36:40.387053 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:40 crc kubenswrapper[4896]: E0126 15:36:40.387239 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:40.887218224 +0000 UTC m=+158.669098627 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:40 crc kubenswrapper[4896]: I0126 15:36:40.387963 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:40 crc kubenswrapper[4896]: E0126 15:36:40.388374 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:40.888363344 +0000 UTC m=+158.670243737 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:40 crc kubenswrapper[4896]: I0126 15:36:40.488499 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:40 crc kubenswrapper[4896]: E0126 15:36:40.488795 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:40.988776622 +0000 UTC m=+158.770657015 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:40 crc kubenswrapper[4896]: I0126 15:36:40.559517 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cdvp4" event={"ID":"69ebaa33-5170-43dd-b2fb-9d77f487c938","Type":"ContainerStarted","Data":"d656c8167137c6c7d5415c48d9558c133613805999c9e2ef4be3e8af0cb756d9"} Jan 26 15:36:40 crc kubenswrapper[4896]: I0126 15:36:40.560380 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cdvp4" Jan 26 15:36:40 crc kubenswrapper[4896]: I0126 15:36:40.561612 4896 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-cdvp4 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 26 15:36:40 crc kubenswrapper[4896]: I0126 15:36:40.561660 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cdvp4" podUID="69ebaa33-5170-43dd-b2fb-9d77f487c938" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 26 15:36:40 crc kubenswrapper[4896]: I0126 15:36:40.580292 4896 generic.go:334] "Generic (PLEG): container finished" podID="1399e6aa-14ba-40e6-aec2-b3268e5b7102" 
containerID="3ec00bdd95599af391da114b6b654fb7461b9271c27da7ff2409713ffcf0bbeb" exitCode=0 Jan 26 15:36:40 crc kubenswrapper[4896]: I0126 15:36:40.580736 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gb6wx" event={"ID":"1399e6aa-14ba-40e6-aec2-b3268e5b7102","Type":"ContainerDied","Data":"3ec00bdd95599af391da114b6b654fb7461b9271c27da7ff2409713ffcf0bbeb"} Jan 26 15:36:40 crc kubenswrapper[4896]: I0126 15:36:40.592861 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:40 crc kubenswrapper[4896]: E0126 15:36:40.595848 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:41.095832218 +0000 UTC m=+158.877712611 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:40 crc kubenswrapper[4896]: I0126 15:36:40.599159 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cdvp4" podStartSLOduration=135.599139006 podStartE2EDuration="2m15.599139006s" podCreationTimestamp="2026-01-26 15:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:40.588544882 +0000 UTC m=+158.370425285" watchObservedRunningTime="2026-01-26 15:36:40.599139006 +0000 UTC m=+158.381019399" Jan 26 15:36:40 crc kubenswrapper[4896]: I0126 15:36:40.612810 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" event={"ID":"e1cbe94d-b2c9-4632-8a2b-1066967ed241","Type":"ContainerStarted","Data":"a95a08ce5e021667b3400a5449a6db4966235fa79dfc0a43f1cc8c96b3d6a4f7"} Jan 26 15:36:40 crc kubenswrapper[4896]: I0126 15:36:40.613772 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:40 crc kubenswrapper[4896]: I0126 15:36:40.620727 4896 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-k45bj container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" start-of-body= Jan 26 15:36:40 crc kubenswrapper[4896]: I0126 
15:36:40.620784 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" podUID="e1cbe94d-b2c9-4632-8a2b-1066967ed241" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" Jan 26 15:36:40 crc kubenswrapper[4896]: I0126 15:36:40.627377 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7gnv9" event={"ID":"65d79adb-6464-4157-924d-ffadb4ed5d16","Type":"ContainerStarted","Data":"39f00b6bef75894739970c5462b211c173e8fa2e5c89b0f0965c876c5bd0af6d"} Jan 26 15:36:40 crc kubenswrapper[4896]: I0126 15:36:40.672109 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-gl252" event={"ID":"10c61de8-b81d-44ba-a406-d4a5c453a464","Type":"ContainerStarted","Data":"214799c7546cb5982463b10e645bc3f1bc9a884f0ee78e5aed8550c5fa9b595f"} Jan 26 15:36:40 crc kubenswrapper[4896]: I0126 15:36:40.682693 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5wn9k" event={"ID":"dff44ba1-22c6-42ed-9fc2-984240a9515e","Type":"ContainerStarted","Data":"acf18af03e129446f084a707be42ddf59c264a7e9546ec2e1955319bea89ed3f"} Jan 26 15:36:40 crc kubenswrapper[4896]: I0126 15:36:40.684962 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sk9lz" event={"ID":"f6347e55-b670-46f1-aff6-f30e10c492f4","Type":"ContainerStarted","Data":"748bf82cf6bc9ad3b998326d7f6901b4bf9bc1693eda7128af48c7c1c928a11c"} Jan 26 15:36:40 crc kubenswrapper[4896]: I0126 15:36:40.700699 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:40 crc kubenswrapper[4896]: E0126 15:36:40.701310 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:41.2012674 +0000 UTC m=+158.983147853 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:40 crc kubenswrapper[4896]: I0126 15:36:40.707171 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wwnjn" event={"ID":"c733e876-d72a-4f1c-be58-d529f9807e61","Type":"ContainerStarted","Data":"7c0ada78cf7a404339f07979e03a4991d943297b4eabc5c8d58a6ad9472e0d00"} Jan 26 15:36:40 crc kubenswrapper[4896]: I0126 15:36:40.708944 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7gnv9" podStartSLOduration=135.708931565 podStartE2EDuration="2m15.708931565s" podCreationTimestamp="2026-01-26 15:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:40.699941504 +0000 UTC m=+158.481821897" watchObservedRunningTime="2026-01-26 15:36:40.708931565 +0000 UTC m=+158.490811958" Jan 26 15:36:40 
crc kubenswrapper[4896]: I0126 15:36:40.709633 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-7n4vw" event={"ID":"eaabb754-7520-4b59-97bf-2d7ae577191c","Type":"ContainerStarted","Data":"b0728fe98ec80471247bcb7356433c97a22692c54d6acdb0231f6a8e7717deda"} Jan 26 15:36:40 crc kubenswrapper[4896]: I0126 15:36:40.748064 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw2tr" event={"ID":"3a8b421e-f755-4bc6-89f7-03aa4a309a87","Type":"ContainerStarted","Data":"a83eafc9237b1296ace155efcab7012d3d020a882fffe7140ecfef05f9f58429"} Jan 26 15:36:40 crc kubenswrapper[4896]: I0126 15:36:40.749173 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw2tr" Jan 26 15:36:40 crc kubenswrapper[4896]: I0126 15:36:40.756626 4896 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lw2tr container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Jan 26 15:36:40 crc kubenswrapper[4896]: I0126 15:36:40.756825 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw2tr" podUID="3a8b421e-f755-4bc6-89f7-03aa4a309a87" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Jan 26 15:36:40 crc kubenswrapper[4896]: I0126 15:36:40.831980 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:40 crc kubenswrapper[4896]: E0126 15:36:40.837715 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:41.337700919 +0000 UTC m=+159.119581312 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:40 crc kubenswrapper[4896]: I0126 15:36:40.863516 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h4hvh" event={"ID":"6793088e-6a2c-4abf-be95-b686e3904d1c","Type":"ContainerStarted","Data":"4a87d6feb8a5608e573c81c713231f4fc20c418cb04aae6342a831f69eb31eef"} Jan 26 15:36:40 crc kubenswrapper[4896]: I0126 15:36:40.880880 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" podStartSLOduration=136.880860206 podStartE2EDuration="2m16.880860206s" podCreationTimestamp="2026-01-26 15:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:40.746397868 +0000 UTC m=+158.528278261" watchObservedRunningTime="2026-01-26 15:36:40.880860206 +0000 UTC m=+158.662740599" Jan 26 15:36:40 crc kubenswrapper[4896]: I0126 15:36:40.881917 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5wn9k" podStartSLOduration=136.881910195 podStartE2EDuration="2m16.881910195s" podCreationTimestamp="2026-01-26 15:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:40.880703772 +0000 UTC m=+158.662584185" watchObservedRunningTime="2026-01-26 15:36:40.881910195 +0000 UTC m=+158.663790598" Jan 26 15:36:40 crc kubenswrapper[4896]: I0126 15:36:40.881323 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-66hlb" event={"ID":"0752f58c-f532-48fb-b192-30c2f8614059","Type":"ContainerStarted","Data":"e25da45d1d3fb995161ca6d5f3c14d47f8cd272e2f4366f6a4a97501337de1ce"} Jan 26 15:36:40 crc kubenswrapper[4896]: I0126 15:36:40.913500 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wg2jv" event={"ID":"f6d49f0f-270b-4a18-827f-e3cb5a9fb202","Type":"ContainerStarted","Data":"bee538e0446d3faeddf761c5a7581423efc79be5ab5022d26f61bc429e501e23"} Jan 26 15:36:40 crc kubenswrapper[4896]: I0126 15:36:40.913798 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wg2jv" event={"ID":"f6d49f0f-270b-4a18-827f-e3cb5a9fb202","Type":"ContainerStarted","Data":"93fd0a9e3c89331dde7b5b9ad5ee9dad0b379f5ef39912392c4de84657b6a351"} Jan 26 15:36:40 crc kubenswrapper[4896]: I0126 15:36:40.941352 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:40 crc kubenswrapper[4896]: E0126 15:36:40.942883 4896 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:41.442861907 +0000 UTC m=+159.224742300 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:40 crc kubenswrapper[4896]: I0126 15:36:40.974641 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-n6zp9" event={"ID":"7ce1a770-33b9-4639-bc56-b93bf8627884","Type":"ContainerStarted","Data":"7e822ef32ec441fa988303c8c092e0851a3d091c5b78d4681007191452f4fcfb"} Jan 26 15:36:40 crc kubenswrapper[4896]: I0126 15:36:40.996223 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ln65f" event={"ID":"51353d35-53fc-4769-9870-6598a2df021c","Type":"ContainerStarted","Data":"7b27842108f71299608306a972cf2a767544e77feec3955084ef973ddc556b79"} Jan 26 15:36:40 crc kubenswrapper[4896]: I0126 15:36:40.996264 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ln65f" event={"ID":"51353d35-53fc-4769-9870-6598a2df021c","Type":"ContainerStarted","Data":"267d2094bc00c5b2b350f4a000c9287adbe777440d458a89d4d2637275dd3a14"} Jan 26 15:36:40 crc kubenswrapper[4896]: I0126 15:36:40.997967 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-2zp5m" 
event={"ID":"298f103b-bf7b-40db-ace2-2780e91fde2c","Type":"ContainerStarted","Data":"dc06d91128c85ab2035f39222d157e79e096f8858b75878b1c1d81145392357a"} Jan 26 15:36:41 crc kubenswrapper[4896]: I0126 15:36:41.013859 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bhztt" event={"ID":"2496d14f-9aa6-4ee7-9db9-5bca63fa5a54","Type":"ContainerStarted","Data":"a9d7add6ccae2a968126e0ab83de8420c5e5d671d98ebd7b8ef0c4bb33a9b4dc"} Jan 26 15:36:41 crc kubenswrapper[4896]: I0126 15:36:41.019112 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j67k8" event={"ID":"7032c8a7-9079-4063-bc82-a621052567ba","Type":"ContainerStarted","Data":"49a0763f59b542ccc4b122dc2cf49e073db0f9276b0dc5cf937e54e383b7ba1c"} Jan 26 15:36:41 crc kubenswrapper[4896]: I0126 15:36:41.021142 4896 generic.go:334] "Generic (PLEG): container finished" podID="930ffea7-a937-407b-ae73-9b22885a6aad" containerID="18c386798aff858d6630842b234f2f16d284b7bb403bb3caaeb547a99f908d98" exitCode=0 Jan 26 15:36:41 crc kubenswrapper[4896]: I0126 15:36:41.021180 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5tvfv" event={"ID":"930ffea7-a937-407b-ae73-9b22885a6aad","Type":"ContainerDied","Data":"18c386798aff858d6630842b234f2f16d284b7bb403bb3caaeb547a99f908d98"} Jan 26 15:36:41 crc kubenswrapper[4896]: I0126 15:36:41.039826 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5fzf2" event={"ID":"61b9dfca-9718-4ee7-bd12-efd6ab5ca9b5","Type":"ContainerStarted","Data":"f7dd80a3539f447f6bd00ecda01b3d277d4af869eb7c1ce2c8a28d9b8a06ece2"} Jan 26 15:36:41 crc kubenswrapper[4896]: I0126 15:36:41.040397 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5fzf2" Jan 26 
15:36:41 crc kubenswrapper[4896]: I0126 15:36:41.045677 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-sw5mq" event={"ID":"18973342-45ec-44b6-8456-c813d08240a3","Type":"ContainerStarted","Data":"2eb113e217d44d03ff2f8e3a311041885c3e6c46a282cd187722b0f41e96b499"} Jan 26 15:36:41 crc kubenswrapper[4896]: I0126 15:36:41.045907 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:41 crc kubenswrapper[4896]: E0126 15:36:41.047393 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:41.547378034 +0000 UTC m=+159.329258497 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:41 crc kubenswrapper[4896]: I0126 15:36:41.054927 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w6jkc" event={"ID":"fe23c111-8e9a-4456-8973-6aa7a78c52e6","Type":"ContainerStarted","Data":"e77610d5cb640cdc11aab98607c40211ebddce044fe8e3ff2b3a3c05ee4ff67d"} Jan 26 15:36:41 crc kubenswrapper[4896]: I0126 15:36:41.078179 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bnntw" event={"ID":"f98717ba-8501-4233-9959-efbd73aabbd9","Type":"ContainerStarted","Data":"6fdd25157e286d4fcbb4ace48c988af20f1cb1d0cb7023face881bc9d64a23df"} Jan 26 15:36:41 crc kubenswrapper[4896]: I0126 15:36:41.100962 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x7ndr" event={"ID":"c0c7df3f-b79d-4bd5-b67d-79f785c8ffee","Type":"ContainerStarted","Data":"137aa649725a806e49c04cfa20977589e1674f89e7e5ef991233b2fd740df0bb"} Jan 26 15:36:41 crc kubenswrapper[4896]: I0126 15:36:41.103470 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-rhdrn" Jan 26 15:36:41 crc kubenswrapper[4896]: I0126 15:36:41.107265 4896 patch_prober.go:28] interesting pod/console-operator-58897d9998-rhdrn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 
10.217.0.23:8443: connect: connection refused" start-of-body=
Jan 26 15:36:41 crc kubenswrapper[4896]: I0126 15:36:41.107309 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-rhdrn" podUID="de631e52-dcb0-49c6-8bd9-df4085c7ffcc" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused"
Jan 26 15:36:41 crc kubenswrapper[4896]: I0126 15:36:41.123551 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-772nw"
Jan 26 15:36:41 crc kubenswrapper[4896]: I0126 15:36:41.124349 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sk9lz" podStartSLOduration=136.124339834 podStartE2EDuration="2m16.124339834s" podCreationTimestamp="2026-01-26 15:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:40.974043121 +0000 UTC m=+158.755923514" watchObservedRunningTime="2026-01-26 15:36:41.124339834 +0000 UTC m=+158.906220227"
Jan 26 15:36:41 crc kubenswrapper[4896]: I0126 15:36:41.125242 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-gl252" podStartSLOduration=136.125229828 podStartE2EDuration="2m16.125229828s" podCreationTimestamp="2026-01-26 15:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:41.115754754 +0000 UTC m=+158.897635147" watchObservedRunningTime="2026-01-26 15:36:41.125229828 +0000 UTC m=+158.907110221"
Jan 26 15:36:41 crc kubenswrapper[4896]: I0126 15:36:41.143470 4896 patch_prober.go:28] interesting pod/router-default-5444994796-ms78m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 15:36:41 crc kubenswrapper[4896]: [-]has-synced failed: reason withheld
Jan 26 15:36:41 crc kubenswrapper[4896]: [+]process-running ok
Jan 26 15:36:41 crc kubenswrapper[4896]: healthz check failed
Jan 26 15:36:41 crc kubenswrapper[4896]: I0126 15:36:41.143719 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ms78m" podUID="5aecf14a-cf97-41d8-b037-58f39a0a19bf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 15:36:41 crc kubenswrapper[4896]: I0126 15:36:41.148417 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 15:36:41 crc kubenswrapper[4896]: E0126 15:36:41.154343 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:41.654322967 +0000 UTC m=+159.436203360 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:41 crc kubenswrapper[4896]: I0126 15:36:41.246630 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wwnjn" podStartSLOduration=136.246613106 podStartE2EDuration="2m16.246613106s" podCreationTimestamp="2026-01-26 15:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:41.245188018 +0000 UTC m=+159.027068411" watchObservedRunningTime="2026-01-26 15:36:41.246613106 +0000 UTC m=+159.028493499"
Jan 26 15:36:41 crc kubenswrapper[4896]: I0126 15:36:41.256372 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6"
Jan 26 15:36:41 crc kubenswrapper[4896]: E0126 15:36:41.256730 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:41.756710457 +0000 UTC m=+159.538590850 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:41 crc kubenswrapper[4896]: I0126 15:36:41.291469 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-66hlb" podStartSLOduration=136.291451456 podStartE2EDuration="2m16.291451456s" podCreationTimestamp="2026-01-26 15:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:41.284939513 +0000 UTC m=+159.066819906" watchObservedRunningTime="2026-01-26 15:36:41.291451456 +0000 UTC m=+159.073331849"
Jan 26 15:36:41 crc kubenswrapper[4896]: I0126 15:36:41.324176 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-rbmml" podStartSLOduration=137.324160043 podStartE2EDuration="2m17.324160043s" podCreationTimestamp="2026-01-26 15:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:41.32293262 +0000 UTC m=+159.104813013" watchObservedRunningTime="2026-01-26 15:36:41.324160043 +0000 UTC m=+159.106040436"
Jan 26 15:36:41 crc kubenswrapper[4896]: I0126 15:36:41.357944 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 15:36:41 crc kubenswrapper[4896]: E0126 15:36:41.358143 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:41.858111121 +0000 UTC m=+159.639991514 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:41 crc kubenswrapper[4896]: I0126 15:36:41.358274 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6"
Jan 26 15:36:41 crc kubenswrapper[4896]: E0126 15:36:41.358526 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:41.858515472 +0000 UTC m=+159.640395865 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:41 crc kubenswrapper[4896]: I0126 15:36:41.423733 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-rhdrn" podStartSLOduration=137.423713577 podStartE2EDuration="2m17.423713577s" podCreationTimestamp="2026-01-26 15:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:41.369626039 +0000 UTC m=+159.151506432" watchObservedRunningTime="2026-01-26 15:36:41.423713577 +0000 UTC m=+159.205593990"
Jan 26 15:36:41 crc kubenswrapper[4896]: I0126 15:36:41.459138 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 15:36:41 crc kubenswrapper[4896]: E0126 15:36:41.459423 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:41.959408922 +0000 UTC m=+159.741289315 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:41 crc kubenswrapper[4896]: I0126 15:36:41.514397 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-6ldjd" podStartSLOduration=137.514377624 podStartE2EDuration="2m17.514377624s" podCreationTimestamp="2026-01-26 15:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:41.509980386 +0000 UTC m=+159.291860779" watchObservedRunningTime="2026-01-26 15:36:41.514377624 +0000 UTC m=+159.296258017"
Jan 26 15:36:41 crc kubenswrapper[4896]: I0126 15:36:41.514740 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-sw5mq" podStartSLOduration=136.514733453 podStartE2EDuration="2m16.514733453s" podCreationTimestamp="2026-01-26 15:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:41.484542414 +0000 UTC m=+159.266422807" watchObservedRunningTime="2026-01-26 15:36:41.514733453 +0000 UTC m=+159.296613846"
Jan 26 15:36:41 crc kubenswrapper[4896]: I0126 15:36:41.561097 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6"
Jan 26 15:36:41 crc kubenswrapper[4896]: E0126 15:36:41.561419 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:42.061406302 +0000 UTC m=+159.843286695 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:41 crc kubenswrapper[4896]: I0126 15:36:41.578823 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ckpmm" podStartSLOduration=137.578807198 podStartE2EDuration="2m17.578807198s" podCreationTimestamp="2026-01-26 15:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:41.577601526 +0000 UTC m=+159.359481929" watchObservedRunningTime="2026-01-26 15:36:41.578807198 +0000 UTC m=+159.360687591"
Jan 26 15:36:41 crc kubenswrapper[4896]: I0126 15:36:41.639348 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x7ndr" podStartSLOduration=136.639331998 podStartE2EDuration="2m16.639331998s" podCreationTimestamp="2026-01-26 15:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:41.6363944 +0000 UTC m=+159.418274783" watchObservedRunningTime="2026-01-26 15:36:41.639331998 +0000 UTC m=+159.421212391"
Jan 26 15:36:41 crc kubenswrapper[4896]: I0126 15:36:41.639748 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-2zp5m" podStartSLOduration=137.639743039 podStartE2EDuration="2m17.639743039s" podCreationTimestamp="2026-01-26 15:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:41.609560261 +0000 UTC m=+159.391440664" watchObservedRunningTime="2026-01-26 15:36:41.639743039 +0000 UTC m=+159.421623432"
Jan 26 15:36:41 crc kubenswrapper[4896]: I0126 15:36:41.662264 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 15:36:41 crc kubenswrapper[4896]: E0126 15:36:41.662557 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:42.1625417 +0000 UTC m=+159.944422093 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:41 crc kubenswrapper[4896]: I0126 15:36:41.676059 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wg2jv" podStartSLOduration=136.67603061 podStartE2EDuration="2m16.67603061s" podCreationTimestamp="2026-01-26 15:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:41.673947555 +0000 UTC m=+159.455827958" watchObservedRunningTime="2026-01-26 15:36:41.67603061 +0000 UTC m=+159.457911003"
Jan 26 15:36:41 crc kubenswrapper[4896]: I0126 15:36:41.765285 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6"
Jan 26 15:36:41 crc kubenswrapper[4896]: E0126 15:36:41.765664 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:42.265652629 +0000 UTC m=+160.047533012 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:41 crc kubenswrapper[4896]: I0126 15:36:41.773712 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw2tr" podStartSLOduration=136.773694004 podStartE2EDuration="2m16.773694004s" podCreationTimestamp="2026-01-26 15:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:41.730075887 +0000 UTC m=+159.511956270" watchObservedRunningTime="2026-01-26 15:36:41.773694004 +0000 UTC m=+159.555574397"
Jan 26 15:36:41 crc kubenswrapper[4896]: I0126 15:36:41.774902 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h4hvh" podStartSLOduration=136.774897416 podStartE2EDuration="2m16.774897416s" podCreationTimestamp="2026-01-26 15:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:41.772980475 +0000 UTC m=+159.554860878" watchObservedRunningTime="2026-01-26 15:36:41.774897416 +0000 UTC m=+159.556777809"
Jan 26 15:36:41 crc kubenswrapper[4896]: I0126 15:36:41.823351 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ln65f" podStartSLOduration=136.823304992 podStartE2EDuration="2m16.823304992s" podCreationTimestamp="2026-01-26 15:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:41.821894034 +0000 UTC m=+159.603774427" watchObservedRunningTime="2026-01-26 15:36:41.823304992 +0000 UTC m=+159.605185405"
Jan 26 15:36:41 crc kubenswrapper[4896]: I0126 15:36:41.870052 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 15:36:41 crc kubenswrapper[4896]: E0126 15:36:41.870509 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:42.370488995 +0000 UTC m=+160.152369388 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:41 crc kubenswrapper[4896]: I0126 15:36:41.924807 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j67k8" podStartSLOduration=136.924793689 podStartE2EDuration="2m16.924793689s" podCreationTimestamp="2026-01-26 15:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:41.922662671 +0000 UTC m=+159.704543064" watchObservedRunningTime="2026-01-26 15:36:41.924793689 +0000 UTC m=+159.706674082"
Jan 26 15:36:41 crc kubenswrapper[4896]: I0126 15:36:41.972173 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6"
Jan 26 15:36:41 crc kubenswrapper[4896]: E0126 15:36:41.972490 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:42.472476035 +0000 UTC m=+160.254356428 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:42 crc kubenswrapper[4896]: I0126 15:36:42.047702 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bnntw" podStartSLOduration=137.047690028 podStartE2EDuration="2m17.047690028s" podCreationTimestamp="2026-01-26 15:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:42.045684794 +0000 UTC m=+159.827565187" watchObservedRunningTime="2026-01-26 15:36:42.047690028 +0000 UTC m=+159.829570421"
Jan 26 15:36:42 crc kubenswrapper[4896]: I0126 15:36:42.073198 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 15:36:42 crc kubenswrapper[4896]: E0126 15:36:42.073552 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:42.57353848 +0000 UTC m=+160.355418873 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:42 crc kubenswrapper[4896]: I0126 15:36:42.105850 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-p79qr" podStartSLOduration=137.105831314 podStartE2EDuration="2m17.105831314s" podCreationTimestamp="2026-01-26 15:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:42.068369891 +0000 UTC m=+159.850250284" watchObservedRunningTime="2026-01-26 15:36:42.105831314 +0000 UTC m=+159.887711707"
Jan 26 15:36:42 crc kubenswrapper[4896]: I0126 15:36:42.108705 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2p7md" event={"ID":"f2eced78-200a-47c3-b4d3-ea5be6867022","Type":"ContainerStarted","Data":"941fd6077daf9297cf44dd018fc09ad30af545b4aaa7a75d5520ef8c65e1a6d8"}
Jan 26 15:36:42 crc kubenswrapper[4896]: I0126 15:36:42.110611 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gb6wx" event={"ID":"1399e6aa-14ba-40e6-aec2-b3268e5b7102","Type":"ContainerStarted","Data":"4491cecb6a3cf8ba927667da4e0c10bcebffe548f93870755e8421475c1876f6"}
Jan 26 15:36:42 crc kubenswrapper[4896]: I0126 15:36:42.139680 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-sw5mq" event={"ID":"18973342-45ec-44b6-8456-c813d08240a3","Type":"ContainerStarted","Data":"adbbf334adc41e16dd4616bc6ab51f680aa26e3e183d28cf8da4b2716095e89a"}
Jan 26 15:36:42 crc kubenswrapper[4896]: I0126 15:36:42.142164 4896 patch_prober.go:28] interesting pod/router-default-5444994796-ms78m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 15:36:42 crc kubenswrapper[4896]: [-]has-synced failed: reason withheld
Jan 26 15:36:42 crc kubenswrapper[4896]: [+]process-running ok
Jan 26 15:36:42 crc kubenswrapper[4896]: healthz check failed
Jan 26 15:36:42 crc kubenswrapper[4896]: I0126 15:36:42.142200 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ms78m" podUID="5aecf14a-cf97-41d8-b037-58f39a0a19bf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 15:36:42 crc kubenswrapper[4896]: I0126 15:36:42.149006 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w6jkc" event={"ID":"fe23c111-8e9a-4456-8973-6aa7a78c52e6","Type":"ContainerStarted","Data":"1c2a831f13326742931721923dcf45b183d0735a94eeaa0f096fc51423d056ec"}
Jan 26 15:36:42 crc kubenswrapper[4896]: I0126 15:36:42.152390 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5tvfv" event={"ID":"930ffea7-a937-407b-ae73-9b22885a6aad","Type":"ContainerStarted","Data":"c6cb48346fbfd2bc64605527daae8789c4fd723fd1c6eb29ed786d7c8937f65f"}
Jan 26 15:36:42 crc kubenswrapper[4896]: I0126 15:36:42.152967 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5tvfv"
Jan 26 15:36:42 crc kubenswrapper[4896]: I0126 15:36:42.154904 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5fzf2" event={"ID":"61b9dfca-9718-4ee7-bd12-efd6ab5ca9b5","Type":"ContainerStarted","Data":"698c08a54be866fd8e90154f47fe1106b2554c7d9ef91a75a32afc70d9f9ed9b"}
Jan 26 15:36:42 crc kubenswrapper[4896]: I0126 15:36:42.164417 4896 generic.go:334] "Generic (PLEG): container finished" podID="317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0" containerID="04ee448b4d771f79a41b2f2f3e2694cd0f55551423e7943172d8dcf889576adc" exitCode=0
Jan 26 15:36:42 crc kubenswrapper[4896]: I0126 15:36:42.164516 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hwqlq" event={"ID":"317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0","Type":"ContainerDied","Data":"04ee448b4d771f79a41b2f2f3e2694cd0f55551423e7943172d8dcf889576adc"}
Jan 26 15:36:42 crc kubenswrapper[4896]: I0126 15:36:42.174155 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6"
Jan 26 15:36:42 crc kubenswrapper[4896]: E0126 15:36:42.175906 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:42.675889409 +0000 UTC m=+160.457769802 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:42 crc kubenswrapper[4896]: I0126 15:36:42.181554 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-p8n8h" event={"ID":"4cb0cc2a-b5c6-4599-bfea-59703789fb7b","Type":"ContainerStarted","Data":"f51562139d93a4264f2c93800ecf5dd38e6368b1a47f5f6632d507b9c3690b13"}
Jan 26 15:36:42 crc kubenswrapper[4896]: I0126 15:36:42.187779 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-n6zp9" podStartSLOduration=10.187757408 podStartE2EDuration="10.187757408s" podCreationTimestamp="2026-01-26 15:36:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:42.10719096 +0000 UTC m=+159.889071353" watchObservedRunningTime="2026-01-26 15:36:42.187757408 +0000 UTC m=+159.969637811"
Jan 26 15:36:42 crc kubenswrapper[4896]: I0126 15:36:42.205933 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-7n4vw" event={"ID":"eaabb754-7520-4b59-97bf-2d7ae577191c","Type":"ContainerStarted","Data":"3ae4c751bf89875a457240427dc933e6536ac9e8781436149bb3d5340c7e95c5"}
Jan 26 15:36:42 crc kubenswrapper[4896]: I0126 15:36:42.206718 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-7n4vw"
Jan 26 15:36:42 crc kubenswrapper[4896]: I0126 15:36:42.210300 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bnntw" event={"ID":"f98717ba-8501-4233-9959-efbd73aabbd9","Type":"ContainerStarted","Data":"167ba50c653961fbb201533ad722ebfce689b0b960b7d9d8842c169d08723459"}
Jan 26 15:36:42 crc kubenswrapper[4896]: I0126 15:36:42.216429 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x7ndr"
Jan 26 15:36:42 crc kubenswrapper[4896]: I0126 15:36:42.246441 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw2tr"
Jan 26 15:36:42 crc kubenswrapper[4896]: I0126 15:36:42.287432 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 15:36:42 crc kubenswrapper[4896]: E0126 15:36:42.289661 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:42.789641364 +0000 UTC m=+160.571521757 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:42 crc kubenswrapper[4896]: I0126 15:36:42.391310 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6"
Jan 26 15:36:42 crc kubenswrapper[4896]: E0126 15:36:42.391824 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:42.891811268 +0000 UTC m=+160.673691651 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:42 crc kubenswrapper[4896]: I0126 15:36:42.417850 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w6jkc" podStartSLOduration=138.417835975 podStartE2EDuration="2m18.417835975s" podCreationTimestamp="2026-01-26 15:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:42.2069395 +0000 UTC m=+159.988819893" watchObservedRunningTime="2026-01-26 15:36:42.417835975 +0000 UTC m=+160.199716368"
Jan 26 15:36:42 crc kubenswrapper[4896]: I0126 15:36:42.419349 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5fzf2" podStartSLOduration=137.419342506 podStartE2EDuration="2m17.419342506s" podCreationTimestamp="2026-01-26 15:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:42.416700095 +0000 UTC m=+160.198580488" watchObservedRunningTime="2026-01-26 15:36:42.419342506 +0000 UTC m=+160.201222899"
Jan 26 15:36:42 crc kubenswrapper[4896]: I0126 15:36:42.445899 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cdvp4"
Jan 26 15:36:42 crc kubenswrapper[4896]: I0126 15:36:42.492264 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 15:36:42 crc kubenswrapper[4896]: E0126 15:36:42.492569 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:42.992554945 +0000 UTC m=+160.774435338 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:42 crc kubenswrapper[4896]: I0126 15:36:42.594546 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6"
Jan 26 15:36:42 crc kubenswrapper[4896]: E0126 15:36:42.594931 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:43.094918785 +0000 UTC m=+160.876799178 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:42 crc kubenswrapper[4896]: I0126 15:36:42.639693 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5tvfv" podStartSLOduration=138.639672713 podStartE2EDuration="2m18.639672713s" podCreationTimestamp="2026-01-26 15:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:42.601963203 +0000 UTC m=+160.383843596" watchObservedRunningTime="2026-01-26 15:36:42.639672713 +0000 UTC m=+160.421553106"
Jan 26 15:36:42 crc kubenswrapper[4896]: I0126 15:36:42.686535 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2p7md" podStartSLOduration=137.686515427 podStartE2EDuration="2m17.686515427s" podCreationTimestamp="2026-01-26 15:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:42.644949634 +0000 UTC m=+160.426830027" watchObservedRunningTime="2026-01-26 15:36:42.686515427 +0000 UTC m=+160.468395820"
Jan 26 15:36:42 crc kubenswrapper[4896]: I0126 15:36:42.694306 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-rhdrn"
Jan 26 15:36:42 crc kubenswrapper[4896]: I0126 15:36:42.695883 4896 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:42 crc kubenswrapper[4896]: E0126 15:36:42.696150 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:43.196137115 +0000 UTC m=+160.978017508 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:42 crc kubenswrapper[4896]: I0126 15:36:42.723662 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-7n4vw" podStartSLOduration=10.723648260000001 podStartE2EDuration="10.72364826s" podCreationTimestamp="2026-01-26 15:36:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:42.68774047 +0000 UTC m=+160.469620863" watchObservedRunningTime="2026-01-26 15:36:42.72364826 +0000 UTC m=+160.505528653" Jan 26 15:36:42 crc kubenswrapper[4896]: I0126 15:36:42.724827 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-p8n8h" podStartSLOduration=138.724822872 podStartE2EDuration="2m18.724822872s" 
podCreationTimestamp="2026-01-26 15:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:42.722726486 +0000 UTC m=+160.504606879" watchObservedRunningTime="2026-01-26 15:36:42.724822872 +0000 UTC m=+160.506703265" Jan 26 15:36:42 crc kubenswrapper[4896]: I0126 15:36:42.798443 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:42 crc kubenswrapper[4896]: E0126 15:36:42.798801 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:43.298775442 +0000 UTC m=+161.080655835 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:42 crc kubenswrapper[4896]: I0126 15:36:42.899669 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:42 crc kubenswrapper[4896]: E0126 15:36:42.900047 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:43.400024761 +0000 UTC m=+161.181905154 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:42 crc kubenswrapper[4896]: I0126 15:36:42.921912 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:36:43 crc kubenswrapper[4896]: I0126 15:36:43.001203 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:43 crc kubenswrapper[4896]: E0126 15:36:43.001612 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:43.50159455 +0000 UTC m=+161.283474943 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:43 crc kubenswrapper[4896]: I0126 15:36:43.102476 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:43 crc kubenswrapper[4896]: E0126 15:36:43.102909 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:43.602889151 +0000 UTC m=+161.384769544 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:43 crc kubenswrapper[4896]: I0126 15:36:43.137696 4896 patch_prober.go:28] interesting pod/router-default-5444994796-ms78m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 15:36:43 crc kubenswrapper[4896]: [-]has-synced failed: reason withheld Jan 26 15:36:43 crc kubenswrapper[4896]: [+]process-running ok Jan 26 15:36:43 crc kubenswrapper[4896]: healthz check failed Jan 26 15:36:43 crc kubenswrapper[4896]: I0126 15:36:43.137770 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ms78m" podUID="5aecf14a-cf97-41d8-b037-58f39a0a19bf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 15:36:43 crc kubenswrapper[4896]: I0126 15:36:43.203939 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:43 crc kubenswrapper[4896]: E0126 15:36:43.204266 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 15:36:43.704254725 +0000 UTC m=+161.486135118 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:43 crc kubenswrapper[4896]: I0126 15:36:43.217201 4896 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-x7ndr container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 15:36:43 crc kubenswrapper[4896]: I0126 15:36:43.217271 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x7ndr" podUID="c0c7df3f-b79d-4bd5-b67d-79f785c8ffee" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.26:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 15:36:43 crc kubenswrapper[4896]: I0126 15:36:43.223798 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hwqlq" event={"ID":"317cbfb5-64c8-49fa-8a4b-7cf84cc61ab0","Type":"ContainerStarted","Data":"822ffc37544c7bf8c8523d6857a1bec38c27de9b07d0033f1e90ff341222839a"} Jan 26 15:36:43 crc kubenswrapper[4896]: I0126 15:36:43.226312 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gb6wx" 
event={"ID":"1399e6aa-14ba-40e6-aec2-b3268e5b7102","Type":"ContainerStarted","Data":"eb3e9275bc2c239f94a7f598f4d2f4d3e945eacfa2ba56a9d477a4a546e567b5"} Jan 26 15:36:43 crc kubenswrapper[4896]: I0126 15:36:43.229083 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bhztt" event={"ID":"2496d14f-9aa6-4ee7-9db9-5bca63fa5a54","Type":"ContainerStarted","Data":"621e2983a84a5c2f1dcd0f28e7ec82cc102f5ec1f710ccf92cda8b42d3dae027"} Jan 26 15:36:43 crc kubenswrapper[4896]: I0126 15:36:43.263457 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hwqlq" podStartSLOduration=138.263438639 podStartE2EDuration="2m18.263438639s" podCreationTimestamp="2026-01-26 15:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:43.260843499 +0000 UTC m=+161.042723902" watchObservedRunningTime="2026-01-26 15:36:43.263438639 +0000 UTC m=+161.045319022" Jan 26 15:36:43 crc kubenswrapper[4896]: I0126 15:36:43.277838 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x7ndr" Jan 26 15:36:43 crc kubenswrapper[4896]: I0126 15:36:43.304983 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:43 crc kubenswrapper[4896]: E0126 15:36:43.305139 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 15:36:43.805121075 +0000 UTC m=+161.587001468 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:43 crc kubenswrapper[4896]: I0126 15:36:43.305558 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:43 crc kubenswrapper[4896]: E0126 15:36:43.306809 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:43.806789139 +0000 UTC m=+161.588669612 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:43 crc kubenswrapper[4896]: I0126 15:36:43.307360 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-gb6wx" podStartSLOduration=139.307342143 podStartE2EDuration="2m19.307342143s" podCreationTimestamp="2026-01-26 15:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:43.306929303 +0000 UTC m=+161.088809716" watchObservedRunningTime="2026-01-26 15:36:43.307342143 +0000 UTC m=+161.089222536" Jan 26 15:36:43 crc kubenswrapper[4896]: I0126 15:36:43.407176 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:43 crc kubenswrapper[4896]: E0126 15:36:43.407665 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:43.907649399 +0000 UTC m=+161.689529792 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:43 crc kubenswrapper[4896]: I0126 15:36:43.407781 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:43 crc kubenswrapper[4896]: E0126 15:36:43.408025 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:43.908018588 +0000 UTC m=+161.689898981 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:43 crc kubenswrapper[4896]: I0126 15:36:43.512108 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:43 crc kubenswrapper[4896]: E0126 15:36:43.512408 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:44.012393922 +0000 UTC m=+161.794274315 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:43 crc kubenswrapper[4896]: I0126 15:36:43.614110 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:43 crc kubenswrapper[4896]: E0126 15:36:43.614479 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:44.114465545 +0000 UTC m=+161.896345938 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:43 crc kubenswrapper[4896]: I0126 15:36:43.721445 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:43 crc kubenswrapper[4896]: E0126 15:36:43.721885 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:44.221864978 +0000 UTC m=+162.003745381 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:43 crc kubenswrapper[4896]: I0126 15:36:43.823596 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:43 crc kubenswrapper[4896]: E0126 15:36:43.824002 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:44.323972731 +0000 UTC m=+162.105853124 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:43 crc kubenswrapper[4896]: I0126 15:36:43.925043 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:43 crc kubenswrapper[4896]: E0126 15:36:43.925501 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:44.425481719 +0000 UTC m=+162.207362112 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.026868 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:44 crc kubenswrapper[4896]: E0126 15:36:44.027193 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:44.52718216 +0000 UTC m=+162.309062543 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.035856 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sldms"] Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.036783 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sldms" Jan 26 15:36:44 crc kubenswrapper[4896]: W0126 15:36:44.047450 4896 reflector.go:561] object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g": failed to list *v1.Secret: secrets "certified-operators-dockercfg-4rs5g" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object Jan 26 15:36:44 crc kubenswrapper[4896]: E0126 15:36:44.047498 4896 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-4rs5g\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"certified-operators-dockercfg-4rs5g\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.097953 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sldms"] Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.134225 4896 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.134841 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wg4g\" (UniqueName: \"kubernetes.io/projected/d3369383-b89e-4cc5-8267-3f849ff0c294-kube-api-access-7wg4g\") pod \"certified-operators-sldms\" (UID: \"d3369383-b89e-4cc5-8267-3f849ff0c294\") " pod="openshift-marketplace/certified-operators-sldms" Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.134886 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3369383-b89e-4cc5-8267-3f849ff0c294-utilities\") pod \"certified-operators-sldms\" (UID: \"d3369383-b89e-4cc5-8267-3f849ff0c294\") " pod="openshift-marketplace/certified-operators-sldms" Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.134912 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3369383-b89e-4cc5-8267-3f849ff0c294-catalog-content\") pod \"certified-operators-sldms\" (UID: \"d3369383-b89e-4cc5-8267-3f849ff0c294\") " pod="openshift-marketplace/certified-operators-sldms" Jan 26 15:36:44 crc kubenswrapper[4896]: E0126 15:36:44.135032 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:44.635013266 +0000 UTC m=+162.416893669 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.147929 4896 patch_prober.go:28] interesting pod/router-default-5444994796-ms78m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 15:36:44 crc kubenswrapper[4896]: [-]has-synced failed: reason withheld Jan 26 15:36:44 crc kubenswrapper[4896]: [+]process-running ok Jan 26 15:36:44 crc kubenswrapper[4896]: healthz check failed Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.147989 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ms78m" podUID="5aecf14a-cf97-41d8-b037-58f39a0a19bf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.240733 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.240771 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wg4g\" (UniqueName: 
\"kubernetes.io/projected/d3369383-b89e-4cc5-8267-3f849ff0c294-kube-api-access-7wg4g\") pod \"certified-operators-sldms\" (UID: \"d3369383-b89e-4cc5-8267-3f849ff0c294\") " pod="openshift-marketplace/certified-operators-sldms" Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.240793 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3369383-b89e-4cc5-8267-3f849ff0c294-utilities\") pod \"certified-operators-sldms\" (UID: \"d3369383-b89e-4cc5-8267-3f849ff0c294\") " pod="openshift-marketplace/certified-operators-sldms" Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.240809 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3369383-b89e-4cc5-8267-3f849ff0c294-catalog-content\") pod \"certified-operators-sldms\" (UID: \"d3369383-b89e-4cc5-8267-3f849ff0c294\") " pod="openshift-marketplace/certified-operators-sldms" Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.241263 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3369383-b89e-4cc5-8267-3f849ff0c294-catalog-content\") pod \"certified-operators-sldms\" (UID: \"d3369383-b89e-4cc5-8267-3f849ff0c294\") " pod="openshift-marketplace/certified-operators-sldms" Jan 26 15:36:44 crc kubenswrapper[4896]: E0126 15:36:44.241517 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:44.741506007 +0000 UTC m=+162.523386400 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.241820 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3369383-b89e-4cc5-8267-3f849ff0c294-utilities\") pod \"certified-operators-sldms\" (UID: \"d3369383-b89e-4cc5-8267-3f849ff0c294\") " pod="openshift-marketplace/certified-operators-sldms" Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.269691 4896 generic.go:334] "Generic (PLEG): container finished" podID="298f103b-bf7b-40db-ace2-2780e91fde2c" containerID="dc06d91128c85ab2035f39222d157e79e096f8858b75878b1c1d81145392357a" exitCode=0 Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.269749 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-2zp5m" event={"ID":"298f103b-bf7b-40db-ace2-2780e91fde2c","Type":"ContainerDied","Data":"dc06d91128c85ab2035f39222d157e79e096f8858b75878b1c1d81145392357a"} Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.269773 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w5d62"] Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.270745 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w5d62" Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.286630 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bhztt" event={"ID":"2496d14f-9aa6-4ee7-9db9-5bca63fa5a54","Type":"ContainerStarted","Data":"85d4525ef3c0a2fd256e0c3de3a95a2197fc053e180c23c1be195861d361fe85"} Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.302024 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.304052 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-gb6wx" Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.304101 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-gb6wx" Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.309037 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5tvfv" Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.310519 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w5d62"] Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.336489 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wg4g\" (UniqueName: \"kubernetes.io/projected/d3369383-b89e-4cc5-8267-3f849ff0c294-kube-api-access-7wg4g\") pod \"certified-operators-sldms\" (UID: \"d3369383-b89e-4cc5-8267-3f849ff0c294\") " pod="openshift-marketplace/certified-operators-sldms" Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.352225 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:44 crc kubenswrapper[4896]: E0126 15:36:44.352614 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:44.852594071 +0000 UTC m=+162.634474464 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.361123 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-md672"] Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.364537 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-md672" Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.393525 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-md672"] Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.425602 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-z6479" Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.425940 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-z6479" Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.430140 4896 patch_prober.go:28] interesting pod/console-f9d7485db-z6479 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.32:8443/health\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.430185 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-z6479" podUID="09601473-06d9-4938-876d-ea6e1b9ffc91" containerName="console" probeResult="failure" output="Get \"https://10.217.0.32:8443/health\": dial tcp 10.217.0.32:8443: connect: connection refused" Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.455168 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a81db72-eb2c-4c51-8b58-f825e2fbd3bb-utilities\") pod \"certified-operators-md672\" (UID: \"3a81db72-eb2c-4c51-8b58-f825e2fbd3bb\") " pod="openshift-marketplace/certified-operators-md672" Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.455198 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qw4t\" (UniqueName: \"kubernetes.io/projected/3a81db72-eb2c-4c51-8b58-f825e2fbd3bb-kube-api-access-7qw4t\") 
pod \"certified-operators-md672\" (UID: \"3a81db72-eb2c-4c51-8b58-f825e2fbd3bb\") " pod="openshift-marketplace/certified-operators-md672" Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.455214 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a81db72-eb2c-4c51-8b58-f825e2fbd3bb-catalog-content\") pod \"certified-operators-md672\" (UID: \"3a81db72-eb2c-4c51-8b58-f825e2fbd3bb\") " pod="openshift-marketplace/certified-operators-md672" Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.455291 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97wcl\" (UniqueName: \"kubernetes.io/projected/1bb8ed73-cf27-49c5-98cc-79e5a488f604-kube-api-access-97wcl\") pod \"community-operators-w5d62\" (UID: \"1bb8ed73-cf27-49c5-98cc-79e5a488f604\") " pod="openshift-marketplace/community-operators-w5d62" Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.455329 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.455669 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bb8ed73-cf27-49c5-98cc-79e5a488f604-utilities\") pod \"community-operators-w5d62\" (UID: \"1bb8ed73-cf27-49c5-98cc-79e5a488f604\") " pod="openshift-marketplace/community-operators-w5d62" Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.455717 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bb8ed73-cf27-49c5-98cc-79e5a488f604-catalog-content\") pod \"community-operators-w5d62\" (UID: \"1bb8ed73-cf27-49c5-98cc-79e5a488f604\") " pod="openshift-marketplace/community-operators-w5d62" Jan 26 15:36:44 crc kubenswrapper[4896]: E0126 15:36:44.484561 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:44.984543093 +0000 UTC m=+162.766423486 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.550597 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p5g7v"] Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.551491 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p5g7v" Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.561642 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.561804 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bb8ed73-cf27-49c5-98cc-79e5a488f604-catalog-content\") pod \"community-operators-w5d62\" (UID: \"1bb8ed73-cf27-49c5-98cc-79e5a488f604\") " pod="openshift-marketplace/community-operators-w5d62" Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.561844 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a81db72-eb2c-4c51-8b58-f825e2fbd3bb-utilities\") pod \"certified-operators-md672\" (UID: \"3a81db72-eb2c-4c51-8b58-f825e2fbd3bb\") " pod="openshift-marketplace/certified-operators-md672" Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.561861 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qw4t\" (UniqueName: \"kubernetes.io/projected/3a81db72-eb2c-4c51-8b58-f825e2fbd3bb-kube-api-access-7qw4t\") pod \"certified-operators-md672\" (UID: \"3a81db72-eb2c-4c51-8b58-f825e2fbd3bb\") " pod="openshift-marketplace/certified-operators-md672" Jan 26 15:36:44 crc kubenswrapper[4896]: E0126 15:36:44.562103 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 15:36:45.062080258 +0000 UTC m=+162.843960651 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.562151 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a81db72-eb2c-4c51-8b58-f825e2fbd3bb-catalog-content\") pod \"certified-operators-md672\" (UID: \"3a81db72-eb2c-4c51-8b58-f825e2fbd3bb\") " pod="openshift-marketplace/certified-operators-md672" Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.562189 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97wcl\" (UniqueName: \"kubernetes.io/projected/1bb8ed73-cf27-49c5-98cc-79e5a488f604-kube-api-access-97wcl\") pod \"community-operators-w5d62\" (UID: \"1bb8ed73-cf27-49c5-98cc-79e5a488f604\") " pod="openshift-marketplace/community-operators-w5d62" Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.562215 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.562292 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1bb8ed73-cf27-49c5-98cc-79e5a488f604-utilities\") pod \"community-operators-w5d62\" (UID: \"1bb8ed73-cf27-49c5-98cc-79e5a488f604\") " pod="openshift-marketplace/community-operators-w5d62" Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.562445 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bb8ed73-cf27-49c5-98cc-79e5a488f604-catalog-content\") pod \"community-operators-w5d62\" (UID: \"1bb8ed73-cf27-49c5-98cc-79e5a488f604\") " pod="openshift-marketplace/community-operators-w5d62" Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.562795 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a81db72-eb2c-4c51-8b58-f825e2fbd3bb-utilities\") pod \"certified-operators-md672\" (UID: \"3a81db72-eb2c-4c51-8b58-f825e2fbd3bb\") " pod="openshift-marketplace/certified-operators-md672" Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.563450 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a81db72-eb2c-4c51-8b58-f825e2fbd3bb-catalog-content\") pod \"certified-operators-md672\" (UID: \"3a81db72-eb2c-4c51-8b58-f825e2fbd3bb\") " pod="openshift-marketplace/certified-operators-md672" Jan 26 15:36:44 crc kubenswrapper[4896]: E0126 15:36:44.563701 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:45.063688351 +0000 UTC m=+162.845568744 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.565840 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bb8ed73-cf27-49c5-98cc-79e5a488f604-utilities\") pod \"community-operators-w5d62\" (UID: \"1bb8ed73-cf27-49c5-98cc-79e5a488f604\") " pod="openshift-marketplace/community-operators-w5d62" Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.573185 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p5g7v"] Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.586388 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97wcl\" (UniqueName: \"kubernetes.io/projected/1bb8ed73-cf27-49c5-98cc-79e5a488f604-kube-api-access-97wcl\") pod \"community-operators-w5d62\" (UID: \"1bb8ed73-cf27-49c5-98cc-79e5a488f604\") " pod="openshift-marketplace/community-operators-w5d62" Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.601620 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qw4t\" (UniqueName: \"kubernetes.io/projected/3a81db72-eb2c-4c51-8b58-f825e2fbd3bb-kube-api-access-7qw4t\") pod \"certified-operators-md672\" (UID: \"3a81db72-eb2c-4c51-8b58-f825e2fbd3bb\") " pod="openshift-marketplace/certified-operators-md672" Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.663122 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.663252 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72a04b75-e07c-4137-b917-928b40745f65-utilities\") pod \"community-operators-p5g7v\" (UID: \"72a04b75-e07c-4137-b917-928b40745f65\") " pod="openshift-marketplace/community-operators-p5g7v" Jan 26 15:36:44 crc kubenswrapper[4896]: E0126 15:36:44.663403 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:45.163297907 +0000 UTC m=+162.945178300 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.663441 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72a04b75-e07c-4137-b917-928b40745f65-catalog-content\") pod \"community-operators-p5g7v\" (UID: \"72a04b75-e07c-4137-b917-928b40745f65\") " pod="openshift-marketplace/community-operators-p5g7v" Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.663508 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drfb5\" (UniqueName: \"kubernetes.io/projected/72a04b75-e07c-4137-b917-928b40745f65-kube-api-access-drfb5\") pod \"community-operators-p5g7v\" (UID: \"72a04b75-e07c-4137-b917-928b40745f65\") " pod="openshift-marketplace/community-operators-p5g7v" Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.663634 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:44 crc kubenswrapper[4896]: E0126 15:36:44.663962 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 15:36:45.163948554 +0000 UTC m=+162.945828947 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.764057 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.764166 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72a04b75-e07c-4137-b917-928b40745f65-utilities\") pod \"community-operators-p5g7v\" (UID: \"72a04b75-e07c-4137-b917-928b40745f65\") " pod="openshift-marketplace/community-operators-p5g7v" Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.764195 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72a04b75-e07c-4137-b917-928b40745f65-catalog-content\") pod \"community-operators-p5g7v\" (UID: \"72a04b75-e07c-4137-b917-928b40745f65\") " pod="openshift-marketplace/community-operators-p5g7v" Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.764219 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drfb5\" (UniqueName: 
\"kubernetes.io/projected/72a04b75-e07c-4137-b917-928b40745f65-kube-api-access-drfb5\") pod \"community-operators-p5g7v\" (UID: \"72a04b75-e07c-4137-b917-928b40745f65\") " pod="openshift-marketplace/community-operators-p5g7v" Jan 26 15:36:44 crc kubenswrapper[4896]: E0126 15:36:44.764552 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:45.264538247 +0000 UTC m=+163.046418640 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.764909 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72a04b75-e07c-4137-b917-928b40745f65-utilities\") pod \"community-operators-p5g7v\" (UID: \"72a04b75-e07c-4137-b917-928b40745f65\") " pod="openshift-marketplace/community-operators-p5g7v" Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.764966 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72a04b75-e07c-4137-b917-928b40745f65-catalog-content\") pod \"community-operators-p5g7v\" (UID: \"72a04b75-e07c-4137-b917-928b40745f65\") " pod="openshift-marketplace/community-operators-p5g7v" Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.774557 4896 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" 
path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.782209 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drfb5\" (UniqueName: \"kubernetes.io/projected/72a04b75-e07c-4137-b917-928b40745f65-kube-api-access-drfb5\") pod \"community-operators-p5g7v\" (UID: \"72a04b75-e07c-4137-b917-928b40745f65\") " pod="openshift-marketplace/community-operators-p5g7v"
Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.865338 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6"
Jan 26 15:36:44 crc kubenswrapper[4896]: E0126 15:36:44.865738 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:45.365721815 +0000 UTC m=+163.147602208 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.879380 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p5g7v"
Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.885991 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w5d62"
Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.907603 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-rbmml"
Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.911056 4896 patch_prober.go:28] interesting pod/downloads-7954f5f757-rbmml container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body=
Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.911087 4896 patch_prober.go:28] interesting pod/downloads-7954f5f757-rbmml container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body=
Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.911105 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rbmml" podUID="a005fba8-0843-41a6-90eb-67a2aa6d0580" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused"
Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.911135 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-rbmml" podUID="a005fba8-0843-41a6-90eb-67a2aa6d0580" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused"
Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.911349 4896 patch_prober.go:28] interesting pod/downloads-7954f5f757-rbmml container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body=
Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.911365 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rbmml" podUID="a005fba8-0843-41a6-90eb-67a2aa6d0580" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused"
Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.972412 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 15:36:44 crc kubenswrapper[4896]: E0126 15:36:44.972595 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:45.472557095 +0000 UTC m=+163.254437488 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:44 crc kubenswrapper[4896]: I0126 15:36:44.973394 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6"
Jan 26 15:36:44 crc kubenswrapper[4896]: E0126 15:36:44.973747 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:45.473739476 +0000 UTC m=+163.255619869 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:45 crc kubenswrapper[4896]: I0126 15:36:45.074647 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 15:36:45 crc kubenswrapper[4896]: E0126 15:36:45.074896 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:45.574881524 +0000 UTC m=+163.356761917 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:45 crc kubenswrapper[4896]: I0126 15:36:45.111457 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 26 15:36:45 crc kubenswrapper[4896]: I0126 15:36:45.117445 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sldms"
Jan 26 15:36:45 crc kubenswrapper[4896]: I0126 15:36:45.120035 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-md672"
Jan 26 15:36:45 crc kubenswrapper[4896]: I0126 15:36:45.134406 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-ms78m"
Jan 26 15:36:45 crc kubenswrapper[4896]: I0126 15:36:45.145923 4896 patch_prober.go:28] interesting pod/router-default-5444994796-ms78m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 15:36:45 crc kubenswrapper[4896]: [-]has-synced failed: reason withheld
Jan 26 15:36:45 crc kubenswrapper[4896]: [+]process-running ok
Jan 26 15:36:45 crc kubenswrapper[4896]: healthz check failed
Jan 26 15:36:45 crc kubenswrapper[4896]: I0126 15:36:45.146305 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ms78m" podUID="5aecf14a-cf97-41d8-b037-58f39a0a19bf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 15:36:45 crc kubenswrapper[4896]: I0126 15:36:45.175983 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6"
Jan 26 15:36:45 crc kubenswrapper[4896]: E0126 15:36:45.177292 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:45.677274914 +0000 UTC m=+163.459155307 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:45 crc kubenswrapper[4896]: I0126 15:36:45.194280 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-p79qr"
Jan 26 15:36:45 crc kubenswrapper[4896]: I0126 15:36:45.199546 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-p79qr"
Jan 26 15:36:45 crc kubenswrapper[4896]: I0126 15:36:45.332426 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 15:36:45 crc kubenswrapper[4896]: E0126 15:36:45.333784 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:45.833764793 +0000 UTC m=+163.615645196 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:45 crc kubenswrapper[4896]: I0126 15:36:45.366882 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hwqlq"
Jan 26 15:36:45 crc kubenswrapper[4896]: I0126 15:36:45.366926 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hwqlq"
Jan 26 15:36:45 crc kubenswrapper[4896]: I0126 15:36:45.374921 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bhztt" event={"ID":"2496d14f-9aa6-4ee7-9db9-5bca63fa5a54","Type":"ContainerStarted","Data":"fc73ca28f9dbd5d6155758bfbd8f998dc68a92949541c294a433b88311e0ba1b"}
Jan 26 15:36:45 crc kubenswrapper[4896]: I0126 15:36:45.431065 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-bhztt" podStartSLOduration=13.431050787 podStartE2EDuration="13.431050787s" podCreationTimestamp="2026-01-26 15:36:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:45.430893963 +0000 UTC m=+163.212774366" watchObservedRunningTime="2026-01-26 15:36:45.431050787 +0000 UTC m=+163.212931180"
Jan 26 15:36:45 crc kubenswrapper[4896]: I0126 15:36:45.433561 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6"
Jan 26 15:36:45 crc kubenswrapper[4896]: I0126 15:36:45.434422 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hwqlq"
Jan 26 15:36:45 crc kubenswrapper[4896]: E0126 15:36:45.436526 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:45.936514313 +0000 UTC m=+163.718394706 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:45 crc kubenswrapper[4896]: I0126 15:36:45.510915 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p5g7v"]
Jan 26 15:36:45 crc kubenswrapper[4896]: I0126 15:36:45.534832 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 15:36:45 crc kubenswrapper[4896]: E0126 15:36:45.535244 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 15:36:46.035224815 +0000 UTC m=+163.817105208 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:45 crc kubenswrapper[4896]: I0126 15:36:45.636162 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6"
Jan 26 15:36:45 crc kubenswrapper[4896]: E0126 15:36:45.636489 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 15:36:46.136477345 +0000 UTC m=+163.918357738 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9sc6" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 15:36:45 crc kubenswrapper[4896]: I0126 15:36:45.636863 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w5d62"]
Jan 26 15:36:45 crc kubenswrapper[4896]: I0126 15:36:45.665658 4896 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-26T15:36:44.774618816Z","Handler":null,"Name":""}
Jan 26 15:36:45 crc kubenswrapper[4896]: I0126 15:36:45.669975 4896 patch_prober.go:28] interesting pod/apiserver-76f77b778f-gb6wx container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 26 15:36:45 crc kubenswrapper[4896]: [+]log ok
Jan 26 15:36:45 crc kubenswrapper[4896]: [+]etcd ok
Jan 26 15:36:45 crc kubenswrapper[4896]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 26 15:36:45 crc kubenswrapper[4896]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 26 15:36:45 crc kubenswrapper[4896]: [+]poststarthook/max-in-flight-filter ok
Jan 26 15:36:45 crc kubenswrapper[4896]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 26 15:36:45 crc kubenswrapper[4896]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Jan 26 15:36:45 crc kubenswrapper[4896]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Jan 26 15:36:45 crc kubenswrapper[4896]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Jan 26 15:36:45 crc kubenswrapper[4896]: [+]poststarthook/project.openshift.io-projectcache ok
Jan 26 15:36:45 crc kubenswrapper[4896]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Jan 26 15:36:45 crc kubenswrapper[4896]: [+]poststarthook/openshift.io-startinformers ok
Jan 26 15:36:45 crc kubenswrapper[4896]: [+]poststarthook/openshift.io-restmapperupdater ok
Jan 26 15:36:45 crc kubenswrapper[4896]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 26 15:36:45 crc kubenswrapper[4896]: livez check failed
Jan 26 15:36:45 crc kubenswrapper[4896]: I0126 15:36:45.670034 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-gb6wx" podUID="1399e6aa-14ba-40e6-aec2-b3268e5b7102" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 15:36:45 crc kubenswrapper[4896]: I0126 15:36:45.701621 4896 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 26 15:36:45 crc kubenswrapper[4896]: I0126 15:36:45.702418 4896 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 26 15:36:45 crc kubenswrapper[4896]: I0126 15:36:45.713430 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-md672"]
Jan 26 15:36:45 crc kubenswrapper[4896]: I0126 15:36:45.738698 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 15:36:45 crc kubenswrapper[4896]: I0126 15:36:45.746170 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 26 15:36:45 crc kubenswrapper[4896]: I0126 15:36:45.846670 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6"
Jan 26 15:36:45 crc kubenswrapper[4896]: I0126 15:36:45.851175 4896 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 26 15:36:45 crc kubenswrapper[4896]: I0126 15:36:45.851298 4896 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6"
Jan 26 15:36:45 crc kubenswrapper[4896]: I0126 15:36:45.889237 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sldms"]
Jan 26 15:36:45 crc kubenswrapper[4896]: I0126 15:36:45.890712 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9sc6\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6"
Jan 26 15:36:45 crc kubenswrapper[4896]: W0126 15:36:45.926688 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3369383_b89e_4cc5_8267_3f849ff0c294.slice/crio-4f913973ff399161740e10bcab6c7d47bac7b3f93d239eec57b4109838ac1d74 WatchSource:0}: Error finding container 4f913973ff399161740e10bcab6c7d47bac7b3f93d239eec57b4109838ac1d74: Status 404 returned error can't find the container with id 4f913973ff399161740e10bcab6c7d47bac7b3f93d239eec57b4109838ac1d74
Jan 26 15:36:45 crc kubenswrapper[4896]: I0126 15:36:45.930830 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6"
Jan 26 15:36:45 crc kubenswrapper[4896]: I0126 15:36:45.946916 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4f48w"]
Jan 26 15:36:45 crc kubenswrapper[4896]: I0126 15:36:45.948033 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-2zp5m"
Jan 26 15:36:45 crc kubenswrapper[4896]: I0126 15:36:45.948107 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4f48w"
Jan 26 15:36:45 crc kubenswrapper[4896]: I0126 15:36:45.950655 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 26 15:36:45 crc kubenswrapper[4896]: I0126 15:36:45.955536 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4f48w"]
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.066924 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/298f103b-bf7b-40db-ace2-2780e91fde2c-config-volume\") pod \"298f103b-bf7b-40db-ace2-2780e91fde2c\" (UID: \"298f103b-bf7b-40db-ace2-2780e91fde2c\") "
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.067290 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/298f103b-bf7b-40db-ace2-2780e91fde2c-secret-volume\") pod \"298f103b-bf7b-40db-ace2-2780e91fde2c\" (UID: \"298f103b-bf7b-40db-ace2-2780e91fde2c\") "
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.067360 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvjrr\" (UniqueName: \"kubernetes.io/projected/298f103b-bf7b-40db-ace2-2780e91fde2c-kube-api-access-nvjrr\") pod \"298f103b-bf7b-40db-ace2-2780e91fde2c\" (UID: \"298f103b-bf7b-40db-ace2-2780e91fde2c\") "
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.067511 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pcx8\" (UniqueName: \"kubernetes.io/projected/b52b5ef6-729b-4468-aa67-9a6f645ff27c-kube-api-access-9pcx8\") pod \"redhat-marketplace-4f48w\" (UID: \"b52b5ef6-729b-4468-aa67-9a6f645ff27c\") " pod="openshift-marketplace/redhat-marketplace-4f48w"
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.067609 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b52b5ef6-729b-4468-aa67-9a6f645ff27c-catalog-content\") pod \"redhat-marketplace-4f48w\" (UID: \"b52b5ef6-729b-4468-aa67-9a6f645ff27c\") " pod="openshift-marketplace/redhat-marketplace-4f48w"
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.067668 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b52b5ef6-729b-4468-aa67-9a6f645ff27c-utilities\") pod \"redhat-marketplace-4f48w\" (UID: \"b52b5ef6-729b-4468-aa67-9a6f645ff27c\") " pod="openshift-marketplace/redhat-marketplace-4f48w"
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.067770 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/298f103b-bf7b-40db-ace2-2780e91fde2c-config-volume" (OuterVolumeSpecName: "config-volume") pod "298f103b-bf7b-40db-ace2-2780e91fde2c" (UID: "298f103b-bf7b-40db-ace2-2780e91fde2c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.131007 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/298f103b-bf7b-40db-ace2-2780e91fde2c-kube-api-access-nvjrr" (OuterVolumeSpecName: "kube-api-access-nvjrr") pod "298f103b-bf7b-40db-ace2-2780e91fde2c" (UID: "298f103b-bf7b-40db-ace2-2780e91fde2c"). InnerVolumeSpecName "kube-api-access-nvjrr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.134736 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/298f103b-bf7b-40db-ace2-2780e91fde2c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "298f103b-bf7b-40db-ace2-2780e91fde2c" (UID: "298f103b-bf7b-40db-ace2-2780e91fde2c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.151702 4896 patch_prober.go:28] interesting pod/router-default-5444994796-ms78m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 15:36:46 crc kubenswrapper[4896]: [-]has-synced failed: reason withheld
Jan 26 15:36:46 crc kubenswrapper[4896]: [+]process-running ok
Jan 26 15:36:46 crc kubenswrapper[4896]: healthz check failed
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.151758 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ms78m" podUID="5aecf14a-cf97-41d8-b037-58f39a0a19bf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.168500 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b52b5ef6-729b-4468-aa67-9a6f645ff27c-catalog-content\") pod \"redhat-marketplace-4f48w\" (UID: \"b52b5ef6-729b-4468-aa67-9a6f645ff27c\") " pod="openshift-marketplace/redhat-marketplace-4f48w"
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.168603 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b52b5ef6-729b-4468-aa67-9a6f645ff27c-utilities\") pod \"redhat-marketplace-4f48w\" (UID: \"b52b5ef6-729b-4468-aa67-9a6f645ff27c\") " pod="openshift-marketplace/redhat-marketplace-4f48w"
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.168645 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pcx8\" (UniqueName: \"kubernetes.io/projected/b52b5ef6-729b-4468-aa67-9a6f645ff27c-kube-api-access-9pcx8\") pod \"redhat-marketplace-4f48w\" (UID: \"b52b5ef6-729b-4468-aa67-9a6f645ff27c\") " pod="openshift-marketplace/redhat-marketplace-4f48w"
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.168708 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvjrr\" (UniqueName: \"kubernetes.io/projected/298f103b-bf7b-40db-ace2-2780e91fde2c-kube-api-access-nvjrr\") on node \"crc\" DevicePath \"\""
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.168722 4896 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/298f103b-bf7b-40db-ace2-2780e91fde2c-config-volume\") on node \"crc\" DevicePath \"\""
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.168735 4896 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/298f103b-bf7b-40db-ace2-2780e91fde2c-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.169446 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b52b5ef6-729b-4468-aa67-9a6f645ff27c-catalog-content\") pod \"redhat-marketplace-4f48w\" (UID: \"b52b5ef6-729b-4468-aa67-9a6f645ff27c\") " pod="openshift-marketplace/redhat-marketplace-4f48w"
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.169686 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b52b5ef6-729b-4468-aa67-9a6f645ff27c-utilities\") pod \"redhat-marketplace-4f48w\" (UID: \"b52b5ef6-729b-4468-aa67-9a6f645ff27c\") " pod="openshift-marketplace/redhat-marketplace-4f48w"
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.194664 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pcx8\" (UniqueName: \"kubernetes.io/projected/b52b5ef6-729b-4468-aa67-9a6f645ff27c-kube-api-access-9pcx8\") pod \"redhat-marketplace-4f48w\" (UID: \"b52b5ef6-729b-4468-aa67-9a6f645ff27c\") " pod="openshift-marketplace/redhat-marketplace-4f48w"
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.273757 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4f48w"
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.369650 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gvwsc"]
Jan 26 15:36:46 crc kubenswrapper[4896]: E0126 15:36:46.370144 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="298f103b-bf7b-40db-ace2-2780e91fde2c" containerName="collect-profiles"
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.370167 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="298f103b-bf7b-40db-ace2-2780e91fde2c" containerName="collect-profiles"
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.370320 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="298f103b-bf7b-40db-ace2-2780e91fde2c" containerName="collect-profiles"
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.371134 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gvwsc"
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.381935 4896 generic.go:334] "Generic (PLEG): container finished" podID="72a04b75-e07c-4137-b917-928b40745f65" containerID="d0968ce8db63195691a7954e7eea89e984e29e9c4bd516a663bf66955d294482" exitCode=0
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.382006 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p5g7v" event={"ID":"72a04b75-e07c-4137-b917-928b40745f65","Type":"ContainerDied","Data":"d0968ce8db63195691a7954e7eea89e984e29e9c4bd516a663bf66955d294482"}
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.382033 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p5g7v" event={"ID":"72a04b75-e07c-4137-b917-928b40745f65","Type":"ContainerStarted","Data":"dcfa4e3bfdc0618b1738994efe0127bc52d546f86a07b9579997fe4fa073f936"}
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.387338 4896 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.390315 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gvwsc"]
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.400245 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-2zp5m"
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.400928 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-2zp5m" event={"ID":"298f103b-bf7b-40db-ace2-2780e91fde2c","Type":"ContainerDied","Data":"b53ac3cb2c4c0841717d5f43ccc359a74ea433ded3e27de7cae0ce18f5aaa2b9"}
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.400988 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b53ac3cb2c4c0841717d5f43ccc359a74ea433ded3e27de7cae0ce18f5aaa2b9"
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.424672 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n9sc6"]
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.429704 4896 generic.go:334] "Generic (PLEG): container finished" podID="1bb8ed73-cf27-49c5-98cc-79e5a488f604" containerID="8dd3f744f665ac6b2928dcc2cc74bc5d19e2028646ddcea6b96288ca99c0ee73" exitCode=0
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.429789 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w5d62" event={"ID":"1bb8ed73-cf27-49c5-98cc-79e5a488f604","Type":"ContainerDied","Data":"8dd3f744f665ac6b2928dcc2cc74bc5d19e2028646ddcea6b96288ca99c0ee73"}
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.429817 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w5d62" event={"ID":"1bb8ed73-cf27-49c5-98cc-79e5a488f604","Type":"ContainerStarted","Data":"41cb9d30aca5400934f8ee96882526d70536a06dcafa2bf3d4789c83af3f294d"}
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.438939 4896 generic.go:334] "Generic (PLEG): container finished" podID="3a81db72-eb2c-4c51-8b58-f825e2fbd3bb" containerID="84eae55d46710181f3bea88b053cf78f886a75c1e90c42b47af5e1b2c844a652" exitCode=0
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.439023 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-md672" event={"ID":"3a81db72-eb2c-4c51-8b58-f825e2fbd3bb","Type":"ContainerDied","Data":"84eae55d46710181f3bea88b053cf78f886a75c1e90c42b47af5e1b2c844a652"}
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.440049 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-md672" event={"ID":"3a81db72-eb2c-4c51-8b58-f825e2fbd3bb","Type":"ContainerStarted","Data":"61f9fa0580c4444b9e17e0d7bf46494fe85762a1883113d27a974705eb2a2a52"}
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.449715 4896 generic.go:334] "Generic (PLEG): container finished" podID="d3369383-b89e-4cc5-8267-3f849ff0c294" containerID="9074cc17afe4c77bb3a37f20d649f96865bf151524eb765a052b53788631d7f6" exitCode=0
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.450779 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sldms" event={"ID":"d3369383-b89e-4cc5-8267-3f849ff0c294","Type":"ContainerDied","Data":"9074cc17afe4c77bb3a37f20d649f96865bf151524eb765a052b53788631d7f6"}
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.450810 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sldms" event={"ID":"d3369383-b89e-4cc5-8267-3f849ff0c294","Type":"ContainerStarted","Data":"4f913973ff399161740e10bcab6c7d47bac7b3f93d239eec57b4109838ac1d74"}
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.456303 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hwqlq"
Jan 26 15:36:46 crc kubenswrapper[4896]: W0126 15:36:46.458285 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8428c0c6_79c5_46d3_a6eb_5126303dfd60.slice/crio-c2dd15f36168c74e6671401efb0d6ad97c59939ea5a433b4cc8d5c10e0984f5e WatchSource:0}: Error finding container c2dd15f36168c74e6671401efb0d6ad97c59939ea5a433b4cc8d5c10e0984f5e: Status 404 returned error can't find the container with id c2dd15f36168c74e6671401efb0d6ad97c59939ea5a433b4cc8d5c10e0984f5e
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.472784 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b359163c-745c-4adf-97ff-872ee69ae3e5-utilities\") pod \"redhat-marketplace-gvwsc\" (UID: \"b359163c-745c-4adf-97ff-872ee69ae3e5\") " pod="openshift-marketplace/redhat-marketplace-gvwsc"
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.472842 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfvb7\" (UniqueName: \"kubernetes.io/projected/b359163c-745c-4adf-97ff-872ee69ae3e5-kube-api-access-wfvb7\") pod \"redhat-marketplace-gvwsc\" (UID: \"b359163c-745c-4adf-97ff-872ee69ae3e5\") " pod="openshift-marketplace/redhat-marketplace-gvwsc"
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.472895 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b359163c-745c-4adf-97ff-872ee69ae3e5-catalog-content\") pod \"redhat-marketplace-gvwsc\" (UID: \"b359163c-745c-4adf-97ff-872ee69ae3e5\") " pod="openshift-marketplace/redhat-marketplace-gvwsc"
Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.574107
4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfvb7\" (UniqueName: \"kubernetes.io/projected/b359163c-745c-4adf-97ff-872ee69ae3e5-kube-api-access-wfvb7\") pod \"redhat-marketplace-gvwsc\" (UID: \"b359163c-745c-4adf-97ff-872ee69ae3e5\") " pod="openshift-marketplace/redhat-marketplace-gvwsc" Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.574234 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b359163c-745c-4adf-97ff-872ee69ae3e5-catalog-content\") pod \"redhat-marketplace-gvwsc\" (UID: \"b359163c-745c-4adf-97ff-872ee69ae3e5\") " pod="openshift-marketplace/redhat-marketplace-gvwsc" Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.574518 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b359163c-745c-4adf-97ff-872ee69ae3e5-utilities\") pod \"redhat-marketplace-gvwsc\" (UID: \"b359163c-745c-4adf-97ff-872ee69ae3e5\") " pod="openshift-marketplace/redhat-marketplace-gvwsc" Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.576080 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b359163c-745c-4adf-97ff-872ee69ae3e5-catalog-content\") pod \"redhat-marketplace-gvwsc\" (UID: \"b359163c-745c-4adf-97ff-872ee69ae3e5\") " pod="openshift-marketplace/redhat-marketplace-gvwsc" Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.581250 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b359163c-745c-4adf-97ff-872ee69ae3e5-utilities\") pod \"redhat-marketplace-gvwsc\" (UID: \"b359163c-745c-4adf-97ff-872ee69ae3e5\") " pod="openshift-marketplace/redhat-marketplace-gvwsc" Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.600933 4896 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-wfvb7\" (UniqueName: \"kubernetes.io/projected/b359163c-745c-4adf-97ff-872ee69ae3e5-kube-api-access-wfvb7\") pod \"redhat-marketplace-gvwsc\" (UID: \"b359163c-745c-4adf-97ff-872ee69ae3e5\") " pod="openshift-marketplace/redhat-marketplace-gvwsc" Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.723532 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gvwsc" Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.808227 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 26 15:36:46 crc kubenswrapper[4896]: I0126 15:36:46.867026 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4f48w"] Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.183736 4896 patch_prober.go:28] interesting pod/router-default-5444994796-ms78m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 15:36:47 crc kubenswrapper[4896]: [-]has-synced failed: reason withheld Jan 26 15:36:47 crc kubenswrapper[4896]: [+]process-running ok Jan 26 15:36:47 crc kubenswrapper[4896]: healthz check failed Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.183847 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ms78m" podUID="5aecf14a-cf97-41d8-b037-58f39a0a19bf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.203705 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l2v2b"] Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.206141 4896 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l2v2b" Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.209222 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.215689 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l2v2b"] Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.238126 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gvwsc"] Jan 26 15:36:47 crc kubenswrapper[4896]: W0126 15:36:47.268472 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb359163c_745c_4adf_97ff_872ee69ae3e5.slice/crio-82353a8e5f8facfd94be5da72f948ac7ccdfb44a613fd10f74ae6c7cf4ad3cea WatchSource:0}: Error finding container 82353a8e5f8facfd94be5da72f948ac7ccdfb44a613fd10f74ae6c7cf4ad3cea: Status 404 returned error can't find the container with id 82353a8e5f8facfd94be5da72f948ac7ccdfb44a613fd10f74ae6c7cf4ad3cea Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.295073 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40915453-ba7c-41e6-bc4f-3a221097ae62-catalog-content\") pod \"redhat-operators-l2v2b\" (UID: \"40915453-ba7c-41e6-bc4f-3a221097ae62\") " pod="openshift-marketplace/redhat-operators-l2v2b" Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.295223 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nttsk\" (UniqueName: \"kubernetes.io/projected/40915453-ba7c-41e6-bc4f-3a221097ae62-kube-api-access-nttsk\") pod \"redhat-operators-l2v2b\" (UID: \"40915453-ba7c-41e6-bc4f-3a221097ae62\") " pod="openshift-marketplace/redhat-operators-l2v2b" Jan 26 
15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.295264 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40915453-ba7c-41e6-bc4f-3a221097ae62-utilities\") pod \"redhat-operators-l2v2b\" (UID: \"40915453-ba7c-41e6-bc4f-3a221097ae62\") " pod="openshift-marketplace/redhat-operators-l2v2b" Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.396531 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nttsk\" (UniqueName: \"kubernetes.io/projected/40915453-ba7c-41e6-bc4f-3a221097ae62-kube-api-access-nttsk\") pod \"redhat-operators-l2v2b\" (UID: \"40915453-ba7c-41e6-bc4f-3a221097ae62\") " pod="openshift-marketplace/redhat-operators-l2v2b" Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.396972 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40915453-ba7c-41e6-bc4f-3a221097ae62-utilities\") pod \"redhat-operators-l2v2b\" (UID: \"40915453-ba7c-41e6-bc4f-3a221097ae62\") " pod="openshift-marketplace/redhat-operators-l2v2b" Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.397001 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40915453-ba7c-41e6-bc4f-3a221097ae62-catalog-content\") pod \"redhat-operators-l2v2b\" (UID: \"40915453-ba7c-41e6-bc4f-3a221097ae62\") " pod="openshift-marketplace/redhat-operators-l2v2b" Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.397501 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40915453-ba7c-41e6-bc4f-3a221097ae62-catalog-content\") pod \"redhat-operators-l2v2b\" (UID: \"40915453-ba7c-41e6-bc4f-3a221097ae62\") " pod="openshift-marketplace/redhat-operators-l2v2b" Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 
15:36:47.397536 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40915453-ba7c-41e6-bc4f-3a221097ae62-utilities\") pod \"redhat-operators-l2v2b\" (UID: \"40915453-ba7c-41e6-bc4f-3a221097ae62\") " pod="openshift-marketplace/redhat-operators-l2v2b" Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.432369 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nttsk\" (UniqueName: \"kubernetes.io/projected/40915453-ba7c-41e6-bc4f-3a221097ae62-kube-api-access-nttsk\") pod \"redhat-operators-l2v2b\" (UID: \"40915453-ba7c-41e6-bc4f-3a221097ae62\") " pod="openshift-marketplace/redhat-operators-l2v2b" Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.476356 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvwsc" event={"ID":"b359163c-745c-4adf-97ff-872ee69ae3e5","Type":"ContainerStarted","Data":"c646f7432213b3cbf0210ff2d78fa56eb9799f81d4adefbda41953f074080fa7"} Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.476405 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvwsc" event={"ID":"b359163c-745c-4adf-97ff-872ee69ae3e5","Type":"ContainerStarted","Data":"82353a8e5f8facfd94be5da72f948ac7ccdfb44a613fd10f74ae6c7cf4ad3cea"} Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.480041 4896 generic.go:334] "Generic (PLEG): container finished" podID="b52b5ef6-729b-4468-aa67-9a6f645ff27c" containerID="c0ef78515858059f4007055694286fe5d21095b29539f9e16503fe54bce3bd67" exitCode=0 Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.480133 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4f48w" event={"ID":"b52b5ef6-729b-4468-aa67-9a6f645ff27c","Type":"ContainerDied","Data":"c0ef78515858059f4007055694286fe5d21095b29539f9e16503fe54bce3bd67"} Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 
15:36:47.480158 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4f48w" event={"ID":"b52b5ef6-729b-4468-aa67-9a6f645ff27c","Type":"ContainerStarted","Data":"c78677971af83a1b5988d3a7f5ae10e86903483396fef083624bbd5d4d7e430c"} Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.484716 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" event={"ID":"8428c0c6-79c5-46d3-a6eb-5126303dfd60","Type":"ContainerStarted","Data":"78dd6196c216118c2d835e90dc09f5066f01123f2304f15729b0dacb86adc0c9"} Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.484761 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" event={"ID":"8428c0c6-79c5-46d3-a6eb-5126303dfd60","Type":"ContainerStarted","Data":"c2dd15f36168c74e6671401efb0d6ad97c59939ea5a433b4cc8d5c10e0984f5e"} Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.484843 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.500728 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fbeb890e-90af-4b15-a106-27b03465209f-metrics-certs\") pod \"network-metrics-daemon-klrrb\" (UID: \"fbeb890e-90af-4b15-a106-27b03465209f\") " pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.504848 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fbeb890e-90af-4b15-a106-27b03465209f-metrics-certs\") pod \"network-metrics-daemon-klrrb\" (UID: \"fbeb890e-90af-4b15-a106-27b03465209f\") " pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.521559 4896 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" podStartSLOduration=143.52153923 podStartE2EDuration="2m23.52153923s" podCreationTimestamp="2026-01-26 15:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:47.51781349 +0000 UTC m=+165.299693903" watchObservedRunningTime="2026-01-26 15:36:47.52153923 +0000 UTC m=+165.303419623" Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.604093 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l2v2b" Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.615178 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-glz7z"] Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.616523 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-glz7z" Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.633892 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-glz7z"] Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.706303 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fpwb\" (UniqueName: \"kubernetes.io/projected/6809e36d-3944-4ee9-885d-d68ad3a99d68-kube-api-access-7fpwb\") pod \"redhat-operators-glz7z\" (UID: \"6809e36d-3944-4ee9-885d-d68ad3a99d68\") " pod="openshift-marketplace/redhat-operators-glz7z" Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.706352 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6809e36d-3944-4ee9-885d-d68ad3a99d68-utilities\") pod \"redhat-operators-glz7z\" (UID: 
\"6809e36d-3944-4ee9-885d-d68ad3a99d68\") " pod="openshift-marketplace/redhat-operators-glz7z" Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.706404 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6809e36d-3944-4ee9-885d-d68ad3a99d68-catalog-content\") pod \"redhat-operators-glz7z\" (UID: \"6809e36d-3944-4ee9-885d-d68ad3a99d68\") " pod="openshift-marketplace/redhat-operators-glz7z" Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.774061 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-klrrb" Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.810162 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6809e36d-3944-4ee9-885d-d68ad3a99d68-catalog-content\") pod \"redhat-operators-glz7z\" (UID: \"6809e36d-3944-4ee9-885d-d68ad3a99d68\") " pod="openshift-marketplace/redhat-operators-glz7z" Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.815915 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6809e36d-3944-4ee9-885d-d68ad3a99d68-catalog-content\") pod \"redhat-operators-glz7z\" (UID: \"6809e36d-3944-4ee9-885d-d68ad3a99d68\") " pod="openshift-marketplace/redhat-operators-glz7z" Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.816286 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fpwb\" (UniqueName: \"kubernetes.io/projected/6809e36d-3944-4ee9-885d-d68ad3a99d68-kube-api-access-7fpwb\") pod \"redhat-operators-glz7z\" (UID: \"6809e36d-3944-4ee9-885d-d68ad3a99d68\") " pod="openshift-marketplace/redhat-operators-glz7z" Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.816709 4896 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6809e36d-3944-4ee9-885d-d68ad3a99d68-utilities\") pod \"redhat-operators-glz7z\" (UID: \"6809e36d-3944-4ee9-885d-d68ad3a99d68\") " pod="openshift-marketplace/redhat-operators-glz7z" Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.816916 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6809e36d-3944-4ee9-885d-d68ad3a99d68-utilities\") pod \"redhat-operators-glz7z\" (UID: \"6809e36d-3944-4ee9-885d-d68ad3a99d68\") " pod="openshift-marketplace/redhat-operators-glz7z" Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.846337 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fpwb\" (UniqueName: \"kubernetes.io/projected/6809e36d-3944-4ee9-885d-d68ad3a99d68-kube-api-access-7fpwb\") pod \"redhat-operators-glz7z\" (UID: \"6809e36d-3944-4ee9-885d-d68ad3a99d68\") " pod="openshift-marketplace/redhat-operators-glz7z" Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.916198 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.917175 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.925271 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.925608 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.929028 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 15:36:47 crc kubenswrapper[4896]: I0126 15:36:47.968447 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-glz7z" Jan 26 15:36:48 crc kubenswrapper[4896]: I0126 15:36:48.027650 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9043f9c4-90bb-432d-861d-7c3c6e8fb6b8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9043f9c4-90bb-432d-861d-7c3c6e8fb6b8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 15:36:48 crc kubenswrapper[4896]: I0126 15:36:48.027699 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9043f9c4-90bb-432d-861d-7c3c6e8fb6b8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9043f9c4-90bb-432d-861d-7c3c6e8fb6b8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 15:36:48 crc kubenswrapper[4896]: I0126 15:36:48.112623 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l2v2b"] Jan 26 15:36:48 crc kubenswrapper[4896]: I0126 15:36:48.128996 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9043f9c4-90bb-432d-861d-7c3c6e8fb6b8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9043f9c4-90bb-432d-861d-7c3c6e8fb6b8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 15:36:48 crc kubenswrapper[4896]: I0126 15:36:48.129045 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9043f9c4-90bb-432d-861d-7c3c6e8fb6b8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9043f9c4-90bb-432d-861d-7c3c6e8fb6b8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 15:36:48 crc kubenswrapper[4896]: I0126 15:36:48.129212 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9043f9c4-90bb-432d-861d-7c3c6e8fb6b8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9043f9c4-90bb-432d-861d-7c3c6e8fb6b8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 15:36:48 crc kubenswrapper[4896]: I0126 15:36:48.166294 4896 patch_prober.go:28] interesting pod/router-default-5444994796-ms78m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 15:36:48 crc kubenswrapper[4896]: [-]has-synced failed: reason withheld Jan 26 15:36:48 crc kubenswrapper[4896]: [+]process-running ok Jan 26 15:36:48 crc kubenswrapper[4896]: healthz check failed Jan 26 15:36:48 crc kubenswrapper[4896]: I0126 15:36:48.166349 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ms78m" podUID="5aecf14a-cf97-41d8-b037-58f39a0a19bf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 15:36:48 crc kubenswrapper[4896]: I0126 15:36:48.187241 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9043f9c4-90bb-432d-861d-7c3c6e8fb6b8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9043f9c4-90bb-432d-861d-7c3c6e8fb6b8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 15:36:48 crc kubenswrapper[4896]: I0126 15:36:48.291924 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 15:36:48 crc kubenswrapper[4896]: I0126 15:36:48.394458 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-glz7z"] Jan 26 15:36:48 crc kubenswrapper[4896]: I0126 15:36:48.499979 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-klrrb"] Jan 26 15:36:48 crc kubenswrapper[4896]: I0126 15:36:48.650857 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 15:36:48 crc kubenswrapper[4896]: I0126 15:36:48.666595 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2v2b" event={"ID":"40915453-ba7c-41e6-bc4f-3a221097ae62","Type":"ContainerStarted","Data":"ecdc6137a6237b61890db251cb61db9df5f135c059f51c490f54db8f412c3b78"} Jan 26 15:36:48 crc kubenswrapper[4896]: I0126 15:36:48.669280 4896 generic.go:334] "Generic (PLEG): container finished" podID="b359163c-745c-4adf-97ff-872ee69ae3e5" containerID="c646f7432213b3cbf0210ff2d78fa56eb9799f81d4adefbda41953f074080fa7" exitCode=0 Jan 26 15:36:48 crc kubenswrapper[4896]: I0126 15:36:48.669332 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvwsc" event={"ID":"b359163c-745c-4adf-97ff-872ee69ae3e5","Type":"ContainerDied","Data":"c646f7432213b3cbf0210ff2d78fa56eb9799f81d4adefbda41953f074080fa7"} Jan 26 15:36:48 crc kubenswrapper[4896]: I0126 15:36:48.688341 4896 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-glz7z" event={"ID":"6809e36d-3944-4ee9-885d-d68ad3a99d68","Type":"ContainerStarted","Data":"50f882b4540e4cf43dc0cb405da6ad4a6309ca715a836dd6a143bbdf7963c698"} Jan 26 15:36:48 crc kubenswrapper[4896]: W0126 15:36:48.688923 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbeb890e_90af_4b15_a106_27b03465209f.slice/crio-a96221390774fe3988b65ee1910ef06ec4932b4f6683fc6279f4ab48527e63c8 WatchSource:0}: Error finding container a96221390774fe3988b65ee1910ef06ec4932b4f6683fc6279f4ab48527e63c8: Status 404 returned error can't find the container with id a96221390774fe3988b65ee1910ef06ec4932b4f6683fc6279f4ab48527e63c8 Jan 26 15:36:48 crc kubenswrapper[4896]: I0126 15:36:48.813908 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:36:48 crc kubenswrapper[4896]: I0126 15:36:48.813972 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:36:48 crc kubenswrapper[4896]: I0126 15:36:48.829123 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 15:36:48 crc kubenswrapper[4896]: I0126 15:36:48.829960 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 15:36:48 crc kubenswrapper[4896]: I0126 15:36:48.833817 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 26 15:36:48 crc kubenswrapper[4896]: I0126 15:36:48.894031 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 15:36:48 crc kubenswrapper[4896]: I0126 15:36:48.894315 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 26 15:36:48 crc kubenswrapper[4896]: I0126 15:36:48.895493 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c4c030f-c3a2-4550-a1aa-d38eb629081b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3c4c030f-c3a2-4550-a1aa-d38eb629081b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 15:36:48 crc kubenswrapper[4896]: I0126 15:36:48.895721 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c4c030f-c3a2-4550-a1aa-d38eb629081b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3c4c030f-c3a2-4550-a1aa-d38eb629081b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 15:36:48 crc kubenswrapper[4896]: I0126 15:36:48.996858 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c4c030f-c3a2-4550-a1aa-d38eb629081b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3c4c030f-c3a2-4550-a1aa-d38eb629081b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 15:36:48 crc kubenswrapper[4896]: I0126 15:36:48.997269 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/3c4c030f-c3a2-4550-a1aa-d38eb629081b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3c4c030f-c3a2-4550-a1aa-d38eb629081b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 15:36:48 crc kubenswrapper[4896]: I0126 15:36:48.997369 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c4c030f-c3a2-4550-a1aa-d38eb629081b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3c4c030f-c3a2-4550-a1aa-d38eb629081b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 15:36:49 crc kubenswrapper[4896]: I0126 15:36:49.023469 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c4c030f-c3a2-4550-a1aa-d38eb629081b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3c4c030f-c3a2-4550-a1aa-d38eb629081b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 15:36:49 crc kubenswrapper[4896]: I0126 15:36:49.074071 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 15:36:49 crc kubenswrapper[4896]: I0126 15:36:49.157772 4896 patch_prober.go:28] interesting pod/router-default-5444994796-ms78m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 15:36:49 crc kubenswrapper[4896]: [-]has-synced failed: reason withheld Jan 26 15:36:49 crc kubenswrapper[4896]: [+]process-running ok Jan 26 15:36:49 crc kubenswrapper[4896]: healthz check failed Jan 26 15:36:49 crc kubenswrapper[4896]: I0126 15:36:49.157844 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ms78m" podUID="5aecf14a-cf97-41d8-b037-58f39a0a19bf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 15:36:49 crc kubenswrapper[4896]: I0126 15:36:49.310557 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-gb6wx" Jan 26 15:36:49 crc kubenswrapper[4896]: I0126 15:36:49.324744 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-gb6wx" Jan 26 15:36:49 crc kubenswrapper[4896]: I0126 15:36:49.633361 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 15:36:49 crc kubenswrapper[4896]: W0126 15:36:49.650140 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3c4c030f_c3a2_4550_a1aa_d38eb629081b.slice/crio-e5ca76a7259aaeef4455b332af1e373df1dc6d17b4b348747cec3000633d2125 WatchSource:0}: Error finding container e5ca76a7259aaeef4455b332af1e373df1dc6d17b4b348747cec3000633d2125: Status 404 returned error can't find the container with id e5ca76a7259aaeef4455b332af1e373df1dc6d17b4b348747cec3000633d2125 Jan 26 15:36:49 crc 
kubenswrapper[4896]: I0126 15:36:49.707324 4896 generic.go:334] "Generic (PLEG): container finished" podID="40915453-ba7c-41e6-bc4f-3a221097ae62" containerID="9bb852f3432c944ab399f2654588da13a5f71e7d6c940601e0cf110054340811" exitCode=0 Jan 26 15:36:49 crc kubenswrapper[4896]: I0126 15:36:49.707411 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2v2b" event={"ID":"40915453-ba7c-41e6-bc4f-3a221097ae62","Type":"ContainerDied","Data":"9bb852f3432c944ab399f2654588da13a5f71e7d6c940601e0cf110054340811"} Jan 26 15:36:49 crc kubenswrapper[4896]: I0126 15:36:49.715160 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9043f9c4-90bb-432d-861d-7c3c6e8fb6b8","Type":"ContainerStarted","Data":"6eebdc24e087539654c25b7edca02bc3dd1688832f9a3613f29b1ddb7ecf68fc"} Jan 26 15:36:49 crc kubenswrapper[4896]: I0126 15:36:49.715204 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9043f9c4-90bb-432d-861d-7c3c6e8fb6b8","Type":"ContainerStarted","Data":"619b8243fc8c83c762231de5c5e7307228c3fe934c0f89ed666cfa20ae98cfbe"} Jan 26 15:36:49 crc kubenswrapper[4896]: I0126 15:36:49.719349 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3c4c030f-c3a2-4550-a1aa-d38eb629081b","Type":"ContainerStarted","Data":"e5ca76a7259aaeef4455b332af1e373df1dc6d17b4b348747cec3000633d2125"} Jan 26 15:36:49 crc kubenswrapper[4896]: I0126 15:36:49.744278 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-klrrb" event={"ID":"fbeb890e-90af-4b15-a106-27b03465209f","Type":"ContainerStarted","Data":"7bf43d7c69e676aaa7a0bdf59cd99f7fbe9e093e029d2c1f088c5593e394429d"} Jan 26 15:36:49 crc kubenswrapper[4896]: I0126 15:36:49.744618 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/network-metrics-daemon-klrrb" event={"ID":"fbeb890e-90af-4b15-a106-27b03465209f","Type":"ContainerStarted","Data":"a96221390774fe3988b65ee1910ef06ec4932b4f6683fc6279f4ab48527e63c8"} Jan 26 15:36:49 crc kubenswrapper[4896]: I0126 15:36:49.746014 4896 generic.go:334] "Generic (PLEG): container finished" podID="6809e36d-3944-4ee9-885d-d68ad3a99d68" containerID="4e0e77f4ec79c2fea2a2a3c4e46d6dba97b6b47bfa6bc3cf0f82df8310bd055d" exitCode=0 Jan 26 15:36:49 crc kubenswrapper[4896]: I0126 15:36:49.746037 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-glz7z" event={"ID":"6809e36d-3944-4ee9-885d-d68ad3a99d68","Type":"ContainerDied","Data":"4e0e77f4ec79c2fea2a2a3c4e46d6dba97b6b47bfa6bc3cf0f82df8310bd055d"} Jan 26 15:36:49 crc kubenswrapper[4896]: I0126 15:36:49.820002 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.819977821 podStartE2EDuration="2.819977821s" podCreationTimestamp="2026-01-26 15:36:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:49.800339155 +0000 UTC m=+167.582219548" watchObservedRunningTime="2026-01-26 15:36:49.819977821 +0000 UTC m=+167.601858214" Jan 26 15:36:50 crc kubenswrapper[4896]: I0126 15:36:50.137993 4896 patch_prober.go:28] interesting pod/router-default-5444994796-ms78m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 15:36:50 crc kubenswrapper[4896]: [-]has-synced failed: reason withheld Jan 26 15:36:50 crc kubenswrapper[4896]: [+]process-running ok Jan 26 15:36:50 crc kubenswrapper[4896]: healthz check failed Jan 26 15:36:50 crc kubenswrapper[4896]: I0126 15:36:50.138048 4896 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-ms78m" podUID="5aecf14a-cf97-41d8-b037-58f39a0a19bf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 15:36:50 crc kubenswrapper[4896]: I0126 15:36:50.324927 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-7n4vw" Jan 26 15:36:51 crc kubenswrapper[4896]: I0126 15:36:51.140354 4896 patch_prober.go:28] interesting pod/router-default-5444994796-ms78m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 15:36:51 crc kubenswrapper[4896]: [-]has-synced failed: reason withheld Jan 26 15:36:51 crc kubenswrapper[4896]: [+]process-running ok Jan 26 15:36:51 crc kubenswrapper[4896]: healthz check failed Jan 26 15:36:51 crc kubenswrapper[4896]: I0126 15:36:51.140620 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ms78m" podUID="5aecf14a-cf97-41d8-b037-58f39a0a19bf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 15:36:52 crc kubenswrapper[4896]: I0126 15:36:52.001500 4896 generic.go:334] "Generic (PLEG): container finished" podID="9043f9c4-90bb-432d-861d-7c3c6e8fb6b8" containerID="6eebdc24e087539654c25b7edca02bc3dd1688832f9a3613f29b1ddb7ecf68fc" exitCode=0 Jan 26 15:36:52 crc kubenswrapper[4896]: I0126 15:36:52.001568 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9043f9c4-90bb-432d-861d-7c3c6e8fb6b8","Type":"ContainerDied","Data":"6eebdc24e087539654c25b7edca02bc3dd1688832f9a3613f29b1ddb7ecf68fc"} Jan 26 15:36:52 crc kubenswrapper[4896]: I0126 15:36:52.005565 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"3c4c030f-c3a2-4550-a1aa-d38eb629081b","Type":"ContainerStarted","Data":"360af6fb85a4b6202cfb8b8c571ef5035531ff2c00357b9d5efe2cb760281f89"} Jan 26 15:36:52 crc kubenswrapper[4896]: I0126 15:36:52.033933 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-klrrb" event={"ID":"fbeb890e-90af-4b15-a106-27b03465209f","Type":"ContainerStarted","Data":"36fb649e0c95e9468da940f48bf26ae0a36cb193a15264479a366c242a3bd4ee"} Jan 26 15:36:52 crc kubenswrapper[4896]: I0126 15:36:52.085829 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-klrrb" podStartSLOduration=148.085813607 podStartE2EDuration="2m28.085813607s" podCreationTimestamp="2026-01-26 15:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:52.085314204 +0000 UTC m=+169.867194597" watchObservedRunningTime="2026-01-26 15:36:52.085813607 +0000 UTC m=+169.867694000" Jan 26 15:36:52 crc kubenswrapper[4896]: I0126 15:36:52.104078 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=4.104056176 podStartE2EDuration="4.104056176s" podCreationTimestamp="2026-01-26 15:36:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:36:52.099513454 +0000 UTC m=+169.881393847" watchObservedRunningTime="2026-01-26 15:36:52.104056176 +0000 UTC m=+169.885936569" Jan 26 15:36:52 crc kubenswrapper[4896]: I0126 15:36:52.140448 4896 patch_prober.go:28] interesting pod/router-default-5444994796-ms78m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 15:36:52 crc kubenswrapper[4896]: [-]has-synced 
failed: reason withheld Jan 26 15:36:52 crc kubenswrapper[4896]: [+]process-running ok Jan 26 15:36:52 crc kubenswrapper[4896]: healthz check failed Jan 26 15:36:52 crc kubenswrapper[4896]: I0126 15:36:52.140501 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ms78m" podUID="5aecf14a-cf97-41d8-b037-58f39a0a19bf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 15:36:53 crc kubenswrapper[4896]: I0126 15:36:53.064121 4896 generic.go:334] "Generic (PLEG): container finished" podID="3c4c030f-c3a2-4550-a1aa-d38eb629081b" containerID="360af6fb85a4b6202cfb8b8c571ef5035531ff2c00357b9d5efe2cb760281f89" exitCode=0 Jan 26 15:36:53 crc kubenswrapper[4896]: I0126 15:36:53.064243 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3c4c030f-c3a2-4550-a1aa-d38eb629081b","Type":"ContainerDied","Data":"360af6fb85a4b6202cfb8b8c571ef5035531ff2c00357b9d5efe2cb760281f89"} Jan 26 15:36:53 crc kubenswrapper[4896]: I0126 15:36:53.136986 4896 patch_prober.go:28] interesting pod/router-default-5444994796-ms78m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 15:36:53 crc kubenswrapper[4896]: [-]has-synced failed: reason withheld Jan 26 15:36:53 crc kubenswrapper[4896]: [+]process-running ok Jan 26 15:36:53 crc kubenswrapper[4896]: healthz check failed Jan 26 15:36:53 crc kubenswrapper[4896]: I0126 15:36:53.137417 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ms78m" podUID="5aecf14a-cf97-41d8-b037-58f39a0a19bf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 15:36:53 crc kubenswrapper[4896]: I0126 15:36:53.475380 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 15:36:53 crc kubenswrapper[4896]: I0126 15:36:53.535009 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9043f9c4-90bb-432d-861d-7c3c6e8fb6b8-kube-api-access\") pod \"9043f9c4-90bb-432d-861d-7c3c6e8fb6b8\" (UID: \"9043f9c4-90bb-432d-861d-7c3c6e8fb6b8\") " Jan 26 15:36:53 crc kubenswrapper[4896]: I0126 15:36:53.535065 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9043f9c4-90bb-432d-861d-7c3c6e8fb6b8-kubelet-dir\") pod \"9043f9c4-90bb-432d-861d-7c3c6e8fb6b8\" (UID: \"9043f9c4-90bb-432d-861d-7c3c6e8fb6b8\") " Jan 26 15:36:53 crc kubenswrapper[4896]: I0126 15:36:53.535241 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9043f9c4-90bb-432d-861d-7c3c6e8fb6b8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9043f9c4-90bb-432d-861d-7c3c6e8fb6b8" (UID: "9043f9c4-90bb-432d-861d-7c3c6e8fb6b8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:36:53 crc kubenswrapper[4896]: I0126 15:36:53.535617 4896 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9043f9c4-90bb-432d-861d-7c3c6e8fb6b8-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 15:36:53 crc kubenswrapper[4896]: I0126 15:36:53.558791 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9043f9c4-90bb-432d-861d-7c3c6e8fb6b8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9043f9c4-90bb-432d-861d-7c3c6e8fb6b8" (UID: "9043f9c4-90bb-432d-861d-7c3c6e8fb6b8"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:36:53 crc kubenswrapper[4896]: I0126 15:36:53.638042 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9043f9c4-90bb-432d-861d-7c3c6e8fb6b8-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 15:36:54 crc kubenswrapper[4896]: I0126 15:36:54.076930 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 15:36:54 crc kubenswrapper[4896]: I0126 15:36:54.076945 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9043f9c4-90bb-432d-861d-7c3c6e8fb6b8","Type":"ContainerDied","Data":"619b8243fc8c83c762231de5c5e7307228c3fe934c0f89ed666cfa20ae98cfbe"} Jan 26 15:36:54 crc kubenswrapper[4896]: I0126 15:36:54.077326 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="619b8243fc8c83c762231de5c5e7307228c3fe934c0f89ed666cfa20ae98cfbe" Jan 26 15:36:54 crc kubenswrapper[4896]: I0126 15:36:54.137064 4896 patch_prober.go:28] interesting pod/router-default-5444994796-ms78m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 15:36:54 crc kubenswrapper[4896]: [-]has-synced failed: reason withheld Jan 26 15:36:54 crc kubenswrapper[4896]: [+]process-running ok Jan 26 15:36:54 crc kubenswrapper[4896]: healthz check failed Jan 26 15:36:54 crc kubenswrapper[4896]: I0126 15:36:54.137117 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ms78m" podUID="5aecf14a-cf97-41d8-b037-58f39a0a19bf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 15:36:54 crc kubenswrapper[4896]: I0126 15:36:54.426184 4896 patch_prober.go:28] 
interesting pod/console-f9d7485db-z6479 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.32:8443/health\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Jan 26 15:36:54 crc kubenswrapper[4896]: I0126 15:36:54.426263 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-z6479" podUID="09601473-06d9-4938-876d-ea6e1b9ffc91" containerName="console" probeResult="failure" output="Get \"https://10.217.0.32:8443/health\": dial tcp 10.217.0.32:8443: connect: connection refused" Jan 26 15:36:54 crc kubenswrapper[4896]: I0126 15:36:54.910229 4896 patch_prober.go:28] interesting pod/downloads-7954f5f757-rbmml container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Jan 26 15:36:54 crc kubenswrapper[4896]: I0126 15:36:54.910285 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-rbmml" podUID="a005fba8-0843-41a6-90eb-67a2aa6d0580" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Jan 26 15:36:54 crc kubenswrapper[4896]: I0126 15:36:54.910659 4896 patch_prober.go:28] interesting pod/downloads-7954f5f757-rbmml container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Jan 26 15:36:54 crc kubenswrapper[4896]: I0126 15:36:54.910677 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rbmml" podUID="a005fba8-0843-41a6-90eb-67a2aa6d0580" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Jan 
26 15:36:55 crc kubenswrapper[4896]: I0126 15:36:55.179244 4896 patch_prober.go:28] interesting pod/router-default-5444994796-ms78m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 15:36:55 crc kubenswrapper[4896]: [-]has-synced failed: reason withheld Jan 26 15:36:55 crc kubenswrapper[4896]: [+]process-running ok Jan 26 15:36:55 crc kubenswrapper[4896]: healthz check failed Jan 26 15:36:55 crc kubenswrapper[4896]: I0126 15:36:55.179514 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ms78m" podUID="5aecf14a-cf97-41d8-b037-58f39a0a19bf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 15:36:56 crc kubenswrapper[4896]: I0126 15:36:56.138598 4896 patch_prober.go:28] interesting pod/router-default-5444994796-ms78m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 15:36:56 crc kubenswrapper[4896]: [-]has-synced failed: reason withheld Jan 26 15:36:56 crc kubenswrapper[4896]: [+]process-running ok Jan 26 15:36:56 crc kubenswrapper[4896]: healthz check failed Jan 26 15:36:56 crc kubenswrapper[4896]: I0126 15:36:56.138660 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ms78m" podUID="5aecf14a-cf97-41d8-b037-58f39a0a19bf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 15:36:57 crc kubenswrapper[4896]: I0126 15:36:57.137443 4896 patch_prober.go:28] interesting pod/router-default-5444994796-ms78m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 15:36:57 crc 
kubenswrapper[4896]: [-]has-synced failed: reason withheld Jan 26 15:36:57 crc kubenswrapper[4896]: [+]process-running ok Jan 26 15:36:57 crc kubenswrapper[4896]: healthz check failed Jan 26 15:36:57 crc kubenswrapper[4896]: I0126 15:36:57.137867 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ms78m" podUID="5aecf14a-cf97-41d8-b037-58f39a0a19bf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 15:36:58 crc kubenswrapper[4896]: I0126 15:36:58.136011 4896 patch_prober.go:28] interesting pod/router-default-5444994796-ms78m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 15:36:58 crc kubenswrapper[4896]: [-]has-synced failed: reason withheld Jan 26 15:36:58 crc kubenswrapper[4896]: [+]process-running ok Jan 26 15:36:58 crc kubenswrapper[4896]: healthz check failed Jan 26 15:36:58 crc kubenswrapper[4896]: I0126 15:36:58.136086 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ms78m" podUID="5aecf14a-cf97-41d8-b037-58f39a0a19bf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 15:36:59 crc kubenswrapper[4896]: I0126 15:36:59.136740 4896 patch_prober.go:28] interesting pod/router-default-5444994796-ms78m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 15:36:59 crc kubenswrapper[4896]: [-]has-synced failed: reason withheld Jan 26 15:36:59 crc kubenswrapper[4896]: [+]process-running ok Jan 26 15:36:59 crc kubenswrapper[4896]: healthz check failed Jan 26 15:36:59 crc kubenswrapper[4896]: I0126 15:36:59.136825 4896 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-ms78m" podUID="5aecf14a-cf97-41d8-b037-58f39a0a19bf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 15:37:00 crc kubenswrapper[4896]: I0126 15:37:00.137216 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-ms78m" Jan 26 15:37:00 crc kubenswrapper[4896]: I0126 15:37:00.140849 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-ms78m" Jan 26 15:37:01 crc kubenswrapper[4896]: I0126 15:37:01.587908 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 15:37:02 crc kubenswrapper[4896]: I0126 15:37:02.725658 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 15:37:02 crc kubenswrapper[4896]: I0126 15:37:02.870088 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c4c030f-c3a2-4550-a1aa-d38eb629081b-kubelet-dir\") pod \"3c4c030f-c3a2-4550-a1aa-d38eb629081b\" (UID: \"3c4c030f-c3a2-4550-a1aa-d38eb629081b\") " Jan 26 15:37:02 crc kubenswrapper[4896]: I0126 15:37:02.870212 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c4c030f-c3a2-4550-a1aa-d38eb629081b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3c4c030f-c3a2-4550-a1aa-d38eb629081b" (UID: "3c4c030f-c3a2-4550-a1aa-d38eb629081b"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:37:02 crc kubenswrapper[4896]: I0126 15:37:02.870397 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c4c030f-c3a2-4550-a1aa-d38eb629081b-kube-api-access\") pod \"3c4c030f-c3a2-4550-a1aa-d38eb629081b\" (UID: \"3c4c030f-c3a2-4550-a1aa-d38eb629081b\") " Jan 26 15:37:02 crc kubenswrapper[4896]: I0126 15:37:02.870767 4896 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c4c030f-c3a2-4550-a1aa-d38eb629081b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 15:37:02 crc kubenswrapper[4896]: I0126 15:37:02.877732 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c4c030f-c3a2-4550-a1aa-d38eb629081b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3c4c030f-c3a2-4550-a1aa-d38eb629081b" (UID: "3c4c030f-c3a2-4550-a1aa-d38eb629081b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:37:02 crc kubenswrapper[4896]: I0126 15:37:02.971996 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c4c030f-c3a2-4550-a1aa-d38eb629081b-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 15:37:03 crc kubenswrapper[4896]: I0126 15:37:03.276477 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3c4c030f-c3a2-4550-a1aa-d38eb629081b","Type":"ContainerDied","Data":"e5ca76a7259aaeef4455b332af1e373df1dc6d17b4b348747cec3000633d2125"} Jan 26 15:37:03 crc kubenswrapper[4896]: I0126 15:37:03.276522 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5ca76a7259aaeef4455b332af1e373df1dc6d17b4b348747cec3000633d2125" Jan 26 15:37:03 crc kubenswrapper[4896]: I0126 15:37:03.276538 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 15:37:04 crc kubenswrapper[4896]: I0126 15:37:04.431114 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-z6479" Jan 26 15:37:04 crc kubenswrapper[4896]: I0126 15:37:04.434743 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-z6479" Jan 26 15:37:04 crc kubenswrapper[4896]: I0126 15:37:04.930785 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-rbmml" Jan 26 15:37:05 crc kubenswrapper[4896]: I0126 15:37:05.936542 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:37:15 crc kubenswrapper[4896]: I0126 15:37:15.182678 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5fzf2" Jan 26 15:37:18 crc kubenswrapper[4896]: I0126 15:37:18.814081 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:37:18 crc kubenswrapper[4896]: I0126 15:37:18.814202 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:37:21 crc kubenswrapper[4896]: E0126 15:37:21.436051 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 26 15:37:21 crc kubenswrapper[4896]: E0126 15:37:21.436541 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9pcx8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-4f48w_openshift-marketplace(b52b5ef6-729b-4468-aa67-9a6f645ff27c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 15:37:21 crc kubenswrapper[4896]: E0126 15:37:21.438826 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-4f48w" podUID="b52b5ef6-729b-4468-aa67-9a6f645ff27c" Jan 26 15:37:21 crc 
kubenswrapper[4896]: E0126 15:37:21.496922 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 26 15:37:21 crc kubenswrapper[4896]: E0126 15:37:21.497246 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wfvb7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-gvwsc_openshift-marketplace(b359163c-745c-4adf-97ff-872ee69ae3e5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 15:37:21 crc kubenswrapper[4896]: E0126 15:37:21.498451 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-gvwsc" podUID="b359163c-745c-4adf-97ff-872ee69ae3e5" Jan 26 15:37:25 crc kubenswrapper[4896]: I0126 15:37:25.415543 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 15:37:25 crc kubenswrapper[4896]: E0126 15:37:25.416242 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c4c030f-c3a2-4550-a1aa-d38eb629081b" containerName="pruner" Jan 26 15:37:25 crc kubenswrapper[4896]: I0126 15:37:25.416259 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c4c030f-c3a2-4550-a1aa-d38eb629081b" containerName="pruner" Jan 26 15:37:25 crc kubenswrapper[4896]: E0126 15:37:25.416277 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9043f9c4-90bb-432d-861d-7c3c6e8fb6b8" containerName="pruner" Jan 26 15:37:25 crc kubenswrapper[4896]: I0126 15:37:25.416285 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="9043f9c4-90bb-432d-861d-7c3c6e8fb6b8" containerName="pruner" Jan 26 15:37:25 crc kubenswrapper[4896]: I0126 15:37:25.416443 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="9043f9c4-90bb-432d-861d-7c3c6e8fb6b8" containerName="pruner" Jan 26 15:37:25 crc kubenswrapper[4896]: I0126 15:37:25.416461 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c4c030f-c3a2-4550-a1aa-d38eb629081b" containerName="pruner" Jan 26 15:37:25 crc kubenswrapper[4896]: 
I0126 15:37:25.417087 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 15:37:25 crc kubenswrapper[4896]: I0126 15:37:25.421005 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 26 15:37:25 crc kubenswrapper[4896]: I0126 15:37:25.421557 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 26 15:37:25 crc kubenswrapper[4896]: I0126 15:37:25.436564 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 15:37:25 crc kubenswrapper[4896]: I0126 15:37:25.520111 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d2b5d668-df77-4f7f-b45f-ba291b28ce93-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d2b5d668-df77-4f7f-b45f-ba291b28ce93\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 15:37:25 crc kubenswrapper[4896]: I0126 15:37:25.520307 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d2b5d668-df77-4f7f-b45f-ba291b28ce93-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d2b5d668-df77-4f7f-b45f-ba291b28ce93\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 15:37:25 crc kubenswrapper[4896]: I0126 15:37:25.620925 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d2b5d668-df77-4f7f-b45f-ba291b28ce93-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d2b5d668-df77-4f7f-b45f-ba291b28ce93\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 15:37:25 crc kubenswrapper[4896]: I0126 15:37:25.621283 4896 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d2b5d668-df77-4f7f-b45f-ba291b28ce93-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d2b5d668-df77-4f7f-b45f-ba291b28ce93\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 15:37:25 crc kubenswrapper[4896]: I0126 15:37:25.621118 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d2b5d668-df77-4f7f-b45f-ba291b28ce93-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d2b5d668-df77-4f7f-b45f-ba291b28ce93\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 15:37:25 crc kubenswrapper[4896]: I0126 15:37:25.639991 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d2b5d668-df77-4f7f-b45f-ba291b28ce93-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d2b5d668-df77-4f7f-b45f-ba291b28ce93\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 15:37:25 crc kubenswrapper[4896]: I0126 15:37:25.758934 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 15:37:27 crc kubenswrapper[4896]: E0126 15:37:27.887402 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-gvwsc" podUID="b359163c-745c-4adf-97ff-872ee69ae3e5" Jan 26 15:37:27 crc kubenswrapper[4896]: E0126 15:37:27.887688 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4f48w" podUID="b52b5ef6-729b-4468-aa67-9a6f645ff27c" Jan 26 15:37:27 crc kubenswrapper[4896]: E0126 15:37:27.968568 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 26 15:37:27 crc kubenswrapper[4896]: E0126 15:37:27.968754 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-drfb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-p5g7v_openshift-marketplace(72a04b75-e07c-4137-b917-928b40745f65): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 15:37:27 crc kubenswrapper[4896]: E0126 15:37:27.970362 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-p5g7v" podUID="72a04b75-e07c-4137-b917-928b40745f65" Jan 26 15:37:30 crc 
kubenswrapper[4896]: I0126 15:37:30.809170 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 15:37:30 crc kubenswrapper[4896]: I0126 15:37:30.811901 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 15:37:30 crc kubenswrapper[4896]: I0126 15:37:30.831878 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 15:37:30 crc kubenswrapper[4896]: I0126 15:37:30.907685 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/59bc256c-1090-4120-8803-f24252f01812-var-lock\") pod \"installer-9-crc\" (UID: \"59bc256c-1090-4120-8803-f24252f01812\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 15:37:30 crc kubenswrapper[4896]: I0126 15:37:30.907784 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/59bc256c-1090-4120-8803-f24252f01812-kubelet-dir\") pod \"installer-9-crc\" (UID: \"59bc256c-1090-4120-8803-f24252f01812\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 15:37:30 crc kubenswrapper[4896]: I0126 15:37:30.907833 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/59bc256c-1090-4120-8803-f24252f01812-kube-api-access\") pod \"installer-9-crc\" (UID: \"59bc256c-1090-4120-8803-f24252f01812\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 15:37:31 crc kubenswrapper[4896]: I0126 15:37:31.008674 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/59bc256c-1090-4120-8803-f24252f01812-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"59bc256c-1090-4120-8803-f24252f01812\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 15:37:31 crc kubenswrapper[4896]: I0126 15:37:31.009009 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/59bc256c-1090-4120-8803-f24252f01812-var-lock\") pod \"installer-9-crc\" (UID: \"59bc256c-1090-4120-8803-f24252f01812\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 15:37:31 crc kubenswrapper[4896]: I0126 15:37:31.009100 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/59bc256c-1090-4120-8803-f24252f01812-kubelet-dir\") pod \"installer-9-crc\" (UID: \"59bc256c-1090-4120-8803-f24252f01812\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 15:37:31 crc kubenswrapper[4896]: I0126 15:37:31.009051 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/59bc256c-1090-4120-8803-f24252f01812-var-lock\") pod \"installer-9-crc\" (UID: \"59bc256c-1090-4120-8803-f24252f01812\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 15:37:31 crc kubenswrapper[4896]: I0126 15:37:31.009193 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/59bc256c-1090-4120-8803-f24252f01812-kubelet-dir\") pod \"installer-9-crc\" (UID: \"59bc256c-1090-4120-8803-f24252f01812\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 15:37:31 crc kubenswrapper[4896]: I0126 15:37:31.316747 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/59bc256c-1090-4120-8803-f24252f01812-kube-api-access\") pod \"installer-9-crc\" (UID: \"59bc256c-1090-4120-8803-f24252f01812\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 15:37:31 crc kubenswrapper[4896]: I0126 15:37:31.492102 4896 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 15:37:32 crc kubenswrapper[4896]: E0126 15:37:32.913858 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-p5g7v" podUID="72a04b75-e07c-4137-b917-928b40745f65" Jan 26 15:37:32 crc kubenswrapper[4896]: E0126 15:37:32.985357 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 26 15:37:32 crc kubenswrapper[4896]: E0126 15:37:32.985507 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7fpwb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-glz7z_openshift-marketplace(6809e36d-3944-4ee9-885d-d68ad3a99d68): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 15:37:32 crc kubenswrapper[4896]: E0126 15:37:32.986732 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-glz7z" podUID="6809e36d-3944-4ee9-885d-d68ad3a99d68" Jan 26 15:37:32 crc 
kubenswrapper[4896]: E0126 15:37:32.991342 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 26 15:37:32 crc kubenswrapper[4896]: E0126 15:37:32.991651 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nttsk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-l2v2b_openshift-marketplace(40915453-ba7c-41e6-bc4f-3a221097ae62): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 15:37:32 crc kubenswrapper[4896]: E0126 15:37:32.993279 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-l2v2b" podUID="40915453-ba7c-41e6-bc4f-3a221097ae62" Jan 26 15:37:33 crc kubenswrapper[4896]: E0126 15:37:33.012444 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 26 15:37:33 crc kubenswrapper[4896]: E0126 15:37:33.012655 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-97wcl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-w5d62_openshift-marketplace(1bb8ed73-cf27-49c5-98cc-79e5a488f604): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 15:37:33 crc kubenswrapper[4896]: E0126 15:37:33.013839 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-w5d62" podUID="1bb8ed73-cf27-49c5-98cc-79e5a488f604" Jan 26 15:37:34 crc 
kubenswrapper[4896]: E0126 15:37:34.480754 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-w5d62" podUID="1bb8ed73-cf27-49c5-98cc-79e5a488f604" Jan 26 15:37:34 crc kubenswrapper[4896]: E0126 15:37:34.480755 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-glz7z" podUID="6809e36d-3944-4ee9-885d-d68ad3a99d68" Jan 26 15:37:34 crc kubenswrapper[4896]: E0126 15:37:34.480786 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-l2v2b" podUID="40915453-ba7c-41e6-bc4f-3a221097ae62" Jan 26 15:37:34 crc kubenswrapper[4896]: E0126 15:37:34.546835 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 26 15:37:34 crc kubenswrapper[4896]: E0126 15:37:34.547534 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7wg4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-sldms_openshift-marketplace(d3369383-b89e-4cc5-8267-3f849ff0c294): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 15:37:34 crc kubenswrapper[4896]: E0126 15:37:34.548782 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-sldms" podUID="d3369383-b89e-4cc5-8267-3f849ff0c294" Jan 26 15:37:34 crc 
kubenswrapper[4896]: E0126 15:37:34.577545 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 26 15:37:34 crc kubenswrapper[4896]: E0126 15:37:34.577700 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7qw4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-md672_openshift-marketplace(3a81db72-eb2c-4c51-8b58-f825e2fbd3bb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 15:37:34 crc kubenswrapper[4896]: E0126 15:37:34.578889 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-md672" podUID="3a81db72-eb2c-4c51-8b58-f825e2fbd3bb" Jan 26 15:37:34 crc kubenswrapper[4896]: I0126 15:37:34.947533 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 15:37:34 crc kubenswrapper[4896]: I0126 15:37:34.950447 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 15:37:35 crc kubenswrapper[4896]: I0126 15:37:35.521416 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"59bc256c-1090-4120-8803-f24252f01812","Type":"ContainerStarted","Data":"d955563945fb9f06576edac79b4b1f6237e28d90cb43c16c5d9adf50f6c4a37d"} Jan 26 15:37:35 crc kubenswrapper[4896]: I0126 15:37:35.522007 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"59bc256c-1090-4120-8803-f24252f01812","Type":"ContainerStarted","Data":"bb9f26a6def45273904fc46109d2e209fcc07bf9dd1a29703a1b808fcded9feb"} Jan 26 15:37:35 crc kubenswrapper[4896]: I0126 15:37:35.523281 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d2b5d668-df77-4f7f-b45f-ba291b28ce93","Type":"ContainerStarted","Data":"38b9dd07ebb75e49cecc9a70401a02d28b7b34b2884c01064436f111fb3b4751"} Jan 26 15:37:35 crc kubenswrapper[4896]: I0126 
15:37:35.523392 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d2b5d668-df77-4f7f-b45f-ba291b28ce93","Type":"ContainerStarted","Data":"0a9f2e56c0961e93f8cd2400b35834d612a14948e82f67509edbd64e78be4c21"} Jan 26 15:37:35 crc kubenswrapper[4896]: E0126 15:37:35.525832 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-sldms" podUID="d3369383-b89e-4cc5-8267-3f849ff0c294" Jan 26 15:37:35 crc kubenswrapper[4896]: E0126 15:37:35.527549 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-md672" podUID="3a81db72-eb2c-4c51-8b58-f825e2fbd3bb" Jan 26 15:37:35 crc kubenswrapper[4896]: I0126 15:37:35.607705 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=5.607686031 podStartE2EDuration="5.607686031s" podCreationTimestamp="2026-01-26 15:37:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:37:35.568883093 +0000 UTC m=+213.350763536" watchObservedRunningTime="2026-01-26 15:37:35.607686031 +0000 UTC m=+213.389566424" Jan 26 15:37:35 crc kubenswrapper[4896]: I0126 15:37:35.628135 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=10.628111632 podStartE2EDuration="10.628111632s" podCreationTimestamp="2026-01-26 15:37:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:37:35.626001584 +0000 UTC m=+213.407881977" watchObservedRunningTime="2026-01-26 15:37:35.628111632 +0000 UTC m=+213.409992025" Jan 26 15:37:36 crc kubenswrapper[4896]: I0126 15:37:36.529803 4896 generic.go:334] "Generic (PLEG): container finished" podID="d2b5d668-df77-4f7f-b45f-ba291b28ce93" containerID="38b9dd07ebb75e49cecc9a70401a02d28b7b34b2884c01064436f111fb3b4751" exitCode=0 Jan 26 15:37:36 crc kubenswrapper[4896]: I0126 15:37:36.529865 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d2b5d668-df77-4f7f-b45f-ba291b28ce93","Type":"ContainerDied","Data":"38b9dd07ebb75e49cecc9a70401a02d28b7b34b2884c01064436f111fb3b4751"} Jan 26 15:37:37 crc kubenswrapper[4896]: I0126 15:37:37.564559 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-k45bj"] Jan 26 15:37:37 crc kubenswrapper[4896]: I0126 15:37:37.914501 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 15:37:38 crc kubenswrapper[4896]: I0126 15:37:38.004562 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d2b5d668-df77-4f7f-b45f-ba291b28ce93-kubelet-dir\") pod \"d2b5d668-df77-4f7f-b45f-ba291b28ce93\" (UID: \"d2b5d668-df77-4f7f-b45f-ba291b28ce93\") " Jan 26 15:37:38 crc kubenswrapper[4896]: I0126 15:37:38.004651 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d2b5d668-df77-4f7f-b45f-ba291b28ce93-kube-api-access\") pod \"d2b5d668-df77-4f7f-b45f-ba291b28ce93\" (UID: \"d2b5d668-df77-4f7f-b45f-ba291b28ce93\") " Jan 26 15:37:38 crc kubenswrapper[4896]: I0126 15:37:38.004855 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2b5d668-df77-4f7f-b45f-ba291b28ce93-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d2b5d668-df77-4f7f-b45f-ba291b28ce93" (UID: "d2b5d668-df77-4f7f-b45f-ba291b28ce93"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:37:38 crc kubenswrapper[4896]: I0126 15:37:38.005177 4896 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d2b5d668-df77-4f7f-b45f-ba291b28ce93-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 15:37:38 crc kubenswrapper[4896]: I0126 15:37:38.011367 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2b5d668-df77-4f7f-b45f-ba291b28ce93-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d2b5d668-df77-4f7f-b45f-ba291b28ce93" (UID: "d2b5d668-df77-4f7f-b45f-ba291b28ce93"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:37:38 crc kubenswrapper[4896]: I0126 15:37:38.106422 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d2b5d668-df77-4f7f-b45f-ba291b28ce93-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 15:37:38 crc kubenswrapper[4896]: I0126 15:37:38.543265 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d2b5d668-df77-4f7f-b45f-ba291b28ce93","Type":"ContainerDied","Data":"0a9f2e56c0961e93f8cd2400b35834d612a14948e82f67509edbd64e78be4c21"} Jan 26 15:37:38 crc kubenswrapper[4896]: I0126 15:37:38.543311 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a9f2e56c0961e93f8cd2400b35834d612a14948e82f67509edbd64e78be4c21" Jan 26 15:37:38 crc kubenswrapper[4896]: I0126 15:37:38.543363 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 15:37:41 crc kubenswrapper[4896]: I0126 15:37:41.558711 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4f48w" event={"ID":"b52b5ef6-729b-4468-aa67-9a6f645ff27c","Type":"ContainerStarted","Data":"43c17cd7a3327e2e21b7b952d49d5363e47a955bbd0a0881ef9864a0ee193984"} Jan 26 15:37:42 crc kubenswrapper[4896]: I0126 15:37:42.565645 4896 generic.go:334] "Generic (PLEG): container finished" podID="b359163c-745c-4adf-97ff-872ee69ae3e5" containerID="6278e1bc8bc75614f9ffc2e5042903d11f8a44caf27faa347c759f2008eaca69" exitCode=0 Jan 26 15:37:42 crc kubenswrapper[4896]: I0126 15:37:42.565719 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvwsc" event={"ID":"b359163c-745c-4adf-97ff-872ee69ae3e5","Type":"ContainerDied","Data":"6278e1bc8bc75614f9ffc2e5042903d11f8a44caf27faa347c759f2008eaca69"} Jan 26 15:37:42 crc 
kubenswrapper[4896]: I0126 15:37:42.570822 4896 generic.go:334] "Generic (PLEG): container finished" podID="b52b5ef6-729b-4468-aa67-9a6f645ff27c" containerID="43c17cd7a3327e2e21b7b952d49d5363e47a955bbd0a0881ef9864a0ee193984" exitCode=0 Jan 26 15:37:42 crc kubenswrapper[4896]: I0126 15:37:42.570864 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4f48w" event={"ID":"b52b5ef6-729b-4468-aa67-9a6f645ff27c","Type":"ContainerDied","Data":"43c17cd7a3327e2e21b7b952d49d5363e47a955bbd0a0881ef9864a0ee193984"} Jan 26 15:37:43 crc kubenswrapper[4896]: I0126 15:37:43.580003 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvwsc" event={"ID":"b359163c-745c-4adf-97ff-872ee69ae3e5","Type":"ContainerStarted","Data":"e4470bf4a48ab1fd11d15cc53341aabf9a00a2860b37e4fb191cd77a8003ee6b"} Jan 26 15:37:43 crc kubenswrapper[4896]: I0126 15:37:43.584813 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4f48w" event={"ID":"b52b5ef6-729b-4468-aa67-9a6f645ff27c","Type":"ContainerStarted","Data":"f33c75cc0d24c70aafa735bba9bbc083135260738307e83e85cbc18761f2d4f3"} Jan 26 15:37:43 crc kubenswrapper[4896]: I0126 15:37:43.634481 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gvwsc" podStartSLOduration=2.013327048 podStartE2EDuration="57.63446196s" podCreationTimestamp="2026-01-26 15:36:46 +0000 UTC" firstStartedPulling="2026-01-26 15:36:47.478219121 +0000 UTC m=+165.260099514" lastFinishedPulling="2026-01-26 15:37:43.099354033 +0000 UTC m=+220.881234426" observedRunningTime="2026-01-26 15:37:43.608128485 +0000 UTC m=+221.390008898" watchObservedRunningTime="2026-01-26 15:37:43.63446196 +0000 UTC m=+221.416342353" Jan 26 15:37:43 crc kubenswrapper[4896]: I0126 15:37:43.634997 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-4f48w" podStartSLOduration=3.080894602 podStartE2EDuration="58.634989224s" podCreationTimestamp="2026-01-26 15:36:45 +0000 UTC" firstStartedPulling="2026-01-26 15:36:47.481442737 +0000 UTC m=+165.263323130" lastFinishedPulling="2026-01-26 15:37:43.035537359 +0000 UTC m=+220.817417752" observedRunningTime="2026-01-26 15:37:43.627188409 +0000 UTC m=+221.409068812" watchObservedRunningTime="2026-01-26 15:37:43.634989224 +0000 UTC m=+221.416869617" Jan 26 15:37:46 crc kubenswrapper[4896]: I0126 15:37:46.274740 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4f48w" Jan 26 15:37:46 crc kubenswrapper[4896]: I0126 15:37:46.275321 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4f48w" Jan 26 15:37:46 crc kubenswrapper[4896]: I0126 15:37:46.343014 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4f48w" Jan 26 15:37:46 crc kubenswrapper[4896]: I0126 15:37:46.723928 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gvwsc" Jan 26 15:37:46 crc kubenswrapper[4896]: I0126 15:37:46.723997 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gvwsc" Jan 26 15:37:46 crc kubenswrapper[4896]: I0126 15:37:46.769606 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gvwsc" Jan 26 15:37:48 crc kubenswrapper[4896]: I0126 15:37:48.813835 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:37:48 
crc kubenswrapper[4896]: I0126 15:37:48.814223 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:37:48 crc kubenswrapper[4896]: I0126 15:37:48.814280 4896 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" Jan 26 15:37:48 crc kubenswrapper[4896]: I0126 15:37:48.814860 4896 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8fed1d8bacfa3bfc8b5c910ea870d72978016ab308a31c95d7f0e6d92321c939"} pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 15:37:48 crc kubenswrapper[4896]: I0126 15:37:48.814949 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" containerID="cri-o://8fed1d8bacfa3bfc8b5c910ea870d72978016ab308a31c95d7f0e6d92321c939" gracePeriod=600 Jan 26 15:37:49 crc kubenswrapper[4896]: I0126 15:37:49.635549 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sldms" event={"ID":"d3369383-b89e-4cc5-8267-3f849ff0c294","Type":"ContainerStarted","Data":"3dd29645fd7aa764e0113ee194db724e21cc57693adff56990bddf0eb4934997"} Jan 26 15:37:49 crc kubenswrapper[4896]: I0126 15:37:49.637886 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p5g7v" 
event={"ID":"72a04b75-e07c-4137-b917-928b40745f65","Type":"ContainerStarted","Data":"d186cca641dec90bfc9a21d2d7ee572012c751a812e345bfe304a15cabcf412d"} Jan 26 15:37:49 crc kubenswrapper[4896]: I0126 15:37:49.640890 4896 generic.go:334] "Generic (PLEG): container finished" podID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerID="8fed1d8bacfa3bfc8b5c910ea870d72978016ab308a31c95d7f0e6d92321c939" exitCode=0 Jan 26 15:37:49 crc kubenswrapper[4896]: I0126 15:37:49.640967 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerDied","Data":"8fed1d8bacfa3bfc8b5c910ea870d72978016ab308a31c95d7f0e6d92321c939"} Jan 26 15:37:49 crc kubenswrapper[4896]: I0126 15:37:49.640994 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerStarted","Data":"1da793905c9eeaa4f3946b7eeade08fb2161dbfe4af7683b808a647a5dfa8236"} Jan 26 15:37:49 crc kubenswrapper[4896]: I0126 15:37:49.642870 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2v2b" event={"ID":"40915453-ba7c-41e6-bc4f-3a221097ae62","Type":"ContainerStarted","Data":"fb5689daac7da3851d17272a3fb2ee92c71aef504c3d42bb16d17deff553cd6d"} Jan 26 15:37:49 crc kubenswrapper[4896]: I0126 15:37:49.645218 4896 generic.go:334] "Generic (PLEG): container finished" podID="1bb8ed73-cf27-49c5-98cc-79e5a488f604" containerID="1cdb7f43958e8781e4dfadf5ed9125a31b3af37c6fdd0657068d417c0a6ab609" exitCode=0 Jan 26 15:37:49 crc kubenswrapper[4896]: I0126 15:37:49.645246 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w5d62" event={"ID":"1bb8ed73-cf27-49c5-98cc-79e5a488f604","Type":"ContainerDied","Data":"1cdb7f43958e8781e4dfadf5ed9125a31b3af37c6fdd0657068d417c0a6ab609"} Jan 26 15:37:50 crc 
kubenswrapper[4896]: I0126 15:37:50.651497 4896 generic.go:334] "Generic (PLEG): container finished" podID="d3369383-b89e-4cc5-8267-3f849ff0c294" containerID="3dd29645fd7aa764e0113ee194db724e21cc57693adff56990bddf0eb4934997" exitCode=0 Jan 26 15:37:50 crc kubenswrapper[4896]: I0126 15:37:50.651571 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sldms" event={"ID":"d3369383-b89e-4cc5-8267-3f849ff0c294","Type":"ContainerDied","Data":"3dd29645fd7aa764e0113ee194db724e21cc57693adff56990bddf0eb4934997"} Jan 26 15:37:50 crc kubenswrapper[4896]: I0126 15:37:50.654743 4896 generic.go:334] "Generic (PLEG): container finished" podID="72a04b75-e07c-4137-b917-928b40745f65" containerID="d186cca641dec90bfc9a21d2d7ee572012c751a812e345bfe304a15cabcf412d" exitCode=0 Jan 26 15:37:50 crc kubenswrapper[4896]: I0126 15:37:50.654793 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p5g7v" event={"ID":"72a04b75-e07c-4137-b917-928b40745f65","Type":"ContainerDied","Data":"d186cca641dec90bfc9a21d2d7ee572012c751a812e345bfe304a15cabcf412d"} Jan 26 15:37:50 crc kubenswrapper[4896]: I0126 15:37:50.660720 4896 generic.go:334] "Generic (PLEG): container finished" podID="40915453-ba7c-41e6-bc4f-3a221097ae62" containerID="fb5689daac7da3851d17272a3fb2ee92c71aef504c3d42bb16d17deff553cd6d" exitCode=0 Jan 26 15:37:50 crc kubenswrapper[4896]: I0126 15:37:50.660771 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2v2b" event={"ID":"40915453-ba7c-41e6-bc4f-3a221097ae62","Type":"ContainerDied","Data":"fb5689daac7da3851d17272a3fb2ee92c71aef504c3d42bb16d17deff553cd6d"} Jan 26 15:37:51 crc kubenswrapper[4896]: I0126 15:37:51.728507 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w5d62" 
event={"ID":"1bb8ed73-cf27-49c5-98cc-79e5a488f604","Type":"ContainerStarted","Data":"946af466ede7334357a825dad54a76625a02d02dcf1abe8d5db9aea049ba98ad"} Jan 26 15:37:51 crc kubenswrapper[4896]: I0126 15:37:51.731677 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-md672" event={"ID":"3a81db72-eb2c-4c51-8b58-f825e2fbd3bb","Type":"ContainerStarted","Data":"88d1bf8909370aec4146150c4d49f61067a13432729da5e67c9fd82ba8cee1f7"} Jan 26 15:37:51 crc kubenswrapper[4896]: I0126 15:37:51.733918 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sldms" event={"ID":"d3369383-b89e-4cc5-8267-3f849ff0c294","Type":"ContainerStarted","Data":"c02c45ba975f390f9d7acdb2c915632c20f3e943f5bec9183659a1951a6003c1"} Jan 26 15:37:51 crc kubenswrapper[4896]: I0126 15:37:51.736197 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p5g7v" event={"ID":"72a04b75-e07c-4137-b917-928b40745f65","Type":"ContainerStarted","Data":"6c6dd1f092991a90766b9d9a0b58d28148ce6a7e18f0586d80e8b03301977cb0"} Jan 26 15:37:51 crc kubenswrapper[4896]: I0126 15:37:51.738348 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-glz7z" event={"ID":"6809e36d-3944-4ee9-885d-d68ad3a99d68","Type":"ContainerStarted","Data":"6c528ccf5026eb19ca48a411fc08763d5f46eead488c2574541ce1e3eb58f5ec"} Jan 26 15:37:51 crc kubenswrapper[4896]: I0126 15:37:51.747563 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2v2b" event={"ID":"40915453-ba7c-41e6-bc4f-3a221097ae62","Type":"ContainerStarted","Data":"6e2cbb57a0b0d8011d0da5939f6445f8bdc40baf65269f5687878e39a918bad8"} Jan 26 15:37:51 crc kubenswrapper[4896]: I0126 15:37:51.782900 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w5d62" podStartSLOduration=3.402089143 
podStartE2EDuration="1m7.782881172s" podCreationTimestamp="2026-01-26 15:36:44 +0000 UTC" firstStartedPulling="2026-01-26 15:36:46.432391088 +0000 UTC m=+164.214271471" lastFinishedPulling="2026-01-26 15:37:50.813183107 +0000 UTC m=+228.595063500" observedRunningTime="2026-01-26 15:37:51.779475979 +0000 UTC m=+229.561356392" watchObservedRunningTime="2026-01-26 15:37:51.782881172 +0000 UTC m=+229.564761565" Jan 26 15:37:51 crc kubenswrapper[4896]: I0126 15:37:51.821361 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l2v2b" podStartSLOduration=3.457130391 podStartE2EDuration="1m4.82134507s" podCreationTimestamp="2026-01-26 15:36:47 +0000 UTC" firstStartedPulling="2026-01-26 15:36:49.709126413 +0000 UTC m=+167.491006806" lastFinishedPulling="2026-01-26 15:37:51.073341102 +0000 UTC m=+228.855221485" observedRunningTime="2026-01-26 15:37:51.819001795 +0000 UTC m=+229.600882208" watchObservedRunningTime="2026-01-26 15:37:51.82134507 +0000 UTC m=+229.603225463" Jan 26 15:37:52 crc kubenswrapper[4896]: I0126 15:37:52.009403 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p5g7v" podStartSLOduration=3.251071595 podStartE2EDuration="1m8.009386131s" podCreationTimestamp="2026-01-26 15:36:44 +0000 UTC" firstStartedPulling="2026-01-26 15:36:46.386999444 +0000 UTC m=+164.168879837" lastFinishedPulling="2026-01-26 15:37:51.14531394 +0000 UTC m=+228.927194373" observedRunningTime="2026-01-26 15:37:51.949107193 +0000 UTC m=+229.730987596" watchObservedRunningTime="2026-01-26 15:37:52.009386131 +0000 UTC m=+229.791266534" Jan 26 15:37:52 crc kubenswrapper[4896]: I0126 15:37:52.068129 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sldms" podStartSLOduration=3.362655698 podStartE2EDuration="1m8.068110165s" podCreationTimestamp="2026-01-26 15:36:44 +0000 UTC" 
firstStartedPulling="2026-01-26 15:36:46.459190035 +0000 UTC m=+164.241070428" lastFinishedPulling="2026-01-26 15:37:51.164644502 +0000 UTC m=+228.946524895" observedRunningTime="2026-01-26 15:37:52.063318404 +0000 UTC m=+229.845198787" watchObservedRunningTime="2026-01-26 15:37:52.068110165 +0000 UTC m=+229.849990558" Jan 26 15:37:52 crc kubenswrapper[4896]: I0126 15:37:52.756673 4896 generic.go:334] "Generic (PLEG): container finished" podID="6809e36d-3944-4ee9-885d-d68ad3a99d68" containerID="6c528ccf5026eb19ca48a411fc08763d5f46eead488c2574541ce1e3eb58f5ec" exitCode=0 Jan 26 15:37:52 crc kubenswrapper[4896]: I0126 15:37:52.756755 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-glz7z" event={"ID":"6809e36d-3944-4ee9-885d-d68ad3a99d68","Type":"ContainerDied","Data":"6c528ccf5026eb19ca48a411fc08763d5f46eead488c2574541ce1e3eb58f5ec"} Jan 26 15:37:52 crc kubenswrapper[4896]: I0126 15:37:52.760173 4896 generic.go:334] "Generic (PLEG): container finished" podID="3a81db72-eb2c-4c51-8b58-f825e2fbd3bb" containerID="88d1bf8909370aec4146150c4d49f61067a13432729da5e67c9fd82ba8cee1f7" exitCode=0 Jan 26 15:37:52 crc kubenswrapper[4896]: I0126 15:37:52.771016 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-md672" event={"ID":"3a81db72-eb2c-4c51-8b58-f825e2fbd3bb","Type":"ContainerDied","Data":"88d1bf8909370aec4146150c4d49f61067a13432729da5e67c9fd82ba8cee1f7"} Jan 26 15:37:53 crc kubenswrapper[4896]: I0126 15:37:53.905611 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-glz7z" event={"ID":"6809e36d-3944-4ee9-885d-d68ad3a99d68","Type":"ContainerStarted","Data":"2f86e4c22c82c716da4dad568062262aacdef89d2b0e3268bb02d2fb97fc7a93"} Jan 26 15:37:53 crc kubenswrapper[4896]: I0126 15:37:53.909297 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-md672" 
event={"ID":"3a81db72-eb2c-4c51-8b58-f825e2fbd3bb","Type":"ContainerStarted","Data":"d31fd6bf8852d82e81de2c8559cc0c9b8d512d004de5fb7ad2e21ddf5f1f1bd2"} Jan 26 15:37:53 crc kubenswrapper[4896]: I0126 15:37:53.965521 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-md672" podStartSLOduration=3.107869186 podStartE2EDuration="1m9.965500273s" podCreationTimestamp="2026-01-26 15:36:44 +0000 UTC" firstStartedPulling="2026-01-26 15:36:46.442321684 +0000 UTC m=+164.224202077" lastFinishedPulling="2026-01-26 15:37:53.299952771 +0000 UTC m=+231.081833164" observedRunningTime="2026-01-26 15:37:53.935744235 +0000 UTC m=+231.717624618" watchObservedRunningTime="2026-01-26 15:37:53.965500273 +0000 UTC m=+231.747380666" Jan 26 15:37:54 crc kubenswrapper[4896]: I0126 15:37:54.880615 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p5g7v" Jan 26 15:37:54 crc kubenswrapper[4896]: I0126 15:37:54.881006 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p5g7v" Jan 26 15:37:54 crc kubenswrapper[4896]: I0126 15:37:54.886164 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w5d62" Jan 26 15:37:54 crc kubenswrapper[4896]: I0126 15:37:54.886207 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w5d62" Jan 26 15:37:55 crc kubenswrapper[4896]: I0126 15:37:55.078869 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p5g7v" Jan 26 15:37:55 crc kubenswrapper[4896]: I0126 15:37:55.084355 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w5d62" Jan 26 15:37:55 crc kubenswrapper[4896]: I0126 15:37:55.109339 4896 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-glz7z" podStartSLOduration=4.67985907 podStartE2EDuration="1m8.109317439s" podCreationTimestamp="2026-01-26 15:36:47 +0000 UTC" firstStartedPulling="2026-01-26 15:36:49.748939709 +0000 UTC m=+167.530820092" lastFinishedPulling="2026-01-26 15:37:53.178398058 +0000 UTC m=+230.960278461" observedRunningTime="2026-01-26 15:37:53.968040683 +0000 UTC m=+231.749921076" watchObservedRunningTime="2026-01-26 15:37:55.109317439 +0000 UTC m=+232.891197832" Jan 26 15:37:55 crc kubenswrapper[4896]: I0126 15:37:55.119061 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sldms" Jan 26 15:37:55 crc kubenswrapper[4896]: I0126 15:37:55.119099 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sldms" Jan 26 15:37:55 crc kubenswrapper[4896]: I0126 15:37:55.120756 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-md672" Jan 26 15:37:55 crc kubenswrapper[4896]: I0126 15:37:55.120793 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-md672" Jan 26 15:37:55 crc kubenswrapper[4896]: I0126 15:37:55.140991 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w5d62" Jan 26 15:37:55 crc kubenswrapper[4896]: I0126 15:37:55.167350 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sldms" Jan 26 15:37:56 crc kubenswrapper[4896]: I0126 15:37:56.164472 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-md672" podUID="3a81db72-eb2c-4c51-8b58-f825e2fbd3bb" containerName="registry-server" probeResult="failure" output=< Jan 26 15:37:56 
crc kubenswrapper[4896]: timeout: failed to connect service ":50051" within 1s Jan 26 15:37:56 crc kubenswrapper[4896]: > Jan 26 15:37:56 crc kubenswrapper[4896]: I0126 15:37:56.312403 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4f48w" Jan 26 15:37:56 crc kubenswrapper[4896]: I0126 15:37:56.792705 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gvwsc" Jan 26 15:37:57 crc kubenswrapper[4896]: I0126 15:37:57.605389 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l2v2b" Jan 26 15:37:57 crc kubenswrapper[4896]: I0126 15:37:57.605462 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l2v2b" Jan 26 15:37:57 crc kubenswrapper[4896]: I0126 15:37:57.969820 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-glz7z" Jan 26 15:37:57 crc kubenswrapper[4896]: I0126 15:37:57.969866 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-glz7z" Jan 26 15:37:58 crc kubenswrapper[4896]: I0126 15:37:58.642815 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l2v2b" podUID="40915453-ba7c-41e6-bc4f-3a221097ae62" containerName="registry-server" probeResult="failure" output=< Jan 26 15:37:58 crc kubenswrapper[4896]: timeout: failed to connect service ":50051" within 1s Jan 26 15:37:58 crc kubenswrapper[4896]: > Jan 26 15:37:58 crc kubenswrapper[4896]: I0126 15:37:58.989975 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gvwsc"] Jan 26 15:37:58 crc kubenswrapper[4896]: I0126 15:37:58.990238 4896 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-gvwsc" podUID="b359163c-745c-4adf-97ff-872ee69ae3e5" containerName="registry-server" containerID="cri-o://e4470bf4a48ab1fd11d15cc53341aabf9a00a2860b37e4fb191cd77a8003ee6b" gracePeriod=2 Jan 26 15:37:59 crc kubenswrapper[4896]: I0126 15:37:59.009670 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-glz7z" podUID="6809e36d-3944-4ee9-885d-d68ad3a99d68" containerName="registry-server" probeResult="failure" output=< Jan 26 15:37:59 crc kubenswrapper[4896]: timeout: failed to connect service ":50051" within 1s Jan 26 15:37:59 crc kubenswrapper[4896]: > Jan 26 15:37:59 crc kubenswrapper[4896]: I0126 15:37:59.365031 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gvwsc" Jan 26 15:37:59 crc kubenswrapper[4896]: I0126 15:37:59.385999 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b359163c-745c-4adf-97ff-872ee69ae3e5-utilities\") pod \"b359163c-745c-4adf-97ff-872ee69ae3e5\" (UID: \"b359163c-745c-4adf-97ff-872ee69ae3e5\") " Jan 26 15:37:59 crc kubenswrapper[4896]: I0126 15:37:59.386055 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfvb7\" (UniqueName: \"kubernetes.io/projected/b359163c-745c-4adf-97ff-872ee69ae3e5-kube-api-access-wfvb7\") pod \"b359163c-745c-4adf-97ff-872ee69ae3e5\" (UID: \"b359163c-745c-4adf-97ff-872ee69ae3e5\") " Jan 26 15:37:59 crc kubenswrapper[4896]: I0126 15:37:59.386181 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b359163c-745c-4adf-97ff-872ee69ae3e5-catalog-content\") pod \"b359163c-745c-4adf-97ff-872ee69ae3e5\" (UID: \"b359163c-745c-4adf-97ff-872ee69ae3e5\") " Jan 26 15:37:59 crc kubenswrapper[4896]: I0126 15:37:59.387564 4896 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b359163c-745c-4adf-97ff-872ee69ae3e5-utilities" (OuterVolumeSpecName: "utilities") pod "b359163c-745c-4adf-97ff-872ee69ae3e5" (UID: "b359163c-745c-4adf-97ff-872ee69ae3e5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:37:59 crc kubenswrapper[4896]: I0126 15:37:59.392805 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b359163c-745c-4adf-97ff-872ee69ae3e5-kube-api-access-wfvb7" (OuterVolumeSpecName: "kube-api-access-wfvb7") pod "b359163c-745c-4adf-97ff-872ee69ae3e5" (UID: "b359163c-745c-4adf-97ff-872ee69ae3e5"). InnerVolumeSpecName "kube-api-access-wfvb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:37:59 crc kubenswrapper[4896]: I0126 15:37:59.413329 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b359163c-745c-4adf-97ff-872ee69ae3e5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b359163c-745c-4adf-97ff-872ee69ae3e5" (UID: "b359163c-745c-4adf-97ff-872ee69ae3e5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:37:59 crc kubenswrapper[4896]: I0126 15:37:59.487569 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b359163c-745c-4adf-97ff-872ee69ae3e5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:37:59 crc kubenswrapper[4896]: I0126 15:37:59.487622 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b359163c-745c-4adf-97ff-872ee69ae3e5-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:37:59 crc kubenswrapper[4896]: I0126 15:37:59.487634 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfvb7\" (UniqueName: \"kubernetes.io/projected/b359163c-745c-4adf-97ff-872ee69ae3e5-kube-api-access-wfvb7\") on node \"crc\" DevicePath \"\"" Jan 26 15:37:59 crc kubenswrapper[4896]: I0126 15:37:59.957376 4896 generic.go:334] "Generic (PLEG): container finished" podID="b359163c-745c-4adf-97ff-872ee69ae3e5" containerID="e4470bf4a48ab1fd11d15cc53341aabf9a00a2860b37e4fb191cd77a8003ee6b" exitCode=0 Jan 26 15:37:59 crc kubenswrapper[4896]: I0126 15:37:59.957424 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvwsc" event={"ID":"b359163c-745c-4adf-97ff-872ee69ae3e5","Type":"ContainerDied","Data":"e4470bf4a48ab1fd11d15cc53341aabf9a00a2860b37e4fb191cd77a8003ee6b"} Jan 26 15:37:59 crc kubenswrapper[4896]: I0126 15:37:59.957458 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvwsc" event={"ID":"b359163c-745c-4adf-97ff-872ee69ae3e5","Type":"ContainerDied","Data":"82353a8e5f8facfd94be5da72f948ac7ccdfb44a613fd10f74ae6c7cf4ad3cea"} Jan 26 15:37:59 crc kubenswrapper[4896]: I0126 15:37:59.957476 4896 scope.go:117] "RemoveContainer" containerID="e4470bf4a48ab1fd11d15cc53341aabf9a00a2860b37e4fb191cd77a8003ee6b" Jan 26 15:37:59 crc kubenswrapper[4896]: I0126 
15:37:59.957639 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gvwsc" Jan 26 15:37:59 crc kubenswrapper[4896]: I0126 15:37:59.988363 4896 scope.go:117] "RemoveContainer" containerID="6278e1bc8bc75614f9ffc2e5042903d11f8a44caf27faa347c759f2008eaca69" Jan 26 15:37:59 crc kubenswrapper[4896]: I0126 15:37:59.992487 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gvwsc"] Jan 26 15:37:59 crc kubenswrapper[4896]: I0126 15:37:59.995215 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gvwsc"] Jan 26 15:38:00 crc kubenswrapper[4896]: I0126 15:38:00.011420 4896 scope.go:117] "RemoveContainer" containerID="c646f7432213b3cbf0210ff2d78fa56eb9799f81d4adefbda41953f074080fa7" Jan 26 15:38:00 crc kubenswrapper[4896]: I0126 15:38:00.027282 4896 scope.go:117] "RemoveContainer" containerID="e4470bf4a48ab1fd11d15cc53341aabf9a00a2860b37e4fb191cd77a8003ee6b" Jan 26 15:38:00 crc kubenswrapper[4896]: E0126 15:38:00.028173 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4470bf4a48ab1fd11d15cc53341aabf9a00a2860b37e4fb191cd77a8003ee6b\": container with ID starting with e4470bf4a48ab1fd11d15cc53341aabf9a00a2860b37e4fb191cd77a8003ee6b not found: ID does not exist" containerID="e4470bf4a48ab1fd11d15cc53341aabf9a00a2860b37e4fb191cd77a8003ee6b" Jan 26 15:38:00 crc kubenswrapper[4896]: I0126 15:38:00.028203 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4470bf4a48ab1fd11d15cc53341aabf9a00a2860b37e4fb191cd77a8003ee6b"} err="failed to get container status \"e4470bf4a48ab1fd11d15cc53341aabf9a00a2860b37e4fb191cd77a8003ee6b\": rpc error: code = NotFound desc = could not find container \"e4470bf4a48ab1fd11d15cc53341aabf9a00a2860b37e4fb191cd77a8003ee6b\": container with ID starting with 
e4470bf4a48ab1fd11d15cc53341aabf9a00a2860b37e4fb191cd77a8003ee6b not found: ID does not exist" Jan 26 15:38:00 crc kubenswrapper[4896]: I0126 15:38:00.028232 4896 scope.go:117] "RemoveContainer" containerID="6278e1bc8bc75614f9ffc2e5042903d11f8a44caf27faa347c759f2008eaca69" Jan 26 15:38:00 crc kubenswrapper[4896]: E0126 15:38:00.028592 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6278e1bc8bc75614f9ffc2e5042903d11f8a44caf27faa347c759f2008eaca69\": container with ID starting with 6278e1bc8bc75614f9ffc2e5042903d11f8a44caf27faa347c759f2008eaca69 not found: ID does not exist" containerID="6278e1bc8bc75614f9ffc2e5042903d11f8a44caf27faa347c759f2008eaca69" Jan 26 15:38:00 crc kubenswrapper[4896]: I0126 15:38:00.028629 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6278e1bc8bc75614f9ffc2e5042903d11f8a44caf27faa347c759f2008eaca69"} err="failed to get container status \"6278e1bc8bc75614f9ffc2e5042903d11f8a44caf27faa347c759f2008eaca69\": rpc error: code = NotFound desc = could not find container \"6278e1bc8bc75614f9ffc2e5042903d11f8a44caf27faa347c759f2008eaca69\": container with ID starting with 6278e1bc8bc75614f9ffc2e5042903d11f8a44caf27faa347c759f2008eaca69 not found: ID does not exist" Jan 26 15:38:00 crc kubenswrapper[4896]: I0126 15:38:00.028657 4896 scope.go:117] "RemoveContainer" containerID="c646f7432213b3cbf0210ff2d78fa56eb9799f81d4adefbda41953f074080fa7" Jan 26 15:38:00 crc kubenswrapper[4896]: E0126 15:38:00.029007 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c646f7432213b3cbf0210ff2d78fa56eb9799f81d4adefbda41953f074080fa7\": container with ID starting with c646f7432213b3cbf0210ff2d78fa56eb9799f81d4adefbda41953f074080fa7 not found: ID does not exist" containerID="c646f7432213b3cbf0210ff2d78fa56eb9799f81d4adefbda41953f074080fa7" Jan 26 15:38:00 crc 
kubenswrapper[4896]: I0126 15:38:00.029034 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c646f7432213b3cbf0210ff2d78fa56eb9799f81d4adefbda41953f074080fa7"} err="failed to get container status \"c646f7432213b3cbf0210ff2d78fa56eb9799f81d4adefbda41953f074080fa7\": rpc error: code = NotFound desc = could not find container \"c646f7432213b3cbf0210ff2d78fa56eb9799f81d4adefbda41953f074080fa7\": container with ID starting with c646f7432213b3cbf0210ff2d78fa56eb9799f81d4adefbda41953f074080fa7 not found: ID does not exist" Jan 26 15:38:00 crc kubenswrapper[4896]: I0126 15:38:00.771613 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b359163c-745c-4adf-97ff-872ee69ae3e5" path="/var/lib/kubelet/pods/b359163c-745c-4adf-97ff-872ee69ae3e5/volumes" Jan 26 15:38:02 crc kubenswrapper[4896]: I0126 15:38:02.609667 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" podUID="e1cbe94d-b2c9-4632-8a2b-1066967ed241" containerName="oauth-openshift" containerID="cri-o://a95a08ce5e021667b3400a5449a6db4966235fa79dfc0a43f1cc8c96b3d6a4f7" gracePeriod=15 Jan 26 15:38:03 crc kubenswrapper[4896]: I0126 15:38:03.988131 4896 generic.go:334] "Generic (PLEG): container finished" podID="e1cbe94d-b2c9-4632-8a2b-1066967ed241" containerID="a95a08ce5e021667b3400a5449a6db4966235fa79dfc0a43f1cc8c96b3d6a4f7" exitCode=0 Jan 26 15:38:03 crc kubenswrapper[4896]: I0126 15:38:03.988856 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" event={"ID":"e1cbe94d-b2c9-4632-8a2b-1066967ed241","Type":"ContainerDied","Data":"a95a08ce5e021667b3400a5449a6db4966235fa79dfc0a43f1cc8c96b3d6a4f7"} Jan 26 15:38:04 crc kubenswrapper[4896]: I0126 15:38:04.935678 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p5g7v" Jan 26 15:38:05 
crc kubenswrapper[4896]: I0126 15:38:05.033737 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.068298 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv"] Jan 26 15:38:05 crc kubenswrapper[4896]: E0126 15:38:05.068545 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b359163c-745c-4adf-97ff-872ee69ae3e5" containerName="registry-server" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.068557 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="b359163c-745c-4adf-97ff-872ee69ae3e5" containerName="registry-server" Jan 26 15:38:05 crc kubenswrapper[4896]: E0126 15:38:05.068592 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b359163c-745c-4adf-97ff-872ee69ae3e5" containerName="extract-utilities" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.068598 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="b359163c-745c-4adf-97ff-872ee69ae3e5" containerName="extract-utilities" Jan 26 15:38:05 crc kubenswrapper[4896]: E0126 15:38:05.068607 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1cbe94d-b2c9-4632-8a2b-1066967ed241" containerName="oauth-openshift" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.068613 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1cbe94d-b2c9-4632-8a2b-1066967ed241" containerName="oauth-openshift" Jan 26 15:38:05 crc kubenswrapper[4896]: E0126 15:38:05.068624 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b359163c-745c-4adf-97ff-872ee69ae3e5" containerName="extract-content" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.068630 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="b359163c-745c-4adf-97ff-872ee69ae3e5" containerName="extract-content" Jan 26 15:38:05 crc 
kubenswrapper[4896]: E0126 15:38:05.068640 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2b5d668-df77-4f7f-b45f-ba291b28ce93" containerName="pruner" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.068646 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2b5d668-df77-4f7f-b45f-ba291b28ce93" containerName="pruner" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.068734 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1cbe94d-b2c9-4632-8a2b-1066967ed241" containerName="oauth-openshift" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.068748 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="b359163c-745c-4adf-97ff-872ee69ae3e5" containerName="registry-server" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.068758 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2b5d668-df77-4f7f-b45f-ba291b28ce93" containerName="pruner" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.069142 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.089166 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv"] Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.160179 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sldms" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.163436 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-user-template-provider-selection\") pod \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.163507 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e1cbe94d-b2c9-4632-8a2b-1066967ed241-audit-policies\") pod \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.163531 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e1cbe94d-b2c9-4632-8a2b-1066967ed241-audit-dir\") pod \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.163556 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-user-idp-0-file-data\") pod \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " Jan 26 15:38:05 crc kubenswrapper[4896]: 
I0126 15:38:05.163639 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-system-serving-cert\") pod \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.163667 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-user-template-login\") pod \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.163690 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-system-trusted-ca-bundle\") pod \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.163712 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-system-service-ca\") pod \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.163750 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-system-router-certs\") pod \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.163781 4896 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-system-session\") pod \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.163835 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-system-cliconfig\") pod \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.163872 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-user-template-error\") pod \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.163891 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7xnn\" (UniqueName: \"kubernetes.io/projected/e1cbe94d-b2c9-4632-8a2b-1066967ed241-kube-api-access-v7xnn\") pod \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.163912 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-system-ocp-branding-template\") pod \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\" (UID: \"e1cbe94d-b2c9-4632-8a2b-1066967ed241\") " Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.166173 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/e1cbe94d-b2c9-4632-8a2b-1066967ed241-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "e1cbe94d-b2c9-4632-8a2b-1066967ed241" (UID: "e1cbe94d-b2c9-4632-8a2b-1066967ed241"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.167132 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1cbe94d-b2c9-4632-8a2b-1066967ed241-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "e1cbe94d-b2c9-4632-8a2b-1066967ed241" (UID: "e1cbe94d-b2c9-4632-8a2b-1066967ed241"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.167167 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "e1cbe94d-b2c9-4632-8a2b-1066967ed241" (UID: "e1cbe94d-b2c9-4632-8a2b-1066967ed241"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.167217 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "e1cbe94d-b2c9-4632-8a2b-1066967ed241" (UID: "e1cbe94d-b2c9-4632-8a2b-1066967ed241"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.167376 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "e1cbe94d-b2c9-4632-8a2b-1066967ed241" (UID: "e1cbe94d-b2c9-4632-8a2b-1066967ed241"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.170037 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "e1cbe94d-b2c9-4632-8a2b-1066967ed241" (UID: "e1cbe94d-b2c9-4632-8a2b-1066967ed241"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.170688 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "e1cbe94d-b2c9-4632-8a2b-1066967ed241" (UID: "e1cbe94d-b2c9-4632-8a2b-1066967ed241"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.171409 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1cbe94d-b2c9-4632-8a2b-1066967ed241-kube-api-access-v7xnn" (OuterVolumeSpecName: "kube-api-access-v7xnn") pod "e1cbe94d-b2c9-4632-8a2b-1066967ed241" (UID: "e1cbe94d-b2c9-4632-8a2b-1066967ed241"). InnerVolumeSpecName "kube-api-access-v7xnn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.171443 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "e1cbe94d-b2c9-4632-8a2b-1066967ed241" (UID: "e1cbe94d-b2c9-4632-8a2b-1066967ed241"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.174202 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "e1cbe94d-b2c9-4632-8a2b-1066967ed241" (UID: "e1cbe94d-b2c9-4632-8a2b-1066967ed241"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.174270 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-md672" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.174680 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "e1cbe94d-b2c9-4632-8a2b-1066967ed241" (UID: "e1cbe94d-b2c9-4632-8a2b-1066967ed241"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.176544 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "e1cbe94d-b2c9-4632-8a2b-1066967ed241" (UID: "e1cbe94d-b2c9-4632-8a2b-1066967ed241"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.177650 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "e1cbe94d-b2c9-4632-8a2b-1066967ed241" (UID: "e1cbe94d-b2c9-4632-8a2b-1066967ed241"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.177833 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "e1cbe94d-b2c9-4632-8a2b-1066967ed241" (UID: "e1cbe94d-b2c9-4632-8a2b-1066967ed241"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.228239 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-md672" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.265973 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/855da462-519a-4fe1-b51b-ae4e1adfdb62-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6d4bd77db6-j8xrv\" (UID: \"855da462-519a-4fe1-b51b-ae4e1adfdb62\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.266022 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/855da462-519a-4fe1-b51b-ae4e1adfdb62-v4-0-config-system-service-ca\") pod \"oauth-openshift-6d4bd77db6-j8xrv\" (UID: \"855da462-519a-4fe1-b51b-ae4e1adfdb62\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.266045 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/855da462-519a-4fe1-b51b-ae4e1adfdb62-audit-dir\") pod \"oauth-openshift-6d4bd77db6-j8xrv\" (UID: \"855da462-519a-4fe1-b51b-ae4e1adfdb62\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.266084 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtwtj\" (UniqueName: \"kubernetes.io/projected/855da462-519a-4fe1-b51b-ae4e1adfdb62-kube-api-access-qtwtj\") pod \"oauth-openshift-6d4bd77db6-j8xrv\" (UID: \"855da462-519a-4fe1-b51b-ae4e1adfdb62\") " 
pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.266115 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/855da462-519a-4fe1-b51b-ae4e1adfdb62-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6d4bd77db6-j8xrv\" (UID: \"855da462-519a-4fe1-b51b-ae4e1adfdb62\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.266180 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/855da462-519a-4fe1-b51b-ae4e1adfdb62-audit-policies\") pod \"oauth-openshift-6d4bd77db6-j8xrv\" (UID: \"855da462-519a-4fe1-b51b-ae4e1adfdb62\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.266208 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/855da462-519a-4fe1-b51b-ae4e1adfdb62-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6d4bd77db6-j8xrv\" (UID: \"855da462-519a-4fe1-b51b-ae4e1adfdb62\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.266231 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/855da462-519a-4fe1-b51b-ae4e1adfdb62-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6d4bd77db6-j8xrv\" (UID: \"855da462-519a-4fe1-b51b-ae4e1adfdb62\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.266263 4896 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/855da462-519a-4fe1-b51b-ae4e1adfdb62-v4-0-config-system-router-certs\") pod \"oauth-openshift-6d4bd77db6-j8xrv\" (UID: \"855da462-519a-4fe1-b51b-ae4e1adfdb62\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.266285 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/855da462-519a-4fe1-b51b-ae4e1adfdb62-v4-0-config-user-template-error\") pod \"oauth-openshift-6d4bd77db6-j8xrv\" (UID: \"855da462-519a-4fe1-b51b-ae4e1adfdb62\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.266341 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/855da462-519a-4fe1-b51b-ae4e1adfdb62-v4-0-config-user-template-login\") pod \"oauth-openshift-6d4bd77db6-j8xrv\" (UID: \"855da462-519a-4fe1-b51b-ae4e1adfdb62\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.266411 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/855da462-519a-4fe1-b51b-ae4e1adfdb62-v4-0-config-system-session\") pod \"oauth-openshift-6d4bd77db6-j8xrv\" (UID: \"855da462-519a-4fe1-b51b-ae4e1adfdb62\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.266445 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/855da462-519a-4fe1-b51b-ae4e1adfdb62-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6d4bd77db6-j8xrv\" (UID: \"855da462-519a-4fe1-b51b-ae4e1adfdb62\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.266469 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/855da462-519a-4fe1-b51b-ae4e1adfdb62-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6d4bd77db6-j8xrv\" (UID: \"855da462-519a-4fe1-b51b-ae4e1adfdb62\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.266552 4896 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.266568 4896 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.266647 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7xnn\" (UniqueName: \"kubernetes.io/projected/e1cbe94d-b2c9-4632-8a2b-1066967ed241-kube-api-access-v7xnn\") on node \"crc\" DevicePath \"\"" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.266677 4896 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.266691 4896 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.266703 4896 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e1cbe94d-b2c9-4632-8a2b-1066967ed241-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.266714 4896 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e1cbe94d-b2c9-4632-8a2b-1066967ed241-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.266725 4896 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.266751 4896 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.266763 4896 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.266773 4896 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 
26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.266783 4896 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.266793 4896 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.266804 4896 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e1cbe94d-b2c9-4632-8a2b-1066967ed241-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.367285 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/855da462-519a-4fe1-b51b-ae4e1adfdb62-v4-0-config-system-session\") pod \"oauth-openshift-6d4bd77db6-j8xrv\" (UID: \"855da462-519a-4fe1-b51b-ae4e1adfdb62\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.367339 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/855da462-519a-4fe1-b51b-ae4e1adfdb62-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6d4bd77db6-j8xrv\" (UID: \"855da462-519a-4fe1-b51b-ae4e1adfdb62\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.367360 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/855da462-519a-4fe1-b51b-ae4e1adfdb62-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6d4bd77db6-j8xrv\" (UID: \"855da462-519a-4fe1-b51b-ae4e1adfdb62\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.367387 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/855da462-519a-4fe1-b51b-ae4e1adfdb62-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6d4bd77db6-j8xrv\" (UID: \"855da462-519a-4fe1-b51b-ae4e1adfdb62\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.367407 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/855da462-519a-4fe1-b51b-ae4e1adfdb62-v4-0-config-system-service-ca\") pod \"oauth-openshift-6d4bd77db6-j8xrv\" (UID: \"855da462-519a-4fe1-b51b-ae4e1adfdb62\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.367453 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/855da462-519a-4fe1-b51b-ae4e1adfdb62-audit-dir\") pod \"oauth-openshift-6d4bd77db6-j8xrv\" (UID: \"855da462-519a-4fe1-b51b-ae4e1adfdb62\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv" Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.367482 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtwtj\" (UniqueName: \"kubernetes.io/projected/855da462-519a-4fe1-b51b-ae4e1adfdb62-kube-api-access-qtwtj\") pod \"oauth-openshift-6d4bd77db6-j8xrv\" (UID: \"855da462-519a-4fe1-b51b-ae4e1adfdb62\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv" Jan 26 15:38:05 crc 
kubenswrapper[4896]: I0126 15:38:05.367513 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/855da462-519a-4fe1-b51b-ae4e1adfdb62-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6d4bd77db6-j8xrv\" (UID: \"855da462-519a-4fe1-b51b-ae4e1adfdb62\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv"
Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.367540 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/855da462-519a-4fe1-b51b-ae4e1adfdb62-audit-policies\") pod \"oauth-openshift-6d4bd77db6-j8xrv\" (UID: \"855da462-519a-4fe1-b51b-ae4e1adfdb62\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv"
Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.367564 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/855da462-519a-4fe1-b51b-ae4e1adfdb62-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6d4bd77db6-j8xrv\" (UID: \"855da462-519a-4fe1-b51b-ae4e1adfdb62\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv"
Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.367661 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/855da462-519a-4fe1-b51b-ae4e1adfdb62-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6d4bd77db6-j8xrv\" (UID: \"855da462-519a-4fe1-b51b-ae4e1adfdb62\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv"
Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.367697 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/855da462-519a-4fe1-b51b-ae4e1adfdb62-v4-0-config-system-router-certs\") pod \"oauth-openshift-6d4bd77db6-j8xrv\" (UID: \"855da462-519a-4fe1-b51b-ae4e1adfdb62\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv"
Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.367600 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/855da462-519a-4fe1-b51b-ae4e1adfdb62-audit-dir\") pod \"oauth-openshift-6d4bd77db6-j8xrv\" (UID: \"855da462-519a-4fe1-b51b-ae4e1adfdb62\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv"
Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.367719 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/855da462-519a-4fe1-b51b-ae4e1adfdb62-v4-0-config-user-template-error\") pod \"oauth-openshift-6d4bd77db6-j8xrv\" (UID: \"855da462-519a-4fe1-b51b-ae4e1adfdb62\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv"
Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.368010 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/855da462-519a-4fe1-b51b-ae4e1adfdb62-v4-0-config-user-template-login\") pod \"oauth-openshift-6d4bd77db6-j8xrv\" (UID: \"855da462-519a-4fe1-b51b-ae4e1adfdb62\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv"
Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.368488 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/855da462-519a-4fe1-b51b-ae4e1adfdb62-v4-0-config-system-service-ca\") pod \"oauth-openshift-6d4bd77db6-j8xrv\" (UID: \"855da462-519a-4fe1-b51b-ae4e1adfdb62\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv"
Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.368959 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/855da462-519a-4fe1-b51b-ae4e1adfdb62-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6d4bd77db6-j8xrv\" (UID: \"855da462-519a-4fe1-b51b-ae4e1adfdb62\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv"
Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.370129 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/855da462-519a-4fe1-b51b-ae4e1adfdb62-audit-policies\") pod \"oauth-openshift-6d4bd77db6-j8xrv\" (UID: \"855da462-519a-4fe1-b51b-ae4e1adfdb62\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv"
Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.370657 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/855da462-519a-4fe1-b51b-ae4e1adfdb62-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6d4bd77db6-j8xrv\" (UID: \"855da462-519a-4fe1-b51b-ae4e1adfdb62\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv"
Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.371402 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/855da462-519a-4fe1-b51b-ae4e1adfdb62-v4-0-config-user-template-error\") pod \"oauth-openshift-6d4bd77db6-j8xrv\" (UID: \"855da462-519a-4fe1-b51b-ae4e1adfdb62\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv"
Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.373290 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/855da462-519a-4fe1-b51b-ae4e1adfdb62-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6d4bd77db6-j8xrv\" (UID: \"855da462-519a-4fe1-b51b-ae4e1adfdb62\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv"
Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.374245 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/855da462-519a-4fe1-b51b-ae4e1adfdb62-v4-0-config-system-router-certs\") pod \"oauth-openshift-6d4bd77db6-j8xrv\" (UID: \"855da462-519a-4fe1-b51b-ae4e1adfdb62\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv"
Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.374305 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/855da462-519a-4fe1-b51b-ae4e1adfdb62-v4-0-config-system-session\") pod \"oauth-openshift-6d4bd77db6-j8xrv\" (UID: \"855da462-519a-4fe1-b51b-ae4e1adfdb62\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv"
Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.374987 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/855da462-519a-4fe1-b51b-ae4e1adfdb62-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6d4bd77db6-j8xrv\" (UID: \"855da462-519a-4fe1-b51b-ae4e1adfdb62\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv"
Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.375280 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/855da462-519a-4fe1-b51b-ae4e1adfdb62-v4-0-config-user-template-login\") pod \"oauth-openshift-6d4bd77db6-j8xrv\" (UID: \"855da462-519a-4fe1-b51b-ae4e1adfdb62\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv"
Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.375900 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/855da462-519a-4fe1-b51b-ae4e1adfdb62-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6d4bd77db6-j8xrv\" (UID: \"855da462-519a-4fe1-b51b-ae4e1adfdb62\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv"
Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.377318 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/855da462-519a-4fe1-b51b-ae4e1adfdb62-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6d4bd77db6-j8xrv\" (UID: \"855da462-519a-4fe1-b51b-ae4e1adfdb62\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv"
Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.385750 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtwtj\" (UniqueName: \"kubernetes.io/projected/855da462-519a-4fe1-b51b-ae4e1adfdb62-kube-api-access-qtwtj\") pod \"oauth-openshift-6d4bd77db6-j8xrv\" (UID: \"855da462-519a-4fe1-b51b-ae4e1adfdb62\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv"
Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.395275 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv"
Jan 26 15:38:05 crc kubenswrapper[4896]: I0126 15:38:05.814444 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv"]
Jan 26 15:38:05 crc kubenswrapper[4896]: W0126 15:38:05.822363 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod855da462_519a_4fe1_b51b_ae4e1adfdb62.slice/crio-b99bfeb1b28ec59006efec0e0dfaaa75301744af8bd40783a87a9966c997f295 WatchSource:0}: Error finding container b99bfeb1b28ec59006efec0e0dfaaa75301744af8bd40783a87a9966c997f295: Status 404 returned error can't find the container with id b99bfeb1b28ec59006efec0e0dfaaa75301744af8bd40783a87a9966c997f295
Jan 26 15:38:06 crc kubenswrapper[4896]: I0126 15:38:05.999724 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p5g7v"]
Jan 26 15:38:06 crc kubenswrapper[4896]: I0126 15:38:06.000315 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p5g7v" podUID="72a04b75-e07c-4137-b917-928b40745f65" containerName="registry-server" containerID="cri-o://6c6dd1f092991a90766b9d9a0b58d28148ce6a7e18f0586d80e8b03301977cb0" gracePeriod=2
Jan 26 15:38:06 crc kubenswrapper[4896]: I0126 15:38:06.005909 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-k45bj"
Jan 26 15:38:06 crc kubenswrapper[4896]: I0126 15:38:06.005908 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-k45bj" event={"ID":"e1cbe94d-b2c9-4632-8a2b-1066967ed241","Type":"ContainerDied","Data":"f5d368244ffab2bba7435cd270edc83f51340eb7e9056d8bec0bc9f4ac70272b"}
Jan 26 15:38:06 crc kubenswrapper[4896]: I0126 15:38:06.005977 4896 scope.go:117] "RemoveContainer" containerID="a95a08ce5e021667b3400a5449a6db4966235fa79dfc0a43f1cc8c96b3d6a4f7"
Jan 26 15:38:06 crc kubenswrapper[4896]: I0126 15:38:06.010138 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv" event={"ID":"855da462-519a-4fe1-b51b-ae4e1adfdb62","Type":"ContainerStarted","Data":"b99bfeb1b28ec59006efec0e0dfaaa75301744af8bd40783a87a9966c997f295"}
Jan 26 15:38:06 crc kubenswrapper[4896]: I0126 15:38:06.066284 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-k45bj"]
Jan 26 15:38:06 crc kubenswrapper[4896]: I0126 15:38:06.069750 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-k45bj"]
Jan 26 15:38:06 crc kubenswrapper[4896]: I0126 15:38:06.765792 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1cbe94d-b2c9-4632-8a2b-1066967ed241" path="/var/lib/kubelet/pods/e1cbe94d-b2c9-4632-8a2b-1066967ed241/volumes"
Jan 26 15:38:07 crc kubenswrapper[4896]: I0126 15:38:07.649241 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l2v2b"
Jan 26 15:38:07 crc kubenswrapper[4896]: I0126 15:38:07.708212 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l2v2b"
Jan 26 15:38:08 crc kubenswrapper[4896]: I0126 15:38:08.045621 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-glz7z"
Jan 26 15:38:08 crc kubenswrapper[4896]: I0126 15:38:08.101538 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-glz7z"
Jan 26 15:38:08 crc kubenswrapper[4896]: I0126 15:38:08.387655 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-md672"]
Jan 26 15:38:08 crc kubenswrapper[4896]: I0126 15:38:08.387891 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-md672" podUID="3a81db72-eb2c-4c51-8b58-f825e2fbd3bb" containerName="registry-server" containerID="cri-o://d31fd6bf8852d82e81de2c8559cc0c9b8d512d004de5fb7ad2e21ddf5f1f1bd2" gracePeriod=2
Jan 26 15:38:08 crc kubenswrapper[4896]: I0126 15:38:08.768834 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-md672"
Jan 26 15:38:08 crc kubenswrapper[4896]: I0126 15:38:08.911414 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qw4t\" (UniqueName: \"kubernetes.io/projected/3a81db72-eb2c-4c51-8b58-f825e2fbd3bb-kube-api-access-7qw4t\") pod \"3a81db72-eb2c-4c51-8b58-f825e2fbd3bb\" (UID: \"3a81db72-eb2c-4c51-8b58-f825e2fbd3bb\") "
Jan 26 15:38:08 crc kubenswrapper[4896]: I0126 15:38:08.911610 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a81db72-eb2c-4c51-8b58-f825e2fbd3bb-utilities\") pod \"3a81db72-eb2c-4c51-8b58-f825e2fbd3bb\" (UID: \"3a81db72-eb2c-4c51-8b58-f825e2fbd3bb\") "
Jan 26 15:38:08 crc kubenswrapper[4896]: I0126 15:38:08.911694 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a81db72-eb2c-4c51-8b58-f825e2fbd3bb-catalog-content\") pod \"3a81db72-eb2c-4c51-8b58-f825e2fbd3bb\" (UID: \"3a81db72-eb2c-4c51-8b58-f825e2fbd3bb\") "
Jan 26 15:38:08 crc kubenswrapper[4896]: I0126 15:38:08.915814 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a81db72-eb2c-4c51-8b58-f825e2fbd3bb-utilities" (OuterVolumeSpecName: "utilities") pod "3a81db72-eb2c-4c51-8b58-f825e2fbd3bb" (UID: "3a81db72-eb2c-4c51-8b58-f825e2fbd3bb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 15:38:08 crc kubenswrapper[4896]: I0126 15:38:08.924812 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a81db72-eb2c-4c51-8b58-f825e2fbd3bb-kube-api-access-7qw4t" (OuterVolumeSpecName: "kube-api-access-7qw4t") pod "3a81db72-eb2c-4c51-8b58-f825e2fbd3bb" (UID: "3a81db72-eb2c-4c51-8b58-f825e2fbd3bb"). InnerVolumeSpecName "kube-api-access-7qw4t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:38:08 crc kubenswrapper[4896]: I0126 15:38:08.961084 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a81db72-eb2c-4c51-8b58-f825e2fbd3bb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3a81db72-eb2c-4c51-8b58-f825e2fbd3bb" (UID: "3a81db72-eb2c-4c51-8b58-f825e2fbd3bb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.013101 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qw4t\" (UniqueName: \"kubernetes.io/projected/3a81db72-eb2c-4c51-8b58-f825e2fbd3bb-kube-api-access-7qw4t\") on node \"crc\" DevicePath \"\""
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.013446 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a81db72-eb2c-4c51-8b58-f825e2fbd3bb-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.013457 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a81db72-eb2c-4c51-8b58-f825e2fbd3bb-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.028131 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p5g7v"
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.032438 4896 generic.go:334] "Generic (PLEG): container finished" podID="3a81db72-eb2c-4c51-8b58-f825e2fbd3bb" containerID="d31fd6bf8852d82e81de2c8559cc0c9b8d512d004de5fb7ad2e21ddf5f1f1bd2" exitCode=0
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.032494 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-md672" event={"ID":"3a81db72-eb2c-4c51-8b58-f825e2fbd3bb","Type":"ContainerDied","Data":"d31fd6bf8852d82e81de2c8559cc0c9b8d512d004de5fb7ad2e21ddf5f1f1bd2"}
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.032521 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-md672" event={"ID":"3a81db72-eb2c-4c51-8b58-f825e2fbd3bb","Type":"ContainerDied","Data":"61f9fa0580c4444b9e17e0d7bf46494fe85762a1883113d27a974705eb2a2a52"}
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.032539 4896 scope.go:117] "RemoveContainer" containerID="d31fd6bf8852d82e81de2c8559cc0c9b8d512d004de5fb7ad2e21ddf5f1f1bd2"
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.032642 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-md672"
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.037571 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv" event={"ID":"855da462-519a-4fe1-b51b-ae4e1adfdb62","Type":"ContainerStarted","Data":"74d8d8d3f589a44c9b9d9ceb3e451e867fd2102a10d82dd12feba1c4f43fd84d"}
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.038489 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv"
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.051859 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv"
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.057961 4896 generic.go:334] "Generic (PLEG): container finished" podID="72a04b75-e07c-4137-b917-928b40745f65" containerID="6c6dd1f092991a90766b9d9a0b58d28148ce6a7e18f0586d80e8b03301977cb0" exitCode=0
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.058024 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p5g7v" event={"ID":"72a04b75-e07c-4137-b917-928b40745f65","Type":"ContainerDied","Data":"6c6dd1f092991a90766b9d9a0b58d28148ce6a7e18f0586d80e8b03301977cb0"}
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.058100 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p5g7v" event={"ID":"72a04b75-e07c-4137-b917-928b40745f65","Type":"ContainerDied","Data":"dcfa4e3bfdc0618b1738994efe0127bc52d546f86a07b9579997fe4fa073f936"}
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.058103 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p5g7v"
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.066228 4896 scope.go:117] "RemoveContainer" containerID="88d1bf8909370aec4146150c4d49f61067a13432729da5e67c9fd82ba8cee1f7"
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.083020 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv" podStartSLOduration=32.083004127 podStartE2EDuration="32.083004127s" podCreationTimestamp="2026-01-26 15:37:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:38:09.080039535 +0000 UTC m=+246.861919938" watchObservedRunningTime="2026-01-26 15:38:09.083004127 +0000 UTC m=+246.864884520"
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.108897 4896 scope.go:117] "RemoveContainer" containerID="84eae55d46710181f3bea88b053cf78f886a75c1e90c42b47af5e1b2c844a652"
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.115315 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72a04b75-e07c-4137-b917-928b40745f65-utilities\") pod \"72a04b75-e07c-4137-b917-928b40745f65\" (UID: \"72a04b75-e07c-4137-b917-928b40745f65\") "
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.115429 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72a04b75-e07c-4137-b917-928b40745f65-catalog-content\") pod \"72a04b75-e07c-4137-b917-928b40745f65\" (UID: \"72a04b75-e07c-4137-b917-928b40745f65\") "
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.119055 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72a04b75-e07c-4137-b917-928b40745f65-utilities" (OuterVolumeSpecName: "utilities") pod "72a04b75-e07c-4137-b917-928b40745f65" (UID: "72a04b75-e07c-4137-b917-928b40745f65"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.123141 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-md672"]
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.125849 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-md672"]
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.136767 4896 scope.go:117] "RemoveContainer" containerID="d31fd6bf8852d82e81de2c8559cc0c9b8d512d004de5fb7ad2e21ddf5f1f1bd2"
Jan 26 15:38:09 crc kubenswrapper[4896]: E0126 15:38:09.138061 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d31fd6bf8852d82e81de2c8559cc0c9b8d512d004de5fb7ad2e21ddf5f1f1bd2\": container with ID starting with d31fd6bf8852d82e81de2c8559cc0c9b8d512d004de5fb7ad2e21ddf5f1f1bd2 not found: ID does not exist" containerID="d31fd6bf8852d82e81de2c8559cc0c9b8d512d004de5fb7ad2e21ddf5f1f1bd2"
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.138134 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d31fd6bf8852d82e81de2c8559cc0c9b8d512d004de5fb7ad2e21ddf5f1f1bd2"} err="failed to get container status \"d31fd6bf8852d82e81de2c8559cc0c9b8d512d004de5fb7ad2e21ddf5f1f1bd2\": rpc error: code = NotFound desc = could not find container \"d31fd6bf8852d82e81de2c8559cc0c9b8d512d004de5fb7ad2e21ddf5f1f1bd2\": container with ID starting with d31fd6bf8852d82e81de2c8559cc0c9b8d512d004de5fb7ad2e21ddf5f1f1bd2 not found: ID does not exist"
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.138183 4896 scope.go:117] "RemoveContainer" containerID="88d1bf8909370aec4146150c4d49f61067a13432729da5e67c9fd82ba8cee1f7"
Jan 26 15:38:09 crc kubenswrapper[4896]: E0126 15:38:09.138663 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88d1bf8909370aec4146150c4d49f61067a13432729da5e67c9fd82ba8cee1f7\": container with ID starting with 88d1bf8909370aec4146150c4d49f61067a13432729da5e67c9fd82ba8cee1f7 not found: ID does not exist" containerID="88d1bf8909370aec4146150c4d49f61067a13432729da5e67c9fd82ba8cee1f7"
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.138700 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88d1bf8909370aec4146150c4d49f61067a13432729da5e67c9fd82ba8cee1f7"} err="failed to get container status \"88d1bf8909370aec4146150c4d49f61067a13432729da5e67c9fd82ba8cee1f7\": rpc error: code = NotFound desc = could not find container \"88d1bf8909370aec4146150c4d49f61067a13432729da5e67c9fd82ba8cee1f7\": container with ID starting with 88d1bf8909370aec4146150c4d49f61067a13432729da5e67c9fd82ba8cee1f7 not found: ID does not exist"
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.138727 4896 scope.go:117] "RemoveContainer" containerID="84eae55d46710181f3bea88b053cf78f886a75c1e90c42b47af5e1b2c844a652"
Jan 26 15:38:09 crc kubenswrapper[4896]: E0126 15:38:09.138936 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84eae55d46710181f3bea88b053cf78f886a75c1e90c42b47af5e1b2c844a652\": container with ID starting with 84eae55d46710181f3bea88b053cf78f886a75c1e90c42b47af5e1b2c844a652 not found: ID does not exist" containerID="84eae55d46710181f3bea88b053cf78f886a75c1e90c42b47af5e1b2c844a652"
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.138964 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84eae55d46710181f3bea88b053cf78f886a75c1e90c42b47af5e1b2c844a652"} err="failed to get container status \"84eae55d46710181f3bea88b053cf78f886a75c1e90c42b47af5e1b2c844a652\": rpc error: code = NotFound desc = could not find container \"84eae55d46710181f3bea88b053cf78f886a75c1e90c42b47af5e1b2c844a652\": container with ID starting with 84eae55d46710181f3bea88b053cf78f886a75c1e90c42b47af5e1b2c844a652 not found: ID does not exist"
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.138980 4896 scope.go:117] "RemoveContainer" containerID="6c6dd1f092991a90766b9d9a0b58d28148ce6a7e18f0586d80e8b03301977cb0"
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.154125 4896 scope.go:117] "RemoveContainer" containerID="d186cca641dec90bfc9a21d2d7ee572012c751a812e345bfe304a15cabcf412d"
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.173816 4896 scope.go:117] "RemoveContainer" containerID="d0968ce8db63195691a7954e7eea89e984e29e9c4bd516a663bf66955d294482"
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.188107 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72a04b75-e07c-4137-b917-928b40745f65-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "72a04b75-e07c-4137-b917-928b40745f65" (UID: "72a04b75-e07c-4137-b917-928b40745f65"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.191544 4896 scope.go:117] "RemoveContainer" containerID="6c6dd1f092991a90766b9d9a0b58d28148ce6a7e18f0586d80e8b03301977cb0"
Jan 26 15:38:09 crc kubenswrapper[4896]: E0126 15:38:09.191985 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c6dd1f092991a90766b9d9a0b58d28148ce6a7e18f0586d80e8b03301977cb0\": container with ID starting with 6c6dd1f092991a90766b9d9a0b58d28148ce6a7e18f0586d80e8b03301977cb0 not found: ID does not exist" containerID="6c6dd1f092991a90766b9d9a0b58d28148ce6a7e18f0586d80e8b03301977cb0"
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.192012 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c6dd1f092991a90766b9d9a0b58d28148ce6a7e18f0586d80e8b03301977cb0"} err="failed to get container status \"6c6dd1f092991a90766b9d9a0b58d28148ce6a7e18f0586d80e8b03301977cb0\": rpc error: code = NotFound desc = could not find container \"6c6dd1f092991a90766b9d9a0b58d28148ce6a7e18f0586d80e8b03301977cb0\": container with ID starting with 6c6dd1f092991a90766b9d9a0b58d28148ce6a7e18f0586d80e8b03301977cb0 not found: ID does not exist"
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.192034 4896 scope.go:117] "RemoveContainer" containerID="d186cca641dec90bfc9a21d2d7ee572012c751a812e345bfe304a15cabcf412d"
Jan 26 15:38:09 crc kubenswrapper[4896]: E0126 15:38:09.192446 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d186cca641dec90bfc9a21d2d7ee572012c751a812e345bfe304a15cabcf412d\": container with ID starting with d186cca641dec90bfc9a21d2d7ee572012c751a812e345bfe304a15cabcf412d not found: ID does not exist" containerID="d186cca641dec90bfc9a21d2d7ee572012c751a812e345bfe304a15cabcf412d"
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.192473 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d186cca641dec90bfc9a21d2d7ee572012c751a812e345bfe304a15cabcf412d"} err="failed to get container status \"d186cca641dec90bfc9a21d2d7ee572012c751a812e345bfe304a15cabcf412d\": rpc error: code = NotFound desc = could not find container \"d186cca641dec90bfc9a21d2d7ee572012c751a812e345bfe304a15cabcf412d\": container with ID starting with d186cca641dec90bfc9a21d2d7ee572012c751a812e345bfe304a15cabcf412d not found: ID does not exist"
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.192493 4896 scope.go:117] "RemoveContainer" containerID="d0968ce8db63195691a7954e7eea89e984e29e9c4bd516a663bf66955d294482"
Jan 26 15:38:09 crc kubenswrapper[4896]: E0126 15:38:09.192789 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0968ce8db63195691a7954e7eea89e984e29e9c4bd516a663bf66955d294482\": container with ID starting with d0968ce8db63195691a7954e7eea89e984e29e9c4bd516a663bf66955d294482 not found: ID does not exist" containerID="d0968ce8db63195691a7954e7eea89e984e29e9c4bd516a663bf66955d294482"
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.192810 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0968ce8db63195691a7954e7eea89e984e29e9c4bd516a663bf66955d294482"} err="failed to get container status \"d0968ce8db63195691a7954e7eea89e984e29e9c4bd516a663bf66955d294482\": rpc error: code = NotFound desc = could not find container \"d0968ce8db63195691a7954e7eea89e984e29e9c4bd516a663bf66955d294482\": container with ID starting with d0968ce8db63195691a7954e7eea89e984e29e9c4bd516a663bf66955d294482 not found: ID does not exist"
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.221015 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drfb5\" (UniqueName: \"kubernetes.io/projected/72a04b75-e07c-4137-b917-928b40745f65-kube-api-access-drfb5\") pod \"72a04b75-e07c-4137-b917-928b40745f65\" (UID: \"72a04b75-e07c-4137-b917-928b40745f65\") "
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.221447 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72a04b75-e07c-4137-b917-928b40745f65-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.221492 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72a04b75-e07c-4137-b917-928b40745f65-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.224416 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72a04b75-e07c-4137-b917-928b40745f65-kube-api-access-drfb5" (OuterVolumeSpecName: "kube-api-access-drfb5") pod "72a04b75-e07c-4137-b917-928b40745f65" (UID: "72a04b75-e07c-4137-b917-928b40745f65"). InnerVolumeSpecName "kube-api-access-drfb5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.322955 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drfb5\" (UniqueName: \"kubernetes.io/projected/72a04b75-e07c-4137-b917-928b40745f65-kube-api-access-drfb5\") on node \"crc\" DevicePath \"\""
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.384544 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p5g7v"]
Jan 26 15:38:09 crc kubenswrapper[4896]: I0126 15:38:09.387540 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p5g7v"]
Jan 26 15:38:10 crc kubenswrapper[4896]: I0126 15:38:10.773963 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a81db72-eb2c-4c51-8b58-f825e2fbd3bb" path="/var/lib/kubelet/pods/3a81db72-eb2c-4c51-8b58-f825e2fbd3bb/volumes"
Jan 26 15:38:10 crc kubenswrapper[4896]: I0126 15:38:10.774767 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72a04b75-e07c-4137-b917-928b40745f65" path="/var/lib/kubelet/pods/72a04b75-e07c-4137-b917-928b40745f65/volumes"
Jan 26 15:38:10 crc kubenswrapper[4896]: I0126 15:38:10.987638 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-glz7z"]
Jan 26 15:38:10 crc kubenswrapper[4896]: I0126 15:38:10.987874 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-glz7z" podUID="6809e36d-3944-4ee9-885d-d68ad3a99d68" containerName="registry-server" containerID="cri-o://2f86e4c22c82c716da4dad568062262aacdef89d2b0e3268bb02d2fb97fc7a93" gracePeriod=2
Jan 26 15:38:11 crc kubenswrapper[4896]: I0126 15:38:11.941284 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-glz7z"
Jan 26 15:38:11 crc kubenswrapper[4896]: I0126 15:38:11.962008 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6809e36d-3944-4ee9-885d-d68ad3a99d68-utilities\") pod \"6809e36d-3944-4ee9-885d-d68ad3a99d68\" (UID: \"6809e36d-3944-4ee9-885d-d68ad3a99d68\") "
Jan 26 15:38:11 crc kubenswrapper[4896]: I0126 15:38:11.962173 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6809e36d-3944-4ee9-885d-d68ad3a99d68-catalog-content\") pod \"6809e36d-3944-4ee9-885d-d68ad3a99d68\" (UID: \"6809e36d-3944-4ee9-885d-d68ad3a99d68\") "
Jan 26 15:38:11 crc kubenswrapper[4896]: I0126 15:38:11.962242 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fpwb\" (UniqueName: \"kubernetes.io/projected/6809e36d-3944-4ee9-885d-d68ad3a99d68-kube-api-access-7fpwb\") pod \"6809e36d-3944-4ee9-885d-d68ad3a99d68\" (UID: \"6809e36d-3944-4ee9-885d-d68ad3a99d68\") "
Jan 26 15:38:11 crc kubenswrapper[4896]: I0126 15:38:11.962987 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6809e36d-3944-4ee9-885d-d68ad3a99d68-utilities" (OuterVolumeSpecName: "utilities") pod "6809e36d-3944-4ee9-885d-d68ad3a99d68" (UID: "6809e36d-3944-4ee9-885d-d68ad3a99d68"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 15:38:11 crc kubenswrapper[4896]: I0126 15:38:11.969007 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6809e36d-3944-4ee9-885d-d68ad3a99d68-kube-api-access-7fpwb" (OuterVolumeSpecName: "kube-api-access-7fpwb") pod "6809e36d-3944-4ee9-885d-d68ad3a99d68" (UID: "6809e36d-3944-4ee9-885d-d68ad3a99d68"). InnerVolumeSpecName "kube-api-access-7fpwb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.064435 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fpwb\" (UniqueName: \"kubernetes.io/projected/6809e36d-3944-4ee9-885d-d68ad3a99d68-kube-api-access-7fpwb\") on node \"crc\" DevicePath \"\""
Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.064474 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6809e36d-3944-4ee9-885d-d68ad3a99d68-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.082891 4896 generic.go:334] "Generic (PLEG): container finished" podID="6809e36d-3944-4ee9-885d-d68ad3a99d68" containerID="2f86e4c22c82c716da4dad568062262aacdef89d2b0e3268bb02d2fb97fc7a93" exitCode=0
Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.082940 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-glz7z" event={"ID":"6809e36d-3944-4ee9-885d-d68ad3a99d68","Type":"ContainerDied","Data":"2f86e4c22c82c716da4dad568062262aacdef89d2b0e3268bb02d2fb97fc7a93"}
Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.082970 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-glz7z" event={"ID":"6809e36d-3944-4ee9-885d-d68ad3a99d68","Type":"ContainerDied","Data":"50f882b4540e4cf43dc0cb405da6ad4a6309ca715a836dd6a143bbdf7963c698"}
Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.082992 4896 scope.go:117] "RemoveContainer" containerID="2f86e4c22c82c716da4dad568062262aacdef89d2b0e3268bb02d2fb97fc7a93"
Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.083099 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-glz7z"
Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.092687 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6809e36d-3944-4ee9-885d-d68ad3a99d68-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6809e36d-3944-4ee9-885d-d68ad3a99d68" (UID: "6809e36d-3944-4ee9-885d-d68ad3a99d68"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.100762 4896 scope.go:117] "RemoveContainer" containerID="6c528ccf5026eb19ca48a411fc08763d5f46eead488c2574541ce1e3eb58f5ec"
Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.125799 4896 scope.go:117] "RemoveContainer" containerID="4e0e77f4ec79c2fea2a2a3c4e46d6dba97b6b47bfa6bc3cf0f82df8310bd055d"
Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.143718 4896 scope.go:117] "RemoveContainer" containerID="2f86e4c22c82c716da4dad568062262aacdef89d2b0e3268bb02d2fb97fc7a93"
Jan 26 15:38:12 crc kubenswrapper[4896]: E0126 15:38:12.144218 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f86e4c22c82c716da4dad568062262aacdef89d2b0e3268bb02d2fb97fc7a93\": container with ID starting with 2f86e4c22c82c716da4dad568062262aacdef89d2b0e3268bb02d2fb97fc7a93 not found: ID does not exist" containerID="2f86e4c22c82c716da4dad568062262aacdef89d2b0e3268bb02d2fb97fc7a93"
Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.144259 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f86e4c22c82c716da4dad568062262aacdef89d2b0e3268bb02d2fb97fc7a93"} err="failed to get container status \"2f86e4c22c82c716da4dad568062262aacdef89d2b0e3268bb02d2fb97fc7a93\": rpc error: code = NotFound desc = could not find container \"2f86e4c22c82c716da4dad568062262aacdef89d2b0e3268bb02d2fb97fc7a93\": container
with ID starting with 2f86e4c22c82c716da4dad568062262aacdef89d2b0e3268bb02d2fb97fc7a93 not found: ID does not exist" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.144312 4896 scope.go:117] "RemoveContainer" containerID="6c528ccf5026eb19ca48a411fc08763d5f46eead488c2574541ce1e3eb58f5ec" Jan 26 15:38:12 crc kubenswrapper[4896]: E0126 15:38:12.144665 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c528ccf5026eb19ca48a411fc08763d5f46eead488c2574541ce1e3eb58f5ec\": container with ID starting with 6c528ccf5026eb19ca48a411fc08763d5f46eead488c2574541ce1e3eb58f5ec not found: ID does not exist" containerID="6c528ccf5026eb19ca48a411fc08763d5f46eead488c2574541ce1e3eb58f5ec" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.144692 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c528ccf5026eb19ca48a411fc08763d5f46eead488c2574541ce1e3eb58f5ec"} err="failed to get container status \"6c528ccf5026eb19ca48a411fc08763d5f46eead488c2574541ce1e3eb58f5ec\": rpc error: code = NotFound desc = could not find container \"6c528ccf5026eb19ca48a411fc08763d5f46eead488c2574541ce1e3eb58f5ec\": container with ID starting with 6c528ccf5026eb19ca48a411fc08763d5f46eead488c2574541ce1e3eb58f5ec not found: ID does not exist" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.144708 4896 scope.go:117] "RemoveContainer" containerID="4e0e77f4ec79c2fea2a2a3c4e46d6dba97b6b47bfa6bc3cf0f82df8310bd055d" Jan 26 15:38:12 crc kubenswrapper[4896]: E0126 15:38:12.144947 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e0e77f4ec79c2fea2a2a3c4e46d6dba97b6b47bfa6bc3cf0f82df8310bd055d\": container with ID starting with 4e0e77f4ec79c2fea2a2a3c4e46d6dba97b6b47bfa6bc3cf0f82df8310bd055d not found: ID does not exist" containerID="4e0e77f4ec79c2fea2a2a3c4e46d6dba97b6b47bfa6bc3cf0f82df8310bd055d" 
Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.144969 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e0e77f4ec79c2fea2a2a3c4e46d6dba97b6b47bfa6bc3cf0f82df8310bd055d"} err="failed to get container status \"4e0e77f4ec79c2fea2a2a3c4e46d6dba97b6b47bfa6bc3cf0f82df8310bd055d\": rpc error: code = NotFound desc = could not find container \"4e0e77f4ec79c2fea2a2a3c4e46d6dba97b6b47bfa6bc3cf0f82df8310bd055d\": container with ID starting with 4e0e77f4ec79c2fea2a2a3c4e46d6dba97b6b47bfa6bc3cf0f82df8310bd055d not found: ID does not exist" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.165500 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6809e36d-3944-4ee9-885d-d68ad3a99d68-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.424616 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-glz7z"] Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.431931 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-glz7z"] Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.767977 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6809e36d-3944-4ee9-885d-d68ad3a99d68" path="/var/lib/kubelet/pods/6809e36d-3944-4ee9-885d-d68ad3a99d68/volumes" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.802241 4896 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 15:38:12 crc kubenswrapper[4896]: E0126 15:38:12.802734 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a81db72-eb2c-4c51-8b58-f825e2fbd3bb" containerName="extract-utilities" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.802753 4896 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3a81db72-eb2c-4c51-8b58-f825e2fbd3bb" containerName="extract-utilities" Jan 26 15:38:12 crc kubenswrapper[4896]: E0126 15:38:12.802761 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a81db72-eb2c-4c51-8b58-f825e2fbd3bb" containerName="extract-content" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.802767 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a81db72-eb2c-4c51-8b58-f825e2fbd3bb" containerName="extract-content" Jan 26 15:38:12 crc kubenswrapper[4896]: E0126 15:38:12.802782 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6809e36d-3944-4ee9-885d-d68ad3a99d68" containerName="extract-content" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.802787 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="6809e36d-3944-4ee9-885d-d68ad3a99d68" containerName="extract-content" Jan 26 15:38:12 crc kubenswrapper[4896]: E0126 15:38:12.802805 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6809e36d-3944-4ee9-885d-d68ad3a99d68" containerName="registry-server" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.802811 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="6809e36d-3944-4ee9-885d-d68ad3a99d68" containerName="registry-server" Jan 26 15:38:12 crc kubenswrapper[4896]: E0126 15:38:12.802822 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a81db72-eb2c-4c51-8b58-f825e2fbd3bb" containerName="registry-server" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.802828 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a81db72-eb2c-4c51-8b58-f825e2fbd3bb" containerName="registry-server" Jan 26 15:38:12 crc kubenswrapper[4896]: E0126 15:38:12.802836 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72a04b75-e07c-4137-b917-928b40745f65" containerName="extract-utilities" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.802843 4896 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="72a04b75-e07c-4137-b917-928b40745f65" containerName="extract-utilities" Jan 26 15:38:12 crc kubenswrapper[4896]: E0126 15:38:12.802851 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72a04b75-e07c-4137-b917-928b40745f65" containerName="extract-content" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.802858 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="72a04b75-e07c-4137-b917-928b40745f65" containerName="extract-content" Jan 26 15:38:12 crc kubenswrapper[4896]: E0126 15:38:12.802866 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6809e36d-3944-4ee9-885d-d68ad3a99d68" containerName="extract-utilities" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.802874 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="6809e36d-3944-4ee9-885d-d68ad3a99d68" containerName="extract-utilities" Jan 26 15:38:12 crc kubenswrapper[4896]: E0126 15:38:12.802884 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72a04b75-e07c-4137-b917-928b40745f65" containerName="registry-server" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.802894 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="72a04b75-e07c-4137-b917-928b40745f65" containerName="registry-server" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.802994 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a81db72-eb2c-4c51-8b58-f825e2fbd3bb" containerName="registry-server" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.803007 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="72a04b75-e07c-4137-b917-928b40745f65" containerName="registry-server" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.803021 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="6809e36d-3944-4ee9-885d-d68ad3a99d68" containerName="registry-server" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.803457 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.803986 4896 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.804928 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64" gracePeriod=15 Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.804969 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21" gracePeriod=15 Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.805166 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a" gracePeriod=15 Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.805180 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79" gracePeriod=15 Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.805224 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a" gracePeriod=15 Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.806552 4896 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 15:38:12 crc kubenswrapper[4896]: E0126 15:38:12.806962 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.806993 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 15:38:12 crc kubenswrapper[4896]: E0126 15:38:12.807011 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.807023 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 15:38:12 crc kubenswrapper[4896]: E0126 15:38:12.807038 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.807051 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 15:38:12 crc kubenswrapper[4896]: E0126 15:38:12.807073 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.807085 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" Jan 26 15:38:12 crc kubenswrapper[4896]: E0126 15:38:12.807102 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.807114 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 26 15:38:12 crc kubenswrapper[4896]: E0126 15:38:12.807149 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.807267 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.807481 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.807503 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.807516 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.807547 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.807563 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.873120 4896 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.873170 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.873219 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.873248 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.873268 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.873297 4896 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.873315 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.873330 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.974918 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.975195 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.975238 4896 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.975304 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.975257 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.975374 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.975407 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.975449 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.975507 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.975556 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.975506 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.975590 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.975624 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.975661 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.975730 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:38:12 crc kubenswrapper[4896]: I0126 15:38:12.975803 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:38:13 crc kubenswrapper[4896]: I0126 15:38:13.093805 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 15:38:13 crc kubenswrapper[4896]: I0126 15:38:13.094663 4896 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79" exitCode=0 Jan 26 15:38:13 crc kubenswrapper[4896]: I0126 15:38:13.094694 4896 generic.go:334] "Generic (PLEG): container finished" 
podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21" exitCode=0 Jan 26 15:38:13 crc kubenswrapper[4896]: I0126 15:38:13.094702 4896 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a" exitCode=0 Jan 26 15:38:13 crc kubenswrapper[4896]: I0126 15:38:13.094712 4896 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a" exitCode=2 Jan 26 15:38:13 crc kubenswrapper[4896]: I0126 15:38:13.098138 4896 generic.go:334] "Generic (PLEG): container finished" podID="59bc256c-1090-4120-8803-f24252f01812" containerID="d955563945fb9f06576edac79b4b1f6237e28d90cb43c16c5d9adf50f6c4a37d" exitCode=0 Jan 26 15:38:13 crc kubenswrapper[4896]: I0126 15:38:13.098178 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"59bc256c-1090-4120-8803-f24252f01812","Type":"ContainerDied","Data":"d955563945fb9f06576edac79b4b1f6237e28d90cb43c16c5d9adf50f6c4a37d"} Jan 26 15:38:13 crc kubenswrapper[4896]: I0126 15:38:13.099043 4896 status_manager.go:851] "Failed to get status for pod" podUID="59bc256c-1090-4120-8803-f24252f01812" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 26 15:38:13 crc kubenswrapper[4896]: I0126 15:38:13.100138 4896 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 26 15:38:13 crc 
kubenswrapper[4896]: E0126 15:38:13.382629 4896 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 26 15:38:13 crc kubenswrapper[4896]: E0126 15:38:13.383054 4896 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 26 15:38:13 crc kubenswrapper[4896]: E0126 15:38:13.383463 4896 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 26 15:38:13 crc kubenswrapper[4896]: E0126 15:38:13.383784 4896 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 26 15:38:13 crc kubenswrapper[4896]: E0126 15:38:13.384054 4896 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 26 15:38:13 crc kubenswrapper[4896]: I0126 15:38:13.384091 4896 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 26 15:38:13 crc kubenswrapper[4896]: E0126 15:38:13.384369 4896 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="200ms" Jan 26 15:38:13 
crc kubenswrapper[4896]: E0126 15:38:13.586127 4896 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="400ms" Jan 26 15:38:13 crc kubenswrapper[4896]: E0126 15:38:13.962198 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:38:13Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:38:13Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:38:13Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:38:13Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:024b1ed0676c2e11f6a319392c82e7acd0ceeae31ca00b202307c4d86a796b20\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:ada03173793960eaa0e4263282fcbf5af3dea8aaf2c3b0d864906108db062e8a\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1672061160},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:700a5f979fa4ef2b6f03177e68a780c3d93e2a6f429cdaa50e43997cf400e60c\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:ffe83dddbe52f5e67e1462a3d99eed5cbcb1385f1a99af0cb768e4968931dc8c\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1203425009},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:baf4eb931aab99ddd36e09d79f76ea1128c2ef536e95b78edb9af73175db2be3\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:dfb030ab67faacd3572a0cae805bd05f041ba6a589cf6fb289cb2295f364c580\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1183907051},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:169566a3a0bc4f9ca64256bb682df6ad4e2cfc5740b5338370c8202d43621680\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:5e18cee5ade3fc0cec09a5ee469d5840c7f50ec0cda6b90150394ad661ac5380\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1179648738},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af22320
02\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9
cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 26 15:38:13 crc kubenswrapper[4896]: E0126 15:38:13.963349 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 26 15:38:13 crc kubenswrapper[4896]: E0126 15:38:13.963790 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 26 15:38:13 crc kubenswrapper[4896]: E0126 15:38:13.964103 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 26 15:38:13 crc kubenswrapper[4896]: E0126 15:38:13.964480 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node 
\"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 26 15:38:13 crc kubenswrapper[4896]: E0126 15:38:13.964506 4896 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 15:38:13 crc kubenswrapper[4896]: E0126 15:38:13.988601 4896 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="800ms" Jan 26 15:38:14 crc kubenswrapper[4896]: I0126 15:38:14.358283 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 15:38:14 crc kubenswrapper[4896]: I0126 15:38:14.358764 4896 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 26 15:38:14 crc kubenswrapper[4896]: I0126 15:38:14.358972 4896 status_manager.go:851] "Failed to get status for pod" podUID="59bc256c-1090-4120-8803-f24252f01812" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 26 15:38:14 crc kubenswrapper[4896]: I0126 15:38:14.395388 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/59bc256c-1090-4120-8803-f24252f01812-kube-api-access\") pod \"59bc256c-1090-4120-8803-f24252f01812\" (UID: \"59bc256c-1090-4120-8803-f24252f01812\") " Jan 26 15:38:14 crc kubenswrapper[4896]: 
I0126 15:38:14.395493 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/59bc256c-1090-4120-8803-f24252f01812-var-lock\") pod \"59bc256c-1090-4120-8803-f24252f01812\" (UID: \"59bc256c-1090-4120-8803-f24252f01812\") " Jan 26 15:38:14 crc kubenswrapper[4896]: I0126 15:38:14.395514 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/59bc256c-1090-4120-8803-f24252f01812-kubelet-dir\") pod \"59bc256c-1090-4120-8803-f24252f01812\" (UID: \"59bc256c-1090-4120-8803-f24252f01812\") " Jan 26 15:38:14 crc kubenswrapper[4896]: I0126 15:38:14.395653 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59bc256c-1090-4120-8803-f24252f01812-var-lock" (OuterVolumeSpecName: "var-lock") pod "59bc256c-1090-4120-8803-f24252f01812" (UID: "59bc256c-1090-4120-8803-f24252f01812"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:38:14 crc kubenswrapper[4896]: I0126 15:38:14.395843 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59bc256c-1090-4120-8803-f24252f01812-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "59bc256c-1090-4120-8803-f24252f01812" (UID: "59bc256c-1090-4120-8803-f24252f01812"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:38:14 crc kubenswrapper[4896]: I0126 15:38:14.395928 4896 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/59bc256c-1090-4120-8803-f24252f01812-var-lock\") on node \"crc\" DevicePath \"\"" Jan 26 15:38:14 crc kubenswrapper[4896]: I0126 15:38:14.399827 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59bc256c-1090-4120-8803-f24252f01812-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "59bc256c-1090-4120-8803-f24252f01812" (UID: "59bc256c-1090-4120-8803-f24252f01812"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:38:14 crc kubenswrapper[4896]: I0126 15:38:14.497940 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/59bc256c-1090-4120-8803-f24252f01812-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 15:38:14 crc kubenswrapper[4896]: I0126 15:38:14.498015 4896 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/59bc256c-1090-4120-8803-f24252f01812-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 15:38:14 crc kubenswrapper[4896]: E0126 15:38:14.790372 4896 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="1.6s" Jan 26 15:38:15 crc kubenswrapper[4896]: I0126 15:38:15.111443 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"59bc256c-1090-4120-8803-f24252f01812","Type":"ContainerDied","Data":"bb9f26a6def45273904fc46109d2e209fcc07bf9dd1a29703a1b808fcded9feb"} Jan 26 15:38:15 crc kubenswrapper[4896]: I0126 15:38:15.111806 
4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb9f26a6def45273904fc46109d2e209fcc07bf9dd1a29703a1b808fcded9feb" Jan 26 15:38:15 crc kubenswrapper[4896]: I0126 15:38:15.111517 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 15:38:15 crc kubenswrapper[4896]: I0126 15:38:15.118694 4896 status_manager.go:851] "Failed to get status for pod" podUID="59bc256c-1090-4120-8803-f24252f01812" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 26 15:38:15 crc kubenswrapper[4896]: I0126 15:38:15.395072 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 15:38:15 crc kubenswrapper[4896]: I0126 15:38:15.396807 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:38:15 crc kubenswrapper[4896]: I0126 15:38:15.397410 4896 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 26 15:38:15 crc kubenswrapper[4896]: I0126 15:38:15.397877 4896 status_manager.go:851] "Failed to get status for pod" podUID="59bc256c-1090-4120-8803-f24252f01812" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 26 15:38:15 crc kubenswrapper[4896]: I0126 15:38:15.511855 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 15:38:15 crc kubenswrapper[4896]: I0126 15:38:15.512526 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 15:38:15 crc kubenswrapper[4896]: I0126 15:38:15.512619 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:38:15 crc kubenswrapper[4896]: I0126 15:38:15.512666 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 15:38:15 crc kubenswrapper[4896]: I0126 15:38:15.511949 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:38:15 crc kubenswrapper[4896]: I0126 15:38:15.512800 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:38:15 crc kubenswrapper[4896]: I0126 15:38:15.519738 4896 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 15:38:15 crc kubenswrapper[4896]: I0126 15:38:15.519786 4896 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 26 15:38:15 crc kubenswrapper[4896]: I0126 15:38:15.519803 4896 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 26 15:38:16 crc kubenswrapper[4896]: I0126 15:38:16.120479 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 15:38:16 crc kubenswrapper[4896]: I0126 15:38:16.122126 4896 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64" exitCode=0 Jan 26 15:38:16 crc kubenswrapper[4896]: I0126 15:38:16.122207 4896 scope.go:117] "RemoveContainer" containerID="5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79" Jan 26 15:38:16 crc kubenswrapper[4896]: I0126 15:38:16.122405 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:38:16 crc kubenswrapper[4896]: I0126 15:38:16.139783 4896 scope.go:117] "RemoveContainer" containerID="7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21" Jan 26 15:38:16 crc kubenswrapper[4896]: I0126 15:38:16.143202 4896 status_manager.go:851] "Failed to get status for pod" podUID="59bc256c-1090-4120-8803-f24252f01812" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 26 15:38:16 crc kubenswrapper[4896]: I0126 15:38:16.143966 4896 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 26 15:38:16 crc kubenswrapper[4896]: I0126 15:38:16.162539 4896 scope.go:117] "RemoveContainer" containerID="c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a" Jan 26 15:38:16 crc kubenswrapper[4896]: I0126 15:38:16.182870 4896 scope.go:117] "RemoveContainer" containerID="ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a" Jan 26 15:38:16 crc kubenswrapper[4896]: I0126 15:38:16.199651 4896 scope.go:117] "RemoveContainer" containerID="abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64" Jan 26 15:38:16 crc kubenswrapper[4896]: I0126 15:38:16.214472 4896 scope.go:117] "RemoveContainer" containerID="2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021" Jan 26 15:38:16 crc kubenswrapper[4896]: I0126 15:38:16.238315 4896 scope.go:117] "RemoveContainer" containerID="5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79" Jan 26 15:38:16 crc kubenswrapper[4896]: E0126 15:38:16.239019 4896 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\": container with ID starting with 5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79 not found: ID does not exist" containerID="5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79" Jan 26 15:38:16 crc kubenswrapper[4896]: I0126 15:38:16.239062 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79"} err="failed to get container status \"5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\": rpc error: code = NotFound desc = could not find container \"5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79\": container with ID starting with 5c28ecb69dfb4a7da72257b020cde61892216e13078c25d42ba9e35e2bb09c79 not found: ID does not exist" Jan 26 15:38:16 crc kubenswrapper[4896]: I0126 15:38:16.239092 4896 scope.go:117] "RemoveContainer" containerID="7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21" Jan 26 15:38:16 crc kubenswrapper[4896]: E0126 15:38:16.239405 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\": container with ID starting with 7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21 not found: ID does not exist" containerID="7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21" Jan 26 15:38:16 crc kubenswrapper[4896]: I0126 15:38:16.239430 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21"} err="failed to get container status \"7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\": rpc error: code = NotFound desc = could 
not find container \"7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21\": container with ID starting with 7d74a598bc86e53e8e2cabaccfaaa646f7194ad1d68fb1f2d082abb4c373ba21 not found: ID does not exist" Jan 26 15:38:16 crc kubenswrapper[4896]: I0126 15:38:16.239445 4896 scope.go:117] "RemoveContainer" containerID="c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a" Jan 26 15:38:16 crc kubenswrapper[4896]: E0126 15:38:16.239963 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\": container with ID starting with c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a not found: ID does not exist" containerID="c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a" Jan 26 15:38:16 crc kubenswrapper[4896]: I0126 15:38:16.239992 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a"} err="failed to get container status \"c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\": rpc error: code = NotFound desc = could not find container \"c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a\": container with ID starting with c4a9bee23881962f2133623a6ee16e522521fd599713cbaa216afe12d4115e1a not found: ID does not exist" Jan 26 15:38:16 crc kubenswrapper[4896]: I0126 15:38:16.240009 4896 scope.go:117] "RemoveContainer" containerID="ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a" Jan 26 15:38:16 crc kubenswrapper[4896]: E0126 15:38:16.240338 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\": container with ID starting with ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a not found: 
ID does not exist" containerID="ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a" Jan 26 15:38:16 crc kubenswrapper[4896]: I0126 15:38:16.240363 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a"} err="failed to get container status \"ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\": rpc error: code = NotFound desc = could not find container \"ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a\": container with ID starting with ac86699ee65156e5d42935f769a532b448c106e8d060af1a0c96e20212e9175a not found: ID does not exist" Jan 26 15:38:16 crc kubenswrapper[4896]: I0126 15:38:16.240377 4896 scope.go:117] "RemoveContainer" containerID="abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64" Jan 26 15:38:16 crc kubenswrapper[4896]: E0126 15:38:16.240554 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\": container with ID starting with abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64 not found: ID does not exist" containerID="abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64" Jan 26 15:38:16 crc kubenswrapper[4896]: I0126 15:38:16.240599 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64"} err="failed to get container status \"abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\": rpc error: code = NotFound desc = could not find container \"abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64\": container with ID starting with abe95656e152b83536a743cb72aec9b19ec8fbed3047275e380b3c8f6bc3af64 not found: ID does not exist" Jan 26 15:38:16 crc kubenswrapper[4896]: I0126 15:38:16.240612 4896 
scope.go:117] "RemoveContainer" containerID="2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021" Jan 26 15:38:16 crc kubenswrapper[4896]: E0126 15:38:16.241103 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\": container with ID starting with 2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021 not found: ID does not exist" containerID="2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021" Jan 26 15:38:16 crc kubenswrapper[4896]: I0126 15:38:16.241149 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021"} err="failed to get container status \"2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\": rpc error: code = NotFound desc = could not find container \"2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021\": container with ID starting with 2715ba24ed26f339c3c0eef382e5554d2b243ae29faa42137eb593745da31021 not found: ID does not exist" Jan 26 15:38:16 crc kubenswrapper[4896]: E0126 15:38:16.392481 4896 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="3.2s" Jan 26 15:38:16 crc kubenswrapper[4896]: I0126 15:38:16.765734 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 26 15:38:17 crc kubenswrapper[4896]: E0126 15:38:17.861562 4896 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.154:6443: connect: 
connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 15:38:17 crc kubenswrapper[4896]: I0126 15:38:17.862022 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 15:38:17 crc kubenswrapper[4896]: W0126 15:38:17.887209 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-1b550ac4672a7a0bf37fbdf45d0708f6cef78073ea262e28f5509e7d1717546f WatchSource:0}: Error finding container 1b550ac4672a7a0bf37fbdf45d0708f6cef78073ea262e28f5509e7d1717546f: Status 404 returned error can't find the container with id 1b550ac4672a7a0bf37fbdf45d0708f6cef78073ea262e28f5509e7d1717546f Jan 26 15:38:17 crc kubenswrapper[4896]: E0126 15:38:17.891304 4896 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.154:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e5207fe6ec2a6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 15:38:17.890103974 +0000 UTC m=+255.671984367,LastTimestamp:2026-01-26 15:38:17.890103974 +0000 UTC m=+255.671984367,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 15:38:18 crc kubenswrapper[4896]: I0126 15:38:18.139461 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"1b550ac4672a7a0bf37fbdf45d0708f6cef78073ea262e28f5509e7d1717546f"} Jan 26 15:38:18 crc kubenswrapper[4896]: E0126 15:38:18.842451 4896 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.154:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e5207fe6ec2a6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 15:38:17.890103974 +0000 UTC m=+255.671984367,LastTimestamp:2026-01-26 15:38:17.890103974 +0000 UTC m=+255.671984367,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 15:38:19 crc kubenswrapper[4896]: I0126 15:38:19.149285 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"91f6a35331acab29c01ef1aecba50137c8c0fb76aff60f6053161ca41832c711"} Jan 26 15:38:19 crc 
kubenswrapper[4896]: E0126 15:38:19.150193 4896 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.154:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 15:38:19 crc kubenswrapper[4896]: I0126 15:38:19.150426 4896 status_manager.go:851] "Failed to get status for pod" podUID="59bc256c-1090-4120-8803-f24252f01812" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 26 15:38:19 crc kubenswrapper[4896]: E0126 15:38:19.594479 4896 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="6.4s" Jan 26 15:38:20 crc kubenswrapper[4896]: E0126 15:38:20.155311 4896 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.154:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 15:38:22 crc kubenswrapper[4896]: I0126 15:38:22.761633 4896 status_manager.go:851] "Failed to get status for pod" podUID="59bc256c-1090-4120-8803-f24252f01812" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 26 15:38:23 crc kubenswrapper[4896]: E0126 15:38:23.826510 4896 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from 
API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.154:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" volumeName="registry-storage" Jan 26 15:38:24 crc kubenswrapper[4896]: E0126 15:38:24.240066 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:38:24Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:38:24Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:38:24Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T15:38:24Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:024b1ed0676c2e11f6a319392c82e7acd0ceeae31ca00b202307c4d86a796b20\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:ada03173793960eaa0e4263282fcbf5af3dea8aaf2c3b0d864906108db062e8a\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1672061160},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:700a5f979fa4ef2b6f03177e68a780c3d93e2a6f429cdaa50e43997cf400e60c\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:ffe83dddbe52f5e67e1462a3d99eed5cbcb1385f1a99af0cb768e4968931dc8c\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1203425009},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:baf4eb931aab99ddd36e09d79f76ea1128c2ef536e95b78edb9af73175db2be3\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:dfb030ab67faacd3572a0cae805bd05f041ba6a589cf6fb289cb2295f364c580\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1183907051},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:169566a3a0bc4f9ca64256bb682df6ad4e2cfc5740b5338370c8202d43621680\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:5e18cee5ade3fc0cec09a5ee469d5840c7f50ec0cda6b90150394ad661ac5380\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1179648738},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeB
ytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4
a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 26 15:38:24 crc kubenswrapper[4896]: E0126 15:38:24.240721 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 26 15:38:24 crc kubenswrapper[4896]: E0126 15:38:24.241022 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 26 15:38:24 crc kubenswrapper[4896]: E0126 15:38:24.241295 4896 kubelet_node_status.go:585] "Error updating node 
status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 26 15:38:24 crc kubenswrapper[4896]: E0126 15:38:24.241537 4896 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 26 15:38:24 crc kubenswrapper[4896]: E0126 15:38:24.241565 4896 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 15:38:25 crc kubenswrapper[4896]: I0126 15:38:25.951346 4896 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 26 15:38:25 crc kubenswrapper[4896]: I0126 15:38:25.952075 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 26 15:38:25 crc kubenswrapper[4896]: E0126 15:38:25.996080 4896 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="7s" Jan 26 15:38:26 crc kubenswrapper[4896]: I0126 15:38:26.191043 4896 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 26 15:38:26 crc kubenswrapper[4896]: I0126 15:38:26.191447 4896 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d" exitCode=1 Jan 26 15:38:26 crc kubenswrapper[4896]: I0126 15:38:26.191516 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d"} Jan 26 15:38:26 crc kubenswrapper[4896]: I0126 15:38:26.192518 4896 scope.go:117] "RemoveContainer" containerID="c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d" Jan 26 15:38:26 crc kubenswrapper[4896]: I0126 15:38:26.193108 4896 status_manager.go:851] "Failed to get status for pod" podUID="59bc256c-1090-4120-8803-f24252f01812" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 26 15:38:26 crc kubenswrapper[4896]: I0126 15:38:26.193614 4896 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 26 15:38:27 crc kubenswrapper[4896]: I0126 15:38:27.200279 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 26 15:38:27 crc 
kubenswrapper[4896]: I0126 15:38:27.200628 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4efaa12b7d99b6a99a3d980d7e883403794e680c3d1321005022ec1dfcdfd5bd"} Jan 26 15:38:27 crc kubenswrapper[4896]: I0126 15:38:27.201640 4896 status_manager.go:851] "Failed to get status for pod" podUID="59bc256c-1090-4120-8803-f24252f01812" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 26 15:38:27 crc kubenswrapper[4896]: I0126 15:38:27.202462 4896 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 26 15:38:27 crc kubenswrapper[4896]: I0126 15:38:27.758397 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:38:27 crc kubenswrapper[4896]: I0126 15:38:27.759266 4896 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 26 15:38:27 crc kubenswrapper[4896]: I0126 15:38:27.759767 4896 status_manager.go:851] "Failed to get status for pod" podUID="59bc256c-1090-4120-8803-f24252f01812" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 26 15:38:27 crc kubenswrapper[4896]: I0126 15:38:27.774333 4896 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="42ec8793-6e16-4368-84e3-9c3007499c92" Jan 26 15:38:27 crc kubenswrapper[4896]: I0126 15:38:27.774367 4896 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="42ec8793-6e16-4368-84e3-9c3007499c92" Jan 26 15:38:27 crc kubenswrapper[4896]: E0126 15:38:27.775421 4896 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:38:27 crc kubenswrapper[4896]: I0126 15:38:27.776236 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:38:28 crc kubenswrapper[4896]: I0126 15:38:28.208101 4896 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="4462aaafc8780924b9e6770b7a8cddcde5a32976553718a0c602b93b0176726a" exitCode=0 Jan 26 15:38:28 crc kubenswrapper[4896]: I0126 15:38:28.208170 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"4462aaafc8780924b9e6770b7a8cddcde5a32976553718a0c602b93b0176726a"} Jan 26 15:38:28 crc kubenswrapper[4896]: I0126 15:38:28.208222 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e10486d708be9f44d602567e3bedf63401176e419aa83cd3aff0fdd91e9f2e80"} Jan 26 15:38:28 crc kubenswrapper[4896]: I0126 15:38:28.208714 4896 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="42ec8793-6e16-4368-84e3-9c3007499c92" Jan 26 15:38:28 crc kubenswrapper[4896]: I0126 15:38:28.208739 4896 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="42ec8793-6e16-4368-84e3-9c3007499c92" Jan 26 15:38:28 crc kubenswrapper[4896]: I0126 15:38:28.208963 4896 status_manager.go:851] "Failed to get status for pod" podUID="59bc256c-1090-4120-8803-f24252f01812" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 26 15:38:28 crc kubenswrapper[4896]: I0126 15:38:28.209242 4896 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 26 15:38:28 crc kubenswrapper[4896]: E0126 15:38:28.209249 4896 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:38:29 crc kubenswrapper[4896]: I0126 15:38:29.220167 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c49a241d79dca355030a145739129d63646ac4cfa631b9f03c20a2dbfc64602e"} Jan 26 15:38:29 crc kubenswrapper[4896]: I0126 15:38:29.220460 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ee0dea3e84d3dfce3c9de36fcfcb7aea7f774fc791f655f777ac59cbd4d36411"} Jan 26 15:38:29 crc kubenswrapper[4896]: I0126 15:38:29.220472 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6898d96b36d2f80f5d532e02f86d60536712795785bf4c140a4c5f5f6e8347c6"} Jan 26 15:38:29 crc kubenswrapper[4896]: I0126 15:38:29.220482 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f0600617ef52f8c3a7c0eb674d7ebd29f0e57c59985022d152d05bfee9543a32"} Jan 26 15:38:29 crc kubenswrapper[4896]: I0126 15:38:29.658600 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 
15:38:30 crc kubenswrapper[4896]: I0126 15:38:30.235029 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0688cc10460f846b0d0244258e0af858a2b211717edd5f45e4e75671a678ff9b"} Jan 26 15:38:30 crc kubenswrapper[4896]: I0126 15:38:30.235233 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:38:30 crc kubenswrapper[4896]: I0126 15:38:30.235425 4896 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="42ec8793-6e16-4368-84e3-9c3007499c92" Jan 26 15:38:30 crc kubenswrapper[4896]: I0126 15:38:30.235455 4896 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="42ec8793-6e16-4368-84e3-9c3007499c92" Jan 26 15:38:32 crc kubenswrapper[4896]: I0126 15:38:32.111544 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 15:38:32 crc kubenswrapper[4896]: I0126 15:38:32.117507 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 15:38:32 crc kubenswrapper[4896]: I0126 15:38:32.776795 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:38:32 crc kubenswrapper[4896]: I0126 15:38:32.776849 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:38:32 crc kubenswrapper[4896]: I0126 15:38:32.783261 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:38:35 crc kubenswrapper[4896]: I0126 15:38:35.250527 4896 kubelet.go:1914] "Deleted mirror pod because it is 
outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:38:35 crc kubenswrapper[4896]: I0126 15:38:35.315103 4896 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="893580d9-b675-4025-aa42-0d62a771266b" Jan 26 15:38:36 crc kubenswrapper[4896]: I0126 15:38:36.266776 4896 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="42ec8793-6e16-4368-84e3-9c3007499c92" Jan 26 15:38:36 crc kubenswrapper[4896]: I0126 15:38:36.266805 4896 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="42ec8793-6e16-4368-84e3-9c3007499c92" Jan 26 15:38:36 crc kubenswrapper[4896]: I0126 15:38:36.269806 4896 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="893580d9-b675-4025-aa42-0d62a771266b" Jan 26 15:38:36 crc kubenswrapper[4896]: I0126 15:38:36.272561 4896 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://f0600617ef52f8c3a7c0eb674d7ebd29f0e57c59985022d152d05bfee9543a32" Jan 26 15:38:36 crc kubenswrapper[4896]: I0126 15:38:36.272624 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:38:37 crc kubenswrapper[4896]: I0126 15:38:37.278050 4896 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="893580d9-b675-4025-aa42-0d62a771266b" Jan 26 15:38:37 crc kubenswrapper[4896]: I0126 15:38:37.279344 4896 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="42ec8793-6e16-4368-84e3-9c3007499c92" Jan 26 15:38:37 crc kubenswrapper[4896]: I0126 15:38:37.279393 4896 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="42ec8793-6e16-4368-84e3-9c3007499c92" Jan 26 15:38:39 crc kubenswrapper[4896]: I0126 15:38:39.666061 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 15:38:44 crc kubenswrapper[4896]: I0126 15:38:44.960316 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 26 15:38:45 crc kubenswrapper[4896]: I0126 15:38:45.755835 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 26 15:38:45 crc kubenswrapper[4896]: I0126 15:38:45.863732 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 26 15:38:46 crc kubenswrapper[4896]: I0126 15:38:46.023790 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 15:38:46 crc kubenswrapper[4896]: I0126 15:38:46.197317 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 26 15:38:46 crc kubenswrapper[4896]: I0126 15:38:46.227380 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 26 15:38:46 crc kubenswrapper[4896]: I0126 15:38:46.340554 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 26 15:38:46 crc kubenswrapper[4896]: I0126 15:38:46.717835 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 26 15:38:46 crc kubenswrapper[4896]: I0126 
15:38:46.728704 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 26 15:38:46 crc kubenswrapper[4896]: I0126 15:38:46.787103 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 26 15:38:47 crc kubenswrapper[4896]: I0126 15:38:47.029919 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 26 15:38:47 crc kubenswrapper[4896]: I0126 15:38:47.122359 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 26 15:38:47 crc kubenswrapper[4896]: I0126 15:38:47.256445 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 26 15:38:47 crc kubenswrapper[4896]: I0126 15:38:47.378780 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 26 15:38:47 crc kubenswrapper[4896]: I0126 15:38:47.403104 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 15:38:47 crc kubenswrapper[4896]: I0126 15:38:47.615787 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 15:38:47 crc kubenswrapper[4896]: I0126 15:38:47.698836 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 26 15:38:47 crc kubenswrapper[4896]: I0126 15:38:47.757067 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 26 15:38:47 crc kubenswrapper[4896]: I0126 15:38:47.779963 4896 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 26 15:38:47 crc kubenswrapper[4896]: I0126 15:38:47.799768 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 26 15:38:47 crc kubenswrapper[4896]: I0126 15:38:47.804675 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 26 15:38:47 crc kubenswrapper[4896]: I0126 15:38:47.929734 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 26 15:38:47 crc kubenswrapper[4896]: I0126 15:38:47.954095 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 26 15:38:48 crc kubenswrapper[4896]: I0126 15:38:48.076071 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 26 15:38:48 crc kubenswrapper[4896]: I0126 15:38:48.136613 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 26 15:38:48 crc kubenswrapper[4896]: I0126 15:38:48.153915 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 26 15:38:48 crc kubenswrapper[4896]: I0126 15:38:48.206395 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 26 15:38:48 crc kubenswrapper[4896]: I0126 15:38:48.338199 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 26 15:38:48 crc kubenswrapper[4896]: I0126 15:38:48.400255 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 26 15:38:48 crc kubenswrapper[4896]: I0126 15:38:48.639667 4896 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 26 15:38:48 crc kubenswrapper[4896]: I0126 15:38:48.731013 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 26 15:38:48 crc kubenswrapper[4896]: I0126 15:38:48.760501 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 26 15:38:48 crc kubenswrapper[4896]: I0126 15:38:48.772016 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 26 15:38:48 crc kubenswrapper[4896]: I0126 15:38:48.803121 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 26 15:38:48 crc kubenswrapper[4896]: I0126 15:38:48.817873 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 26 15:38:48 crc kubenswrapper[4896]: I0126 15:38:48.866889 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 26 15:38:48 crc kubenswrapper[4896]: I0126 15:38:48.897005 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 26 15:38:48 crc kubenswrapper[4896]: I0126 15:38:48.913835 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 26 15:38:48 crc kubenswrapper[4896]: I0126 15:38:48.918242 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 26 15:38:48 crc kubenswrapper[4896]: I0126 15:38:48.929397 4896 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 26 15:38:49 crc kubenswrapper[4896]: I0126 15:38:49.020733 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 26 15:38:49 crc kubenswrapper[4896]: I0126 15:38:49.070458 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 26 15:38:49 crc kubenswrapper[4896]: I0126 15:38:49.125814 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 26 15:38:49 crc kubenswrapper[4896]: I0126 15:38:49.128358 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 26 15:38:49 crc kubenswrapper[4896]: I0126 15:38:49.159842 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 26 15:38:49 crc kubenswrapper[4896]: I0126 15:38:49.175510 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 26 15:38:49 crc kubenswrapper[4896]: I0126 15:38:49.177952 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 26 15:38:49 crc kubenswrapper[4896]: I0126 15:38:49.191566 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 26 15:38:49 crc kubenswrapper[4896]: I0126 15:38:49.207482 4896 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 26 15:38:49 crc kubenswrapper[4896]: I0126 15:38:49.250979 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 26 15:38:49 crc kubenswrapper[4896]: I0126 15:38:49.322702 4896 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 26 15:38:49 crc kubenswrapper[4896]: I0126 15:38:49.432312 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 26 15:38:49 crc kubenswrapper[4896]: I0126 15:38:49.436357 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 26 15:38:49 crc kubenswrapper[4896]: I0126 15:38:49.476754 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 26 15:38:49 crc kubenswrapper[4896]: I0126 15:38:49.480128 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 26 15:38:49 crc kubenswrapper[4896]: I0126 15:38:49.489094 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 26 15:38:49 crc kubenswrapper[4896]: I0126 15:38:49.497223 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 26 15:38:49 crc kubenswrapper[4896]: I0126 15:38:49.703129 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 26 15:38:49 crc kubenswrapper[4896]: I0126 15:38:49.765391 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 26 15:38:49 crc kubenswrapper[4896]: I0126 15:38:49.771965 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 26 15:38:49 crc kubenswrapper[4896]: I0126 15:38:49.805897 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 26 15:38:49 crc kubenswrapper[4896]: I0126 15:38:49.878213 4896 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 26 15:38:49 crc kubenswrapper[4896]: I0126 15:38:49.910629 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 26 15:38:49 crc kubenswrapper[4896]: I0126 15:38:49.919175 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 26 15:38:49 crc kubenswrapper[4896]: I0126 15:38:49.967182 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 26 15:38:50 crc kubenswrapper[4896]: I0126 15:38:50.065088 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 26 15:38:50 crc kubenswrapper[4896]: I0126 15:38:50.095572 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 26 15:38:50 crc kubenswrapper[4896]: I0126 15:38:50.097262 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 26 15:38:50 crc kubenswrapper[4896]: I0126 15:38:50.154258 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 26 15:38:50 crc kubenswrapper[4896]: I0126 15:38:50.188347 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 15:38:50 crc kubenswrapper[4896]: I0126 15:38:50.205237 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 26 15:38:50 crc kubenswrapper[4896]: I0126 15:38:50.304261 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 26 15:38:50 crc 
kubenswrapper[4896]: I0126 15:38:50.347079 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 26 15:38:50 crc kubenswrapper[4896]: I0126 15:38:50.374896 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 15:38:50 crc kubenswrapper[4896]: I0126 15:38:50.478723 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 26 15:38:50 crc kubenswrapper[4896]: I0126 15:38:50.480303 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 15:38:50 crc kubenswrapper[4896]: I0126 15:38:50.506522 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 26 15:38:50 crc kubenswrapper[4896]: I0126 15:38:50.604608 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 26 15:38:50 crc kubenswrapper[4896]: I0126 15:38:50.623110 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 26 15:38:50 crc kubenswrapper[4896]: I0126 15:38:50.646357 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 26 15:38:50 crc kubenswrapper[4896]: I0126 15:38:50.798919 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 26 15:38:50 crc kubenswrapper[4896]: I0126 15:38:50.881318 4896 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 26 15:38:50 crc kubenswrapper[4896]: I0126 15:38:50.921414 4896 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 26 15:38:51 crc kubenswrapper[4896]: I0126 15:38:51.013622 4896 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 26 15:38:51 crc kubenswrapper[4896]: I0126 15:38:51.015592 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 26 15:38:51 crc kubenswrapper[4896]: I0126 15:38:51.022024 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 15:38:51 crc kubenswrapper[4896]: I0126 15:38:51.022102 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 15:38:51 crc kubenswrapper[4896]: I0126 15:38:51.029324 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 15:38:51 crc kubenswrapper[4896]: I0126 15:38:51.043551 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=16.043531855 podStartE2EDuration="16.043531855s" podCreationTimestamp="2026-01-26 15:38:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:38:51.038782094 +0000 UTC m=+288.820662497" watchObservedRunningTime="2026-01-26 15:38:51.043531855 +0000 UTC m=+288.825412248" Jan 26 15:38:51 crc kubenswrapper[4896]: I0126 15:38:51.060639 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 26 15:38:51 crc kubenswrapper[4896]: I0126 15:38:51.077219 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 
26 15:38:51 crc kubenswrapper[4896]: I0126 15:38:51.088248 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 26 15:38:51 crc kubenswrapper[4896]: I0126 15:38:51.088280 4896 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 26 15:38:51 crc kubenswrapper[4896]: I0126 15:38:51.128643 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 26 15:38:51 crc kubenswrapper[4896]: I0126 15:38:51.150945 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 15:38:51 crc kubenswrapper[4896]: I0126 15:38:51.208701 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 26 15:38:51 crc kubenswrapper[4896]: I0126 15:38:51.243038 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 26 15:38:51 crc kubenswrapper[4896]: I0126 15:38:51.295990 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 26 15:38:51 crc kubenswrapper[4896]: I0126 15:38:51.302132 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 26 15:38:51 crc kubenswrapper[4896]: I0126 15:38:51.459071 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 26 15:38:51 crc kubenswrapper[4896]: I0126 15:38:51.496497 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 26 15:38:51 crc kubenswrapper[4896]: I0126 15:38:51.626705 4896 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"default-dockercfg-chnjx" Jan 26 15:38:51 crc kubenswrapper[4896]: I0126 15:38:51.786376 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 15:38:51 crc kubenswrapper[4896]: I0126 15:38:51.861097 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 26 15:38:51 crc kubenswrapper[4896]: I0126 15:38:51.955459 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 26 15:38:51 crc kubenswrapper[4896]: I0126 15:38:51.989158 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 26 15:38:52 crc kubenswrapper[4896]: I0126 15:38:52.059276 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 15:38:52 crc kubenswrapper[4896]: I0126 15:38:52.060919 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 26 15:38:52 crc kubenswrapper[4896]: I0126 15:38:52.355122 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 26 15:38:52 crc kubenswrapper[4896]: I0126 15:38:52.383385 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 26 15:38:52 crc kubenswrapper[4896]: I0126 15:38:52.475443 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 26 15:38:52 crc kubenswrapper[4896]: I0126 15:38:52.527912 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 26 15:38:52 crc kubenswrapper[4896]: 
I0126 15:38:52.597118 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 26 15:38:52 crc kubenswrapper[4896]: I0126 15:38:52.655354 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 26 15:38:52 crc kubenswrapper[4896]: I0126 15:38:52.655522 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 26 15:38:52 crc kubenswrapper[4896]: I0126 15:38:52.665911 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 26 15:38:52 crc kubenswrapper[4896]: I0126 15:38:52.678570 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 26 15:38:52 crc kubenswrapper[4896]: I0126 15:38:52.706799 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 26 15:38:52 crc kubenswrapper[4896]: I0126 15:38:52.727512 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 26 15:38:52 crc kubenswrapper[4896]: I0126 15:38:52.905158 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 26 15:38:52 crc kubenswrapper[4896]: I0126 15:38:52.911530 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 26 15:38:52 crc kubenswrapper[4896]: I0126 15:38:52.958032 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 26 15:38:53 crc kubenswrapper[4896]: I0126 15:38:53.075674 4896 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 26 15:38:53 crc kubenswrapper[4896]: I0126 15:38:53.166409 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 26 15:38:53 crc kubenswrapper[4896]: I0126 15:38:53.276253 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 15:38:53 crc kubenswrapper[4896]: I0126 15:38:53.286650 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 26 15:38:53 crc kubenswrapper[4896]: I0126 15:38:53.316797 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 26 15:38:53 crc kubenswrapper[4896]: I0126 15:38:53.365107 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 26 15:38:53 crc kubenswrapper[4896]: I0126 15:38:53.457942 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 26 15:38:53 crc kubenswrapper[4896]: I0126 15:38:53.548461 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 26 15:38:53 crc kubenswrapper[4896]: I0126 15:38:53.605633 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 26 15:38:53 crc kubenswrapper[4896]: I0126 15:38:53.616030 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 26 15:38:53 crc kubenswrapper[4896]: I0126 15:38:53.626364 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 26 15:38:53 crc 
kubenswrapper[4896]: I0126 15:38:53.660351 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 26 15:38:53 crc kubenswrapper[4896]: I0126 15:38:53.672310 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 26 15:38:53 crc kubenswrapper[4896]: I0126 15:38:53.790865 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 26 15:38:53 crc kubenswrapper[4896]: I0126 15:38:53.806769 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 15:38:53 crc kubenswrapper[4896]: I0126 15:38:53.906929 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 26 15:38:53 crc kubenswrapper[4896]: I0126 15:38:53.907256 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 26 15:38:54 crc kubenswrapper[4896]: I0126 15:38:54.001900 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 15:38:54 crc kubenswrapper[4896]: I0126 15:38:54.040330 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 26 15:38:54 crc kubenswrapper[4896]: I0126 15:38:54.090718 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 26 15:38:54 crc kubenswrapper[4896]: I0126 15:38:54.150792 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 26 15:38:54 crc kubenswrapper[4896]: I0126 15:38:54.194602 4896 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 26 15:38:54 crc kubenswrapper[4896]: I0126 15:38:54.226026 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 26 15:38:54 crc kubenswrapper[4896]: I0126 15:38:54.296305 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 26 15:38:54 crc kubenswrapper[4896]: I0126 15:38:54.311836 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 26 15:38:54 crc kubenswrapper[4896]: I0126 15:38:54.322437 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 26 15:38:54 crc kubenswrapper[4896]: I0126 15:38:54.334268 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 26 15:38:54 crc kubenswrapper[4896]: I0126 15:38:54.355968 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 26 15:38:54 crc kubenswrapper[4896]: I0126 15:38:54.440673 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 26 15:38:54 crc kubenswrapper[4896]: I0126 15:38:54.467473 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 26 15:38:54 crc kubenswrapper[4896]: I0126 15:38:54.558777 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 26 15:38:54 crc kubenswrapper[4896]: I0126 15:38:54.635007 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 26 15:38:54 crc kubenswrapper[4896]: I0126 
15:38:54.711227 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 26 15:38:54 crc kubenswrapper[4896]: I0126 15:38:54.831411 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 26 15:38:54 crc kubenswrapper[4896]: I0126 15:38:54.892974 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 26 15:38:54 crc kubenswrapper[4896]: I0126 15:38:54.910380 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 26 15:38:54 crc kubenswrapper[4896]: I0126 15:38:54.931020 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 26 15:38:54 crc kubenswrapper[4896]: I0126 15:38:54.940450 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 26 15:38:55 crc kubenswrapper[4896]: I0126 15:38:54.999962 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 26 15:38:55 crc kubenswrapper[4896]: I0126 15:38:55.076050 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 26 15:38:55 crc kubenswrapper[4896]: I0126 15:38:55.103922 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 15:38:55 crc kubenswrapper[4896]: I0126 15:38:55.105071 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 26 15:38:55 crc kubenswrapper[4896]: I0126 15:38:55.160148 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 26 15:38:55 crc 
kubenswrapper[4896]: I0126 15:38:55.165486 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 26 15:38:55 crc kubenswrapper[4896]: I0126 15:38:55.224973 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 26 15:38:55 crc kubenswrapper[4896]: I0126 15:38:55.367974 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 26 15:38:55 crc kubenswrapper[4896]: I0126 15:38:55.469567 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 26 15:38:55 crc kubenswrapper[4896]: I0126 15:38:55.525477 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 26 15:38:55 crc kubenswrapper[4896]: I0126 15:38:55.549786 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 26 15:38:55 crc kubenswrapper[4896]: I0126 15:38:55.603513 4896 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 26 15:38:55 crc kubenswrapper[4896]: I0126 15:38:55.675870 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 26 15:38:55 crc kubenswrapper[4896]: I0126 15:38:55.708855 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 26 15:38:55 crc kubenswrapper[4896]: I0126 15:38:55.712651 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 26 15:38:55 crc kubenswrapper[4896]: I0126 15:38:55.806877 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 26 15:38:55 crc kubenswrapper[4896]: I0126 15:38:55.825512 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 26 15:38:55 crc kubenswrapper[4896]: I0126 15:38:55.836993 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 26 15:38:55 crc kubenswrapper[4896]: I0126 15:38:55.846595 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 26 15:38:55 crc kubenswrapper[4896]: I0126 15:38:55.934525 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 26 15:38:56 crc kubenswrapper[4896]: I0126 15:38:56.022154 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 26 15:38:56 crc kubenswrapper[4896]: I0126 15:38:56.065102 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 26 15:38:56 crc kubenswrapper[4896]: I0126 15:38:56.149306 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 26 15:38:56 crc kubenswrapper[4896]: I0126 15:38:56.157680 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 26 15:38:56 crc kubenswrapper[4896]: I0126 15:38:56.189233 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 26 15:38:56 crc kubenswrapper[4896]: I0126 15:38:56.191830 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 26 15:38:56 crc kubenswrapper[4896]: I0126 15:38:56.201288 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 26 15:38:56 crc kubenswrapper[4896]: I0126 15:38:56.245384 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 26 15:38:56 crc kubenswrapper[4896]: I0126 15:38:56.278717 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 26 15:38:56 crc kubenswrapper[4896]: I0126 15:38:56.347325 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 26 15:38:56 crc kubenswrapper[4896]: I0126 15:38:56.369077 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 26 15:38:56 crc kubenswrapper[4896]: I0126 15:38:56.377727 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 26 15:38:56 crc kubenswrapper[4896]: I0126 15:38:56.388291 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 26 15:38:56 crc kubenswrapper[4896]: I0126 15:38:56.586223 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 26 15:38:56 crc kubenswrapper[4896]: I0126 15:38:56.597527 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 26 15:38:56 crc kubenswrapper[4896]: I0126 15:38:56.621970 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 26 15:38:56 crc kubenswrapper[4896]: I0126 15:38:56.688773 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 26 15:38:56 crc kubenswrapper[4896]: I0126 15:38:56.864801 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 26 15:38:56 crc kubenswrapper[4896]: I0126 15:38:56.938172 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 26 15:38:57 crc kubenswrapper[4896]: I0126 15:38:57.037431 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 26 15:38:57 crc kubenswrapper[4896]: I0126 15:38:57.179418 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 26 15:38:57 crc kubenswrapper[4896]: I0126 15:38:57.206297 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 26 15:38:57 crc kubenswrapper[4896]: I0126 15:38:57.250789 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 26 15:38:57 crc kubenswrapper[4896]: I0126 15:38:57.368123 4896 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 26 15:38:57 crc kubenswrapper[4896]: I0126 15:38:57.442601 4896 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 26 15:38:57 crc kubenswrapper[4896]: I0126 15:38:57.442880 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://91f6a35331acab29c01ef1aecba50137c8c0fb76aff60f6053161ca41832c711" gracePeriod=5
Jan 26 15:38:57 crc kubenswrapper[4896]: I0126 15:38:57.465936 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 26 15:38:57 crc kubenswrapper[4896]: I0126 15:38:57.537255 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 26 15:38:57 crc kubenswrapper[4896]: I0126 15:38:57.641291 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 26 15:38:57 crc kubenswrapper[4896]: I0126 15:38:57.653331 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 26 15:38:57 crc kubenswrapper[4896]: I0126 15:38:57.727369 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 26 15:38:57 crc kubenswrapper[4896]: I0126 15:38:57.790681 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 26 15:38:57 crc kubenswrapper[4896]: I0126 15:38:57.861385 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 26 15:38:57 crc kubenswrapper[4896]: I0126 15:38:57.881841 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 26 15:38:58 crc kubenswrapper[4896]: I0126 15:38:58.070206 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 26 15:38:58 crc kubenswrapper[4896]: I0126 15:38:58.157503 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 26 15:38:58 crc kubenswrapper[4896]: I0126 15:38:58.207301 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 26 15:38:58 crc kubenswrapper[4896]: I0126 15:38:58.219922 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 26 15:38:58 crc kubenswrapper[4896]: I0126 15:38:58.250126 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 26 15:38:58 crc kubenswrapper[4896]: I0126 15:38:58.329409 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 26 15:38:58 crc kubenswrapper[4896]: I0126 15:38:58.387696 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 26 15:38:58 crc kubenswrapper[4896]: I0126 15:38:58.432676 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 26 15:38:58 crc kubenswrapper[4896]: I0126 15:38:58.508282 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 26 15:38:58 crc kubenswrapper[4896]: I0126 15:38:58.534053 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 26 15:38:58 crc kubenswrapper[4896]: I0126 15:38:58.606114 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 26 15:38:58 crc kubenswrapper[4896]: I0126 15:38:58.608374 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 26 15:38:58 crc kubenswrapper[4896]: I0126 15:38:58.612386 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 26 15:38:58 crc kubenswrapper[4896]: I0126 15:38:58.676844 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 26 15:38:58 crc kubenswrapper[4896]: I0126 15:38:58.771475 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 26 15:38:58 crc kubenswrapper[4896]: I0126 15:38:58.811933 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 26 15:38:59 crc kubenswrapper[4896]: I0126 15:38:59.028712 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 26 15:38:59 crc kubenswrapper[4896]: I0126 15:38:59.110852 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 26 15:38:59 crc kubenswrapper[4896]: I0126 15:38:59.124683 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 26 15:38:59 crc kubenswrapper[4896]: I0126 15:38:59.234008 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 26 15:38:59 crc kubenswrapper[4896]: I0126 15:38:59.238501 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 26 15:38:59 crc kubenswrapper[4896]: I0126 15:38:59.278978 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 26 15:38:59 crc kubenswrapper[4896]: I0126 15:38:59.483234 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 26 15:38:59 crc kubenswrapper[4896]: I0126 15:38:59.529839 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 26 15:38:59 crc kubenswrapper[4896]: I0126 15:38:59.574205 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 26 15:38:59 crc kubenswrapper[4896]: I0126 15:38:59.607808 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 26 15:38:59 crc kubenswrapper[4896]: I0126 15:38:59.672500 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 26 15:38:59 crc kubenswrapper[4896]: I0126 15:38:59.739901 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 26 15:38:59 crc kubenswrapper[4896]: I0126 15:38:59.859868 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 26 15:38:59 crc kubenswrapper[4896]: I0126 15:38:59.943023 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 26 15:39:00 crc kubenswrapper[4896]: I0126 15:39:00.065501 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 26 15:39:00 crc kubenswrapper[4896]: I0126 15:39:00.141974 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 26 15:39:00 crc kubenswrapper[4896]: I0126 15:39:00.145369 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 26 15:39:00 crc kubenswrapper[4896]: I0126 15:39:00.363362 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 26 15:39:00 crc kubenswrapper[4896]: I0126 15:39:00.470352 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 26 15:39:00 crc kubenswrapper[4896]: I0126 15:39:00.778523 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 26 15:39:01 crc kubenswrapper[4896]: I0126 15:39:01.040157 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 26 15:39:01 crc kubenswrapper[4896]: I0126 15:39:01.300332 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 26 15:39:01 crc kubenswrapper[4896]: I0126 15:39:01.688767 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 26 15:39:02 crc kubenswrapper[4896]: I0126 15:39:02.117443 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 26 15:39:02 crc kubenswrapper[4896]: I0126 15:39:02.482804 4896 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Jan 26 15:39:02 crc kubenswrapper[4896]: I0126 15:39:02.833767 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 26 15:39:03 crc kubenswrapper[4896]: I0126 15:39:03.055765 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 26 15:39:03 crc kubenswrapper[4896]: I0126 15:39:03.055867 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 15:39:03 crc kubenswrapper[4896]: I0126 15:39:03.202536 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 26 15:39:03 crc kubenswrapper[4896]: I0126 15:39:03.202770 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 15:39:03 crc kubenswrapper[4896]: I0126 15:39:03.202804 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 26 15:39:03 crc kubenswrapper[4896]: I0126 15:39:03.202904 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 15:39:03 crc kubenswrapper[4896]: I0126 15:39:03.203056 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 26 15:39:03 crc kubenswrapper[4896]: I0126 15:39:03.203952 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 26 15:39:03 crc kubenswrapper[4896]: I0126 15:39:03.204011 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 26 15:39:03 crc kubenswrapper[4896]: I0126 15:39:03.204179 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 15:39:03 crc kubenswrapper[4896]: I0126 15:39:03.204374 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 15:39:03 crc kubenswrapper[4896]: I0126 15:39:03.204555 4896 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 26 15:39:03 crc kubenswrapper[4896]: I0126 15:39:03.204615 4896 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Jan 26 15:39:03 crc kubenswrapper[4896]: I0126 15:39:03.204641 4896 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Jan 26 15:39:03 crc kubenswrapper[4896]: I0126 15:39:03.213940 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 15:39:03 crc kubenswrapper[4896]: I0126 15:39:03.306239 4896 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 26 15:39:03 crc kubenswrapper[4896]: I0126 15:39:03.306304 4896 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Jan 26 15:39:03 crc kubenswrapper[4896]: I0126 15:39:03.415067 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 26 15:39:03 crc kubenswrapper[4896]: I0126 15:39:03.415158 4896 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="91f6a35331acab29c01ef1aecba50137c8c0fb76aff60f6053161ca41832c711" exitCode=137
Jan 26 15:39:03 crc kubenswrapper[4896]: I0126 15:39:03.415285 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 15:39:03 crc kubenswrapper[4896]: I0126 15:39:03.415381 4896 scope.go:117] "RemoveContainer" containerID="91f6a35331acab29c01ef1aecba50137c8c0fb76aff60f6053161ca41832c711"
Jan 26 15:39:03 crc kubenswrapper[4896]: I0126 15:39:03.443264 4896 scope.go:117] "RemoveContainer" containerID="91f6a35331acab29c01ef1aecba50137c8c0fb76aff60f6053161ca41832c711"
Jan 26 15:39:03 crc kubenswrapper[4896]: E0126 15:39:03.443886 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91f6a35331acab29c01ef1aecba50137c8c0fb76aff60f6053161ca41832c711\": container with ID starting with 91f6a35331acab29c01ef1aecba50137c8c0fb76aff60f6053161ca41832c711 not found: ID does not exist" containerID="91f6a35331acab29c01ef1aecba50137c8c0fb76aff60f6053161ca41832c711"
Jan 26 15:39:03 crc kubenswrapper[4896]: I0126 15:39:03.443957 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91f6a35331acab29c01ef1aecba50137c8c0fb76aff60f6053161ca41832c711"} err="failed to get container status \"91f6a35331acab29c01ef1aecba50137c8c0fb76aff60f6053161ca41832c711\": rpc error: code = NotFound desc = could not find container \"91f6a35331acab29c01ef1aecba50137c8c0fb76aff60f6053161ca41832c711\": container with ID starting with 91f6a35331acab29c01ef1aecba50137c8c0fb76aff60f6053161ca41832c711 not found: ID does not exist"
Jan 26 15:39:04 crc kubenswrapper[4896]: I0126 15:39:04.768179 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Jan 26 15:39:25 crc kubenswrapper[4896]: I0126 15:39:25.599807 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5nvtk"]
Jan 26 15:39:25 crc kubenswrapper[4896]: I0126 15:39:25.600657 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-5nvtk" podUID="0d634623-470e-42c0-b550-fac7a770530d" containerName="controller-manager" containerID="cri-o://fb03cda4dbb4a6da9f0c9b7fbb7e779cb45a27447d81ddfdce6d47f761b36ab3" gracePeriod=30
Jan 26 15:39:25 crc kubenswrapper[4896]: I0126 15:39:25.603378 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-772nw"]
Jan 26 15:39:25 crc kubenswrapper[4896]: I0126 15:39:25.603611 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-772nw" podUID="37cb3473-29d9-40ae-be5a-5ee548397d58" containerName="route-controller-manager" containerID="cri-o://cfbc4677becaeefe6879f65ce98e70ec51e5f999f5123d166a5329415a2d4c77" gracePeriod=30
Jan 26 15:39:25 crc kubenswrapper[4896]: I0126 15:39:25.966487 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5nvtk"
Jan 26 15:39:25 crc kubenswrapper[4896]: I0126 15:39:25.967057 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-772nw"
Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.117878 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37cb3473-29d9-40ae-be5a-5ee548397d58-client-ca\") pod \"37cb3473-29d9-40ae-be5a-5ee548397d58\" (UID: \"37cb3473-29d9-40ae-be5a-5ee548397d58\") "
Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.117932 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d634623-470e-42c0-b550-fac7a770530d-client-ca\") pod \"0d634623-470e-42c0-b550-fac7a770530d\" (UID: \"0d634623-470e-42c0-b550-fac7a770530d\") "
Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.117959 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d634623-470e-42c0-b550-fac7a770530d-serving-cert\") pod \"0d634623-470e-42c0-b550-fac7a770530d\" (UID: \"0d634623-470e-42c0-b550-fac7a770530d\") "
Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.118028 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37cb3473-29d9-40ae-be5a-5ee548397d58-config\") pod \"37cb3473-29d9-40ae-be5a-5ee548397d58\" (UID: \"37cb3473-29d9-40ae-be5a-5ee548397d58\") "
Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.118057 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rr6cd\" (UniqueName: \"kubernetes.io/projected/37cb3473-29d9-40ae-be5a-5ee548397d58-kube-api-access-rr6cd\") pod \"37cb3473-29d9-40ae-be5a-5ee548397d58\" (UID: \"37cb3473-29d9-40ae-be5a-5ee548397d58\") "
Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.118081 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d634623-470e-42c0-b550-fac7a770530d-proxy-ca-bundles\") pod \"0d634623-470e-42c0-b550-fac7a770530d\" (UID: \"0d634623-470e-42c0-b550-fac7a770530d\") "
Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.118108 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37cb3473-29d9-40ae-be5a-5ee548397d58-serving-cert\") pod \"37cb3473-29d9-40ae-be5a-5ee548397d58\" (UID: \"37cb3473-29d9-40ae-be5a-5ee548397d58\") "
Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.118140 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d634623-470e-42c0-b550-fac7a770530d-config\") pod \"0d634623-470e-42c0-b550-fac7a770530d\" (UID: \"0d634623-470e-42c0-b550-fac7a770530d\") "
Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.118171 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njkp7\" (UniqueName: \"kubernetes.io/projected/0d634623-470e-42c0-b550-fac7a770530d-kube-api-access-njkp7\") pod \"0d634623-470e-42c0-b550-fac7a770530d\" (UID: \"0d634623-470e-42c0-b550-fac7a770530d\") "
Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.118471 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d634623-470e-42c0-b550-fac7a770530d-client-ca" (OuterVolumeSpecName: "client-ca") pod "0d634623-470e-42c0-b550-fac7a770530d" (UID: "0d634623-470e-42c0-b550-fac7a770530d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.118655 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37cb3473-29d9-40ae-be5a-5ee548397d58-client-ca" (OuterVolumeSpecName: "client-ca") pod "37cb3473-29d9-40ae-be5a-5ee548397d58" (UID: "37cb3473-29d9-40ae-be5a-5ee548397d58"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.118956 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37cb3473-29d9-40ae-be5a-5ee548397d58-config" (OuterVolumeSpecName: "config") pod "37cb3473-29d9-40ae-be5a-5ee548397d58" (UID: "37cb3473-29d9-40ae-be5a-5ee548397d58"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.118996 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d634623-470e-42c0-b550-fac7a770530d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0d634623-470e-42c0-b550-fac7a770530d" (UID: "0d634623-470e-42c0-b550-fac7a770530d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.119084 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d634623-470e-42c0-b550-fac7a770530d-config" (OuterVolumeSpecName: "config") pod "0d634623-470e-42c0-b550-fac7a770530d" (UID: "0d634623-470e-42c0-b550-fac7a770530d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.124009 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37cb3473-29d9-40ae-be5a-5ee548397d58-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "37cb3473-29d9-40ae-be5a-5ee548397d58" (UID: "37cb3473-29d9-40ae-be5a-5ee548397d58"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.124169 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d634623-470e-42c0-b550-fac7a770530d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0d634623-470e-42c0-b550-fac7a770530d" (UID: "0d634623-470e-42c0-b550-fac7a770530d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.124390 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37cb3473-29d9-40ae-be5a-5ee548397d58-kube-api-access-rr6cd" (OuterVolumeSpecName: "kube-api-access-rr6cd") pod "37cb3473-29d9-40ae-be5a-5ee548397d58" (UID: "37cb3473-29d9-40ae-be5a-5ee548397d58"). InnerVolumeSpecName "kube-api-access-rr6cd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.124855 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d634623-470e-42c0-b550-fac7a770530d-kube-api-access-njkp7" (OuterVolumeSpecName: "kube-api-access-njkp7") pod "0d634623-470e-42c0-b550-fac7a770530d" (UID: "0d634623-470e-42c0-b550-fac7a770530d"). InnerVolumeSpecName "kube-api-access-njkp7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.219409 4896 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37cb3473-29d9-40ae-be5a-5ee548397d58-client-ca\") on node \"crc\" DevicePath \"\""
Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.219453 4896 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d634623-470e-42c0-b550-fac7a770530d-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.219469 4896 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d634623-470e-42c0-b550-fac7a770530d-client-ca\") on node \"crc\" DevicePath \"\""
Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.219481 4896 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37cb3473-29d9-40ae-be5a-5ee548397d58-config\") on node \"crc\" DevicePath \"\""
Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.219493 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rr6cd\" (UniqueName: \"kubernetes.io/projected/37cb3473-29d9-40ae-be5a-5ee548397d58-kube-api-access-rr6cd\") on node \"crc\" DevicePath \"\""
Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.219507 4896 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d634623-470e-42c0-b550-fac7a770530d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.219520 4896 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37cb3473-29d9-40ae-be5a-5ee548397d58-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.219531 4896 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d634623-470e-42c0-b550-fac7a770530d-config\") on node \"crc\" DevicePath \"\""
Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.219543 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njkp7\" (UniqueName: \"kubernetes.io/projected/0d634623-470e-42c0-b550-fac7a770530d-kube-api-access-njkp7\") on node \"crc\" DevicePath \"\""
Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.556703 4896 generic.go:334] "Generic (PLEG): container finished" podID="0d634623-470e-42c0-b550-fac7a770530d" containerID="fb03cda4dbb4a6da9f0c9b7fbb7e779cb45a27447d81ddfdce6d47f761b36ab3" exitCode=0
Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.556752 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5nvtk" event={"ID":"0d634623-470e-42c0-b550-fac7a770530d","Type":"ContainerDied","Data":"fb03cda4dbb4a6da9f0c9b7fbb7e779cb45a27447d81ddfdce6d47f761b36ab3"}
Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.556789 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5nvtk" event={"ID":"0d634623-470e-42c0-b550-fac7a770530d","Type":"ContainerDied","Data":"f37da726568c42ca632af6d32d9e3d9611b9023722816b2e8519ce2aee1e843b"}
Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.556789 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5nvtk"
Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.556848 4896 scope.go:117] "RemoveContainer" containerID="fb03cda4dbb4a6da9f0c9b7fbb7e779cb45a27447d81ddfdce6d47f761b36ab3"
Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.558773 4896 generic.go:334] "Generic (PLEG): container finished" podID="37cb3473-29d9-40ae-be5a-5ee548397d58" containerID="cfbc4677becaeefe6879f65ce98e70ec51e5f999f5123d166a5329415a2d4c77" exitCode=0
Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.558823 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-772nw" event={"ID":"37cb3473-29d9-40ae-be5a-5ee548397d58","Type":"ContainerDied","Data":"cfbc4677becaeefe6879f65ce98e70ec51e5f999f5123d166a5329415a2d4c77"}
Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.558829 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-772nw"
Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.558841 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-772nw" event={"ID":"37cb3473-29d9-40ae-be5a-5ee548397d58","Type":"ContainerDied","Data":"88707bc9ea71b576c224171e5a7f689b0910b7612a85b4dbd1ee93fc73506b4c"}
Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.591748 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5nvtk"]
Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.594311 4896 scope.go:117] "RemoveContainer" containerID="fb03cda4dbb4a6da9f0c9b7fbb7e779cb45a27447d81ddfdce6d47f761b36ab3"
Jan 26 15:39:26 crc kubenswrapper[4896]: E0126 15:39:26.594935 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not
find container \"fb03cda4dbb4a6da9f0c9b7fbb7e779cb45a27447d81ddfdce6d47f761b36ab3\": container with ID starting with fb03cda4dbb4a6da9f0c9b7fbb7e779cb45a27447d81ddfdce6d47f761b36ab3 not found: ID does not exist" containerID="fb03cda4dbb4a6da9f0c9b7fbb7e779cb45a27447d81ddfdce6d47f761b36ab3" Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.594982 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb03cda4dbb4a6da9f0c9b7fbb7e779cb45a27447d81ddfdce6d47f761b36ab3"} err="failed to get container status \"fb03cda4dbb4a6da9f0c9b7fbb7e779cb45a27447d81ddfdce6d47f761b36ab3\": rpc error: code = NotFound desc = could not find container \"fb03cda4dbb4a6da9f0c9b7fbb7e779cb45a27447d81ddfdce6d47f761b36ab3\": container with ID starting with fb03cda4dbb4a6da9f0c9b7fbb7e779cb45a27447d81ddfdce6d47f761b36ab3 not found: ID does not exist" Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.595010 4896 scope.go:117] "RemoveContainer" containerID="cfbc4677becaeefe6879f65ce98e70ec51e5f999f5123d166a5329415a2d4c77" Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.597531 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5nvtk"] Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.603513 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-772nw"] Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.605385 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-772nw"] Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.616244 4896 scope.go:117] "RemoveContainer" containerID="cfbc4677becaeefe6879f65ce98e70ec51e5f999f5123d166a5329415a2d4c77" Jan 26 15:39:26 crc kubenswrapper[4896]: E0126 15:39:26.617937 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"cfbc4677becaeefe6879f65ce98e70ec51e5f999f5123d166a5329415a2d4c77\": container with ID starting with cfbc4677becaeefe6879f65ce98e70ec51e5f999f5123d166a5329415a2d4c77 not found: ID does not exist" containerID="cfbc4677becaeefe6879f65ce98e70ec51e5f999f5123d166a5329415a2d4c77" Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.618001 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfbc4677becaeefe6879f65ce98e70ec51e5f999f5123d166a5329415a2d4c77"} err="failed to get container status \"cfbc4677becaeefe6879f65ce98e70ec51e5f999f5123d166a5329415a2d4c77\": rpc error: code = NotFound desc = could not find container \"cfbc4677becaeefe6879f65ce98e70ec51e5f999f5123d166a5329415a2d4c77\": container with ID starting with cfbc4677becaeefe6879f65ce98e70ec51e5f999f5123d166a5329415a2d4c77 not found: ID does not exist" Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.695839 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.765820 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d634623-470e-42c0-b550-fac7a770530d" path="/var/lib/kubelet/pods/0d634623-470e-42c0-b550-fac7a770530d/volumes" Jan 26 15:39:26 crc kubenswrapper[4896]: I0126 15:39:26.766911 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37cb3473-29d9-40ae-be5a-5ee548397d58" path="/var/lib/kubelet/pods/37cb3473-29d9-40ae-be5a-5ee548397d58/volumes" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.082435 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ffb978cbf-qt9mh"] Jan 26 15:39:27 crc kubenswrapper[4896]: E0126 15:39:27.082709 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d634623-470e-42c0-b550-fac7a770530d" containerName="controller-manager" Jan 26 15:39:27 
crc kubenswrapper[4896]: I0126 15:39:27.082722 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d634623-470e-42c0-b550-fac7a770530d" containerName="controller-manager" Jan 26 15:39:27 crc kubenswrapper[4896]: E0126 15:39:27.082733 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37cb3473-29d9-40ae-be5a-5ee548397d58" containerName="route-controller-manager" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.082739 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="37cb3473-29d9-40ae-be5a-5ee548397d58" containerName="route-controller-manager" Jan 26 15:39:27 crc kubenswrapper[4896]: E0126 15:39:27.082754 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59bc256c-1090-4120-8803-f24252f01812" containerName="installer" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.082761 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="59bc256c-1090-4120-8803-f24252f01812" containerName="installer" Jan 26 15:39:27 crc kubenswrapper[4896]: E0126 15:39:27.082774 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.082779 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.082861 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="59bc256c-1090-4120-8803-f24252f01812" containerName="installer" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.082872 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.082882 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d634623-470e-42c0-b550-fac7a770530d" containerName="controller-manager" Jan 26 15:39:27 crc 
kubenswrapper[4896]: I0126 15:39:27.082892 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="37cb3473-29d9-40ae-be5a-5ee548397d58" containerName="route-controller-manager" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.083260 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-ffb978cbf-qt9mh" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.086601 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-56c799745b-zlfc2"] Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.087343 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56c799745b-zlfc2" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.090381 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.090596 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.093539 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.094092 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.094335 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.094708 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 
15:39:27.094808 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.094955 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.095508 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.095697 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.096354 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.096523 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.122279 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.125456 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56c799745b-zlfc2"] Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.131156 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef0e1530-afdd-48c8-84cc-e0781132dda4-config\") pod \"controller-manager-56c799745b-zlfc2\" (UID: \"ef0e1530-afdd-48c8-84cc-e0781132dda4\") " pod="openshift-controller-manager/controller-manager-56c799745b-zlfc2" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.131438 4896 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-969dn\" (UniqueName: \"kubernetes.io/projected/ef0e1530-afdd-48c8-84cc-e0781132dda4-kube-api-access-969dn\") pod \"controller-manager-56c799745b-zlfc2\" (UID: \"ef0e1530-afdd-48c8-84cc-e0781132dda4\") " pod="openshift-controller-manager/controller-manager-56c799745b-zlfc2" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.131556 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef0e1530-afdd-48c8-84cc-e0781132dda4-client-ca\") pod \"controller-manager-56c799745b-zlfc2\" (UID: \"ef0e1530-afdd-48c8-84cc-e0781132dda4\") " pod="openshift-controller-manager/controller-manager-56c799745b-zlfc2" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.131662 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a9a9904-0d54-472b-a5ce-24da454392c9-client-ca\") pod \"route-controller-manager-ffb978cbf-qt9mh\" (UID: \"9a9a9904-0d54-472b-a5ce-24da454392c9\") " pod="openshift-route-controller-manager/route-controller-manager-ffb978cbf-qt9mh" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.131778 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a9a9904-0d54-472b-a5ce-24da454392c9-config\") pod \"route-controller-manager-ffb978cbf-qt9mh\" (UID: \"9a9a9904-0d54-472b-a5ce-24da454392c9\") " pod="openshift-route-controller-manager/route-controller-manager-ffb978cbf-qt9mh" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.131901 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef0e1530-afdd-48c8-84cc-e0781132dda4-proxy-ca-bundles\") pod 
\"controller-manager-56c799745b-zlfc2\" (UID: \"ef0e1530-afdd-48c8-84cc-e0781132dda4\") " pod="openshift-controller-manager/controller-manager-56c799745b-zlfc2" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.132046 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef0e1530-afdd-48c8-84cc-e0781132dda4-serving-cert\") pod \"controller-manager-56c799745b-zlfc2\" (UID: \"ef0e1530-afdd-48c8-84cc-e0781132dda4\") " pod="openshift-controller-manager/controller-manager-56c799745b-zlfc2" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.132161 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a9a9904-0d54-472b-a5ce-24da454392c9-serving-cert\") pod \"route-controller-manager-ffb978cbf-qt9mh\" (UID: \"9a9a9904-0d54-472b-a5ce-24da454392c9\") " pod="openshift-route-controller-manager/route-controller-manager-ffb978cbf-qt9mh" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.132304 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv7ft\" (UniqueName: \"kubernetes.io/projected/9a9a9904-0d54-472b-a5ce-24da454392c9-kube-api-access-bv7ft\") pod \"route-controller-manager-ffb978cbf-qt9mh\" (UID: \"9a9a9904-0d54-472b-a5ce-24da454392c9\") " pod="openshift-route-controller-manager/route-controller-manager-ffb978cbf-qt9mh" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.158764 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ffb978cbf-qt9mh"] Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.233411 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-969dn\" (UniqueName: \"kubernetes.io/projected/ef0e1530-afdd-48c8-84cc-e0781132dda4-kube-api-access-969dn\") pod 
\"controller-manager-56c799745b-zlfc2\" (UID: \"ef0e1530-afdd-48c8-84cc-e0781132dda4\") " pod="openshift-controller-manager/controller-manager-56c799745b-zlfc2" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.233492 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef0e1530-afdd-48c8-84cc-e0781132dda4-client-ca\") pod \"controller-manager-56c799745b-zlfc2\" (UID: \"ef0e1530-afdd-48c8-84cc-e0781132dda4\") " pod="openshift-controller-manager/controller-manager-56c799745b-zlfc2" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.233542 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a9a9904-0d54-472b-a5ce-24da454392c9-client-ca\") pod \"route-controller-manager-ffb978cbf-qt9mh\" (UID: \"9a9a9904-0d54-472b-a5ce-24da454392c9\") " pod="openshift-route-controller-manager/route-controller-manager-ffb978cbf-qt9mh" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.233568 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a9a9904-0d54-472b-a5ce-24da454392c9-config\") pod \"route-controller-manager-ffb978cbf-qt9mh\" (UID: \"9a9a9904-0d54-472b-a5ce-24da454392c9\") " pod="openshift-route-controller-manager/route-controller-manager-ffb978cbf-qt9mh" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.233952 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef0e1530-afdd-48c8-84cc-e0781132dda4-proxy-ca-bundles\") pod \"controller-manager-56c799745b-zlfc2\" (UID: \"ef0e1530-afdd-48c8-84cc-e0781132dda4\") " pod="openshift-controller-manager/controller-manager-56c799745b-zlfc2" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.234770 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/9a9a9904-0d54-472b-a5ce-24da454392c9-client-ca\") pod \"route-controller-manager-ffb978cbf-qt9mh\" (UID: \"9a9a9904-0d54-472b-a5ce-24da454392c9\") " pod="openshift-route-controller-manager/route-controller-manager-ffb978cbf-qt9mh" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.234844 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef0e1530-afdd-48c8-84cc-e0781132dda4-client-ca\") pod \"controller-manager-56c799745b-zlfc2\" (UID: \"ef0e1530-afdd-48c8-84cc-e0781132dda4\") " pod="openshift-controller-manager/controller-manager-56c799745b-zlfc2" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.235103 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef0e1530-afdd-48c8-84cc-e0781132dda4-proxy-ca-bundles\") pod \"controller-manager-56c799745b-zlfc2\" (UID: \"ef0e1530-afdd-48c8-84cc-e0781132dda4\") " pod="openshift-controller-manager/controller-manager-56c799745b-zlfc2" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.235188 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a9a9904-0d54-472b-a5ce-24da454392c9-config\") pod \"route-controller-manager-ffb978cbf-qt9mh\" (UID: \"9a9a9904-0d54-472b-a5ce-24da454392c9\") " pod="openshift-route-controller-manager/route-controller-manager-ffb978cbf-qt9mh" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.235231 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef0e1530-afdd-48c8-84cc-e0781132dda4-serving-cert\") pod \"controller-manager-56c799745b-zlfc2\" (UID: \"ef0e1530-afdd-48c8-84cc-e0781132dda4\") " pod="openshift-controller-manager/controller-manager-56c799745b-zlfc2" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.235285 4896 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a9a9904-0d54-472b-a5ce-24da454392c9-serving-cert\") pod \"route-controller-manager-ffb978cbf-qt9mh\" (UID: \"9a9a9904-0d54-472b-a5ce-24da454392c9\") " pod="openshift-route-controller-manager/route-controller-manager-ffb978cbf-qt9mh" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.235317 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv7ft\" (UniqueName: \"kubernetes.io/projected/9a9a9904-0d54-472b-a5ce-24da454392c9-kube-api-access-bv7ft\") pod \"route-controller-manager-ffb978cbf-qt9mh\" (UID: \"9a9a9904-0d54-472b-a5ce-24da454392c9\") " pod="openshift-route-controller-manager/route-controller-manager-ffb978cbf-qt9mh" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.235728 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef0e1530-afdd-48c8-84cc-e0781132dda4-config\") pod \"controller-manager-56c799745b-zlfc2\" (UID: \"ef0e1530-afdd-48c8-84cc-e0781132dda4\") " pod="openshift-controller-manager/controller-manager-56c799745b-zlfc2" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.237314 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef0e1530-afdd-48c8-84cc-e0781132dda4-config\") pod \"controller-manager-56c799745b-zlfc2\" (UID: \"ef0e1530-afdd-48c8-84cc-e0781132dda4\") " pod="openshift-controller-manager/controller-manager-56c799745b-zlfc2" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.242942 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef0e1530-afdd-48c8-84cc-e0781132dda4-serving-cert\") pod \"controller-manager-56c799745b-zlfc2\" (UID: \"ef0e1530-afdd-48c8-84cc-e0781132dda4\") " 
pod="openshift-controller-manager/controller-manager-56c799745b-zlfc2" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.242970 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a9a9904-0d54-472b-a5ce-24da454392c9-serving-cert\") pod \"route-controller-manager-ffb978cbf-qt9mh\" (UID: \"9a9a9904-0d54-472b-a5ce-24da454392c9\") " pod="openshift-route-controller-manager/route-controller-manager-ffb978cbf-qt9mh" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.252234 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-969dn\" (UniqueName: \"kubernetes.io/projected/ef0e1530-afdd-48c8-84cc-e0781132dda4-kube-api-access-969dn\") pod \"controller-manager-56c799745b-zlfc2\" (UID: \"ef0e1530-afdd-48c8-84cc-e0781132dda4\") " pod="openshift-controller-manager/controller-manager-56c799745b-zlfc2" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.261693 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv7ft\" (UniqueName: \"kubernetes.io/projected/9a9a9904-0d54-472b-a5ce-24da454392c9-kube-api-access-bv7ft\") pod \"route-controller-manager-ffb978cbf-qt9mh\" (UID: \"9a9a9904-0d54-472b-a5ce-24da454392c9\") " pod="openshift-route-controller-manager/route-controller-manager-ffb978cbf-qt9mh" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.409176 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-ffb978cbf-qt9mh" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.419246 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-56c799745b-zlfc2" Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.678176 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56c799745b-zlfc2"] Jan 26 15:39:27 crc kubenswrapper[4896]: I0126 15:39:27.714510 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ffb978cbf-qt9mh"] Jan 26 15:39:27 crc kubenswrapper[4896]: W0126 15:39:27.722971 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a9a9904_0d54_472b_a5ce_24da454392c9.slice/crio-e8a40aaacd2dbdec1298e4a67e3c87776cd5ad3a005b41a48f5c8c4033256cb0 WatchSource:0}: Error finding container e8a40aaacd2dbdec1298e4a67e3c87776cd5ad3a005b41a48f5c8c4033256cb0: Status 404 returned error can't find the container with id e8a40aaacd2dbdec1298e4a67e3c87776cd5ad3a005b41a48f5c8c4033256cb0 Jan 26 15:39:28 crc kubenswrapper[4896]: I0126 15:39:28.573702 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-ffb978cbf-qt9mh" event={"ID":"9a9a9904-0d54-472b-a5ce-24da454392c9","Type":"ContainerStarted","Data":"caf078b3c0971eb9c854c06715968e5d8c86945f054bc349bf69130a91914280"} Jan 26 15:39:28 crc kubenswrapper[4896]: I0126 15:39:28.573971 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-ffb978cbf-qt9mh" event={"ID":"9a9a9904-0d54-472b-a5ce-24da454392c9","Type":"ContainerStarted","Data":"e8a40aaacd2dbdec1298e4a67e3c87776cd5ad3a005b41a48f5c8c4033256cb0"} Jan 26 15:39:28 crc kubenswrapper[4896]: I0126 15:39:28.573993 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-ffb978cbf-qt9mh" Jan 26 15:39:28 crc kubenswrapper[4896]: I0126 
15:39:28.575628 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56c799745b-zlfc2" event={"ID":"ef0e1530-afdd-48c8-84cc-e0781132dda4","Type":"ContainerStarted","Data":"9a48e028e1bb33d0aeea9a47eb7ee44681ef387c7a9a7734702a34faca4ad015"} Jan 26 15:39:28 crc kubenswrapper[4896]: I0126 15:39:28.575693 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56c799745b-zlfc2" event={"ID":"ef0e1530-afdd-48c8-84cc-e0781132dda4","Type":"ContainerStarted","Data":"33f8de4b0075e997090585a3c9551da9d60ebb274f6eff1f63a7fe1b3dc986c2"} Jan 26 15:39:28 crc kubenswrapper[4896]: I0126 15:39:28.575861 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-56c799745b-zlfc2" Jan 26 15:39:28 crc kubenswrapper[4896]: I0126 15:39:28.579361 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-ffb978cbf-qt9mh" Jan 26 15:39:28 crc kubenswrapper[4896]: I0126 15:39:28.582307 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-56c799745b-zlfc2" Jan 26 15:39:28 crc kubenswrapper[4896]: I0126 15:39:28.590970 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-ffb978cbf-qt9mh" podStartSLOduration=3.590945685 podStartE2EDuration="3.590945685s" podCreationTimestamp="2026-01-26 15:39:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:39:28.588504467 +0000 UTC m=+326.370384860" watchObservedRunningTime="2026-01-26 15:39:28.590945685 +0000 UTC m=+326.372826078" Jan 26 15:39:28 crc kubenswrapper[4896]: I0126 15:39:28.608628 4896 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-controller-manager/controller-manager-56c799745b-zlfc2" podStartSLOduration=3.6086083589999998 podStartE2EDuration="3.608608359s" podCreationTimestamp="2026-01-26 15:39:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:39:28.604325632 +0000 UTC m=+326.386206025" watchObservedRunningTime="2026-01-26 15:39:28.608608359 +0000 UTC m=+326.390488752" Jan 26 15:39:31 crc kubenswrapper[4896]: I0126 15:39:31.182887 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 26 15:39:31 crc kubenswrapper[4896]: I0126 15:39:31.764272 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 26 15:39:34 crc kubenswrapper[4896]: I0126 15:39:34.534227 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-56c799745b-zlfc2"] Jan 26 15:39:34 crc kubenswrapper[4896]: I0126 15:39:34.534543 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-56c799745b-zlfc2" podUID="ef0e1530-afdd-48c8-84cc-e0781132dda4" containerName="controller-manager" containerID="cri-o://9a48e028e1bb33d0aeea9a47eb7ee44681ef387c7a9a7734702a34faca4ad015" gracePeriod=30 Jan 26 15:39:34 crc kubenswrapper[4896]: I0126 15:39:34.551333 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ffb978cbf-qt9mh"] Jan 26 15:39:34 crc kubenswrapper[4896]: I0126 15:39:34.551550 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-ffb978cbf-qt9mh" podUID="9a9a9904-0d54-472b-a5ce-24da454392c9" containerName="route-controller-manager" 
containerID="cri-o://caf078b3c0971eb9c854c06715968e5d8c86945f054bc349bf69130a91914280" gracePeriod=30 Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.002427 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-ffb978cbf-qt9mh" Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.050198 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56c799745b-zlfc2" Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.150061 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a9a9904-0d54-472b-a5ce-24da454392c9-serving-cert\") pod \"9a9a9904-0d54-472b-a5ce-24da454392c9\" (UID: \"9a9a9904-0d54-472b-a5ce-24da454392c9\") " Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.150131 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a9a9904-0d54-472b-a5ce-24da454392c9-config\") pod \"9a9a9904-0d54-472b-a5ce-24da454392c9\" (UID: \"9a9a9904-0d54-472b-a5ce-24da454392c9\") " Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.150167 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef0e1530-afdd-48c8-84cc-e0781132dda4-proxy-ca-bundles\") pod \"ef0e1530-afdd-48c8-84cc-e0781132dda4\" (UID: \"ef0e1530-afdd-48c8-84cc-e0781132dda4\") " Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.150214 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bv7ft\" (UniqueName: \"kubernetes.io/projected/9a9a9904-0d54-472b-a5ce-24da454392c9-kube-api-access-bv7ft\") pod \"9a9a9904-0d54-472b-a5ce-24da454392c9\" (UID: \"9a9a9904-0d54-472b-a5ce-24da454392c9\") " Jan 26 15:39:35 crc kubenswrapper[4896]: 
I0126 15:39:35.150238 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef0e1530-afdd-48c8-84cc-e0781132dda4-config\") pod \"ef0e1530-afdd-48c8-84cc-e0781132dda4\" (UID: \"ef0e1530-afdd-48c8-84cc-e0781132dda4\") " Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.150261 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a9a9904-0d54-472b-a5ce-24da454392c9-client-ca\") pod \"9a9a9904-0d54-472b-a5ce-24da454392c9\" (UID: \"9a9a9904-0d54-472b-a5ce-24da454392c9\") " Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.150341 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-969dn\" (UniqueName: \"kubernetes.io/projected/ef0e1530-afdd-48c8-84cc-e0781132dda4-kube-api-access-969dn\") pod \"ef0e1530-afdd-48c8-84cc-e0781132dda4\" (UID: \"ef0e1530-afdd-48c8-84cc-e0781132dda4\") " Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.150387 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef0e1530-afdd-48c8-84cc-e0781132dda4-serving-cert\") pod \"ef0e1530-afdd-48c8-84cc-e0781132dda4\" (UID: \"ef0e1530-afdd-48c8-84cc-e0781132dda4\") " Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.150967 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef0e1530-afdd-48c8-84cc-e0781132dda4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ef0e1530-afdd-48c8-84cc-e0781132dda4" (UID: "ef0e1530-afdd-48c8-84cc-e0781132dda4"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.151038 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a9a9904-0d54-472b-a5ce-24da454392c9-config" (OuterVolumeSpecName: "config") pod "9a9a9904-0d54-472b-a5ce-24da454392c9" (UID: "9a9a9904-0d54-472b-a5ce-24da454392c9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.151060 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef0e1530-afdd-48c8-84cc-e0781132dda4-config" (OuterVolumeSpecName: "config") pod "ef0e1530-afdd-48c8-84cc-e0781132dda4" (UID: "ef0e1530-afdd-48c8-84cc-e0781132dda4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.151531 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a9a9904-0d54-472b-a5ce-24da454392c9-client-ca" (OuterVolumeSpecName: "client-ca") pod "9a9a9904-0d54-472b-a5ce-24da454392c9" (UID: "9a9a9904-0d54-472b-a5ce-24da454392c9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.157957 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef0e1530-afdd-48c8-84cc-e0781132dda4-kube-api-access-969dn" (OuterVolumeSpecName: "kube-api-access-969dn") pod "ef0e1530-afdd-48c8-84cc-e0781132dda4" (UID: "ef0e1530-afdd-48c8-84cc-e0781132dda4"). InnerVolumeSpecName "kube-api-access-969dn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.157954 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a9a9904-0d54-472b-a5ce-24da454392c9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9a9a9904-0d54-472b-a5ce-24da454392c9" (UID: "9a9a9904-0d54-472b-a5ce-24da454392c9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.157984 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef0e1530-afdd-48c8-84cc-e0781132dda4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ef0e1530-afdd-48c8-84cc-e0781132dda4" (UID: "ef0e1530-afdd-48c8-84cc-e0781132dda4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.158062 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a9a9904-0d54-472b-a5ce-24da454392c9-kube-api-access-bv7ft" (OuterVolumeSpecName: "kube-api-access-bv7ft") pod "9a9a9904-0d54-472b-a5ce-24da454392c9" (UID: "9a9a9904-0d54-472b-a5ce-24da454392c9"). InnerVolumeSpecName "kube-api-access-bv7ft". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.251163 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef0e1530-afdd-48c8-84cc-e0781132dda4-client-ca\") pod \"ef0e1530-afdd-48c8-84cc-e0781132dda4\" (UID: \"ef0e1530-afdd-48c8-84cc-e0781132dda4\") " Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.251533 4896 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a9a9904-0d54-472b-a5ce-24da454392c9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.251559 4896 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a9a9904-0d54-472b-a5ce-24da454392c9-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.251571 4896 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef0e1530-afdd-48c8-84cc-e0781132dda4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.251604 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bv7ft\" (UniqueName: \"kubernetes.io/projected/9a9a9904-0d54-472b-a5ce-24da454392c9-kube-api-access-bv7ft\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.251617 4896 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef0e1530-afdd-48c8-84cc-e0781132dda4-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.251627 4896 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a9a9904-0d54-472b-a5ce-24da454392c9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:35 crc 
kubenswrapper[4896]: I0126 15:39:35.251641 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-969dn\" (UniqueName: \"kubernetes.io/projected/ef0e1530-afdd-48c8-84cc-e0781132dda4-kube-api-access-969dn\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.251652 4896 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef0e1530-afdd-48c8-84cc-e0781132dda4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.251805 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef0e1530-afdd-48c8-84cc-e0781132dda4-client-ca" (OuterVolumeSpecName: "client-ca") pod "ef0e1530-afdd-48c8-84cc-e0781132dda4" (UID: "ef0e1530-afdd-48c8-84cc-e0781132dda4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.353208 4896 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef0e1530-afdd-48c8-84cc-e0781132dda4-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.618505 4896 generic.go:334] "Generic (PLEG): container finished" podID="ef0e1530-afdd-48c8-84cc-e0781132dda4" containerID="9a48e028e1bb33d0aeea9a47eb7ee44681ef387c7a9a7734702a34faca4ad015" exitCode=0 Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.618568 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56c799745b-zlfc2" event={"ID":"ef0e1530-afdd-48c8-84cc-e0781132dda4","Type":"ContainerDied","Data":"9a48e028e1bb33d0aeea9a47eb7ee44681ef387c7a9a7734702a34faca4ad015"} Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.618638 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56c799745b-zlfc2" 
event={"ID":"ef0e1530-afdd-48c8-84cc-e0781132dda4","Type":"ContainerDied","Data":"33f8de4b0075e997090585a3c9551da9d60ebb274f6eff1f63a7fe1b3dc986c2"} Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.618594 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56c799745b-zlfc2" Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.618669 4896 scope.go:117] "RemoveContainer" containerID="9a48e028e1bb33d0aeea9a47eb7ee44681ef387c7a9a7734702a34faca4ad015" Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.622956 4896 generic.go:334] "Generic (PLEG): container finished" podID="9a9a9904-0d54-472b-a5ce-24da454392c9" containerID="caf078b3c0971eb9c854c06715968e5d8c86945f054bc349bf69130a91914280" exitCode=0 Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.622984 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-ffb978cbf-qt9mh" event={"ID":"9a9a9904-0d54-472b-a5ce-24da454392c9","Type":"ContainerDied","Data":"caf078b3c0971eb9c854c06715968e5d8c86945f054bc349bf69130a91914280"} Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.623008 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-ffb978cbf-qt9mh" event={"ID":"9a9a9904-0d54-472b-a5ce-24da454392c9","Type":"ContainerDied","Data":"e8a40aaacd2dbdec1298e4a67e3c87776cd5ad3a005b41a48f5c8c4033256cb0"} Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.623055 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-ffb978cbf-qt9mh" Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.645252 4896 scope.go:117] "RemoveContainer" containerID="9a48e028e1bb33d0aeea9a47eb7ee44681ef387c7a9a7734702a34faca4ad015" Jan 26 15:39:35 crc kubenswrapper[4896]: E0126 15:39:35.646062 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a48e028e1bb33d0aeea9a47eb7ee44681ef387c7a9a7734702a34faca4ad015\": container with ID starting with 9a48e028e1bb33d0aeea9a47eb7ee44681ef387c7a9a7734702a34faca4ad015 not found: ID does not exist" containerID="9a48e028e1bb33d0aeea9a47eb7ee44681ef387c7a9a7734702a34faca4ad015" Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.646109 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a48e028e1bb33d0aeea9a47eb7ee44681ef387c7a9a7734702a34faca4ad015"} err="failed to get container status \"9a48e028e1bb33d0aeea9a47eb7ee44681ef387c7a9a7734702a34faca4ad015\": rpc error: code = NotFound desc = could not find container \"9a48e028e1bb33d0aeea9a47eb7ee44681ef387c7a9a7734702a34faca4ad015\": container with ID starting with 9a48e028e1bb33d0aeea9a47eb7ee44681ef387c7a9a7734702a34faca4ad015 not found: ID does not exist" Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.646131 4896 scope.go:117] "RemoveContainer" containerID="caf078b3c0971eb9c854c06715968e5d8c86945f054bc349bf69130a91914280" Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.649911 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-56c799745b-zlfc2"] Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.662260 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-56c799745b-zlfc2"] Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.671827 4896 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ffb978cbf-qt9mh"] Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.678424 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ffb978cbf-qt9mh"] Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.678780 4896 scope.go:117] "RemoveContainer" containerID="caf078b3c0971eb9c854c06715968e5d8c86945f054bc349bf69130a91914280" Jan 26 15:39:35 crc kubenswrapper[4896]: E0126 15:39:35.679202 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caf078b3c0971eb9c854c06715968e5d8c86945f054bc349bf69130a91914280\": container with ID starting with caf078b3c0971eb9c854c06715968e5d8c86945f054bc349bf69130a91914280 not found: ID does not exist" containerID="caf078b3c0971eb9c854c06715968e5d8c86945f054bc349bf69130a91914280" Jan 26 15:39:35 crc kubenswrapper[4896]: I0126 15:39:35.679225 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caf078b3c0971eb9c854c06715968e5d8c86945f054bc349bf69130a91914280"} err="failed to get container status \"caf078b3c0971eb9c854c06715968e5d8c86945f054bc349bf69130a91914280\": rpc error: code = NotFound desc = could not find container \"caf078b3c0971eb9c854c06715968e5d8c86945f054bc349bf69130a91914280\": container with ID starting with caf078b3c0971eb9c854c06715968e5d8c86945f054bc349bf69130a91914280 not found: ID does not exist" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.088728 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7b7b5dc8c8-j5lsk"] Jan 26 15:39:36 crc kubenswrapper[4896]: E0126 15:39:36.089073 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef0e1530-afdd-48c8-84cc-e0781132dda4" containerName="controller-manager" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 
15:39:36.089099 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef0e1530-afdd-48c8-84cc-e0781132dda4" containerName="controller-manager" Jan 26 15:39:36 crc kubenswrapper[4896]: E0126 15:39:36.089124 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a9a9904-0d54-472b-a5ce-24da454392c9" containerName="route-controller-manager" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.089138 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a9a9904-0d54-472b-a5ce-24da454392c9" containerName="route-controller-manager" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.089348 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a9a9904-0d54-472b-a5ce-24da454392c9" containerName="route-controller-manager" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.089398 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef0e1530-afdd-48c8-84cc-e0781132dda4" containerName="controller-manager" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.090099 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7b7b5dc8c8-j5lsk" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.092564 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.092901 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.092944 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.093075 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.093398 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.093873 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67855659d4-nktx6"] Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.094711 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67855659d4-nktx6" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.096400 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.096983 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.097288 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.098042 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.098082 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.098185 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.098331 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.107913 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.115350 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67855659d4-nktx6"] Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.122392 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-7b7b5dc8c8-j5lsk"] Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.164768 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f76caded-91d8-4e0c-80c4-f6423d1178e6-proxy-ca-bundles\") pod \"controller-manager-7b7b5dc8c8-j5lsk\" (UID: \"f76caded-91d8-4e0c-80c4-f6423d1178e6\") " pod="openshift-controller-manager/controller-manager-7b7b5dc8c8-j5lsk" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.164820 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e90e516b-971e-4f8e-84d7-6a290b1abae3-config\") pod \"route-controller-manager-67855659d4-nktx6\" (UID: \"e90e516b-971e-4f8e-84d7-6a290b1abae3\") " pod="openshift-route-controller-manager/route-controller-manager-67855659d4-nktx6" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.164850 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f76caded-91d8-4e0c-80c4-f6423d1178e6-serving-cert\") pod \"controller-manager-7b7b5dc8c8-j5lsk\" (UID: \"f76caded-91d8-4e0c-80c4-f6423d1178e6\") " pod="openshift-controller-manager/controller-manager-7b7b5dc8c8-j5lsk" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.164867 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f76caded-91d8-4e0c-80c4-f6423d1178e6-client-ca\") pod \"controller-manager-7b7b5dc8c8-j5lsk\" (UID: \"f76caded-91d8-4e0c-80c4-f6423d1178e6\") " pod="openshift-controller-manager/controller-manager-7b7b5dc8c8-j5lsk" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.164883 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/f76caded-91d8-4e0c-80c4-f6423d1178e6-config\") pod \"controller-manager-7b7b5dc8c8-j5lsk\" (UID: \"f76caded-91d8-4e0c-80c4-f6423d1178e6\") " pod="openshift-controller-manager/controller-manager-7b7b5dc8c8-j5lsk" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.164898 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e90e516b-971e-4f8e-84d7-6a290b1abae3-serving-cert\") pod \"route-controller-manager-67855659d4-nktx6\" (UID: \"e90e516b-971e-4f8e-84d7-6a290b1abae3\") " pod="openshift-route-controller-manager/route-controller-manager-67855659d4-nktx6" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.164964 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e90e516b-971e-4f8e-84d7-6a290b1abae3-client-ca\") pod \"route-controller-manager-67855659d4-nktx6\" (UID: \"e90e516b-971e-4f8e-84d7-6a290b1abae3\") " pod="openshift-route-controller-manager/route-controller-manager-67855659d4-nktx6" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.165024 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w86w8\" (UniqueName: \"kubernetes.io/projected/f76caded-91d8-4e0c-80c4-f6423d1178e6-kube-api-access-w86w8\") pod \"controller-manager-7b7b5dc8c8-j5lsk\" (UID: \"f76caded-91d8-4e0c-80c4-f6423d1178e6\") " pod="openshift-controller-manager/controller-manager-7b7b5dc8c8-j5lsk" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.165051 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5ltr\" (UniqueName: \"kubernetes.io/projected/e90e516b-971e-4f8e-84d7-6a290b1abae3-kube-api-access-z5ltr\") pod \"route-controller-manager-67855659d4-nktx6\" (UID: 
\"e90e516b-971e-4f8e-84d7-6a290b1abae3\") " pod="openshift-route-controller-manager/route-controller-manager-67855659d4-nktx6" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.266101 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e90e516b-971e-4f8e-84d7-6a290b1abae3-client-ca\") pod \"route-controller-manager-67855659d4-nktx6\" (UID: \"e90e516b-971e-4f8e-84d7-6a290b1abae3\") " pod="openshift-route-controller-manager/route-controller-manager-67855659d4-nktx6" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.266393 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w86w8\" (UniqueName: \"kubernetes.io/projected/f76caded-91d8-4e0c-80c4-f6423d1178e6-kube-api-access-w86w8\") pod \"controller-manager-7b7b5dc8c8-j5lsk\" (UID: \"f76caded-91d8-4e0c-80c4-f6423d1178e6\") " pod="openshift-controller-manager/controller-manager-7b7b5dc8c8-j5lsk" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.266416 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5ltr\" (UniqueName: \"kubernetes.io/projected/e90e516b-971e-4f8e-84d7-6a290b1abae3-kube-api-access-z5ltr\") pod \"route-controller-manager-67855659d4-nktx6\" (UID: \"e90e516b-971e-4f8e-84d7-6a290b1abae3\") " pod="openshift-route-controller-manager/route-controller-manager-67855659d4-nktx6" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.266435 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f76caded-91d8-4e0c-80c4-f6423d1178e6-proxy-ca-bundles\") pod \"controller-manager-7b7b5dc8c8-j5lsk\" (UID: \"f76caded-91d8-4e0c-80c4-f6423d1178e6\") " pod="openshift-controller-manager/controller-manager-7b7b5dc8c8-j5lsk" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.266479 4896 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e90e516b-971e-4f8e-84d7-6a290b1abae3-config\") pod \"route-controller-manager-67855659d4-nktx6\" (UID: \"e90e516b-971e-4f8e-84d7-6a290b1abae3\") " pod="openshift-route-controller-manager/route-controller-manager-67855659d4-nktx6" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.266512 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f76caded-91d8-4e0c-80c4-f6423d1178e6-serving-cert\") pod \"controller-manager-7b7b5dc8c8-j5lsk\" (UID: \"f76caded-91d8-4e0c-80c4-f6423d1178e6\") " pod="openshift-controller-manager/controller-manager-7b7b5dc8c8-j5lsk" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.266536 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f76caded-91d8-4e0c-80c4-f6423d1178e6-client-ca\") pod \"controller-manager-7b7b5dc8c8-j5lsk\" (UID: \"f76caded-91d8-4e0c-80c4-f6423d1178e6\") " pod="openshift-controller-manager/controller-manager-7b7b5dc8c8-j5lsk" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.266603 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f76caded-91d8-4e0c-80c4-f6423d1178e6-config\") pod \"controller-manager-7b7b5dc8c8-j5lsk\" (UID: \"f76caded-91d8-4e0c-80c4-f6423d1178e6\") " pod="openshift-controller-manager/controller-manager-7b7b5dc8c8-j5lsk" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.266625 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e90e516b-971e-4f8e-84d7-6a290b1abae3-serving-cert\") pod \"route-controller-manager-67855659d4-nktx6\" (UID: \"e90e516b-971e-4f8e-84d7-6a290b1abae3\") " pod="openshift-route-controller-manager/route-controller-manager-67855659d4-nktx6" Jan 26 15:39:36 crc 
kubenswrapper[4896]: I0126 15:39:36.267679 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f76caded-91d8-4e0c-80c4-f6423d1178e6-client-ca\") pod \"controller-manager-7b7b5dc8c8-j5lsk\" (UID: \"f76caded-91d8-4e0c-80c4-f6423d1178e6\") " pod="openshift-controller-manager/controller-manager-7b7b5dc8c8-j5lsk" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.268208 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f76caded-91d8-4e0c-80c4-f6423d1178e6-config\") pod \"controller-manager-7b7b5dc8c8-j5lsk\" (UID: \"f76caded-91d8-4e0c-80c4-f6423d1178e6\") " pod="openshift-controller-manager/controller-manager-7b7b5dc8c8-j5lsk" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.268442 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f76caded-91d8-4e0c-80c4-f6423d1178e6-proxy-ca-bundles\") pod \"controller-manager-7b7b5dc8c8-j5lsk\" (UID: \"f76caded-91d8-4e0c-80c4-f6423d1178e6\") " pod="openshift-controller-manager/controller-manager-7b7b5dc8c8-j5lsk" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.268956 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e90e516b-971e-4f8e-84d7-6a290b1abae3-client-ca\") pod \"route-controller-manager-67855659d4-nktx6\" (UID: \"e90e516b-971e-4f8e-84d7-6a290b1abae3\") " pod="openshift-route-controller-manager/route-controller-manager-67855659d4-nktx6" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.269038 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e90e516b-971e-4f8e-84d7-6a290b1abae3-config\") pod \"route-controller-manager-67855659d4-nktx6\" (UID: \"e90e516b-971e-4f8e-84d7-6a290b1abae3\") " 
pod="openshift-route-controller-manager/route-controller-manager-67855659d4-nktx6" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.273319 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f76caded-91d8-4e0c-80c4-f6423d1178e6-serving-cert\") pod \"controller-manager-7b7b5dc8c8-j5lsk\" (UID: \"f76caded-91d8-4e0c-80c4-f6423d1178e6\") " pod="openshift-controller-manager/controller-manager-7b7b5dc8c8-j5lsk" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.279627 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e90e516b-971e-4f8e-84d7-6a290b1abae3-serving-cert\") pod \"route-controller-manager-67855659d4-nktx6\" (UID: \"e90e516b-971e-4f8e-84d7-6a290b1abae3\") " pod="openshift-route-controller-manager/route-controller-manager-67855659d4-nktx6" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.284985 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w86w8\" (UniqueName: \"kubernetes.io/projected/f76caded-91d8-4e0c-80c4-f6423d1178e6-kube-api-access-w86w8\") pod \"controller-manager-7b7b5dc8c8-j5lsk\" (UID: \"f76caded-91d8-4e0c-80c4-f6423d1178e6\") " pod="openshift-controller-manager/controller-manager-7b7b5dc8c8-j5lsk" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.285296 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5ltr\" (UniqueName: \"kubernetes.io/projected/e90e516b-971e-4f8e-84d7-6a290b1abae3-kube-api-access-z5ltr\") pod \"route-controller-manager-67855659d4-nktx6\" (UID: \"e90e516b-971e-4f8e-84d7-6a290b1abae3\") " pod="openshift-route-controller-manager/route-controller-manager-67855659d4-nktx6" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.417505 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7b7b5dc8c8-j5lsk" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.425596 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67855659d4-nktx6" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.767082 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a9a9904-0d54-472b-a5ce-24da454392c9" path="/var/lib/kubelet/pods/9a9a9904-0d54-472b-a5ce-24da454392c9/volumes" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.768072 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef0e1530-afdd-48c8-84cc-e0781132dda4" path="/var/lib/kubelet/pods/ef0e1530-afdd-48c8-84cc-e0781132dda4/volumes" Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.869683 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b7b5dc8c8-j5lsk"] Jan 26 15:39:36 crc kubenswrapper[4896]: I0126 15:39:36.882813 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67855659d4-nktx6"] Jan 26 15:39:36 crc kubenswrapper[4896]: W0126 15:39:36.885084 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode90e516b_971e_4f8e_84d7_6a290b1abae3.slice/crio-b5bd10b3c6c8114833de8d8153c638f150592e81458b48cf8310d83102770e2b WatchSource:0}: Error finding container b5bd10b3c6c8114833de8d8153c638f150592e81458b48cf8310d83102770e2b: Status 404 returned error can't find the container with id b5bd10b3c6c8114833de8d8153c638f150592e81458b48cf8310d83102770e2b Jan 26 15:39:37 crc kubenswrapper[4896]: I0126 15:39:37.643410 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67855659d4-nktx6" 
event={"ID":"e90e516b-971e-4f8e-84d7-6a290b1abae3","Type":"ContainerStarted","Data":"2aaaf563c8920d441e57918a9b9515e7498225e1586969d6ccde4c0dfece3794"} Jan 26 15:39:37 crc kubenswrapper[4896]: I0126 15:39:37.643843 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-67855659d4-nktx6" Jan 26 15:39:37 crc kubenswrapper[4896]: I0126 15:39:37.643860 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67855659d4-nktx6" event={"ID":"e90e516b-971e-4f8e-84d7-6a290b1abae3","Type":"ContainerStarted","Data":"b5bd10b3c6c8114833de8d8153c638f150592e81458b48cf8310d83102770e2b"} Jan 26 15:39:37 crc kubenswrapper[4896]: I0126 15:39:37.646617 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b7b5dc8c8-j5lsk" event={"ID":"f76caded-91d8-4e0c-80c4-f6423d1178e6","Type":"ContainerStarted","Data":"c75531be4d6f2e0d64bb2b80ca904327d93a75fcb5bde7241d3eab6c10f07994"} Jan 26 15:39:37 crc kubenswrapper[4896]: I0126 15:39:37.646664 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b7b5dc8c8-j5lsk" event={"ID":"f76caded-91d8-4e0c-80c4-f6423d1178e6","Type":"ContainerStarted","Data":"82b490df7d739e942844198509aca16cb14c46399ec6613cf65e6c8bf4751f60"} Jan 26 15:39:37 crc kubenswrapper[4896]: I0126 15:39:37.646854 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7b7b5dc8c8-j5lsk" Jan 26 15:39:37 crc kubenswrapper[4896]: I0126 15:39:37.651056 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7b7b5dc8c8-j5lsk" Jan 26 15:39:37 crc kubenswrapper[4896]: I0126 15:39:37.652401 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-67855659d4-nktx6" Jan 26 15:39:37 crc kubenswrapper[4896]: I0126 15:39:37.672060 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-67855659d4-nktx6" podStartSLOduration=3.672037662 podStartE2EDuration="3.672037662s" podCreationTimestamp="2026-01-26 15:39:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:39:37.671188119 +0000 UTC m=+335.453068502" watchObservedRunningTime="2026-01-26 15:39:37.672037662 +0000 UTC m=+335.453918055" Jan 26 15:39:37 crc kubenswrapper[4896]: I0126 15:39:37.726370 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7b7b5dc8c8-j5lsk" podStartSLOduration=3.726348415 podStartE2EDuration="3.726348415s" podCreationTimestamp="2026-01-26 15:39:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:39:37.701434882 +0000 UTC m=+335.483315295" watchObservedRunningTime="2026-01-26 15:39:37.726348415 +0000 UTC m=+335.508228808" Jan 26 15:39:37 crc kubenswrapper[4896]: I0126 15:39:37.900228 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67855659d4-nktx6"] Jan 26 15:39:39 crc kubenswrapper[4896]: I0126 15:39:39.657306 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-67855659d4-nktx6" podUID="e90e516b-971e-4f8e-84d7-6a290b1abae3" containerName="route-controller-manager" containerID="cri-o://2aaaf563c8920d441e57918a9b9515e7498225e1586969d6ccde4c0dfece3794" gracePeriod=30 Jan 26 15:39:40 crc kubenswrapper[4896]: I0126 15:39:40.152216 4896 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67855659d4-nktx6" Jan 26 15:39:40 crc kubenswrapper[4896]: I0126 15:39:40.206519 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5854dcbb9-pwkdv"] Jan 26 15:39:40 crc kubenswrapper[4896]: E0126 15:39:40.206737 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e90e516b-971e-4f8e-84d7-6a290b1abae3" containerName="route-controller-manager" Jan 26 15:39:40 crc kubenswrapper[4896]: I0126 15:39:40.206748 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="e90e516b-971e-4f8e-84d7-6a290b1abae3" containerName="route-controller-manager" Jan 26 15:39:40 crc kubenswrapper[4896]: I0126 15:39:40.206842 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="e90e516b-971e-4f8e-84d7-6a290b1abae3" containerName="route-controller-manager" Jan 26 15:39:40 crc kubenswrapper[4896]: I0126 15:39:40.207193 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5854dcbb9-pwkdv" Jan 26 15:39:40 crc kubenswrapper[4896]: I0126 15:39:40.217679 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e90e516b-971e-4f8e-84d7-6a290b1abae3-config\") pod \"e90e516b-971e-4f8e-84d7-6a290b1abae3\" (UID: \"e90e516b-971e-4f8e-84d7-6a290b1abae3\") " Jan 26 15:39:40 crc kubenswrapper[4896]: I0126 15:39:40.217740 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5ltr\" (UniqueName: \"kubernetes.io/projected/e90e516b-971e-4f8e-84d7-6a290b1abae3-kube-api-access-z5ltr\") pod \"e90e516b-971e-4f8e-84d7-6a290b1abae3\" (UID: \"e90e516b-971e-4f8e-84d7-6a290b1abae3\") " Jan 26 15:39:40 crc kubenswrapper[4896]: I0126 15:39:40.217806 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e90e516b-971e-4f8e-84d7-6a290b1abae3-client-ca\") pod \"e90e516b-971e-4f8e-84d7-6a290b1abae3\" (UID: \"e90e516b-971e-4f8e-84d7-6a290b1abae3\") " Jan 26 15:39:40 crc kubenswrapper[4896]: I0126 15:39:40.217841 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e90e516b-971e-4f8e-84d7-6a290b1abae3-serving-cert\") pod \"e90e516b-971e-4f8e-84d7-6a290b1abae3\" (UID: \"e90e516b-971e-4f8e-84d7-6a290b1abae3\") " Jan 26 15:39:40 crc kubenswrapper[4896]: I0126 15:39:40.217959 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdzn8\" (UniqueName: \"kubernetes.io/projected/dcbc6824-911d-42eb-b2fc-92fe68ae55c2-kube-api-access-fdzn8\") pod \"route-controller-manager-5854dcbb9-pwkdv\" (UID: \"dcbc6824-911d-42eb-b2fc-92fe68ae55c2\") " pod="openshift-route-controller-manager/route-controller-manager-5854dcbb9-pwkdv" Jan 26 15:39:40 crc 
kubenswrapper[4896]: I0126 15:39:40.217989 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcbc6824-911d-42eb-b2fc-92fe68ae55c2-config\") pod \"route-controller-manager-5854dcbb9-pwkdv\" (UID: \"dcbc6824-911d-42eb-b2fc-92fe68ae55c2\") " pod="openshift-route-controller-manager/route-controller-manager-5854dcbb9-pwkdv" Jan 26 15:39:40 crc kubenswrapper[4896]: I0126 15:39:40.218029 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcbc6824-911d-42eb-b2fc-92fe68ae55c2-serving-cert\") pod \"route-controller-manager-5854dcbb9-pwkdv\" (UID: \"dcbc6824-911d-42eb-b2fc-92fe68ae55c2\") " pod="openshift-route-controller-manager/route-controller-manager-5854dcbb9-pwkdv" Jan 26 15:39:40 crc kubenswrapper[4896]: I0126 15:39:40.218088 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dcbc6824-911d-42eb-b2fc-92fe68ae55c2-client-ca\") pod \"route-controller-manager-5854dcbb9-pwkdv\" (UID: \"dcbc6824-911d-42eb-b2fc-92fe68ae55c2\") " pod="openshift-route-controller-manager/route-controller-manager-5854dcbb9-pwkdv" Jan 26 15:39:40 crc kubenswrapper[4896]: I0126 15:39:40.218599 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e90e516b-971e-4f8e-84d7-6a290b1abae3-client-ca" (OuterVolumeSpecName: "client-ca") pod "e90e516b-971e-4f8e-84d7-6a290b1abae3" (UID: "e90e516b-971e-4f8e-84d7-6a290b1abae3"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:39:40 crc kubenswrapper[4896]: I0126 15:39:40.218750 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e90e516b-971e-4f8e-84d7-6a290b1abae3-config" (OuterVolumeSpecName: "config") pod "e90e516b-971e-4f8e-84d7-6a290b1abae3" (UID: "e90e516b-971e-4f8e-84d7-6a290b1abae3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:39:40 crc kubenswrapper[4896]: I0126 15:39:40.239863 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5854dcbb9-pwkdv"] Jan 26 15:39:40 crc kubenswrapper[4896]: I0126 15:39:40.242962 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e90e516b-971e-4f8e-84d7-6a290b1abae3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e90e516b-971e-4f8e-84d7-6a290b1abae3" (UID: "e90e516b-971e-4f8e-84d7-6a290b1abae3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:39:40 crc kubenswrapper[4896]: I0126 15:39:40.243204 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e90e516b-971e-4f8e-84d7-6a290b1abae3-kube-api-access-z5ltr" (OuterVolumeSpecName: "kube-api-access-z5ltr") pod "e90e516b-971e-4f8e-84d7-6a290b1abae3" (UID: "e90e516b-971e-4f8e-84d7-6a290b1abae3"). InnerVolumeSpecName "kube-api-access-z5ltr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:39:40 crc kubenswrapper[4896]: I0126 15:39:40.319170 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dcbc6824-911d-42eb-b2fc-92fe68ae55c2-client-ca\") pod \"route-controller-manager-5854dcbb9-pwkdv\" (UID: \"dcbc6824-911d-42eb-b2fc-92fe68ae55c2\") " pod="openshift-route-controller-manager/route-controller-manager-5854dcbb9-pwkdv" Jan 26 15:39:40 crc kubenswrapper[4896]: I0126 15:39:40.319238 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdzn8\" (UniqueName: \"kubernetes.io/projected/dcbc6824-911d-42eb-b2fc-92fe68ae55c2-kube-api-access-fdzn8\") pod \"route-controller-manager-5854dcbb9-pwkdv\" (UID: \"dcbc6824-911d-42eb-b2fc-92fe68ae55c2\") " pod="openshift-route-controller-manager/route-controller-manager-5854dcbb9-pwkdv" Jan 26 15:39:40 crc kubenswrapper[4896]: I0126 15:39:40.319295 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcbc6824-911d-42eb-b2fc-92fe68ae55c2-config\") pod \"route-controller-manager-5854dcbb9-pwkdv\" (UID: \"dcbc6824-911d-42eb-b2fc-92fe68ae55c2\") " pod="openshift-route-controller-manager/route-controller-manager-5854dcbb9-pwkdv" Jan 26 15:39:40 crc kubenswrapper[4896]: I0126 15:39:40.319858 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcbc6824-911d-42eb-b2fc-92fe68ae55c2-serving-cert\") pod \"route-controller-manager-5854dcbb9-pwkdv\" (UID: \"dcbc6824-911d-42eb-b2fc-92fe68ae55c2\") " pod="openshift-route-controller-manager/route-controller-manager-5854dcbb9-pwkdv" Jan 26 15:39:40 crc kubenswrapper[4896]: I0126 15:39:40.320259 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/dcbc6824-911d-42eb-b2fc-92fe68ae55c2-client-ca\") pod \"route-controller-manager-5854dcbb9-pwkdv\" (UID: \"dcbc6824-911d-42eb-b2fc-92fe68ae55c2\") " pod="openshift-route-controller-manager/route-controller-manager-5854dcbb9-pwkdv" Jan 26 15:39:40 crc kubenswrapper[4896]: I0126 15:39:40.320284 4896 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e90e516b-971e-4f8e-84d7-6a290b1abae3-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:40 crc kubenswrapper[4896]: I0126 15:39:40.320317 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5ltr\" (UniqueName: \"kubernetes.io/projected/e90e516b-971e-4f8e-84d7-6a290b1abae3-kube-api-access-z5ltr\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:40 crc kubenswrapper[4896]: I0126 15:39:40.320338 4896 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e90e516b-971e-4f8e-84d7-6a290b1abae3-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:40 crc kubenswrapper[4896]: I0126 15:39:40.320358 4896 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e90e516b-971e-4f8e-84d7-6a290b1abae3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:40 crc kubenswrapper[4896]: I0126 15:39:40.321148 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcbc6824-911d-42eb-b2fc-92fe68ae55c2-config\") pod \"route-controller-manager-5854dcbb9-pwkdv\" (UID: \"dcbc6824-911d-42eb-b2fc-92fe68ae55c2\") " pod="openshift-route-controller-manager/route-controller-manager-5854dcbb9-pwkdv" Jan 26 15:39:40 crc kubenswrapper[4896]: I0126 15:39:40.323812 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcbc6824-911d-42eb-b2fc-92fe68ae55c2-serving-cert\") pod 
\"route-controller-manager-5854dcbb9-pwkdv\" (UID: \"dcbc6824-911d-42eb-b2fc-92fe68ae55c2\") " pod="openshift-route-controller-manager/route-controller-manager-5854dcbb9-pwkdv" Jan 26 15:39:40 crc kubenswrapper[4896]: I0126 15:39:40.339372 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdzn8\" (UniqueName: \"kubernetes.io/projected/dcbc6824-911d-42eb-b2fc-92fe68ae55c2-kube-api-access-fdzn8\") pod \"route-controller-manager-5854dcbb9-pwkdv\" (UID: \"dcbc6824-911d-42eb-b2fc-92fe68ae55c2\") " pod="openshift-route-controller-manager/route-controller-manager-5854dcbb9-pwkdv" Jan 26 15:39:40 crc kubenswrapper[4896]: I0126 15:39:40.526798 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5854dcbb9-pwkdv" Jan 26 15:39:40 crc kubenswrapper[4896]: I0126 15:39:40.664067 4896 generic.go:334] "Generic (PLEG): container finished" podID="e90e516b-971e-4f8e-84d7-6a290b1abae3" containerID="2aaaf563c8920d441e57918a9b9515e7498225e1586969d6ccde4c0dfece3794" exitCode=0 Jan 26 15:39:40 crc kubenswrapper[4896]: I0126 15:39:40.664140 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67855659d4-nktx6" Jan 26 15:39:40 crc kubenswrapper[4896]: I0126 15:39:40.664170 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67855659d4-nktx6" event={"ID":"e90e516b-971e-4f8e-84d7-6a290b1abae3","Type":"ContainerDied","Data":"2aaaf563c8920d441e57918a9b9515e7498225e1586969d6ccde4c0dfece3794"} Jan 26 15:39:40 crc kubenswrapper[4896]: I0126 15:39:40.664526 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67855659d4-nktx6" event={"ID":"e90e516b-971e-4f8e-84d7-6a290b1abae3","Type":"ContainerDied","Data":"b5bd10b3c6c8114833de8d8153c638f150592e81458b48cf8310d83102770e2b"} Jan 26 15:39:40 crc kubenswrapper[4896]: I0126 15:39:40.664572 4896 scope.go:117] "RemoveContainer" containerID="2aaaf563c8920d441e57918a9b9515e7498225e1586969d6ccde4c0dfece3794" Jan 26 15:39:40 crc kubenswrapper[4896]: I0126 15:39:40.695272 4896 scope.go:117] "RemoveContainer" containerID="2aaaf563c8920d441e57918a9b9515e7498225e1586969d6ccde4c0dfece3794" Jan 26 15:39:40 crc kubenswrapper[4896]: E0126 15:39:40.696015 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2aaaf563c8920d441e57918a9b9515e7498225e1586969d6ccde4c0dfece3794\": container with ID starting with 2aaaf563c8920d441e57918a9b9515e7498225e1586969d6ccde4c0dfece3794 not found: ID does not exist" containerID="2aaaf563c8920d441e57918a9b9515e7498225e1586969d6ccde4c0dfece3794" Jan 26 15:39:40 crc kubenswrapper[4896]: I0126 15:39:40.696073 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2aaaf563c8920d441e57918a9b9515e7498225e1586969d6ccde4c0dfece3794"} err="failed to get container status \"2aaaf563c8920d441e57918a9b9515e7498225e1586969d6ccde4c0dfece3794\": rpc error: code = NotFound desc 
= could not find container \"2aaaf563c8920d441e57918a9b9515e7498225e1586969d6ccde4c0dfece3794\": container with ID starting with 2aaaf563c8920d441e57918a9b9515e7498225e1586969d6ccde4c0dfece3794 not found: ID does not exist" Jan 26 15:39:40 crc kubenswrapper[4896]: I0126 15:39:40.696968 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67855659d4-nktx6"] Jan 26 15:39:40 crc kubenswrapper[4896]: I0126 15:39:40.703129 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67855659d4-nktx6"] Jan 26 15:39:40 crc kubenswrapper[4896]: I0126 15:39:40.768750 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e90e516b-971e-4f8e-84d7-6a290b1abae3" path="/var/lib/kubelet/pods/e90e516b-971e-4f8e-84d7-6a290b1abae3/volumes" Jan 26 15:39:41 crc kubenswrapper[4896]: I0126 15:39:41.016349 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5854dcbb9-pwkdv"] Jan 26 15:39:41 crc kubenswrapper[4896]: I0126 15:39:41.673647 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5854dcbb9-pwkdv" event={"ID":"dcbc6824-911d-42eb-b2fc-92fe68ae55c2","Type":"ContainerStarted","Data":"26f094eef39e335acb34a3294a9cf9889321d6f22dd60a0afa8fb772f34cfe28"} Jan 26 15:39:41 crc kubenswrapper[4896]: I0126 15:39:41.674345 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5854dcbb9-pwkdv" event={"ID":"dcbc6824-911d-42eb-b2fc-92fe68ae55c2","Type":"ContainerStarted","Data":"b679156bed6c148418478ab2adc5b099c49fc2eafdc1bb054a2c9e045ae2c978"} Jan 26 15:39:41 crc kubenswrapper[4896]: I0126 15:39:41.674369 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5854dcbb9-pwkdv" 
Jan 26 15:39:41 crc kubenswrapper[4896]: I0126 15:39:41.681645 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5854dcbb9-pwkdv" Jan 26 15:39:41 crc kubenswrapper[4896]: I0126 15:39:41.716036 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5854dcbb9-pwkdv" podStartSLOduration=4.716018042 podStartE2EDuration="4.716018042s" podCreationTimestamp="2026-01-26 15:39:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:39:41.697516154 +0000 UTC m=+339.479396647" watchObservedRunningTime="2026-01-26 15:39:41.716018042 +0000 UTC m=+339.497898435" Jan 26 15:40:06 crc kubenswrapper[4896]: I0126 15:40:06.121249 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-7pnqc"] Jan 26 15:40:06 crc kubenswrapper[4896]: I0126 15:40:06.122959 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-7pnqc" Jan 26 15:40:06 crc kubenswrapper[4896]: I0126 15:40:06.136392 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-7pnqc"] Jan 26 15:40:06 crc kubenswrapper[4896]: I0126 15:40:06.250732 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/80b2d4af-5500-477f-bf17-fa50b0116b02-installation-pull-secrets\") pod \"image-registry-66df7c8f76-7pnqc\" (UID: \"80b2d4af-5500-477f-bf17-fa50b0116b02\") " pod="openshift-image-registry/image-registry-66df7c8f76-7pnqc" Jan 26 15:40:06 crc kubenswrapper[4896]: I0126 15:40:06.250793 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/80b2d4af-5500-477f-bf17-fa50b0116b02-registry-tls\") pod \"image-registry-66df7c8f76-7pnqc\" (UID: \"80b2d4af-5500-477f-bf17-fa50b0116b02\") " pod="openshift-image-registry/image-registry-66df7c8f76-7pnqc" Jan 26 15:40:06 crc kubenswrapper[4896]: I0126 15:40:06.250819 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/80b2d4af-5500-477f-bf17-fa50b0116b02-trusted-ca\") pod \"image-registry-66df7c8f76-7pnqc\" (UID: \"80b2d4af-5500-477f-bf17-fa50b0116b02\") " pod="openshift-image-registry/image-registry-66df7c8f76-7pnqc" Jan 26 15:40:06 crc kubenswrapper[4896]: I0126 15:40:06.250862 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-7pnqc\" (UID: \"80b2d4af-5500-477f-bf17-fa50b0116b02\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-7pnqc" Jan 26 15:40:06 crc kubenswrapper[4896]: I0126 15:40:06.250885 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/80b2d4af-5500-477f-bf17-fa50b0116b02-registry-certificates\") pod \"image-registry-66df7c8f76-7pnqc\" (UID: \"80b2d4af-5500-477f-bf17-fa50b0116b02\") " pod="openshift-image-registry/image-registry-66df7c8f76-7pnqc" Jan 26 15:40:06 crc kubenswrapper[4896]: I0126 15:40:06.250920 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/80b2d4af-5500-477f-bf17-fa50b0116b02-ca-trust-extracted\") pod \"image-registry-66df7c8f76-7pnqc\" (UID: \"80b2d4af-5500-477f-bf17-fa50b0116b02\") " pod="openshift-image-registry/image-registry-66df7c8f76-7pnqc" Jan 26 15:40:06 crc kubenswrapper[4896]: I0126 15:40:06.250955 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzpkp\" (UniqueName: \"kubernetes.io/projected/80b2d4af-5500-477f-bf17-fa50b0116b02-kube-api-access-pzpkp\") pod \"image-registry-66df7c8f76-7pnqc\" (UID: \"80b2d4af-5500-477f-bf17-fa50b0116b02\") " pod="openshift-image-registry/image-registry-66df7c8f76-7pnqc" Jan 26 15:40:06 crc kubenswrapper[4896]: I0126 15:40:06.250972 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/80b2d4af-5500-477f-bf17-fa50b0116b02-bound-sa-token\") pod \"image-registry-66df7c8f76-7pnqc\" (UID: \"80b2d4af-5500-477f-bf17-fa50b0116b02\") " pod="openshift-image-registry/image-registry-66df7c8f76-7pnqc" Jan 26 15:40:06 crc kubenswrapper[4896]: I0126 15:40:06.272870 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-7pnqc\" (UID: \"80b2d4af-5500-477f-bf17-fa50b0116b02\") " pod="openshift-image-registry/image-registry-66df7c8f76-7pnqc" Jan 26 15:40:06 crc kubenswrapper[4896]: I0126 15:40:06.351638 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/80b2d4af-5500-477f-bf17-fa50b0116b02-ca-trust-extracted\") pod \"image-registry-66df7c8f76-7pnqc\" (UID: \"80b2d4af-5500-477f-bf17-fa50b0116b02\") " pod="openshift-image-registry/image-registry-66df7c8f76-7pnqc" Jan 26 15:40:06 crc kubenswrapper[4896]: I0126 15:40:06.351688 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzpkp\" (UniqueName: \"kubernetes.io/projected/80b2d4af-5500-477f-bf17-fa50b0116b02-kube-api-access-pzpkp\") pod \"image-registry-66df7c8f76-7pnqc\" (UID: \"80b2d4af-5500-477f-bf17-fa50b0116b02\") " pod="openshift-image-registry/image-registry-66df7c8f76-7pnqc" Jan 26 15:40:06 crc kubenswrapper[4896]: I0126 15:40:06.351710 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/80b2d4af-5500-477f-bf17-fa50b0116b02-bound-sa-token\") pod \"image-registry-66df7c8f76-7pnqc\" (UID: \"80b2d4af-5500-477f-bf17-fa50b0116b02\") " pod="openshift-image-registry/image-registry-66df7c8f76-7pnqc" Jan 26 15:40:06 crc kubenswrapper[4896]: I0126 15:40:06.351732 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/80b2d4af-5500-477f-bf17-fa50b0116b02-installation-pull-secrets\") pod \"image-registry-66df7c8f76-7pnqc\" (UID: \"80b2d4af-5500-477f-bf17-fa50b0116b02\") " pod="openshift-image-registry/image-registry-66df7c8f76-7pnqc" Jan 26 15:40:06 crc kubenswrapper[4896]: I0126 
15:40:06.351754 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/80b2d4af-5500-477f-bf17-fa50b0116b02-registry-tls\") pod \"image-registry-66df7c8f76-7pnqc\" (UID: \"80b2d4af-5500-477f-bf17-fa50b0116b02\") " pod="openshift-image-registry/image-registry-66df7c8f76-7pnqc" Jan 26 15:40:06 crc kubenswrapper[4896]: I0126 15:40:06.351778 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/80b2d4af-5500-477f-bf17-fa50b0116b02-trusted-ca\") pod \"image-registry-66df7c8f76-7pnqc\" (UID: \"80b2d4af-5500-477f-bf17-fa50b0116b02\") " pod="openshift-image-registry/image-registry-66df7c8f76-7pnqc" Jan 26 15:40:06 crc kubenswrapper[4896]: I0126 15:40:06.351816 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/80b2d4af-5500-477f-bf17-fa50b0116b02-registry-certificates\") pod \"image-registry-66df7c8f76-7pnqc\" (UID: \"80b2d4af-5500-477f-bf17-fa50b0116b02\") " pod="openshift-image-registry/image-registry-66df7c8f76-7pnqc" Jan 26 15:40:06 crc kubenswrapper[4896]: I0126 15:40:06.352172 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/80b2d4af-5500-477f-bf17-fa50b0116b02-ca-trust-extracted\") pod \"image-registry-66df7c8f76-7pnqc\" (UID: \"80b2d4af-5500-477f-bf17-fa50b0116b02\") " pod="openshift-image-registry/image-registry-66df7c8f76-7pnqc" Jan 26 15:40:06 crc kubenswrapper[4896]: I0126 15:40:06.352949 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/80b2d4af-5500-477f-bf17-fa50b0116b02-registry-certificates\") pod \"image-registry-66df7c8f76-7pnqc\" (UID: \"80b2d4af-5500-477f-bf17-fa50b0116b02\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-7pnqc" Jan 26 15:40:06 crc kubenswrapper[4896]: I0126 15:40:06.353202 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/80b2d4af-5500-477f-bf17-fa50b0116b02-trusted-ca\") pod \"image-registry-66df7c8f76-7pnqc\" (UID: \"80b2d4af-5500-477f-bf17-fa50b0116b02\") " pod="openshift-image-registry/image-registry-66df7c8f76-7pnqc" Jan 26 15:40:06 crc kubenswrapper[4896]: I0126 15:40:06.362398 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/80b2d4af-5500-477f-bf17-fa50b0116b02-registry-tls\") pod \"image-registry-66df7c8f76-7pnqc\" (UID: \"80b2d4af-5500-477f-bf17-fa50b0116b02\") " pod="openshift-image-registry/image-registry-66df7c8f76-7pnqc" Jan 26 15:40:06 crc kubenswrapper[4896]: I0126 15:40:06.362446 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/80b2d4af-5500-477f-bf17-fa50b0116b02-installation-pull-secrets\") pod \"image-registry-66df7c8f76-7pnqc\" (UID: \"80b2d4af-5500-477f-bf17-fa50b0116b02\") " pod="openshift-image-registry/image-registry-66df7c8f76-7pnqc" Jan 26 15:40:06 crc kubenswrapper[4896]: I0126 15:40:06.368726 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/80b2d4af-5500-477f-bf17-fa50b0116b02-bound-sa-token\") pod \"image-registry-66df7c8f76-7pnqc\" (UID: \"80b2d4af-5500-477f-bf17-fa50b0116b02\") " pod="openshift-image-registry/image-registry-66df7c8f76-7pnqc" Jan 26 15:40:06 crc kubenswrapper[4896]: I0126 15:40:06.371977 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzpkp\" (UniqueName: \"kubernetes.io/projected/80b2d4af-5500-477f-bf17-fa50b0116b02-kube-api-access-pzpkp\") pod \"image-registry-66df7c8f76-7pnqc\" (UID: 
\"80b2d4af-5500-477f-bf17-fa50b0116b02\") " pod="openshift-image-registry/image-registry-66df7c8f76-7pnqc" Jan 26 15:40:06 crc kubenswrapper[4896]: I0126 15:40:06.441204 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-7pnqc" Jan 26 15:40:06 crc kubenswrapper[4896]: I0126 15:40:06.844953 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-7pnqc"] Jan 26 15:40:07 crc kubenswrapper[4896]: I0126 15:40:07.821966 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-7pnqc" event={"ID":"80b2d4af-5500-477f-bf17-fa50b0116b02","Type":"ContainerStarted","Data":"9f256f9336890e2f8a5b23fc36223245a2bfe60cd69cca3142b30e9a1ba685b6"} Jan 26 15:40:07 crc kubenswrapper[4896]: I0126 15:40:07.822296 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-7pnqc" Jan 26 15:40:07 crc kubenswrapper[4896]: I0126 15:40:07.822311 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-7pnqc" event={"ID":"80b2d4af-5500-477f-bf17-fa50b0116b02","Type":"ContainerStarted","Data":"300af468999009c7cc8600b227040bc34c8e395866a5edfb8386fc8d5e20447f"} Jan 26 15:40:07 crc kubenswrapper[4896]: I0126 15:40:07.850539 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-7pnqc" podStartSLOduration=1.850520403 podStartE2EDuration="1.850520403s" podCreationTimestamp="2026-01-26 15:40:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:40:07.846241153 +0000 UTC m=+365.628121556" watchObservedRunningTime="2026-01-26 15:40:07.850520403 +0000 UTC m=+365.632400786" Jan 26 15:40:18 crc kubenswrapper[4896]: I0126 
15:40:18.814195 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:40:18 crc kubenswrapper[4896]: I0126 15:40:18.814970 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:40:26 crc kubenswrapper[4896]: I0126 15:40:26.447248 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-7pnqc" Jan 26 15:40:26 crc kubenswrapper[4896]: I0126 15:40:26.505268 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n9sc6"] Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.107923 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sldms"] Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.111039 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sldms" podUID="d3369383-b89e-4cc5-8267-3f849ff0c294" containerName="registry-server" containerID="cri-o://c02c45ba975f390f9d7acdb2c915632c20f3e943f5bec9183659a1951a6003c1" gracePeriod=30 Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.117501 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w5d62"] Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.117819 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-w5d62" 
podUID="1bb8ed73-cf27-49c5-98cc-79e5a488f604" containerName="registry-server" containerID="cri-o://946af466ede7334357a825dad54a76625a02d02dcf1abe8d5db9aea049ba98ad" gracePeriod=30 Jan 26 15:40:45 crc kubenswrapper[4896]: E0126 15:40:45.126285 4896 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c02c45ba975f390f9d7acdb2c915632c20f3e943f5bec9183659a1951a6003c1" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 15:40:45 crc kubenswrapper[4896]: E0126 15:40:45.129173 4896 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c02c45ba975f390f9d7acdb2c915632c20f3e943f5bec9183659a1951a6003c1" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.130719 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-p79qr"] Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.131833 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-p79qr" podUID="c290f80c-4e19-4618-9ec2-2bc47df395fd" containerName="marketplace-operator" containerID="cri-o://6ef9d08c636adde6e294d934947d95445b21ae1bb3883adf532fbf03cd6689cd" gracePeriod=30 Jan 26 15:40:45 crc kubenswrapper[4896]: E0126 15:40:45.133446 4896 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c02c45ba975f390f9d7acdb2c915632c20f3e943f5bec9183659a1951a6003c1" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 15:40:45 crc kubenswrapper[4896]: E0126 15:40:45.133513 4896 prober.go:104] "Probe errored" 
err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-marketplace/certified-operators-sldms" podUID="d3369383-b89e-4cc5-8267-3f849ff0c294" containerName="registry-server" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.146145 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4f48w"] Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.146513 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4f48w" podUID="b52b5ef6-729b-4468-aa67-9a6f645ff27c" containerName="registry-server" containerID="cri-o://f33c75cc0d24c70aafa735bba9bbc083135260738307e83e85cbc18761f2d4f3" gracePeriod=30 Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.160659 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gtg7d"] Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.161383 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gtg7d" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.165931 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l2v2b"] Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.166153 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l2v2b" podUID="40915453-ba7c-41e6-bc4f-3a221097ae62" containerName="registry-server" containerID="cri-o://6e2cbb57a0b0d8011d0da5939f6445f8bdc40baf65269f5687878e39a918bad8" gracePeriod=30 Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.170187 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gtg7d"] Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.195347 4896 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-p79qr container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.195392 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-p79qr" podUID="c290f80c-4e19-4618-9ec2-2bc47df395fd" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.213341 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/22808cdf-7c01-491f-b3f4-d641898edf7b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gtg7d\" (UID: \"22808cdf-7c01-491f-b3f4-d641898edf7b\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-gtg7d" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.213531 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/22808cdf-7c01-491f-b3f4-d641898edf7b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gtg7d\" (UID: \"22808cdf-7c01-491f-b3f4-d641898edf7b\") " pod="openshift-marketplace/marketplace-operator-79b997595-gtg7d" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.213569 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p2p9\" (UniqueName: \"kubernetes.io/projected/22808cdf-7c01-491f-b3f4-d641898edf7b-kube-api-access-9p2p9\") pod \"marketplace-operator-79b997595-gtg7d\" (UID: \"22808cdf-7c01-491f-b3f4-d641898edf7b\") " pod="openshift-marketplace/marketplace-operator-79b997595-gtg7d" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.315701 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/22808cdf-7c01-491f-b3f4-d641898edf7b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gtg7d\" (UID: \"22808cdf-7c01-491f-b3f4-d641898edf7b\") " pod="openshift-marketplace/marketplace-operator-79b997595-gtg7d" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.315751 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9p2p9\" (UniqueName: \"kubernetes.io/projected/22808cdf-7c01-491f-b3f4-d641898edf7b-kube-api-access-9p2p9\") pod \"marketplace-operator-79b997595-gtg7d\" (UID: \"22808cdf-7c01-491f-b3f4-d641898edf7b\") " pod="openshift-marketplace/marketplace-operator-79b997595-gtg7d" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.315836 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/22808cdf-7c01-491f-b3f4-d641898edf7b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gtg7d\" (UID: \"22808cdf-7c01-491f-b3f4-d641898edf7b\") " pod="openshift-marketplace/marketplace-operator-79b997595-gtg7d" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.317666 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/22808cdf-7c01-491f-b3f4-d641898edf7b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gtg7d\" (UID: \"22808cdf-7c01-491f-b3f4-d641898edf7b\") " pod="openshift-marketplace/marketplace-operator-79b997595-gtg7d" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.324731 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/22808cdf-7c01-491f-b3f4-d641898edf7b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gtg7d\" (UID: \"22808cdf-7c01-491f-b3f4-d641898edf7b\") " pod="openshift-marketplace/marketplace-operator-79b997595-gtg7d" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.339725 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p2p9\" (UniqueName: \"kubernetes.io/projected/22808cdf-7c01-491f-b3f4-d641898edf7b-kube-api-access-9p2p9\") pod \"marketplace-operator-79b997595-gtg7d\" (UID: \"22808cdf-7c01-491f-b3f4-d641898edf7b\") " pod="openshift-marketplace/marketplace-operator-79b997595-gtg7d" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.555094 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gtg7d" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.562192 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sldms" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.620997 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wg4g\" (UniqueName: \"kubernetes.io/projected/d3369383-b89e-4cc5-8267-3f849ff0c294-kube-api-access-7wg4g\") pod \"d3369383-b89e-4cc5-8267-3f849ff0c294\" (UID: \"d3369383-b89e-4cc5-8267-3f849ff0c294\") " Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.621122 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3369383-b89e-4cc5-8267-3f849ff0c294-utilities\") pod \"d3369383-b89e-4cc5-8267-3f849ff0c294\" (UID: \"d3369383-b89e-4cc5-8267-3f849ff0c294\") " Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.621207 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3369383-b89e-4cc5-8267-3f849ff0c294-catalog-content\") pod \"d3369383-b89e-4cc5-8267-3f849ff0c294\" (UID: \"d3369383-b89e-4cc5-8267-3f849ff0c294\") " Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.623692 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3369383-b89e-4cc5-8267-3f849ff0c294-utilities" (OuterVolumeSpecName: "utilities") pod "d3369383-b89e-4cc5-8267-3f849ff0c294" (UID: "d3369383-b89e-4cc5-8267-3f849ff0c294"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.631745 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3369383-b89e-4cc5-8267-3f849ff0c294-kube-api-access-7wg4g" (OuterVolumeSpecName: "kube-api-access-7wg4g") pod "d3369383-b89e-4cc5-8267-3f849ff0c294" (UID: "d3369383-b89e-4cc5-8267-3f849ff0c294"). InnerVolumeSpecName "kube-api-access-7wg4g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.672789 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l2v2b" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.681866 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-p79qr" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.688977 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3369383-b89e-4cc5-8267-3f849ff0c294-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d3369383-b89e-4cc5-8267-3f849ff0c294" (UID: "d3369383-b89e-4cc5-8267-3f849ff0c294"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.700371 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w5d62" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.708980 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4f48w" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.726326 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pcx8\" (UniqueName: \"kubernetes.io/projected/b52b5ef6-729b-4468-aa67-9a6f645ff27c-kube-api-access-9pcx8\") pod \"b52b5ef6-729b-4468-aa67-9a6f645ff27c\" (UID: \"b52b5ef6-729b-4468-aa67-9a6f645ff27c\") " Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.726537 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b52b5ef6-729b-4468-aa67-9a6f645ff27c-catalog-content\") pod \"b52b5ef6-729b-4468-aa67-9a6f645ff27c\" (UID: \"b52b5ef6-729b-4468-aa67-9a6f645ff27c\") " Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.726818 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c290f80c-4e19-4618-9ec2-2bc47df395fd-marketplace-trusted-ca\") pod \"c290f80c-4e19-4618-9ec2-2bc47df395fd\" (UID: \"c290f80c-4e19-4618-9ec2-2bc47df395fd\") " Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.726937 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c290f80c-4e19-4618-9ec2-2bc47df395fd-marketplace-operator-metrics\") pod \"c290f80c-4e19-4618-9ec2-2bc47df395fd\" (UID: \"c290f80c-4e19-4618-9ec2-2bc47df395fd\") " Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.727078 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40915453-ba7c-41e6-bc4f-3a221097ae62-catalog-content\") pod \"40915453-ba7c-41e6-bc4f-3a221097ae62\" (UID: \"40915453-ba7c-41e6-bc4f-3a221097ae62\") " Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.727294 4896 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97wcl\" (UniqueName: \"kubernetes.io/projected/1bb8ed73-cf27-49c5-98cc-79e5a488f604-kube-api-access-97wcl\") pod \"1bb8ed73-cf27-49c5-98cc-79e5a488f604\" (UID: \"1bb8ed73-cf27-49c5-98cc-79e5a488f604\") " Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.727398 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bb8ed73-cf27-49c5-98cc-79e5a488f604-utilities\") pod \"1bb8ed73-cf27-49c5-98cc-79e5a488f604\" (UID: \"1bb8ed73-cf27-49c5-98cc-79e5a488f604\") " Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.727508 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40915453-ba7c-41e6-bc4f-3a221097ae62-utilities\") pod \"40915453-ba7c-41e6-bc4f-3a221097ae62\" (UID: \"40915453-ba7c-41e6-bc4f-3a221097ae62\") " Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.727789 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nttsk\" (UniqueName: \"kubernetes.io/projected/40915453-ba7c-41e6-bc4f-3a221097ae62-kube-api-access-nttsk\") pod \"40915453-ba7c-41e6-bc4f-3a221097ae62\" (UID: \"40915453-ba7c-41e6-bc4f-3a221097ae62\") " Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.727918 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2ppd\" (UniqueName: \"kubernetes.io/projected/c290f80c-4e19-4618-9ec2-2bc47df395fd-kube-api-access-c2ppd\") pod \"c290f80c-4e19-4618-9ec2-2bc47df395fd\" (UID: \"c290f80c-4e19-4618-9ec2-2bc47df395fd\") " Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.728156 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bb8ed73-cf27-49c5-98cc-79e5a488f604-catalog-content\") pod 
\"1bb8ed73-cf27-49c5-98cc-79e5a488f604\" (UID: \"1bb8ed73-cf27-49c5-98cc-79e5a488f604\") " Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.728275 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b52b5ef6-729b-4468-aa67-9a6f645ff27c-utilities\") pod \"b52b5ef6-729b-4468-aa67-9a6f645ff27c\" (UID: \"b52b5ef6-729b-4468-aa67-9a6f645ff27c\") " Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.728714 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wg4g\" (UniqueName: \"kubernetes.io/projected/d3369383-b89e-4cc5-8267-3f849ff0c294-kube-api-access-7wg4g\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.728814 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3369383-b89e-4cc5-8267-3f849ff0c294-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.728875 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3369383-b89e-4cc5-8267-3f849ff0c294-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.729712 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b52b5ef6-729b-4468-aa67-9a6f645ff27c-utilities" (OuterVolumeSpecName: "utilities") pod "b52b5ef6-729b-4468-aa67-9a6f645ff27c" (UID: "b52b5ef6-729b-4468-aa67-9a6f645ff27c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.734382 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c290f80c-4e19-4618-9ec2-2bc47df395fd-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "c290f80c-4e19-4618-9ec2-2bc47df395fd" (UID: "c290f80c-4e19-4618-9ec2-2bc47df395fd"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.734642 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b52b5ef6-729b-4468-aa67-9a6f645ff27c-kube-api-access-9pcx8" (OuterVolumeSpecName: "kube-api-access-9pcx8") pod "b52b5ef6-729b-4468-aa67-9a6f645ff27c" (UID: "b52b5ef6-729b-4468-aa67-9a6f645ff27c"). InnerVolumeSpecName "kube-api-access-9pcx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.735819 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40915453-ba7c-41e6-bc4f-3a221097ae62-utilities" (OuterVolumeSpecName: "utilities") pod "40915453-ba7c-41e6-bc4f-3a221097ae62" (UID: "40915453-ba7c-41e6-bc4f-3a221097ae62"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.735832 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bb8ed73-cf27-49c5-98cc-79e5a488f604-utilities" (OuterVolumeSpecName: "utilities") pod "1bb8ed73-cf27-49c5-98cc-79e5a488f604" (UID: "1bb8ed73-cf27-49c5-98cc-79e5a488f604"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.736941 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c290f80c-4e19-4618-9ec2-2bc47df395fd-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "c290f80c-4e19-4618-9ec2-2bc47df395fd" (UID: "c290f80c-4e19-4618-9ec2-2bc47df395fd"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.737461 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40915453-ba7c-41e6-bc4f-3a221097ae62-kube-api-access-nttsk" (OuterVolumeSpecName: "kube-api-access-nttsk") pod "40915453-ba7c-41e6-bc4f-3a221097ae62" (UID: "40915453-ba7c-41e6-bc4f-3a221097ae62"). InnerVolumeSpecName "kube-api-access-nttsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.740111 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c290f80c-4e19-4618-9ec2-2bc47df395fd-kube-api-access-c2ppd" (OuterVolumeSpecName: "kube-api-access-c2ppd") pod "c290f80c-4e19-4618-9ec2-2bc47df395fd" (UID: "c290f80c-4e19-4618-9ec2-2bc47df395fd"). InnerVolumeSpecName "kube-api-access-c2ppd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.762363 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bb8ed73-cf27-49c5-98cc-79e5a488f604-kube-api-access-97wcl" (OuterVolumeSpecName: "kube-api-access-97wcl") pod "1bb8ed73-cf27-49c5-98cc-79e5a488f604" (UID: "1bb8ed73-cf27-49c5-98cc-79e5a488f604"). InnerVolumeSpecName "kube-api-access-97wcl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.773475 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b52b5ef6-729b-4468-aa67-9a6f645ff27c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b52b5ef6-729b-4468-aa67-9a6f645ff27c" (UID: "b52b5ef6-729b-4468-aa67-9a6f645ff27c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.823267 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bb8ed73-cf27-49c5-98cc-79e5a488f604-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1bb8ed73-cf27-49c5-98cc-79e5a488f604" (UID: "1bb8ed73-cf27-49c5-98cc-79e5a488f604"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.830602 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pcx8\" (UniqueName: \"kubernetes.io/projected/b52b5ef6-729b-4468-aa67-9a6f645ff27c-kube-api-access-9pcx8\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.830640 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b52b5ef6-729b-4468-aa67-9a6f645ff27c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.830655 4896 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c290f80c-4e19-4618-9ec2-2bc47df395fd-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.830668 4896 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/c290f80c-4e19-4618-9ec2-2bc47df395fd-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.830683 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97wcl\" (UniqueName: \"kubernetes.io/projected/1bb8ed73-cf27-49c5-98cc-79e5a488f604-kube-api-access-97wcl\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.830696 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bb8ed73-cf27-49c5-98cc-79e5a488f604-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.830708 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40915453-ba7c-41e6-bc4f-3a221097ae62-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.830720 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nttsk\" (UniqueName: \"kubernetes.io/projected/40915453-ba7c-41e6-bc4f-3a221097ae62-kube-api-access-nttsk\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.830731 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2ppd\" (UniqueName: \"kubernetes.io/projected/c290f80c-4e19-4618-9ec2-2bc47df395fd-kube-api-access-c2ppd\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.830743 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bb8ed73-cf27-49c5-98cc-79e5a488f604-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.830757 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b52b5ef6-729b-4468-aa67-9a6f645ff27c-utilities\") on node \"crc\" 
DevicePath \"\"" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.899079 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40915453-ba7c-41e6-bc4f-3a221097ae62-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "40915453-ba7c-41e6-bc4f-3a221097ae62" (UID: "40915453-ba7c-41e6-bc4f-3a221097ae62"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:40:45 crc kubenswrapper[4896]: I0126 15:40:45.931969 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40915453-ba7c-41e6-bc4f-3a221097ae62-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.016236 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gtg7d"] Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.032776 4896 generic.go:334] "Generic (PLEG): container finished" podID="40915453-ba7c-41e6-bc4f-3a221097ae62" containerID="6e2cbb57a0b0d8011d0da5939f6445f8bdc40baf65269f5687878e39a918bad8" exitCode=0 Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.032839 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2v2b" event={"ID":"40915453-ba7c-41e6-bc4f-3a221097ae62","Type":"ContainerDied","Data":"6e2cbb57a0b0d8011d0da5939f6445f8bdc40baf65269f5687878e39a918bad8"} Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.032867 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2v2b" event={"ID":"40915453-ba7c-41e6-bc4f-3a221097ae62","Type":"ContainerDied","Data":"ecdc6137a6237b61890db251cb61db9df5f135c059f51c490f54db8f412c3b78"} Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.032885 4896 scope.go:117] "RemoveContainer" containerID="6e2cbb57a0b0d8011d0da5939f6445f8bdc40baf65269f5687878e39a918bad8" Jan 26 
15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.032987 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l2v2b" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.037656 4896 generic.go:334] "Generic (PLEG): container finished" podID="c290f80c-4e19-4618-9ec2-2bc47df395fd" containerID="6ef9d08c636adde6e294d934947d95445b21ae1bb3883adf532fbf03cd6689cd" exitCode=0 Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.037724 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-p79qr" event={"ID":"c290f80c-4e19-4618-9ec2-2bc47df395fd","Type":"ContainerDied","Data":"6ef9d08c636adde6e294d934947d95445b21ae1bb3883adf532fbf03cd6689cd"} Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.037753 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-p79qr" event={"ID":"c290f80c-4e19-4618-9ec2-2bc47df395fd","Type":"ContainerDied","Data":"eae2350dc2dd1bf8c4e00894a01c161d275c9a66f4cf9939a3c4524db0a88fbb"} Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.037751 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-p79qr" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.041448 4896 generic.go:334] "Generic (PLEG): container finished" podID="b52b5ef6-729b-4468-aa67-9a6f645ff27c" containerID="f33c75cc0d24c70aafa735bba9bbc083135260738307e83e85cbc18761f2d4f3" exitCode=0 Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.041807 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4f48w" event={"ID":"b52b5ef6-729b-4468-aa67-9a6f645ff27c","Type":"ContainerDied","Data":"f33c75cc0d24c70aafa735bba9bbc083135260738307e83e85cbc18761f2d4f3"} Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.042329 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4f48w" event={"ID":"b52b5ef6-729b-4468-aa67-9a6f645ff27c","Type":"ContainerDied","Data":"c78677971af83a1b5988d3a7f5ae10e86903483396fef083624bbd5d4d7e430c"} Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.041906 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4f48w" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.051921 4896 generic.go:334] "Generic (PLEG): container finished" podID="1bb8ed73-cf27-49c5-98cc-79e5a488f604" containerID="946af466ede7334357a825dad54a76625a02d02dcf1abe8d5db9aea049ba98ad" exitCode=0 Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.052021 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w5d62" event={"ID":"1bb8ed73-cf27-49c5-98cc-79e5a488f604","Type":"ContainerDied","Data":"946af466ede7334357a825dad54a76625a02d02dcf1abe8d5db9aea049ba98ad"} Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.052102 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w5d62" event={"ID":"1bb8ed73-cf27-49c5-98cc-79e5a488f604","Type":"ContainerDied","Data":"41cb9d30aca5400934f8ee96882526d70536a06dcafa2bf3d4789c83af3f294d"} Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.052171 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w5d62" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.061795 4896 generic.go:334] "Generic (PLEG): container finished" podID="d3369383-b89e-4cc5-8267-3f849ff0c294" containerID="c02c45ba975f390f9d7acdb2c915632c20f3e943f5bec9183659a1951a6003c1" exitCode=0 Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.061857 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sldms" event={"ID":"d3369383-b89e-4cc5-8267-3f849ff0c294","Type":"ContainerDied","Data":"c02c45ba975f390f9d7acdb2c915632c20f3e943f5bec9183659a1951a6003c1"} Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.061892 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sldms" event={"ID":"d3369383-b89e-4cc5-8267-3f849ff0c294","Type":"ContainerDied","Data":"4f913973ff399161740e10bcab6c7d47bac7b3f93d239eec57b4109838ac1d74"} Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.061968 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sldms" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.062783 4896 scope.go:117] "RemoveContainer" containerID="fb5689daac7da3851d17272a3fb2ee92c71aef504c3d42bb16d17deff553cd6d" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.068523 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l2v2b"] Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.074123 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l2v2b"] Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.088657 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-p79qr"] Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.091692 4896 scope.go:117] "RemoveContainer" containerID="9bb852f3432c944ab399f2654588da13a5f71e7d6c940601e0cf110054340811" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.092358 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-p79qr"] Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.098765 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4f48w"] Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.101736 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4f48w"] Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.116776 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sldms"] Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.124260 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sldms"] Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.133097 4896 scope.go:117] "RemoveContainer" 
containerID="6e2cbb57a0b0d8011d0da5939f6445f8bdc40baf65269f5687878e39a918bad8" Jan 26 15:40:46 crc kubenswrapper[4896]: E0126 15:40:46.134231 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e2cbb57a0b0d8011d0da5939f6445f8bdc40baf65269f5687878e39a918bad8\": container with ID starting with 6e2cbb57a0b0d8011d0da5939f6445f8bdc40baf65269f5687878e39a918bad8 not found: ID does not exist" containerID="6e2cbb57a0b0d8011d0da5939f6445f8bdc40baf65269f5687878e39a918bad8" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.134302 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e2cbb57a0b0d8011d0da5939f6445f8bdc40baf65269f5687878e39a918bad8"} err="failed to get container status \"6e2cbb57a0b0d8011d0da5939f6445f8bdc40baf65269f5687878e39a918bad8\": rpc error: code = NotFound desc = could not find container \"6e2cbb57a0b0d8011d0da5939f6445f8bdc40baf65269f5687878e39a918bad8\": container with ID starting with 6e2cbb57a0b0d8011d0da5939f6445f8bdc40baf65269f5687878e39a918bad8 not found: ID does not exist" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.134340 4896 scope.go:117] "RemoveContainer" containerID="fb5689daac7da3851d17272a3fb2ee92c71aef504c3d42bb16d17deff553cd6d" Jan 26 15:40:46 crc kubenswrapper[4896]: E0126 15:40:46.134739 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb5689daac7da3851d17272a3fb2ee92c71aef504c3d42bb16d17deff553cd6d\": container with ID starting with fb5689daac7da3851d17272a3fb2ee92c71aef504c3d42bb16d17deff553cd6d not found: ID does not exist" containerID="fb5689daac7da3851d17272a3fb2ee92c71aef504c3d42bb16d17deff553cd6d" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.134770 4896 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fb5689daac7da3851d17272a3fb2ee92c71aef504c3d42bb16d17deff553cd6d"} err="failed to get container status \"fb5689daac7da3851d17272a3fb2ee92c71aef504c3d42bb16d17deff553cd6d\": rpc error: code = NotFound desc = could not find container \"fb5689daac7da3851d17272a3fb2ee92c71aef504c3d42bb16d17deff553cd6d\": container with ID starting with fb5689daac7da3851d17272a3fb2ee92c71aef504c3d42bb16d17deff553cd6d not found: ID does not exist" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.134787 4896 scope.go:117] "RemoveContainer" containerID="9bb852f3432c944ab399f2654588da13a5f71e7d6c940601e0cf110054340811" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.135326 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w5d62"] Jan 26 15:40:46 crc kubenswrapper[4896]: E0126 15:40:46.135542 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bb852f3432c944ab399f2654588da13a5f71e7d6c940601e0cf110054340811\": container with ID starting with 9bb852f3432c944ab399f2654588da13a5f71e7d6c940601e0cf110054340811 not found: ID does not exist" containerID="9bb852f3432c944ab399f2654588da13a5f71e7d6c940601e0cf110054340811" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.135589 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bb852f3432c944ab399f2654588da13a5f71e7d6c940601e0cf110054340811"} err="failed to get container status \"9bb852f3432c944ab399f2654588da13a5f71e7d6c940601e0cf110054340811\": rpc error: code = NotFound desc = could not find container \"9bb852f3432c944ab399f2654588da13a5f71e7d6c940601e0cf110054340811\": container with ID starting with 9bb852f3432c944ab399f2654588da13a5f71e7d6c940601e0cf110054340811 not found: ID does not exist" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.135612 4896 scope.go:117] "RemoveContainer" 
containerID="6ef9d08c636adde6e294d934947d95445b21ae1bb3883adf532fbf03cd6689cd" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.144972 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-w5d62"] Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.154875 4896 scope.go:117] "RemoveContainer" containerID="6ef9d08c636adde6e294d934947d95445b21ae1bb3883adf532fbf03cd6689cd" Jan 26 15:40:46 crc kubenswrapper[4896]: E0126 15:40:46.159038 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ef9d08c636adde6e294d934947d95445b21ae1bb3883adf532fbf03cd6689cd\": container with ID starting with 6ef9d08c636adde6e294d934947d95445b21ae1bb3883adf532fbf03cd6689cd not found: ID does not exist" containerID="6ef9d08c636adde6e294d934947d95445b21ae1bb3883adf532fbf03cd6689cd" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.159093 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ef9d08c636adde6e294d934947d95445b21ae1bb3883adf532fbf03cd6689cd"} err="failed to get container status \"6ef9d08c636adde6e294d934947d95445b21ae1bb3883adf532fbf03cd6689cd\": rpc error: code = NotFound desc = could not find container \"6ef9d08c636adde6e294d934947d95445b21ae1bb3883adf532fbf03cd6689cd\": container with ID starting with 6ef9d08c636adde6e294d934947d95445b21ae1bb3883adf532fbf03cd6689cd not found: ID does not exist" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.159130 4896 scope.go:117] "RemoveContainer" containerID="f33c75cc0d24c70aafa735bba9bbc083135260738307e83e85cbc18761f2d4f3" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.172401 4896 scope.go:117] "RemoveContainer" containerID="43c17cd7a3327e2e21b7b952d49d5363e47a955bbd0a0881ef9864a0ee193984" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.199777 4896 scope.go:117] "RemoveContainer" 
containerID="c0ef78515858059f4007055694286fe5d21095b29539f9e16503fe54bce3bd67" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.214492 4896 scope.go:117] "RemoveContainer" containerID="f33c75cc0d24c70aafa735bba9bbc083135260738307e83e85cbc18761f2d4f3" Jan 26 15:40:46 crc kubenswrapper[4896]: E0126 15:40:46.215014 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f33c75cc0d24c70aafa735bba9bbc083135260738307e83e85cbc18761f2d4f3\": container with ID starting with f33c75cc0d24c70aafa735bba9bbc083135260738307e83e85cbc18761f2d4f3 not found: ID does not exist" containerID="f33c75cc0d24c70aafa735bba9bbc083135260738307e83e85cbc18761f2d4f3" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.215048 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f33c75cc0d24c70aafa735bba9bbc083135260738307e83e85cbc18761f2d4f3"} err="failed to get container status \"f33c75cc0d24c70aafa735bba9bbc083135260738307e83e85cbc18761f2d4f3\": rpc error: code = NotFound desc = could not find container \"f33c75cc0d24c70aafa735bba9bbc083135260738307e83e85cbc18761f2d4f3\": container with ID starting with f33c75cc0d24c70aafa735bba9bbc083135260738307e83e85cbc18761f2d4f3 not found: ID does not exist" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.215074 4896 scope.go:117] "RemoveContainer" containerID="43c17cd7a3327e2e21b7b952d49d5363e47a955bbd0a0881ef9864a0ee193984" Jan 26 15:40:46 crc kubenswrapper[4896]: E0126 15:40:46.215543 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43c17cd7a3327e2e21b7b952d49d5363e47a955bbd0a0881ef9864a0ee193984\": container with ID starting with 43c17cd7a3327e2e21b7b952d49d5363e47a955bbd0a0881ef9864a0ee193984 not found: ID does not exist" containerID="43c17cd7a3327e2e21b7b952d49d5363e47a955bbd0a0881ef9864a0ee193984" Jan 26 15:40:46 crc 
kubenswrapper[4896]: I0126 15:40:46.215564 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43c17cd7a3327e2e21b7b952d49d5363e47a955bbd0a0881ef9864a0ee193984"} err="failed to get container status \"43c17cd7a3327e2e21b7b952d49d5363e47a955bbd0a0881ef9864a0ee193984\": rpc error: code = NotFound desc = could not find container \"43c17cd7a3327e2e21b7b952d49d5363e47a955bbd0a0881ef9864a0ee193984\": container with ID starting with 43c17cd7a3327e2e21b7b952d49d5363e47a955bbd0a0881ef9864a0ee193984 not found: ID does not exist" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.215596 4896 scope.go:117] "RemoveContainer" containerID="c0ef78515858059f4007055694286fe5d21095b29539f9e16503fe54bce3bd67" Jan 26 15:40:46 crc kubenswrapper[4896]: E0126 15:40:46.216103 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0ef78515858059f4007055694286fe5d21095b29539f9e16503fe54bce3bd67\": container with ID starting with c0ef78515858059f4007055694286fe5d21095b29539f9e16503fe54bce3bd67 not found: ID does not exist" containerID="c0ef78515858059f4007055694286fe5d21095b29539f9e16503fe54bce3bd67" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.216144 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0ef78515858059f4007055694286fe5d21095b29539f9e16503fe54bce3bd67"} err="failed to get container status \"c0ef78515858059f4007055694286fe5d21095b29539f9e16503fe54bce3bd67\": rpc error: code = NotFound desc = could not find container \"c0ef78515858059f4007055694286fe5d21095b29539f9e16503fe54bce3bd67\": container with ID starting with c0ef78515858059f4007055694286fe5d21095b29539f9e16503fe54bce3bd67 not found: ID does not exist" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.216189 4896 scope.go:117] "RemoveContainer" containerID="946af466ede7334357a825dad54a76625a02d02dcf1abe8d5db9aea049ba98ad" Jan 26 
15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.229764 4896 scope.go:117] "RemoveContainer" containerID="1cdb7f43958e8781e4dfadf5ed9125a31b3af37c6fdd0657068d417c0a6ab609" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.245269 4896 scope.go:117] "RemoveContainer" containerID="8dd3f744f665ac6b2928dcc2cc74bc5d19e2028646ddcea6b96288ca99c0ee73" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.311141 4896 scope.go:117] "RemoveContainer" containerID="946af466ede7334357a825dad54a76625a02d02dcf1abe8d5db9aea049ba98ad" Jan 26 15:40:46 crc kubenswrapper[4896]: E0126 15:40:46.311913 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"946af466ede7334357a825dad54a76625a02d02dcf1abe8d5db9aea049ba98ad\": container with ID starting with 946af466ede7334357a825dad54a76625a02d02dcf1abe8d5db9aea049ba98ad not found: ID does not exist" containerID="946af466ede7334357a825dad54a76625a02d02dcf1abe8d5db9aea049ba98ad" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.311948 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"946af466ede7334357a825dad54a76625a02d02dcf1abe8d5db9aea049ba98ad"} err="failed to get container status \"946af466ede7334357a825dad54a76625a02d02dcf1abe8d5db9aea049ba98ad\": rpc error: code = NotFound desc = could not find container \"946af466ede7334357a825dad54a76625a02d02dcf1abe8d5db9aea049ba98ad\": container with ID starting with 946af466ede7334357a825dad54a76625a02d02dcf1abe8d5db9aea049ba98ad not found: ID does not exist" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.311976 4896 scope.go:117] "RemoveContainer" containerID="1cdb7f43958e8781e4dfadf5ed9125a31b3af37c6fdd0657068d417c0a6ab609" Jan 26 15:40:46 crc kubenswrapper[4896]: E0126 15:40:46.312294 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"1cdb7f43958e8781e4dfadf5ed9125a31b3af37c6fdd0657068d417c0a6ab609\": container with ID starting with 1cdb7f43958e8781e4dfadf5ed9125a31b3af37c6fdd0657068d417c0a6ab609 not found: ID does not exist" containerID="1cdb7f43958e8781e4dfadf5ed9125a31b3af37c6fdd0657068d417c0a6ab609" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.312321 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cdb7f43958e8781e4dfadf5ed9125a31b3af37c6fdd0657068d417c0a6ab609"} err="failed to get container status \"1cdb7f43958e8781e4dfadf5ed9125a31b3af37c6fdd0657068d417c0a6ab609\": rpc error: code = NotFound desc = could not find container \"1cdb7f43958e8781e4dfadf5ed9125a31b3af37c6fdd0657068d417c0a6ab609\": container with ID starting with 1cdb7f43958e8781e4dfadf5ed9125a31b3af37c6fdd0657068d417c0a6ab609 not found: ID does not exist" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.312343 4896 scope.go:117] "RemoveContainer" containerID="8dd3f744f665ac6b2928dcc2cc74bc5d19e2028646ddcea6b96288ca99c0ee73" Jan 26 15:40:46 crc kubenswrapper[4896]: E0126 15:40:46.312897 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8dd3f744f665ac6b2928dcc2cc74bc5d19e2028646ddcea6b96288ca99c0ee73\": container with ID starting with 8dd3f744f665ac6b2928dcc2cc74bc5d19e2028646ddcea6b96288ca99c0ee73 not found: ID does not exist" containerID="8dd3f744f665ac6b2928dcc2cc74bc5d19e2028646ddcea6b96288ca99c0ee73" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.312920 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dd3f744f665ac6b2928dcc2cc74bc5d19e2028646ddcea6b96288ca99c0ee73"} err="failed to get container status \"8dd3f744f665ac6b2928dcc2cc74bc5d19e2028646ddcea6b96288ca99c0ee73\": rpc error: code = NotFound desc = could not find container \"8dd3f744f665ac6b2928dcc2cc74bc5d19e2028646ddcea6b96288ca99c0ee73\": container with ID 
starting with 8dd3f744f665ac6b2928dcc2cc74bc5d19e2028646ddcea6b96288ca99c0ee73 not found: ID does not exist" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.312937 4896 scope.go:117] "RemoveContainer" containerID="c02c45ba975f390f9d7acdb2c915632c20f3e943f5bec9183659a1951a6003c1" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.329200 4896 scope.go:117] "RemoveContainer" containerID="3dd29645fd7aa764e0113ee194db724e21cc57693adff56990bddf0eb4934997" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.346773 4896 scope.go:117] "RemoveContainer" containerID="9074cc17afe4c77bb3a37f20d649f96865bf151524eb765a052b53788631d7f6" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.381695 4896 scope.go:117] "RemoveContainer" containerID="c02c45ba975f390f9d7acdb2c915632c20f3e943f5bec9183659a1951a6003c1" Jan 26 15:40:46 crc kubenswrapper[4896]: E0126 15:40:46.382135 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c02c45ba975f390f9d7acdb2c915632c20f3e943f5bec9183659a1951a6003c1\": container with ID starting with c02c45ba975f390f9d7acdb2c915632c20f3e943f5bec9183659a1951a6003c1 not found: ID does not exist" containerID="c02c45ba975f390f9d7acdb2c915632c20f3e943f5bec9183659a1951a6003c1" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.382266 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c02c45ba975f390f9d7acdb2c915632c20f3e943f5bec9183659a1951a6003c1"} err="failed to get container status \"c02c45ba975f390f9d7acdb2c915632c20f3e943f5bec9183659a1951a6003c1\": rpc error: code = NotFound desc = could not find container \"c02c45ba975f390f9d7acdb2c915632c20f3e943f5bec9183659a1951a6003c1\": container with ID starting with c02c45ba975f390f9d7acdb2c915632c20f3e943f5bec9183659a1951a6003c1 not found: ID does not exist" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.382408 4896 scope.go:117] "RemoveContainer" 
containerID="3dd29645fd7aa764e0113ee194db724e21cc57693adff56990bddf0eb4934997" Jan 26 15:40:46 crc kubenswrapper[4896]: E0126 15:40:46.383205 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dd29645fd7aa764e0113ee194db724e21cc57693adff56990bddf0eb4934997\": container with ID starting with 3dd29645fd7aa764e0113ee194db724e21cc57693adff56990bddf0eb4934997 not found: ID does not exist" containerID="3dd29645fd7aa764e0113ee194db724e21cc57693adff56990bddf0eb4934997" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.383332 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dd29645fd7aa764e0113ee194db724e21cc57693adff56990bddf0eb4934997"} err="failed to get container status \"3dd29645fd7aa764e0113ee194db724e21cc57693adff56990bddf0eb4934997\": rpc error: code = NotFound desc = could not find container \"3dd29645fd7aa764e0113ee194db724e21cc57693adff56990bddf0eb4934997\": container with ID starting with 3dd29645fd7aa764e0113ee194db724e21cc57693adff56990bddf0eb4934997 not found: ID does not exist" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.383440 4896 scope.go:117] "RemoveContainer" containerID="9074cc17afe4c77bb3a37f20d649f96865bf151524eb765a052b53788631d7f6" Jan 26 15:40:46 crc kubenswrapper[4896]: E0126 15:40:46.383855 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9074cc17afe4c77bb3a37f20d649f96865bf151524eb765a052b53788631d7f6\": container with ID starting with 9074cc17afe4c77bb3a37f20d649f96865bf151524eb765a052b53788631d7f6 not found: ID does not exist" containerID="9074cc17afe4c77bb3a37f20d649f96865bf151524eb765a052b53788631d7f6" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.383979 4896 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9074cc17afe4c77bb3a37f20d649f96865bf151524eb765a052b53788631d7f6"} err="failed to get container status \"9074cc17afe4c77bb3a37f20d649f96865bf151524eb765a052b53788631d7f6\": rpc error: code = NotFound desc = could not find container \"9074cc17afe4c77bb3a37f20d649f96865bf151524eb765a052b53788631d7f6\": container with ID starting with 9074cc17afe4c77bb3a37f20d649f96865bf151524eb765a052b53788631d7f6 not found: ID does not exist" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.767494 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bb8ed73-cf27-49c5-98cc-79e5a488f604" path="/var/lib/kubelet/pods/1bb8ed73-cf27-49c5-98cc-79e5a488f604/volumes" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.768369 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40915453-ba7c-41e6-bc4f-3a221097ae62" path="/var/lib/kubelet/pods/40915453-ba7c-41e6-bc4f-3a221097ae62/volumes" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.769034 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b52b5ef6-729b-4468-aa67-9a6f645ff27c" path="/var/lib/kubelet/pods/b52b5ef6-729b-4468-aa67-9a6f645ff27c/volumes" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.770235 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c290f80c-4e19-4618-9ec2-2bc47df395fd" path="/var/lib/kubelet/pods/c290f80c-4e19-4618-9ec2-2bc47df395fd/volumes" Jan 26 15:40:46 crc kubenswrapper[4896]: I0126 15:40:46.770713 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3369383-b89e-4cc5-8267-3f849ff0c294" path="/var/lib/kubelet/pods/d3369383-b89e-4cc5-8267-3f849ff0c294/volumes" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.074137 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gtg7d" 
event={"ID":"22808cdf-7c01-491f-b3f4-d641898edf7b","Type":"ContainerStarted","Data":"a0e88443beaff9516133f3813adee3a13c0c17e33ab52876319e3a356eda9b3e"} Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.074839 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-gtg7d" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.074942 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gtg7d" event={"ID":"22808cdf-7c01-491f-b3f4-d641898edf7b","Type":"ContainerStarted","Data":"fe5dcf8f489de384a587b31198b2505fa58f3ffcea2714e4b331af0a9d6b4ab8"} Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.077768 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-gtg7d" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.116776 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-gtg7d" podStartSLOduration=2.11675574 podStartE2EDuration="2.11675574s" podCreationTimestamp="2026-01-26 15:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:40:47.099702455 +0000 UTC m=+404.881582848" watchObservedRunningTime="2026-01-26 15:40:47.11675574 +0000 UTC m=+404.898636133" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.332108 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rw8kj"] Jan 26 15:40:47 crc kubenswrapper[4896]: E0126 15:40:47.332335 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3369383-b89e-4cc5-8267-3f849ff0c294" containerName="extract-content" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.332349 4896 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d3369383-b89e-4cc5-8267-3f849ff0c294" containerName="extract-content" Jan 26 15:40:47 crc kubenswrapper[4896]: E0126 15:40:47.332357 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40915453-ba7c-41e6-bc4f-3a221097ae62" containerName="registry-server" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.332366 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="40915453-ba7c-41e6-bc4f-3a221097ae62" containerName="registry-server" Jan 26 15:40:47 crc kubenswrapper[4896]: E0126 15:40:47.332381 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b52b5ef6-729b-4468-aa67-9a6f645ff27c" containerName="extract-content" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.332389 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="b52b5ef6-729b-4468-aa67-9a6f645ff27c" containerName="extract-content" Jan 26 15:40:47 crc kubenswrapper[4896]: E0126 15:40:47.332400 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40915453-ba7c-41e6-bc4f-3a221097ae62" containerName="extract-content" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.332407 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="40915453-ba7c-41e6-bc4f-3a221097ae62" containerName="extract-content" Jan 26 15:40:47 crc kubenswrapper[4896]: E0126 15:40:47.332416 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b52b5ef6-729b-4468-aa67-9a6f645ff27c" containerName="extract-utilities" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.332423 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="b52b5ef6-729b-4468-aa67-9a6f645ff27c" containerName="extract-utilities" Jan 26 15:40:47 crc kubenswrapper[4896]: E0126 15:40:47.332435 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bb8ed73-cf27-49c5-98cc-79e5a488f604" containerName="registry-server" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.332442 4896 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1bb8ed73-cf27-49c5-98cc-79e5a488f604" containerName="registry-server" Jan 26 15:40:47 crc kubenswrapper[4896]: E0126 15:40:47.332456 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40915453-ba7c-41e6-bc4f-3a221097ae62" containerName="extract-utilities" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.332463 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="40915453-ba7c-41e6-bc4f-3a221097ae62" containerName="extract-utilities" Jan 26 15:40:47 crc kubenswrapper[4896]: E0126 15:40:47.332473 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3369383-b89e-4cc5-8267-3f849ff0c294" containerName="registry-server" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.332480 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3369383-b89e-4cc5-8267-3f849ff0c294" containerName="registry-server" Jan 26 15:40:47 crc kubenswrapper[4896]: E0126 15:40:47.332490 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b52b5ef6-729b-4468-aa67-9a6f645ff27c" containerName="registry-server" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.332497 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="b52b5ef6-729b-4468-aa67-9a6f645ff27c" containerName="registry-server" Jan 26 15:40:47 crc kubenswrapper[4896]: E0126 15:40:47.332507 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bb8ed73-cf27-49c5-98cc-79e5a488f604" containerName="extract-content" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.332514 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bb8ed73-cf27-49c5-98cc-79e5a488f604" containerName="extract-content" Jan 26 15:40:47 crc kubenswrapper[4896]: E0126 15:40:47.332525 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c290f80c-4e19-4618-9ec2-2bc47df395fd" containerName="marketplace-operator" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.332532 4896 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c290f80c-4e19-4618-9ec2-2bc47df395fd" containerName="marketplace-operator" Jan 26 15:40:47 crc kubenswrapper[4896]: E0126 15:40:47.332542 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bb8ed73-cf27-49c5-98cc-79e5a488f604" containerName="extract-utilities" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.332549 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bb8ed73-cf27-49c5-98cc-79e5a488f604" containerName="extract-utilities" Jan 26 15:40:47 crc kubenswrapper[4896]: E0126 15:40:47.332560 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3369383-b89e-4cc5-8267-3f849ff0c294" containerName="extract-utilities" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.332568 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3369383-b89e-4cc5-8267-3f849ff0c294" containerName="extract-utilities" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.332685 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="40915453-ba7c-41e6-bc4f-3a221097ae62" containerName="registry-server" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.332701 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="b52b5ef6-729b-4468-aa67-9a6f645ff27c" containerName="registry-server" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.332709 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3369383-b89e-4cc5-8267-3f849ff0c294" containerName="registry-server" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.332720 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="c290f80c-4e19-4618-9ec2-2bc47df395fd" containerName="marketplace-operator" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.332729 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bb8ed73-cf27-49c5-98cc-79e5a488f604" containerName="registry-server" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.333678 4896 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rw8kj" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.335856 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.343714 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rw8kj"] Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.350071 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bc33533-266c-4be5-8b8a-314312fbf12c-utilities\") pod \"redhat-marketplace-rw8kj\" (UID: \"4bc33533-266c-4be5-8b8a-314312fbf12c\") " pod="openshift-marketplace/redhat-marketplace-rw8kj" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.350140 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bc33533-266c-4be5-8b8a-314312fbf12c-catalog-content\") pod \"redhat-marketplace-rw8kj\" (UID: \"4bc33533-266c-4be5-8b8a-314312fbf12c\") " pod="openshift-marketplace/redhat-marketplace-rw8kj" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.350177 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z9q2\" (UniqueName: \"kubernetes.io/projected/4bc33533-266c-4be5-8b8a-314312fbf12c-kube-api-access-6z9q2\") pod \"redhat-marketplace-rw8kj\" (UID: \"4bc33533-266c-4be5-8b8a-314312fbf12c\") " pod="openshift-marketplace/redhat-marketplace-rw8kj" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.451055 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z9q2\" (UniqueName: \"kubernetes.io/projected/4bc33533-266c-4be5-8b8a-314312fbf12c-kube-api-access-6z9q2\") pod 
\"redhat-marketplace-rw8kj\" (UID: \"4bc33533-266c-4be5-8b8a-314312fbf12c\") " pod="openshift-marketplace/redhat-marketplace-rw8kj" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.451142 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bc33533-266c-4be5-8b8a-314312fbf12c-utilities\") pod \"redhat-marketplace-rw8kj\" (UID: \"4bc33533-266c-4be5-8b8a-314312fbf12c\") " pod="openshift-marketplace/redhat-marketplace-rw8kj" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.451441 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bc33533-266c-4be5-8b8a-314312fbf12c-catalog-content\") pod \"redhat-marketplace-rw8kj\" (UID: \"4bc33533-266c-4be5-8b8a-314312fbf12c\") " pod="openshift-marketplace/redhat-marketplace-rw8kj" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.451527 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bc33533-266c-4be5-8b8a-314312fbf12c-utilities\") pod \"redhat-marketplace-rw8kj\" (UID: \"4bc33533-266c-4be5-8b8a-314312fbf12c\") " pod="openshift-marketplace/redhat-marketplace-rw8kj" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.451962 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bc33533-266c-4be5-8b8a-314312fbf12c-catalog-content\") pod \"redhat-marketplace-rw8kj\" (UID: \"4bc33533-266c-4be5-8b8a-314312fbf12c\") " pod="openshift-marketplace/redhat-marketplace-rw8kj" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.470058 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z9q2\" (UniqueName: \"kubernetes.io/projected/4bc33533-266c-4be5-8b8a-314312fbf12c-kube-api-access-6z9q2\") pod \"redhat-marketplace-rw8kj\" (UID: 
\"4bc33533-266c-4be5-8b8a-314312fbf12c\") " pod="openshift-marketplace/redhat-marketplace-rw8kj" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.522929 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6hnsk"] Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.523857 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6hnsk" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.526438 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.534630 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6hnsk"] Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.553008 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6-catalog-content\") pod \"certified-operators-6hnsk\" (UID: \"1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6\") " pod="openshift-marketplace/certified-operators-6hnsk" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.553077 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6-utilities\") pod \"certified-operators-6hnsk\" (UID: \"1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6\") " pod="openshift-marketplace/certified-operators-6hnsk" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.553192 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mscqv\" (UniqueName: \"kubernetes.io/projected/1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6-kube-api-access-mscqv\") pod \"certified-operators-6hnsk\" (UID: 
\"1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6\") " pod="openshift-marketplace/certified-operators-6hnsk" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.654046 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rw8kj" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.654251 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6-catalog-content\") pod \"certified-operators-6hnsk\" (UID: \"1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6\") " pod="openshift-marketplace/certified-operators-6hnsk" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.654315 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6-utilities\") pod \"certified-operators-6hnsk\" (UID: \"1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6\") " pod="openshift-marketplace/certified-operators-6hnsk" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.654357 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mscqv\" (UniqueName: \"kubernetes.io/projected/1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6-kube-api-access-mscqv\") pod \"certified-operators-6hnsk\" (UID: \"1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6\") " pod="openshift-marketplace/certified-operators-6hnsk" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.654702 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6-catalog-content\") pod \"certified-operators-6hnsk\" (UID: \"1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6\") " pod="openshift-marketplace/certified-operators-6hnsk" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.655058 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6-utilities\") pod \"certified-operators-6hnsk\" (UID: \"1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6\") " pod="openshift-marketplace/certified-operators-6hnsk" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.674692 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mscqv\" (UniqueName: \"kubernetes.io/projected/1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6-kube-api-access-mscqv\") pod \"certified-operators-6hnsk\" (UID: \"1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6\") " pod="openshift-marketplace/certified-operators-6hnsk" Jan 26 15:40:47 crc kubenswrapper[4896]: I0126 15:40:47.843325 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6hnsk" Jan 26 15:40:48 crc kubenswrapper[4896]: I0126 15:40:48.054848 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rw8kj"] Jan 26 15:40:48 crc kubenswrapper[4896]: W0126 15:40:48.060303 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bc33533_266c_4be5_8b8a_314312fbf12c.slice/crio-91386ead0dc0ba565e3c2f0abdaa9c46133b86502d187f8fc8d33eab3d643ac7 WatchSource:0}: Error finding container 91386ead0dc0ba565e3c2f0abdaa9c46133b86502d187f8fc8d33eab3d643ac7: Status 404 returned error can't find the container with id 91386ead0dc0ba565e3c2f0abdaa9c46133b86502d187f8fc8d33eab3d643ac7 Jan 26 15:40:48 crc kubenswrapper[4896]: W0126 15:40:48.061337 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c0925a6_a58f_4bf2_9d14_dbdc04e4d6a6.slice/crio-d9f339b15cce5ae474e12ff7b1877b44c2baa0a973a838da8e70465534ac28d3 WatchSource:0}: Error finding container d9f339b15cce5ae474e12ff7b1877b44c2baa0a973a838da8e70465534ac28d3: Status 404 returned error can't 
find the container with id d9f339b15cce5ae474e12ff7b1877b44c2baa0a973a838da8e70465534ac28d3 Jan 26 15:40:48 crc kubenswrapper[4896]: I0126 15:40:48.063265 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6hnsk"] Jan 26 15:40:48 crc kubenswrapper[4896]: I0126 15:40:48.089732 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rw8kj" event={"ID":"4bc33533-266c-4be5-8b8a-314312fbf12c","Type":"ContainerStarted","Data":"91386ead0dc0ba565e3c2f0abdaa9c46133b86502d187f8fc8d33eab3d643ac7"} Jan 26 15:40:48 crc kubenswrapper[4896]: I0126 15:40:48.091348 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hnsk" event={"ID":"1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6","Type":"ContainerStarted","Data":"d9f339b15cce5ae474e12ff7b1877b44c2baa0a973a838da8e70465534ac28d3"} Jan 26 15:40:48 crc kubenswrapper[4896]: I0126 15:40:48.813790 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:40:48 crc kubenswrapper[4896]: I0126 15:40:48.814148 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:40:49 crc kubenswrapper[4896]: I0126 15:40:49.097601 4896 generic.go:334] "Generic (PLEG): container finished" podID="1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6" containerID="bda57c4830acfac1d49dac7c148ddc8094aed9118e79c85b987105f6559bd201" exitCode=0 Jan 26 15:40:49 crc kubenswrapper[4896]: I0126 15:40:49.097659 4896 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hnsk" event={"ID":"1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6","Type":"ContainerDied","Data":"bda57c4830acfac1d49dac7c148ddc8094aed9118e79c85b987105f6559bd201"} Jan 26 15:40:49 crc kubenswrapper[4896]: I0126 15:40:49.100610 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rw8kj" event={"ID":"4bc33533-266c-4be5-8b8a-314312fbf12c","Type":"ContainerDied","Data":"768419189de2308019a1b1607d2b73d077f0b07b946f247b72c5166b101e2fc4"} Jan 26 15:40:49 crc kubenswrapper[4896]: I0126 15:40:49.100601 4896 generic.go:334] "Generic (PLEG): container finished" podID="4bc33533-266c-4be5-8b8a-314312fbf12c" containerID="768419189de2308019a1b1607d2b73d077f0b07b946f247b72c5166b101e2fc4" exitCode=0 Jan 26 15:40:49 crc kubenswrapper[4896]: I0126 15:40:49.725442 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ddlvl"] Jan 26 15:40:49 crc kubenswrapper[4896]: I0126 15:40:49.727246 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ddlvl" Jan 26 15:40:49 crc kubenswrapper[4896]: I0126 15:40:49.731239 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 15:40:49 crc kubenswrapper[4896]: I0126 15:40:49.734595 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ddlvl"] Jan 26 15:40:49 crc kubenswrapper[4896]: I0126 15:40:49.882700 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3890206d-cbb0-4910-ab30-f4f9c66d28f8-utilities\") pod \"community-operators-ddlvl\" (UID: \"3890206d-cbb0-4910-ab30-f4f9c66d28f8\") " pod="openshift-marketplace/community-operators-ddlvl" Jan 26 15:40:49 crc kubenswrapper[4896]: I0126 15:40:49.882759 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzgct\" (UniqueName: \"kubernetes.io/projected/3890206d-cbb0-4910-ab30-f4f9c66d28f8-kube-api-access-dzgct\") pod \"community-operators-ddlvl\" (UID: \"3890206d-cbb0-4910-ab30-f4f9c66d28f8\") " pod="openshift-marketplace/community-operators-ddlvl" Jan 26 15:40:49 crc kubenswrapper[4896]: I0126 15:40:49.882819 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3890206d-cbb0-4910-ab30-f4f9c66d28f8-catalog-content\") pod \"community-operators-ddlvl\" (UID: \"3890206d-cbb0-4910-ab30-f4f9c66d28f8\") " pod="openshift-marketplace/community-operators-ddlvl" Jan 26 15:40:49 crc kubenswrapper[4896]: I0126 15:40:49.922426 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hltsz"] Jan 26 15:40:49 crc kubenswrapper[4896]: I0126 15:40:49.924915 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hltsz" Jan 26 15:40:49 crc kubenswrapper[4896]: I0126 15:40:49.927726 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 15:40:49 crc kubenswrapper[4896]: I0126 15:40:49.938382 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hltsz"] Jan 26 15:40:49 crc kubenswrapper[4896]: I0126 15:40:49.984669 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3890206d-cbb0-4910-ab30-f4f9c66d28f8-utilities\") pod \"community-operators-ddlvl\" (UID: \"3890206d-cbb0-4910-ab30-f4f9c66d28f8\") " pod="openshift-marketplace/community-operators-ddlvl" Jan 26 15:40:49 crc kubenswrapper[4896]: I0126 15:40:49.984713 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzgct\" (UniqueName: \"kubernetes.io/projected/3890206d-cbb0-4910-ab30-f4f9c66d28f8-kube-api-access-dzgct\") pod \"community-operators-ddlvl\" (UID: \"3890206d-cbb0-4910-ab30-f4f9c66d28f8\") " pod="openshift-marketplace/community-operators-ddlvl" Jan 26 15:40:49 crc kubenswrapper[4896]: I0126 15:40:49.984763 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3890206d-cbb0-4910-ab30-f4f9c66d28f8-catalog-content\") pod \"community-operators-ddlvl\" (UID: \"3890206d-cbb0-4910-ab30-f4f9c66d28f8\") " pod="openshift-marketplace/community-operators-ddlvl" Jan 26 15:40:49 crc kubenswrapper[4896]: I0126 15:40:49.985112 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3890206d-cbb0-4910-ab30-f4f9c66d28f8-utilities\") pod \"community-operators-ddlvl\" (UID: \"3890206d-cbb0-4910-ab30-f4f9c66d28f8\") " 
pod="openshift-marketplace/community-operators-ddlvl" Jan 26 15:40:49 crc kubenswrapper[4896]: I0126 15:40:49.985161 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3890206d-cbb0-4910-ab30-f4f9c66d28f8-catalog-content\") pod \"community-operators-ddlvl\" (UID: \"3890206d-cbb0-4910-ab30-f4f9c66d28f8\") " pod="openshift-marketplace/community-operators-ddlvl" Jan 26 15:40:50 crc kubenswrapper[4896]: I0126 15:40:50.012271 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzgct\" (UniqueName: \"kubernetes.io/projected/3890206d-cbb0-4910-ab30-f4f9c66d28f8-kube-api-access-dzgct\") pod \"community-operators-ddlvl\" (UID: \"3890206d-cbb0-4910-ab30-f4f9c66d28f8\") " pod="openshift-marketplace/community-operators-ddlvl" Jan 26 15:40:50 crc kubenswrapper[4896]: I0126 15:40:50.055279 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ddlvl" Jan 26 15:40:50 crc kubenswrapper[4896]: I0126 15:40:50.086456 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f38ce830-9bcb-49de-b024-23cb889289c0-utilities\") pod \"redhat-operators-hltsz\" (UID: \"f38ce830-9bcb-49de-b024-23cb889289c0\") " pod="openshift-marketplace/redhat-operators-hltsz" Jan 26 15:40:50 crc kubenswrapper[4896]: I0126 15:40:50.086517 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f38ce830-9bcb-49de-b024-23cb889289c0-catalog-content\") pod \"redhat-operators-hltsz\" (UID: \"f38ce830-9bcb-49de-b024-23cb889289c0\") " pod="openshift-marketplace/redhat-operators-hltsz" Jan 26 15:40:50 crc kubenswrapper[4896]: I0126 15:40:50.086566 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-jn9mc\" (UniqueName: \"kubernetes.io/projected/f38ce830-9bcb-49de-b024-23cb889289c0-kube-api-access-jn9mc\") pod \"redhat-operators-hltsz\" (UID: \"f38ce830-9bcb-49de-b024-23cb889289c0\") " pod="openshift-marketplace/redhat-operators-hltsz" Jan 26 15:40:50 crc kubenswrapper[4896]: I0126 15:40:50.109212 4896 generic.go:334] "Generic (PLEG): container finished" podID="4bc33533-266c-4be5-8b8a-314312fbf12c" containerID="ea78ec9bb4d7c0292940e927abf49f566f1d5fe3829e07e712b6c7af9d6da4aa" exitCode=0 Jan 26 15:40:50 crc kubenswrapper[4896]: I0126 15:40:50.109827 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rw8kj" event={"ID":"4bc33533-266c-4be5-8b8a-314312fbf12c","Type":"ContainerDied","Data":"ea78ec9bb4d7c0292940e927abf49f566f1d5fe3829e07e712b6c7af9d6da4aa"} Jan 26 15:40:50 crc kubenswrapper[4896]: I0126 15:40:50.116245 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hnsk" event={"ID":"1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6","Type":"ContainerStarted","Data":"fef2e9e0f1b515f63b0944b839640aa089fbfc7585945412e60f6e9ea96ee846"} Jan 26 15:40:50 crc kubenswrapper[4896]: I0126 15:40:50.191279 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f38ce830-9bcb-49de-b024-23cb889289c0-utilities\") pod \"redhat-operators-hltsz\" (UID: \"f38ce830-9bcb-49de-b024-23cb889289c0\") " pod="openshift-marketplace/redhat-operators-hltsz" Jan 26 15:40:50 crc kubenswrapper[4896]: I0126 15:40:50.191668 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f38ce830-9bcb-49de-b024-23cb889289c0-catalog-content\") pod \"redhat-operators-hltsz\" (UID: \"f38ce830-9bcb-49de-b024-23cb889289c0\") " pod="openshift-marketplace/redhat-operators-hltsz" Jan 26 15:40:50 crc kubenswrapper[4896]: I0126 
15:40:50.191717 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn9mc\" (UniqueName: \"kubernetes.io/projected/f38ce830-9bcb-49de-b024-23cb889289c0-kube-api-access-jn9mc\") pod \"redhat-operators-hltsz\" (UID: \"f38ce830-9bcb-49de-b024-23cb889289c0\") " pod="openshift-marketplace/redhat-operators-hltsz" Jan 26 15:40:50 crc kubenswrapper[4896]: I0126 15:40:50.192557 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f38ce830-9bcb-49de-b024-23cb889289c0-catalog-content\") pod \"redhat-operators-hltsz\" (UID: \"f38ce830-9bcb-49de-b024-23cb889289c0\") " pod="openshift-marketplace/redhat-operators-hltsz" Jan 26 15:40:50 crc kubenswrapper[4896]: I0126 15:40:50.193337 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f38ce830-9bcb-49de-b024-23cb889289c0-utilities\") pod \"redhat-operators-hltsz\" (UID: \"f38ce830-9bcb-49de-b024-23cb889289c0\") " pod="openshift-marketplace/redhat-operators-hltsz" Jan 26 15:40:50 crc kubenswrapper[4896]: I0126 15:40:50.209540 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn9mc\" (UniqueName: \"kubernetes.io/projected/f38ce830-9bcb-49de-b024-23cb889289c0-kube-api-access-jn9mc\") pod \"redhat-operators-hltsz\" (UID: \"f38ce830-9bcb-49de-b024-23cb889289c0\") " pod="openshift-marketplace/redhat-operators-hltsz" Jan 26 15:40:50 crc kubenswrapper[4896]: I0126 15:40:50.245570 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ddlvl"] Jan 26 15:40:50 crc kubenswrapper[4896]: W0126 15:40:50.250877 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3890206d_cbb0_4910_ab30_f4f9c66d28f8.slice/crio-03c214936334e16012a0085fc64cb7498d1643b8dee9ffa7d68846b46d39642f 
WatchSource:0}: Error finding container 03c214936334e16012a0085fc64cb7498d1643b8dee9ffa7d68846b46d39642f: Status 404 returned error can't find the container with id 03c214936334e16012a0085fc64cb7498d1643b8dee9ffa7d68846b46d39642f Jan 26 15:40:50 crc kubenswrapper[4896]: I0126 15:40:50.253012 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hltsz" Jan 26 15:40:50 crc kubenswrapper[4896]: I0126 15:40:50.649127 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hltsz"] Jan 26 15:40:50 crc kubenswrapper[4896]: W0126 15:40:50.660695 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf38ce830_9bcb_49de_b024_23cb889289c0.slice/crio-50c795420e4bdbd7271fd103a1cf7835fe9f1d0e9f787108be5c4a6ea0af493f WatchSource:0}: Error finding container 50c795420e4bdbd7271fd103a1cf7835fe9f1d0e9f787108be5c4a6ea0af493f: Status 404 returned error can't find the container with id 50c795420e4bdbd7271fd103a1cf7835fe9f1d0e9f787108be5c4a6ea0af493f Jan 26 15:40:51 crc kubenswrapper[4896]: I0126 15:40:51.123321 4896 generic.go:334] "Generic (PLEG): container finished" podID="3890206d-cbb0-4910-ab30-f4f9c66d28f8" containerID="5d17afe6b9acd5a4108c7796bbd7010300974427efb7f72a1247b88c63362de5" exitCode=0 Jan 26 15:40:51 crc kubenswrapper[4896]: I0126 15:40:51.123486 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ddlvl" event={"ID":"3890206d-cbb0-4910-ab30-f4f9c66d28f8","Type":"ContainerDied","Data":"5d17afe6b9acd5a4108c7796bbd7010300974427efb7f72a1247b88c63362de5"} Jan 26 15:40:51 crc kubenswrapper[4896]: I0126 15:40:51.124132 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ddlvl" 
event={"ID":"3890206d-cbb0-4910-ab30-f4f9c66d28f8","Type":"ContainerStarted","Data":"03c214936334e16012a0085fc64cb7498d1643b8dee9ffa7d68846b46d39642f"} Jan 26 15:40:51 crc kubenswrapper[4896]: I0126 15:40:51.129925 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rw8kj" event={"ID":"4bc33533-266c-4be5-8b8a-314312fbf12c","Type":"ContainerStarted","Data":"82625c5f4fc9915d63fa8fcca443d45d6e022e1e22c1aeaf60a3fd09f53a43c2"} Jan 26 15:40:51 crc kubenswrapper[4896]: I0126 15:40:51.132272 4896 generic.go:334] "Generic (PLEG): container finished" podID="f38ce830-9bcb-49de-b024-23cb889289c0" containerID="62f713343731759eda87bb923cde88c6f4ec3f947e4ee47380b5d7f5653f11a9" exitCode=0 Jan 26 15:40:51 crc kubenswrapper[4896]: I0126 15:40:51.132308 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hltsz" event={"ID":"f38ce830-9bcb-49de-b024-23cb889289c0","Type":"ContainerDied","Data":"62f713343731759eda87bb923cde88c6f4ec3f947e4ee47380b5d7f5653f11a9"} Jan 26 15:40:51 crc kubenswrapper[4896]: I0126 15:40:51.132347 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hltsz" event={"ID":"f38ce830-9bcb-49de-b024-23cb889289c0","Type":"ContainerStarted","Data":"50c795420e4bdbd7271fd103a1cf7835fe9f1d0e9f787108be5c4a6ea0af493f"} Jan 26 15:40:51 crc kubenswrapper[4896]: I0126 15:40:51.134462 4896 generic.go:334] "Generic (PLEG): container finished" podID="1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6" containerID="fef2e9e0f1b515f63b0944b839640aa089fbfc7585945412e60f6e9ea96ee846" exitCode=0 Jan 26 15:40:51 crc kubenswrapper[4896]: I0126 15:40:51.134489 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hnsk" event={"ID":"1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6","Type":"ContainerDied","Data":"fef2e9e0f1b515f63b0944b839640aa089fbfc7585945412e60f6e9ea96ee846"} Jan 26 15:40:51 crc kubenswrapper[4896]: I0126 
15:40:51.195235 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rw8kj" podStartSLOduration=2.749257019 podStartE2EDuration="4.195213496s" podCreationTimestamp="2026-01-26 15:40:47 +0000 UTC" firstStartedPulling="2026-01-26 15:40:49.101817309 +0000 UTC m=+406.883697712" lastFinishedPulling="2026-01-26 15:40:50.547773786 +0000 UTC m=+408.329654189" observedRunningTime="2026-01-26 15:40:51.19454665 +0000 UTC m=+408.976427043" watchObservedRunningTime="2026-01-26 15:40:51.195213496 +0000 UTC m=+408.977093889" Jan 26 15:40:51 crc kubenswrapper[4896]: I0126 15:40:51.542789 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" podUID="8428c0c6-79c5-46d3-a6eb-5126303dfd60" containerName="registry" containerID="cri-o://78dd6196c216118c2d835e90dc09f5066f01123f2304f15729b0dacb86adc0c9" gracePeriod=30 Jan 26 15:40:51 crc kubenswrapper[4896]: I0126 15:40:51.935489 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" Jan 26 15:40:52 crc kubenswrapper[4896]: I0126 15:40:52.117899 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8428c0c6-79c5-46d3-a6eb-5126303dfd60-installation-pull-secrets\") pod \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " Jan 26 15:40:52 crc kubenswrapper[4896]: I0126 15:40:52.118054 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " Jan 26 15:40:52 crc kubenswrapper[4896]: I0126 15:40:52.118087 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8428c0c6-79c5-46d3-a6eb-5126303dfd60-registry-certificates\") pod \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " Jan 26 15:40:52 crc kubenswrapper[4896]: I0126 15:40:52.118132 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfnvg\" (UniqueName: \"kubernetes.io/projected/8428c0c6-79c5-46d3-a6eb-5126303dfd60-kube-api-access-pfnvg\") pod \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " Jan 26 15:40:52 crc kubenswrapper[4896]: I0126 15:40:52.118153 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8428c0c6-79c5-46d3-a6eb-5126303dfd60-ca-trust-extracted\") pod \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " Jan 26 15:40:52 crc kubenswrapper[4896]: I0126 15:40:52.118177 4896 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8428c0c6-79c5-46d3-a6eb-5126303dfd60-bound-sa-token\") pod \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " Jan 26 15:40:52 crc kubenswrapper[4896]: I0126 15:40:52.118213 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8428c0c6-79c5-46d3-a6eb-5126303dfd60-registry-tls\") pod \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " Jan 26 15:40:52 crc kubenswrapper[4896]: I0126 15:40:52.118283 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8428c0c6-79c5-46d3-a6eb-5126303dfd60-trusted-ca\") pod \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\" (UID: \"8428c0c6-79c5-46d3-a6eb-5126303dfd60\") " Jan 26 15:40:52 crc kubenswrapper[4896]: I0126 15:40:52.119146 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8428c0c6-79c5-46d3-a6eb-5126303dfd60-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8428c0c6-79c5-46d3-a6eb-5126303dfd60" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:40:52 crc kubenswrapper[4896]: I0126 15:40:52.119941 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8428c0c6-79c5-46d3-a6eb-5126303dfd60-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8428c0c6-79c5-46d3-a6eb-5126303dfd60" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:40:52 crc kubenswrapper[4896]: I0126 15:40:52.125402 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8428c0c6-79c5-46d3-a6eb-5126303dfd60-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8428c0c6-79c5-46d3-a6eb-5126303dfd60" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:40:52 crc kubenswrapper[4896]: I0126 15:40:52.129274 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8428c0c6-79c5-46d3-a6eb-5126303dfd60-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8428c0c6-79c5-46d3-a6eb-5126303dfd60" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:40:52 crc kubenswrapper[4896]: I0126 15:40:52.129433 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8428c0c6-79c5-46d3-a6eb-5126303dfd60-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8428c0c6-79c5-46d3-a6eb-5126303dfd60" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:40:52 crc kubenswrapper[4896]: I0126 15:40:52.129722 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8428c0c6-79c5-46d3-a6eb-5126303dfd60-kube-api-access-pfnvg" (OuterVolumeSpecName: "kube-api-access-pfnvg") pod "8428c0c6-79c5-46d3-a6eb-5126303dfd60" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60"). InnerVolumeSpecName "kube-api-access-pfnvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:40:52 crc kubenswrapper[4896]: I0126 15:40:52.132842 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "8428c0c6-79c5-46d3-a6eb-5126303dfd60" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 15:40:52 crc kubenswrapper[4896]: I0126 15:40:52.140612 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ddlvl" event={"ID":"3890206d-cbb0-4910-ab30-f4f9c66d28f8","Type":"ContainerStarted","Data":"21c9f5047ab488eecfdf82242fe174f1ea207926b32204d9083adacea306c65e"} Jan 26 15:40:52 crc kubenswrapper[4896]: I0126 15:40:52.142239 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8428c0c6-79c5-46d3-a6eb-5126303dfd60-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8428c0c6-79c5-46d3-a6eb-5126303dfd60" (UID: "8428c0c6-79c5-46d3-a6eb-5126303dfd60"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:40:52 crc kubenswrapper[4896]: I0126 15:40:52.142494 4896 generic.go:334] "Generic (PLEG): container finished" podID="8428c0c6-79c5-46d3-a6eb-5126303dfd60" containerID="78dd6196c216118c2d835e90dc09f5066f01123f2304f15729b0dacb86adc0c9" exitCode=0 Jan 26 15:40:52 crc kubenswrapper[4896]: I0126 15:40:52.142551 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6"
Jan 26 15:40:52 crc kubenswrapper[4896]: I0126 15:40:52.142557 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" event={"ID":"8428c0c6-79c5-46d3-a6eb-5126303dfd60","Type":"ContainerDied","Data":"78dd6196c216118c2d835e90dc09f5066f01123f2304f15729b0dacb86adc0c9"}
Jan 26 15:40:52 crc kubenswrapper[4896]: I0126 15:40:52.142598 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-n9sc6" event={"ID":"8428c0c6-79c5-46d3-a6eb-5126303dfd60","Type":"ContainerDied","Data":"c2dd15f36168c74e6671401efb0d6ad97c59939ea5a433b4cc8d5c10e0984f5e"}
Jan 26 15:40:52 crc kubenswrapper[4896]: I0126 15:40:52.142614 4896 scope.go:117] "RemoveContainer" containerID="78dd6196c216118c2d835e90dc09f5066f01123f2304f15729b0dacb86adc0c9"
Jan 26 15:40:52 crc kubenswrapper[4896]: I0126 15:40:52.146023 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hltsz" event={"ID":"f38ce830-9bcb-49de-b024-23cb889289c0","Type":"ContainerStarted","Data":"e07356d90a05fef919f9a28e40d01bfc793604dae55ace3b43f954327381ba76"}
Jan 26 15:40:52 crc kubenswrapper[4896]: I0126 15:40:52.158469 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hnsk" event={"ID":"1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6","Type":"ContainerStarted","Data":"2d90e19bed6c56e5529799d34677220fe45100aebf7dd2c5b120ef262e68c85c"}
Jan 26 15:40:52 crc kubenswrapper[4896]: I0126 15:40:52.164783 4896 scope.go:117] "RemoveContainer" containerID="78dd6196c216118c2d835e90dc09f5066f01123f2304f15729b0dacb86adc0c9"
Jan 26 15:40:52 crc kubenswrapper[4896]: E0126 15:40:52.166844 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78dd6196c216118c2d835e90dc09f5066f01123f2304f15729b0dacb86adc0c9\": container with ID starting with 78dd6196c216118c2d835e90dc09f5066f01123f2304f15729b0dacb86adc0c9 not found: ID does not exist" containerID="78dd6196c216118c2d835e90dc09f5066f01123f2304f15729b0dacb86adc0c9"
Jan 26 15:40:52 crc kubenswrapper[4896]: I0126 15:40:52.166887 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78dd6196c216118c2d835e90dc09f5066f01123f2304f15729b0dacb86adc0c9"} err="failed to get container status \"78dd6196c216118c2d835e90dc09f5066f01123f2304f15729b0dacb86adc0c9\": rpc error: code = NotFound desc = could not find container \"78dd6196c216118c2d835e90dc09f5066f01123f2304f15729b0dacb86adc0c9\": container with ID starting with 78dd6196c216118c2d835e90dc09f5066f01123f2304f15729b0dacb86adc0c9 not found: ID does not exist"
Jan 26 15:40:52 crc kubenswrapper[4896]: I0126 15:40:52.211337 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6hnsk" podStartSLOduration=2.757410639 podStartE2EDuration="5.2113217s" podCreationTimestamp="2026-01-26 15:40:47 +0000 UTC" firstStartedPulling="2026-01-26 15:40:49.100296096 +0000 UTC m=+406.882176489" lastFinishedPulling="2026-01-26 15:40:51.554207157 +0000 UTC m=+409.336087550" observedRunningTime="2026-01-26 15:40:52.210130614 +0000 UTC m=+409.992010997" watchObservedRunningTime="2026-01-26 15:40:52.2113217 +0000 UTC m=+409.993202083"
Jan 26 15:40:52 crc kubenswrapper[4896]: I0126 15:40:52.219439 4896 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8428c0c6-79c5-46d3-a6eb-5126303dfd60-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 26 15:40:52 crc kubenswrapper[4896]: I0126 15:40:52.219700 4896 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8428c0c6-79c5-46d3-a6eb-5126303dfd60-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 26 15:40:52 crc kubenswrapper[4896]: I0126 15:40:52.219807 4896 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8428c0c6-79c5-46d3-a6eb-5126303dfd60-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 26 15:40:52 crc kubenswrapper[4896]: I0126 15:40:52.219900 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfnvg\" (UniqueName: \"kubernetes.io/projected/8428c0c6-79c5-46d3-a6eb-5126303dfd60-kube-api-access-pfnvg\") on node \"crc\" DevicePath \"\""
Jan 26 15:40:52 crc kubenswrapper[4896]: I0126 15:40:52.220000 4896 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8428c0c6-79c5-46d3-a6eb-5126303dfd60-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 26 15:40:52 crc kubenswrapper[4896]: I0126 15:40:52.220431 4896 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8428c0c6-79c5-46d3-a6eb-5126303dfd60-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 26 15:40:52 crc kubenswrapper[4896]: I0126 15:40:52.220465 4896 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8428c0c6-79c5-46d3-a6eb-5126303dfd60-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 26 15:40:52 crc kubenswrapper[4896]: I0126 15:40:52.227135 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n9sc6"]
Jan 26 15:40:52 crc kubenswrapper[4896]: I0126 15:40:52.231988 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n9sc6"]
Jan 26 15:40:52 crc kubenswrapper[4896]: I0126 15:40:52.766701 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8428c0c6-79c5-46d3-a6eb-5126303dfd60" path="/var/lib/kubelet/pods/8428c0c6-79c5-46d3-a6eb-5126303dfd60/volumes"
Jan 26 15:40:53 crc kubenswrapper[4896]: I0126 15:40:53.165386 4896 generic.go:334] "Generic (PLEG): container finished" podID="3890206d-cbb0-4910-ab30-f4f9c66d28f8" containerID="21c9f5047ab488eecfdf82242fe174f1ea207926b32204d9083adacea306c65e" exitCode=0
Jan 26 15:40:53 crc kubenswrapper[4896]: I0126 15:40:53.165490 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ddlvl" event={"ID":"3890206d-cbb0-4910-ab30-f4f9c66d28f8","Type":"ContainerDied","Data":"21c9f5047ab488eecfdf82242fe174f1ea207926b32204d9083adacea306c65e"}
Jan 26 15:40:53 crc kubenswrapper[4896]: I0126 15:40:53.177867 4896 generic.go:334] "Generic (PLEG): container finished" podID="f38ce830-9bcb-49de-b024-23cb889289c0" containerID="e07356d90a05fef919f9a28e40d01bfc793604dae55ace3b43f954327381ba76" exitCode=0
Jan 26 15:40:53 crc kubenswrapper[4896]: I0126 15:40:53.177956 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hltsz" event={"ID":"f38ce830-9bcb-49de-b024-23cb889289c0","Type":"ContainerDied","Data":"e07356d90a05fef919f9a28e40d01bfc793604dae55ace3b43f954327381ba76"}
Jan 26 15:40:54 crc kubenswrapper[4896]: I0126 15:40:54.186573 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hltsz" event={"ID":"f38ce830-9bcb-49de-b024-23cb889289c0","Type":"ContainerStarted","Data":"073e17a32c068b9e4a26ec6e42face5bf09e40c22c0ccd6969b34e37f737cf2f"}
Jan 26 15:40:54 crc kubenswrapper[4896]: I0126 15:40:54.189316 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ddlvl" event={"ID":"3890206d-cbb0-4910-ab30-f4f9c66d28f8","Type":"ContainerStarted","Data":"00b4b6b12dc6439515634c401e88b0e6de55dd7698af2da2fb2c31a8340574a9"}
Jan 26 15:40:54 crc kubenswrapper[4896]: I0126 15:40:54.204396 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hltsz" podStartSLOduration=2.658623058 podStartE2EDuration="5.204373944s" podCreationTimestamp="2026-01-26 15:40:49 +0000 UTC" firstStartedPulling="2026-01-26 15:40:51.133953695 +0000 UTC m=+408.915834088" lastFinishedPulling="2026-01-26 15:40:53.679704581 +0000 UTC m=+411.461584974" observedRunningTime="2026-01-26 15:40:54.201270856 +0000 UTC m=+411.983151259" watchObservedRunningTime="2026-01-26 15:40:54.204373944 +0000 UTC m=+411.986254337"
Jan 26 15:40:54 crc kubenswrapper[4896]: I0126 15:40:54.217891 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ddlvl" podStartSLOduration=2.511322002 podStartE2EDuration="5.217873152s" podCreationTimestamp="2026-01-26 15:40:49 +0000 UTC" firstStartedPulling="2026-01-26 15:40:51.125687123 +0000 UTC m=+408.907567516" lastFinishedPulling="2026-01-26 15:40:53.832238273 +0000 UTC m=+411.614118666" observedRunningTime="2026-01-26 15:40:54.215664393 +0000 UTC m=+411.997544776" watchObservedRunningTime="2026-01-26 15:40:54.217873152 +0000 UTC m=+411.999753545"
Jan 26 15:40:57 crc kubenswrapper[4896]: I0126 15:40:57.655058 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rw8kj"
Jan 26 15:40:57 crc kubenswrapper[4896]: I0126 15:40:57.656387 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rw8kj"
Jan 26 15:40:57 crc kubenswrapper[4896]: I0126 15:40:57.693697 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rw8kj"
Jan 26 15:40:57 crc kubenswrapper[4896]: I0126 15:40:57.844220 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6hnsk"
Jan 26 15:40:57 crc kubenswrapper[4896]: I0126 15:40:57.844272 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6hnsk"
Jan 26 15:40:57 crc kubenswrapper[4896]: I0126 15:40:57.886525 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6hnsk"
Jan 26 15:40:58 crc kubenswrapper[4896]: I0126 15:40:58.245470 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6hnsk"
Jan 26 15:40:58 crc kubenswrapper[4896]: I0126 15:40:58.245906 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rw8kj"
Jan 26 15:41:00 crc kubenswrapper[4896]: I0126 15:41:00.056673 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ddlvl"
Jan 26 15:41:00 crc kubenswrapper[4896]: I0126 15:41:00.057299 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ddlvl"
Jan 26 15:41:00 crc kubenswrapper[4896]: I0126 15:41:00.107167 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ddlvl"
Jan 26 15:41:00 crc kubenswrapper[4896]: I0126 15:41:00.251994 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ddlvl"
Jan 26 15:41:00 crc kubenswrapper[4896]: I0126 15:41:00.254224 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hltsz"
Jan 26 15:41:00 crc kubenswrapper[4896]: I0126 15:41:00.254836 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hltsz"
Jan 26 15:41:00 crc kubenswrapper[4896]: I0126 15:41:00.313691 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hltsz"
Jan 26 15:41:01 crc kubenswrapper[4896]: I0126 15:41:01.261628 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hltsz"
Jan 26 15:41:17 crc kubenswrapper[4896]: I0126 15:41:17.812500 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-9cx8s"]
Jan 26 15:41:17 crc kubenswrapper[4896]: E0126 15:41:17.813249 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8428c0c6-79c5-46d3-a6eb-5126303dfd60" containerName="registry"
Jan 26 15:41:17 crc kubenswrapper[4896]: I0126 15:41:17.813264 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="8428c0c6-79c5-46d3-a6eb-5126303dfd60" containerName="registry"
Jan 26 15:41:17 crc kubenswrapper[4896]: I0126 15:41:17.813396 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="8428c0c6-79c5-46d3-a6eb-5126303dfd60" containerName="registry"
Jan 26 15:41:17 crc kubenswrapper[4896]: I0126 15:41:17.813868 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9cx8s"
Jan 26 15:41:17 crc kubenswrapper[4896]: I0126 15:41:17.816393 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Jan 26 15:41:17 crc kubenswrapper[4896]: I0126 15:41:17.816700 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Jan 26 15:41:17 crc kubenswrapper[4896]: I0126 15:41:17.816871 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l"
Jan 26 15:41:17 crc kubenswrapper[4896]: I0126 15:41:17.817155 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Jan 26 15:41:17 crc kubenswrapper[4896]: I0126 15:41:17.817172 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Jan 26 15:41:17 crc kubenswrapper[4896]: I0126 15:41:17.826281 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-9cx8s"]
Jan 26 15:41:17 crc kubenswrapper[4896]: I0126 15:41:17.973374 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e758b2d0-7934-4dc6-9076-976acd333b41-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-9cx8s\" (UID: \"e758b2d0-7934-4dc6-9076-976acd333b41\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9cx8s"
Jan 26 15:41:17 crc kubenswrapper[4896]: I0126 15:41:17.973436 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw24w\" (UniqueName: \"kubernetes.io/projected/e758b2d0-7934-4dc6-9076-976acd333b41-kube-api-access-pw24w\") pod \"cluster-monitoring-operator-6d5b84845-9cx8s\" (UID: \"e758b2d0-7934-4dc6-9076-976acd333b41\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9cx8s"
Jan 26 15:41:17 crc kubenswrapper[4896]: I0126 15:41:17.973514 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e758b2d0-7934-4dc6-9076-976acd333b41-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-9cx8s\" (UID: \"e758b2d0-7934-4dc6-9076-976acd333b41\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9cx8s"
Jan 26 15:41:18 crc kubenswrapper[4896]: I0126 15:41:18.074857 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e758b2d0-7934-4dc6-9076-976acd333b41-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-9cx8s\" (UID: \"e758b2d0-7934-4dc6-9076-976acd333b41\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9cx8s"
Jan 26 15:41:18 crc kubenswrapper[4896]: I0126 15:41:18.074944 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw24w\" (UniqueName: \"kubernetes.io/projected/e758b2d0-7934-4dc6-9076-976acd333b41-kube-api-access-pw24w\") pod \"cluster-monitoring-operator-6d5b84845-9cx8s\" (UID: \"e758b2d0-7934-4dc6-9076-976acd333b41\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9cx8s"
Jan 26 15:41:18 crc kubenswrapper[4896]: I0126 15:41:18.074994 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e758b2d0-7934-4dc6-9076-976acd333b41-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-9cx8s\" (UID: \"e758b2d0-7934-4dc6-9076-976acd333b41\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9cx8s"
Jan 26 15:41:18 crc kubenswrapper[4896]: I0126 15:41:18.075984 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e758b2d0-7934-4dc6-9076-976acd333b41-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-9cx8s\" (UID: \"e758b2d0-7934-4dc6-9076-976acd333b41\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9cx8s"
Jan 26 15:41:18 crc kubenswrapper[4896]: I0126 15:41:18.081957 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e758b2d0-7934-4dc6-9076-976acd333b41-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-9cx8s\" (UID: \"e758b2d0-7934-4dc6-9076-976acd333b41\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9cx8s"
Jan 26 15:41:18 crc kubenswrapper[4896]: I0126 15:41:18.096187 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw24w\" (UniqueName: \"kubernetes.io/projected/e758b2d0-7934-4dc6-9076-976acd333b41-kube-api-access-pw24w\") pod \"cluster-monitoring-operator-6d5b84845-9cx8s\" (UID: \"e758b2d0-7934-4dc6-9076-976acd333b41\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9cx8s"
Jan 26 15:41:18 crc kubenswrapper[4896]: I0126 15:41:18.131547 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9cx8s"
Jan 26 15:41:18 crc kubenswrapper[4896]: I0126 15:41:18.540613 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-9cx8s"]
Jan 26 15:41:18 crc kubenswrapper[4896]: I0126 15:41:18.813900 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 15:41:18 crc kubenswrapper[4896]: I0126 15:41:18.813976 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 15:41:18 crc kubenswrapper[4896]: I0126 15:41:18.814037 4896 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw"
Jan 26 15:41:18 crc kubenswrapper[4896]: I0126 15:41:18.815120 4896 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1da793905c9eeaa4f3946b7eeade08fb2161dbfe4af7683b808a647a5dfa8236"} pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 15:41:18 crc kubenswrapper[4896]: I0126 15:41:18.815305 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" containerID="cri-o://1da793905c9eeaa4f3946b7eeade08fb2161dbfe4af7683b808a647a5dfa8236" gracePeriod=600
Jan 26 15:41:19 crc kubenswrapper[4896]: I0126 15:41:19.317034 4896 generic.go:334] "Generic (PLEG): container finished" podID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerID="1da793905c9eeaa4f3946b7eeade08fb2161dbfe4af7683b808a647a5dfa8236" exitCode=0
Jan 26 15:41:19 crc kubenswrapper[4896]: I0126 15:41:19.317164 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerDied","Data":"1da793905c9eeaa4f3946b7eeade08fb2161dbfe4af7683b808a647a5dfa8236"}
Jan 26 15:41:19 crc kubenswrapper[4896]: I0126 15:41:19.317521 4896 scope.go:117] "RemoveContainer" containerID="8fed1d8bacfa3bfc8b5c910ea870d72978016ab308a31c95d7f0e6d92321c939"
Jan 26 15:41:19 crc kubenswrapper[4896]: I0126 15:41:19.319044 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9cx8s" event={"ID":"e758b2d0-7934-4dc6-9076-976acd333b41","Type":"ContainerStarted","Data":"56df5c829c6af02204d520c4c318011a59cccf5665f1107f4eb59f6b4aa77fe5"}
Jan 26 15:41:20 crc kubenswrapper[4896]: I0126 15:41:20.328379 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerStarted","Data":"496057d39cfd4c97dbde27dcc7921f95da5628ae998305077952ca62cba7a8c1"}
Jan 26 15:41:21 crc kubenswrapper[4896]: I0126 15:41:21.341714 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9cx8s" event={"ID":"e758b2d0-7934-4dc6-9076-976acd333b41","Type":"ContainerStarted","Data":"216675d345ecd5d6e2064476604a2698abf369c9f86200c79bab6ca6c3725525"}
Jan 26 15:41:21 crc kubenswrapper[4896]: I0126 15:41:21.358994 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-z74l2"]
Jan 26 15:41:21 crc kubenswrapper[4896]: I0126 15:41:21.359732 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-z74l2"
Jan 26 15:41:21 crc kubenswrapper[4896]: I0126 15:41:21.365773 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Jan 26 15:41:21 crc kubenswrapper[4896]: I0126 15:41:21.365907 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-frl8s"
Jan 26 15:41:21 crc kubenswrapper[4896]: I0126 15:41:21.368387 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9cx8s" podStartSLOduration=2.207884414 podStartE2EDuration="4.368372569s" podCreationTimestamp="2026-01-26 15:41:17 +0000 UTC" firstStartedPulling="2026-01-26 15:41:18.550738002 +0000 UTC m=+436.332618395" lastFinishedPulling="2026-01-26 15:41:20.711226157 +0000 UTC m=+438.493106550" observedRunningTime="2026-01-26 15:41:21.365350593 +0000 UTC m=+439.147230986" watchObservedRunningTime="2026-01-26 15:41:21.368372569 +0000 UTC m=+439.150252962"
Jan 26 15:41:21 crc kubenswrapper[4896]: I0126 15:41:21.369019 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-z74l2"]
Jan 26 15:41:21 crc kubenswrapper[4896]: I0126 15:41:21.522178 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/a31060d2-4489-4f9d-b628-9b0f37f5c0ff-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-z74l2\" (UID: \"a31060d2-4489-4f9d-b628-9b0f37f5c0ff\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-z74l2"
Jan 26 15:41:21 crc kubenswrapper[4896]: I0126 15:41:21.623795 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/a31060d2-4489-4f9d-b628-9b0f37f5c0ff-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-z74l2\" (UID: \"a31060d2-4489-4f9d-b628-9b0f37f5c0ff\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-z74l2"
Jan 26 15:41:21 crc kubenswrapper[4896]: I0126 15:41:21.636456 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/a31060d2-4489-4f9d-b628-9b0f37f5c0ff-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-z74l2\" (UID: \"a31060d2-4489-4f9d-b628-9b0f37f5c0ff\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-z74l2"
Jan 26 15:41:21 crc kubenswrapper[4896]: I0126 15:41:21.681809 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-z74l2"
Jan 26 15:41:22 crc kubenswrapper[4896]: I0126 15:41:22.061035 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-z74l2"]
Jan 26 15:41:22 crc kubenswrapper[4896]: W0126 15:41:22.064121 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda31060d2_4489_4f9d_b628_9b0f37f5c0ff.slice/crio-05c95523b6152b507f4e51fa1220c0d049270b7c09532acef6c95e43ce50c751 WatchSource:0}: Error finding container 05c95523b6152b507f4e51fa1220c0d049270b7c09532acef6c95e43ce50c751: Status 404 returned error can't find the container with id 05c95523b6152b507f4e51fa1220c0d049270b7c09532acef6c95e43ce50c751
Jan 26 15:41:22 crc kubenswrapper[4896]: I0126 15:41:22.348149 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-z74l2" event={"ID":"a31060d2-4489-4f9d-b628-9b0f37f5c0ff","Type":"ContainerStarted","Data":"05c95523b6152b507f4e51fa1220c0d049270b7c09532acef6c95e43ce50c751"}
Jan 26 15:41:25 crc kubenswrapper[4896]: I0126 15:41:25.367893 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-z74l2" event={"ID":"a31060d2-4489-4f9d-b628-9b0f37f5c0ff","Type":"ContainerStarted","Data":"4b7ae3a00ac6de93f5fff1abbdd92152bd1126b182769e3978735070cfc5f359"}
Jan 26 15:41:25 crc kubenswrapper[4896]: I0126 15:41:25.368540 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-z74l2"
Jan 26 15:41:25 crc kubenswrapper[4896]: I0126 15:41:25.375249 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-z74l2"
Jan 26 15:41:25 crc kubenswrapper[4896]: I0126 15:41:25.384338 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-z74l2" podStartSLOduration=1.931597751 podStartE2EDuration="4.384298946s" podCreationTimestamp="2026-01-26 15:41:21 +0000 UTC" firstStartedPulling="2026-01-26 15:41:22.06715348 +0000 UTC m=+439.849033873" lastFinishedPulling="2026-01-26 15:41:24.519854675 +0000 UTC m=+442.301735068" observedRunningTime="2026-01-26 15:41:25.382311232 +0000 UTC m=+443.164191625" watchObservedRunningTime="2026-01-26 15:41:25.384298946 +0000 UTC m=+443.166179339"
Jan 26 15:41:26 crc kubenswrapper[4896]: I0126 15:41:26.420386 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-gmk75"]
Jan 26 15:41:26 crc kubenswrapper[4896]: I0126 15:41:26.421372 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-gmk75"
Jan 26 15:41:26 crc kubenswrapper[4896]: I0126 15:41:26.424340 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Jan 26 15:41:26 crc kubenswrapper[4896]: I0126 15:41:26.424671 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-snfzv"
Jan 26 15:41:26 crc kubenswrapper[4896]: I0126 15:41:26.425810 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Jan 26 15:41:26 crc kubenswrapper[4896]: I0126 15:41:26.426003 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Jan 26 15:41:26 crc kubenswrapper[4896]: I0126 15:41:26.466784 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-gmk75"]
Jan 26 15:41:26 crc kubenswrapper[4896]: I0126 15:41:26.584519 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e128267f-c702-4e43-99fe-416dbb997a15-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-gmk75\" (UID: \"e128267f-c702-4e43-99fe-416dbb997a15\") " pod="openshift-monitoring/prometheus-operator-db54df47d-gmk75"
Jan 26 15:41:26 crc kubenswrapper[4896]: I0126 15:41:26.584664 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5vdp\" (UniqueName: \"kubernetes.io/projected/e128267f-c702-4e43-99fe-416dbb997a15-kube-api-access-h5vdp\") pod \"prometheus-operator-db54df47d-gmk75\" (UID: \"e128267f-c702-4e43-99fe-416dbb997a15\") " pod="openshift-monitoring/prometheus-operator-db54df47d-gmk75"
Jan 26 15:41:26 crc kubenswrapper[4896]: I0126 15:41:26.584708 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e128267f-c702-4e43-99fe-416dbb997a15-metrics-client-ca\") pod \"prometheus-operator-db54df47d-gmk75\" (UID: \"e128267f-c702-4e43-99fe-416dbb997a15\") " pod="openshift-monitoring/prometheus-operator-db54df47d-gmk75"
Jan 26 15:41:26 crc kubenswrapper[4896]: I0126 15:41:26.584751 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/e128267f-c702-4e43-99fe-416dbb997a15-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-gmk75\" (UID: \"e128267f-c702-4e43-99fe-416dbb997a15\") " pod="openshift-monitoring/prometheus-operator-db54df47d-gmk75"
Jan 26 15:41:26 crc kubenswrapper[4896]: I0126 15:41:26.686181 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e128267f-c702-4e43-99fe-416dbb997a15-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-gmk75\" (UID: \"e128267f-c702-4e43-99fe-416dbb997a15\") " pod="openshift-monitoring/prometheus-operator-db54df47d-gmk75"
Jan 26 15:41:26 crc kubenswrapper[4896]: I0126 15:41:26.686250 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5vdp\" (UniqueName: \"kubernetes.io/projected/e128267f-c702-4e43-99fe-416dbb997a15-kube-api-access-h5vdp\") pod \"prometheus-operator-db54df47d-gmk75\" (UID: \"e128267f-c702-4e43-99fe-416dbb997a15\") " pod="openshift-monitoring/prometheus-operator-db54df47d-gmk75"
Jan 26 15:41:26 crc kubenswrapper[4896]: I0126 15:41:26.686282 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e128267f-c702-4e43-99fe-416dbb997a15-metrics-client-ca\") pod \"prometheus-operator-db54df47d-gmk75\" (UID: \"e128267f-c702-4e43-99fe-416dbb997a15\") " pod="openshift-monitoring/prometheus-operator-db54df47d-gmk75"
Jan 26 15:41:26 crc kubenswrapper[4896]: I0126 15:41:26.686318 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/e128267f-c702-4e43-99fe-416dbb997a15-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-gmk75\" (UID: \"e128267f-c702-4e43-99fe-416dbb997a15\") " pod="openshift-monitoring/prometheus-operator-db54df47d-gmk75"
Jan 26 15:41:26 crc kubenswrapper[4896]: I0126 15:41:26.687334 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e128267f-c702-4e43-99fe-416dbb997a15-metrics-client-ca\") pod \"prometheus-operator-db54df47d-gmk75\" (UID: \"e128267f-c702-4e43-99fe-416dbb997a15\") " pod="openshift-monitoring/prometheus-operator-db54df47d-gmk75"
Jan 26 15:41:26 crc kubenswrapper[4896]: I0126 15:41:26.692287 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/e128267f-c702-4e43-99fe-416dbb997a15-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-gmk75\" (UID: \"e128267f-c702-4e43-99fe-416dbb997a15\") " pod="openshift-monitoring/prometheus-operator-db54df47d-gmk75"
Jan 26 15:41:26 crc kubenswrapper[4896]: I0126 15:41:26.692287 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e128267f-c702-4e43-99fe-416dbb997a15-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-gmk75\" (UID: \"e128267f-c702-4e43-99fe-416dbb997a15\") " pod="openshift-monitoring/prometheus-operator-db54df47d-gmk75"
Jan 26 15:41:26 crc kubenswrapper[4896]: I0126 15:41:26.706200 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5vdp\" (UniqueName: \"kubernetes.io/projected/e128267f-c702-4e43-99fe-416dbb997a15-kube-api-access-h5vdp\") pod \"prometheus-operator-db54df47d-gmk75\" (UID: \"e128267f-c702-4e43-99fe-416dbb997a15\") " pod="openshift-monitoring/prometheus-operator-db54df47d-gmk75"
Jan 26 15:41:26 crc kubenswrapper[4896]: I0126 15:41:26.737124 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-gmk75"
Jan 26 15:41:27 crc kubenswrapper[4896]: I0126 15:41:27.174496 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-gmk75"]
Jan 26 15:41:27 crc kubenswrapper[4896]: W0126 15:41:27.181251 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode128267f_c702_4e43_99fe_416dbb997a15.slice/crio-7d4c8572564ff4e0fd32c836a4cc2110c641a813882d6900b0d36acb1d8914c9 WatchSource:0}: Error finding container 7d4c8572564ff4e0fd32c836a4cc2110c641a813882d6900b0d36acb1d8914c9: Status 404 returned error can't find the container with id 7d4c8572564ff4e0fd32c836a4cc2110c641a813882d6900b0d36acb1d8914c9
Jan 26 15:41:27 crc kubenswrapper[4896]: I0126 15:41:27.381226 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-gmk75" event={"ID":"e128267f-c702-4e43-99fe-416dbb997a15","Type":"ContainerStarted","Data":"7d4c8572564ff4e0fd32c836a4cc2110c641a813882d6900b0d36acb1d8914c9"}
Jan 26 15:41:30 crc kubenswrapper[4896]: I0126 15:41:30.396116 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-gmk75" event={"ID":"e128267f-c702-4e43-99fe-416dbb997a15","Type":"ContainerStarted","Data":"6cb5da8db01bf0fb1a793b9bd418111c1c19f88722a998c22cca68d7890a81ce"}
Jan 26 15:41:30 crc kubenswrapper[4896]: I0126 15:41:30.396719 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-gmk75" event={"ID":"e128267f-c702-4e43-99fe-416dbb997a15","Type":"ContainerStarted","Data":"7936200ce87e00fca64ee178df8874892ab757ee66d38fe3bf044aa61c5ff3f6"}
Jan 26 15:41:30 crc kubenswrapper[4896]: I0126 15:41:30.412978 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-gmk75" podStartSLOduration=2.274136196 podStartE2EDuration="4.412961853s" podCreationTimestamp="2026-01-26 15:41:26 +0000 UTC" firstStartedPulling="2026-01-26 15:41:27.183051929 +0000 UTC m=+444.964932322" lastFinishedPulling="2026-01-26 15:41:29.321877586 +0000 UTC m=+447.103757979" observedRunningTime="2026-01-26 15:41:30.41013502 +0000 UTC m=+448.192015413" watchObservedRunningTime="2026-01-26 15:41:30.412961853 +0000 UTC m=+448.194842246"
Jan 26 15:41:32 crc kubenswrapper[4896]: I0126 15:41:32.869132 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-wvxb7"]
Jan 26 15:41:32 crc kubenswrapper[4896]: I0126 15:41:32.871127 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-wvxb7"
Jan 26 15:41:32 crc kubenswrapper[4896]: I0126 15:41:32.873419 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-g5sht"
Jan 26 15:41:32 crc kubenswrapper[4896]: I0126 15:41:32.874271 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Jan 26 15:41:32 crc kubenswrapper[4896]: I0126 15:41:32.874785 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Jan 26 15:41:32 crc kubenswrapper[4896]: I0126 15:41:32.884772 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-wvxb7"]
Jan 26 15:41:32 crc kubenswrapper[4896]: I0126 15:41:32.898497 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-x9wnt"]
Jan 26 15:41:32 crc kubenswrapper[4896]: I0126 15:41:32.899786 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-x9wnt"
Jan 26 15:41:32 crc kubenswrapper[4896]: I0126 15:41:32.904011 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Jan 26 15:41:32 crc kubenswrapper[4896]: I0126 15:41:32.904236 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Jan 26 15:41:32 crc kubenswrapper[4896]: I0126 15:41:32.904355 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-fs994"
Jan 26 15:41:32 crc kubenswrapper[4896]: I0126 15:41:32.904501 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Jan 26 15:41:32 crc kubenswrapper[4896]: I0126 15:41:32.915258 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-x9wnt"]
Jan 26 15:41:32 crc kubenswrapper[4896]: I0126 15:41:32.944854 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-8rwmv"]
Jan 26 15:41:32 crc kubenswrapper[4896]: I0126 15:41:32.946079 4896 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-monitoring/node-exporter-8rwmv" Jan 26 15:41:32 crc kubenswrapper[4896]: I0126 15:41:32.948427 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Jan 26 15:41:32 crc kubenswrapper[4896]: I0126 15:41:32.948458 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Jan 26 15:41:32 crc kubenswrapper[4896]: I0126 15:41:32.950319 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-zk5vk" Jan 26 15:41:32 crc kubenswrapper[4896]: I0126 15:41:32.973227 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a7a278b5-2fdb-43c6-bd13-c77d185db06f-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-wvxb7\" (UID: \"a7a278b5-2fdb-43c6-bd13-c77d185db06f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-wvxb7" Jan 26 15:41:32 crc kubenswrapper[4896]: I0126 15:41:32.973284 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwxmn\" (UniqueName: \"kubernetes.io/projected/a7a278b5-2fdb-43c6-bd13-c77d185db06f-kube-api-access-xwxmn\") pod \"openshift-state-metrics-566fddb674-wvxb7\" (UID: \"a7a278b5-2fdb-43c6-bd13-c77d185db06f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-wvxb7" Jan 26 15:41:32 crc kubenswrapper[4896]: I0126 15:41:32.973341 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a7a278b5-2fdb-43c6-bd13-c77d185db06f-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-wvxb7\" (UID: \"a7a278b5-2fdb-43c6-bd13-c77d185db06f\") " 
pod="openshift-monitoring/openshift-state-metrics-566fddb674-wvxb7" Jan 26 15:41:32 crc kubenswrapper[4896]: I0126 15:41:32.973369 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/a7a278b5-2fdb-43c6-bd13-c77d185db06f-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-wvxb7\" (UID: \"a7a278b5-2fdb-43c6-bd13-c77d185db06f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-wvxb7" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.074971 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a7a278b5-2fdb-43c6-bd13-c77d185db06f-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-wvxb7\" (UID: \"a7a278b5-2fdb-43c6-bd13-c77d185db06f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-wvxb7" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.075041 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7m2x\" (UniqueName: \"kubernetes.io/projected/e670540b-8b02-44e9-859d-3d792b5e4fda-kube-api-access-v7m2x\") pod \"node-exporter-8rwmv\" (UID: \"e670540b-8b02-44e9-859d-3d792b5e4fda\") " pod="openshift-monitoring/node-exporter-8rwmv" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.075074 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e670540b-8b02-44e9-859d-3d792b5e4fda-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-8rwmv\" (UID: \"e670540b-8b02-44e9-859d-3d792b5e4fda\") " pod="openshift-monitoring/node-exporter-8rwmv" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.075101 4896 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/a7a278b5-2fdb-43c6-bd13-c77d185db06f-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-wvxb7\" (UID: \"a7a278b5-2fdb-43c6-bd13-c77d185db06f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-wvxb7" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.075128 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/e670540b-8b02-44e9-859d-3d792b5e4fda-node-exporter-textfile\") pod \"node-exporter-8rwmv\" (UID: \"e670540b-8b02-44e9-859d-3d792b5e4fda\") " pod="openshift-monitoring/node-exporter-8rwmv" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.075155 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/14531d98-96ef-4629-9f9f-4797c4480849-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-x9wnt\" (UID: \"14531d98-96ef-4629-9f9f-4797c4480849\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-x9wnt" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.075186 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/e670540b-8b02-44e9-859d-3d792b5e4fda-node-exporter-tls\") pod \"node-exporter-8rwmv\" (UID: \"e670540b-8b02-44e9-859d-3d792b5e4fda\") " pod="openshift-monitoring/node-exporter-8rwmv" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.075288 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/14531d98-96ef-4629-9f9f-4797c4480849-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-x9wnt\" 
(UID: \"14531d98-96ef-4629-9f9f-4797c4480849\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-x9wnt" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.075317 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/e670540b-8b02-44e9-859d-3d792b5e4fda-root\") pod \"node-exporter-8rwmv\" (UID: \"e670540b-8b02-44e9-859d-3d792b5e4fda\") " pod="openshift-monitoring/node-exporter-8rwmv" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.075338 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e670540b-8b02-44e9-859d-3d792b5e4fda-sys\") pod \"node-exporter-8rwmv\" (UID: \"e670540b-8b02-44e9-859d-3d792b5e4fda\") " pod="openshift-monitoring/node-exporter-8rwmv" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.075367 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a7a278b5-2fdb-43c6-bd13-c77d185db06f-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-wvxb7\" (UID: \"a7a278b5-2fdb-43c6-bd13-c77d185db06f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-wvxb7" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.075445 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/14531d98-96ef-4629-9f9f-4797c4480849-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-x9wnt\" (UID: \"14531d98-96ef-4629-9f9f-4797c4480849\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-x9wnt" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.075477 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: 
\"kubernetes.io/host-path/e670540b-8b02-44e9-859d-3d792b5e4fda-node-exporter-wtmp\") pod \"node-exporter-8rwmv\" (UID: \"e670540b-8b02-44e9-859d-3d792b5e4fda\") " pod="openshift-monitoring/node-exporter-8rwmv" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.075510 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwxmn\" (UniqueName: \"kubernetes.io/projected/a7a278b5-2fdb-43c6-bd13-c77d185db06f-kube-api-access-xwxmn\") pod \"openshift-state-metrics-566fddb674-wvxb7\" (UID: \"a7a278b5-2fdb-43c6-bd13-c77d185db06f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-wvxb7" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.075543 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/14531d98-96ef-4629-9f9f-4797c4480849-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-x9wnt\" (UID: \"14531d98-96ef-4629-9f9f-4797c4480849\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-x9wnt" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.075563 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zxbj\" (UniqueName: \"kubernetes.io/projected/14531d98-96ef-4629-9f9f-4797c4480849-kube-api-access-6zxbj\") pod \"kube-state-metrics-777cb5bd5d-x9wnt\" (UID: \"14531d98-96ef-4629-9f9f-4797c4480849\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-x9wnt" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.075610 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/14531d98-96ef-4629-9f9f-4797c4480849-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-x9wnt\" (UID: \"14531d98-96ef-4629-9f9f-4797c4480849\") " 
pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-x9wnt" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.075650 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e670540b-8b02-44e9-859d-3d792b5e4fda-metrics-client-ca\") pod \"node-exporter-8rwmv\" (UID: \"e670540b-8b02-44e9-859d-3d792b5e4fda\") " pod="openshift-monitoring/node-exporter-8rwmv" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.076977 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a7a278b5-2fdb-43c6-bd13-c77d185db06f-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-wvxb7\" (UID: \"a7a278b5-2fdb-43c6-bd13-c77d185db06f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-wvxb7" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.084571 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/a7a278b5-2fdb-43c6-bd13-c77d185db06f-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-wvxb7\" (UID: \"a7a278b5-2fdb-43c6-bd13-c77d185db06f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-wvxb7" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.089573 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a7a278b5-2fdb-43c6-bd13-c77d185db06f-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-wvxb7\" (UID: \"a7a278b5-2fdb-43c6-bd13-c77d185db06f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-wvxb7" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.092549 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwxmn\" (UniqueName: 
\"kubernetes.io/projected/a7a278b5-2fdb-43c6-bd13-c77d185db06f-kube-api-access-xwxmn\") pod \"openshift-state-metrics-566fddb674-wvxb7\" (UID: \"a7a278b5-2fdb-43c6-bd13-c77d185db06f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-wvxb7" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.176679 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/e670540b-8b02-44e9-859d-3d792b5e4fda-root\") pod \"node-exporter-8rwmv\" (UID: \"e670540b-8b02-44e9-859d-3d792b5e4fda\") " pod="openshift-monitoring/node-exporter-8rwmv" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.176758 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e670540b-8b02-44e9-859d-3d792b5e4fda-sys\") pod \"node-exporter-8rwmv\" (UID: \"e670540b-8b02-44e9-859d-3d792b5e4fda\") " pod="openshift-monitoring/node-exporter-8rwmv" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.176795 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/14531d98-96ef-4629-9f9f-4797c4480849-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-x9wnt\" (UID: \"14531d98-96ef-4629-9f9f-4797c4480849\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-x9wnt" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.176822 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/e670540b-8b02-44e9-859d-3d792b5e4fda-node-exporter-wtmp\") pod \"node-exporter-8rwmv\" (UID: \"e670540b-8b02-44e9-859d-3d792b5e4fda\") " pod="openshift-monitoring/node-exporter-8rwmv" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.176838 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: 
\"kubernetes.io/host-path/e670540b-8b02-44e9-859d-3d792b5e4fda-root\") pod \"node-exporter-8rwmv\" (UID: \"e670540b-8b02-44e9-859d-3d792b5e4fda\") " pod="openshift-monitoring/node-exporter-8rwmv" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.176854 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/14531d98-96ef-4629-9f9f-4797c4480849-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-x9wnt\" (UID: \"14531d98-96ef-4629-9f9f-4797c4480849\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-x9wnt" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.176977 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zxbj\" (UniqueName: \"kubernetes.io/projected/14531d98-96ef-4629-9f9f-4797c4480849-kube-api-access-6zxbj\") pod \"kube-state-metrics-777cb5bd5d-x9wnt\" (UID: \"14531d98-96ef-4629-9f9f-4797c4480849\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-x9wnt" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.176923 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e670540b-8b02-44e9-859d-3d792b5e4fda-sys\") pod \"node-exporter-8rwmv\" (UID: \"e670540b-8b02-44e9-859d-3d792b5e4fda\") " pod="openshift-monitoring/node-exporter-8rwmv" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.177032 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/14531d98-96ef-4629-9f9f-4797c4480849-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-x9wnt\" (UID: \"14531d98-96ef-4629-9f9f-4797c4480849\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-x9wnt" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.177089 4896 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/e670540b-8b02-44e9-859d-3d792b5e4fda-node-exporter-wtmp\") pod \"node-exporter-8rwmv\" (UID: \"e670540b-8b02-44e9-859d-3d792b5e4fda\") " pod="openshift-monitoring/node-exporter-8rwmv" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.177134 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e670540b-8b02-44e9-859d-3d792b5e4fda-metrics-client-ca\") pod \"node-exporter-8rwmv\" (UID: \"e670540b-8b02-44e9-859d-3d792b5e4fda\") " pod="openshift-monitoring/node-exporter-8rwmv" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.177323 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7m2x\" (UniqueName: \"kubernetes.io/projected/e670540b-8b02-44e9-859d-3d792b5e4fda-kube-api-access-v7m2x\") pod \"node-exporter-8rwmv\" (UID: \"e670540b-8b02-44e9-859d-3d792b5e4fda\") " pod="openshift-monitoring/node-exporter-8rwmv" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.177367 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e670540b-8b02-44e9-859d-3d792b5e4fda-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-8rwmv\" (UID: \"e670540b-8b02-44e9-859d-3d792b5e4fda\") " pod="openshift-monitoring/node-exporter-8rwmv" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.177426 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/e670540b-8b02-44e9-859d-3d792b5e4fda-node-exporter-textfile\") pod \"node-exporter-8rwmv\" (UID: \"e670540b-8b02-44e9-859d-3d792b5e4fda\") " pod="openshift-monitoring/node-exporter-8rwmv" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.177486 4896 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/14531d98-96ef-4629-9f9f-4797c4480849-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-x9wnt\" (UID: \"14531d98-96ef-4629-9f9f-4797c4480849\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-x9wnt" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.177536 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/e670540b-8b02-44e9-859d-3d792b5e4fda-node-exporter-tls\") pod \"node-exporter-8rwmv\" (UID: \"e670540b-8b02-44e9-859d-3d792b5e4fda\") " pod="openshift-monitoring/node-exporter-8rwmv" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.177624 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/14531d98-96ef-4629-9f9f-4797c4480849-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-x9wnt\" (UID: \"14531d98-96ef-4629-9f9f-4797c4480849\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-x9wnt" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.177836 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/14531d98-96ef-4629-9f9f-4797c4480849-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-x9wnt\" (UID: \"14531d98-96ef-4629-9f9f-4797c4480849\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-x9wnt" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.178012 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/14531d98-96ef-4629-9f9f-4797c4480849-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-x9wnt\" (UID: 
\"14531d98-96ef-4629-9f9f-4797c4480849\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-x9wnt" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.178081 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/e670540b-8b02-44e9-859d-3d792b5e4fda-node-exporter-textfile\") pod \"node-exporter-8rwmv\" (UID: \"e670540b-8b02-44e9-859d-3d792b5e4fda\") " pod="openshift-monitoring/node-exporter-8rwmv" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.178086 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/14531d98-96ef-4629-9f9f-4797c4480849-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-x9wnt\" (UID: \"14531d98-96ef-4629-9f9f-4797c4480849\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-x9wnt" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.178414 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e670540b-8b02-44e9-859d-3d792b5e4fda-metrics-client-ca\") pod \"node-exporter-8rwmv\" (UID: \"e670540b-8b02-44e9-859d-3d792b5e4fda\") " pod="openshift-monitoring/node-exporter-8rwmv" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.181380 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/e670540b-8b02-44e9-859d-3d792b5e4fda-node-exporter-tls\") pod \"node-exporter-8rwmv\" (UID: \"e670540b-8b02-44e9-859d-3d792b5e4fda\") " pod="openshift-monitoring/node-exporter-8rwmv" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.185332 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e670540b-8b02-44e9-859d-3d792b5e4fda-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-8rwmv\" 
(UID: \"e670540b-8b02-44e9-859d-3d792b5e4fda\") " pod="openshift-monitoring/node-exporter-8rwmv" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.185927 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/14531d98-96ef-4629-9f9f-4797c4480849-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-x9wnt\" (UID: \"14531d98-96ef-4629-9f9f-4797c4480849\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-x9wnt" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.186802 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/14531d98-96ef-4629-9f9f-4797c4480849-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-x9wnt\" (UID: \"14531d98-96ef-4629-9f9f-4797c4480849\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-x9wnt" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.189969 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-wvxb7" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.198270 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zxbj\" (UniqueName: \"kubernetes.io/projected/14531d98-96ef-4629-9f9f-4797c4480849-kube-api-access-6zxbj\") pod \"kube-state-metrics-777cb5bd5d-x9wnt\" (UID: \"14531d98-96ef-4629-9f9f-4797c4480849\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-x9wnt" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.200345 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7m2x\" (UniqueName: \"kubernetes.io/projected/e670540b-8b02-44e9-859d-3d792b5e4fda-kube-api-access-v7m2x\") pod \"node-exporter-8rwmv\" (UID: \"e670540b-8b02-44e9-859d-3d792b5e4fda\") " pod="openshift-monitoring/node-exporter-8rwmv" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.214416 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-x9wnt" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.258021 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-8rwmv" Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.399835 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-wvxb7"] Jan 26 15:41:33 crc kubenswrapper[4896]: W0126 15:41:33.410886 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7a278b5_2fdb_43c6_bd13_c77d185db06f.slice/crio-bde8f4a2c7454111d3594180f1350c1f665c5e40e68a9838112b0ceb89d65cce WatchSource:0}: Error finding container bde8f4a2c7454111d3594180f1350c1f665c5e40e68a9838112b0ceb89d65cce: Status 404 returned error can't find the container with id bde8f4a2c7454111d3594180f1350c1f665c5e40e68a9838112b0ceb89d65cce Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.415930 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-8rwmv" event={"ID":"e670540b-8b02-44e9-859d-3d792b5e4fda","Type":"ContainerStarted","Data":"50547004ab43fcc13310548bfd64202d0d1e79c6e41a28ed93e6b24efcd8aa12"} Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.700754 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-x9wnt"] Jan 26 15:41:33 crc kubenswrapper[4896]: W0126 15:41:33.706014 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14531d98_96ef_4629_9f9f_4797c4480849.slice/crio-850cb36f25d2a0d97713bc3f07bb5a1dac8db3a51bb84f58168946eb894c1023 WatchSource:0}: Error finding container 850cb36f25d2a0d97713bc3f07bb5a1dac8db3a51bb84f58168946eb894c1023: Status 404 returned error can't find the container with id 850cb36f25d2a0d97713bc3f07bb5a1dac8db3a51bb84f58168946eb894c1023 Jan 26 15:41:33 crc kubenswrapper[4896]: I0126 15:41:33.998982 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] 
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.001040 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.003593 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.004021 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.004231 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.004363 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.004446 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-9jntg"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.004745 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.004881 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.009385 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.014315 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.031312 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.091304 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8fa08097-ed36-4af5-8e5b-1121ef06a34f-web-config\") pod \"alertmanager-main-0\" (UID: \"8fa08097-ed36-4af5-8e5b-1121ef06a34f\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.091516 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zfhf\" (UniqueName: \"kubernetes.io/projected/8fa08097-ed36-4af5-8e5b-1121ef06a34f-kube-api-access-9zfhf\") pod \"alertmanager-main-0\" (UID: \"8fa08097-ed36-4af5-8e5b-1121ef06a34f\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.091653 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/8fa08097-ed36-4af5-8e5b-1121ef06a34f-config-volume\") pod \"alertmanager-main-0\" (UID: \"8fa08097-ed36-4af5-8e5b-1121ef06a34f\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.091837 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8fa08097-ed36-4af5-8e5b-1121ef06a34f-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"8fa08097-ed36-4af5-8e5b-1121ef06a34f\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.091888 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/8fa08097-ed36-4af5-8e5b-1121ef06a34f-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"8fa08097-ed36-4af5-8e5b-1121ef06a34f\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.091926 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/8fa08097-ed36-4af5-8e5b-1121ef06a34f-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"8fa08097-ed36-4af5-8e5b-1121ef06a34f\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.091984 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8fa08097-ed36-4af5-8e5b-1121ef06a34f-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"8fa08097-ed36-4af5-8e5b-1121ef06a34f\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.092098 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8fa08097-ed36-4af5-8e5b-1121ef06a34f-tls-assets\") pod \"alertmanager-main-0\" (UID: \"8fa08097-ed36-4af5-8e5b-1121ef06a34f\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.092160 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/8fa08097-ed36-4af5-8e5b-1121ef06a34f-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"8fa08097-ed36-4af5-8e5b-1121ef06a34f\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.092225 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8fa08097-ed36-4af5-8e5b-1121ef06a34f-config-out\") pod \"alertmanager-main-0\" (UID: \"8fa08097-ed36-4af5-8e5b-1121ef06a34f\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.092275 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fa08097-ed36-4af5-8e5b-1121ef06a34f-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"8fa08097-ed36-4af5-8e5b-1121ef06a34f\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.092296 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8fa08097-ed36-4af5-8e5b-1121ef06a34f-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"8fa08097-ed36-4af5-8e5b-1121ef06a34f\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.193055 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8fa08097-ed36-4af5-8e5b-1121ef06a34f-tls-assets\") pod \"alertmanager-main-0\" (UID: \"8fa08097-ed36-4af5-8e5b-1121ef06a34f\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.193128 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/8fa08097-ed36-4af5-8e5b-1121ef06a34f-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"8fa08097-ed36-4af5-8e5b-1121ef06a34f\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.193163 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8fa08097-ed36-4af5-8e5b-1121ef06a34f-config-out\") pod \"alertmanager-main-0\" (UID: \"8fa08097-ed36-4af5-8e5b-1121ef06a34f\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.193185 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8fa08097-ed36-4af5-8e5b-1121ef06a34f-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"8fa08097-ed36-4af5-8e5b-1121ef06a34f\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.193201 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fa08097-ed36-4af5-8e5b-1121ef06a34f-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"8fa08097-ed36-4af5-8e5b-1121ef06a34f\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.193223 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8fa08097-ed36-4af5-8e5b-1121ef06a34f-web-config\") pod \"alertmanager-main-0\" (UID: \"8fa08097-ed36-4af5-8e5b-1121ef06a34f\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.193241 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zfhf\" (UniqueName: \"kubernetes.io/projected/8fa08097-ed36-4af5-8e5b-1121ef06a34f-kube-api-access-9zfhf\") pod \"alertmanager-main-0\" (UID: \"8fa08097-ed36-4af5-8e5b-1121ef06a34f\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.193266 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/8fa08097-ed36-4af5-8e5b-1121ef06a34f-config-volume\") pod \"alertmanager-main-0\" (UID: \"8fa08097-ed36-4af5-8e5b-1121ef06a34f\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.193290 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8fa08097-ed36-4af5-8e5b-1121ef06a34f-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"8fa08097-ed36-4af5-8e5b-1121ef06a34f\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.193308 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/8fa08097-ed36-4af5-8e5b-1121ef06a34f-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"8fa08097-ed36-4af5-8e5b-1121ef06a34f\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.193325 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/8fa08097-ed36-4af5-8e5b-1121ef06a34f-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"8fa08097-ed36-4af5-8e5b-1121ef06a34f\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.193346 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8fa08097-ed36-4af5-8e5b-1121ef06a34f-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"8fa08097-ed36-4af5-8e5b-1121ef06a34f\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.195238 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fa08097-ed36-4af5-8e5b-1121ef06a34f-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"8fa08097-ed36-4af5-8e5b-1121ef06a34f\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.195670 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/8fa08097-ed36-4af5-8e5b-1121ef06a34f-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"8fa08097-ed36-4af5-8e5b-1121ef06a34f\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.195695 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8fa08097-ed36-4af5-8e5b-1121ef06a34f-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"8fa08097-ed36-4af5-8e5b-1121ef06a34f\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.199479 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8fa08097-ed36-4af5-8e5b-1121ef06a34f-web-config\") pod \"alertmanager-main-0\" (UID: \"8fa08097-ed36-4af5-8e5b-1121ef06a34f\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.199513 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8fa08097-ed36-4af5-8e5b-1121ef06a34f-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"8fa08097-ed36-4af5-8e5b-1121ef06a34f\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.199720 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/8fa08097-ed36-4af5-8e5b-1121ef06a34f-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"8fa08097-ed36-4af5-8e5b-1121ef06a34f\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.199912 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/8fa08097-ed36-4af5-8e5b-1121ef06a34f-config-volume\") pod \"alertmanager-main-0\" (UID: \"8fa08097-ed36-4af5-8e5b-1121ef06a34f\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.200047 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8fa08097-ed36-4af5-8e5b-1121ef06a34f-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"8fa08097-ed36-4af5-8e5b-1121ef06a34f\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.200262 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8fa08097-ed36-4af5-8e5b-1121ef06a34f-tls-assets\") pod \"alertmanager-main-0\" (UID: \"8fa08097-ed36-4af5-8e5b-1121ef06a34f\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.200718 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8fa08097-ed36-4af5-8e5b-1121ef06a34f-config-out\") pod \"alertmanager-main-0\" (UID: \"8fa08097-ed36-4af5-8e5b-1121ef06a34f\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.219357 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zfhf\" (UniqueName: \"kubernetes.io/projected/8fa08097-ed36-4af5-8e5b-1121ef06a34f-kube-api-access-9zfhf\") pod \"alertmanager-main-0\" (UID: \"8fa08097-ed36-4af5-8e5b-1121ef06a34f\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.221049 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/8fa08097-ed36-4af5-8e5b-1121ef06a34f-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"8fa08097-ed36-4af5-8e5b-1121ef06a34f\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.322000 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.442128 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-x9wnt" event={"ID":"14531d98-96ef-4629-9f9f-4797c4480849","Type":"ContainerStarted","Data":"850cb36f25d2a0d97713bc3f07bb5a1dac8db3a51bb84f58168946eb894c1023"}
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.449276 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-wvxb7" event={"ID":"a7a278b5-2fdb-43c6-bd13-c77d185db06f","Type":"ContainerStarted","Data":"fd6928a1933527d3d1d2631382f880ef7a0c3d2b40ec4014604c756b5e699253"}
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.449319 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-wvxb7" event={"ID":"a7a278b5-2fdb-43c6-bd13-c77d185db06f","Type":"ContainerStarted","Data":"952b8fd2609b658b2617a5ed2924f8f5773b73fab400a2ffbee0f73547a5afd2"}
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.449336 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-wvxb7" event={"ID":"a7a278b5-2fdb-43c6-bd13-c77d185db06f","Type":"ContainerStarted","Data":"bde8f4a2c7454111d3594180f1350c1f665c5e40e68a9838112b0ceb89d65cce"}
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.776186 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Jan 26 15:41:34 crc kubenswrapper[4896]: W0126 15:41:34.777020 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fa08097_ed36_4af5_8e5b_1121ef06a34f.slice/crio-a1189f6dd20f9d92acb370a0c80b9fbb4f61b8cfa81bf0f9562aed500969d9ce WatchSource:0}: Error finding container a1189f6dd20f9d92acb370a0c80b9fbb4f61b8cfa81bf0f9562aed500969d9ce: Status 404 returned error can't find the container with id a1189f6dd20f9d92acb370a0c80b9fbb4f61b8cfa81bf0f9562aed500969d9ce
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.989107 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-c5586d8c9-f4qc2"]
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.991192 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-c5586d8c9-f4qc2"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.994173 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-mqqqv"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.994687 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.994828 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.995656 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.997283 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.997314 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-e5npdkomf9q3h"
Jan 26 15:41:34 crc kubenswrapper[4896]: I0126 15:41:34.998056 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics"
Jan 26 15:41:35 crc kubenswrapper[4896]: I0126 15:41:35.005766 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5ac5577e-a45b-4e15-aa54-d3bd9c8ca092-metrics-client-ca\") pod \"thanos-querier-c5586d8c9-f4qc2\" (UID: \"5ac5577e-a45b-4e15-aa54-d3bd9c8ca092\") " pod="openshift-monitoring/thanos-querier-c5586d8c9-f4qc2"
Jan 26 15:41:35 crc kubenswrapper[4896]: I0126 15:41:35.005963 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/5ac5577e-a45b-4e15-aa54-d3bd9c8ca092-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-c5586d8c9-f4qc2\" (UID: \"5ac5577e-a45b-4e15-aa54-d3bd9c8ca092\") " pod="openshift-monitoring/thanos-querier-c5586d8c9-f4qc2"
Jan 26 15:41:35 crc kubenswrapper[4896]: I0126 15:41:35.006061 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/5ac5577e-a45b-4e15-aa54-d3bd9c8ca092-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-c5586d8c9-f4qc2\" (UID: \"5ac5577e-a45b-4e15-aa54-d3bd9c8ca092\") " pod="openshift-monitoring/thanos-querier-c5586d8c9-f4qc2"
Jan 26 15:41:35 crc kubenswrapper[4896]: I0126 15:41:35.006160 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/5ac5577e-a45b-4e15-aa54-d3bd9c8ca092-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-c5586d8c9-f4qc2\" (UID: \"5ac5577e-a45b-4e15-aa54-d3bd9c8ca092\") " pod="openshift-monitoring/thanos-querier-c5586d8c9-f4qc2"
Jan 26 15:41:35 crc kubenswrapper[4896]: I0126 15:41:35.006231 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db5vx\" (UniqueName: \"kubernetes.io/projected/5ac5577e-a45b-4e15-aa54-d3bd9c8ca092-kube-api-access-db5vx\") pod \"thanos-querier-c5586d8c9-f4qc2\" (UID: \"5ac5577e-a45b-4e15-aa54-d3bd9c8ca092\") " pod="openshift-monitoring/thanos-querier-c5586d8c9-f4qc2"
Jan 26 15:41:35 crc kubenswrapper[4896]: I0126 15:41:35.006442 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/5ac5577e-a45b-4e15-aa54-d3bd9c8ca092-secret-thanos-querier-tls\") pod \"thanos-querier-c5586d8c9-f4qc2\" (UID: \"5ac5577e-a45b-4e15-aa54-d3bd9c8ca092\") " pod="openshift-monitoring/thanos-querier-c5586d8c9-f4qc2"
Jan 26 15:41:35 crc kubenswrapper[4896]: I0126 15:41:35.006571 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/5ac5577e-a45b-4e15-aa54-d3bd9c8ca092-secret-grpc-tls\") pod \"thanos-querier-c5586d8c9-f4qc2\" (UID: \"5ac5577e-a45b-4e15-aa54-d3bd9c8ca092\") " pod="openshift-monitoring/thanos-querier-c5586d8c9-f4qc2"
Jan 26 15:41:35 crc kubenswrapper[4896]: I0126 15:41:35.006658 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/5ac5577e-a45b-4e15-aa54-d3bd9c8ca092-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-c5586d8c9-f4qc2\" (UID: \"5ac5577e-a45b-4e15-aa54-d3bd9c8ca092\") " pod="openshift-monitoring/thanos-querier-c5586d8c9-f4qc2"
Jan 26 15:41:35 crc kubenswrapper[4896]: I0126 15:41:35.007840 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-c5586d8c9-f4qc2"]
Jan 26 15:41:35 crc kubenswrapper[4896]: I0126 15:41:35.108972 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/5ac5577e-a45b-4e15-aa54-d3bd9c8ca092-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-c5586d8c9-f4qc2\" (UID: \"5ac5577e-a45b-4e15-aa54-d3bd9c8ca092\") " pod="openshift-monitoring/thanos-querier-c5586d8c9-f4qc2"
Jan 26 15:41:35 crc kubenswrapper[4896]: I0126 15:41:35.109071 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/5ac5577e-a45b-4e15-aa54-d3bd9c8ca092-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-c5586d8c9-f4qc2\" (UID: \"5ac5577e-a45b-4e15-aa54-d3bd9c8ca092\") " pod="openshift-monitoring/thanos-querier-c5586d8c9-f4qc2"
Jan 26 15:41:35 crc kubenswrapper[4896]: I0126 15:41:35.109113 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/5ac5577e-a45b-4e15-aa54-d3bd9c8ca092-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-c5586d8c9-f4qc2\" (UID: \"5ac5577e-a45b-4e15-aa54-d3bd9c8ca092\") " pod="openshift-monitoring/thanos-querier-c5586d8c9-f4qc2"
Jan 26 15:41:35 crc kubenswrapper[4896]: I0126 15:41:35.109138 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db5vx\" (UniqueName: \"kubernetes.io/projected/5ac5577e-a45b-4e15-aa54-d3bd9c8ca092-kube-api-access-db5vx\") pod \"thanos-querier-c5586d8c9-f4qc2\" (UID: \"5ac5577e-a45b-4e15-aa54-d3bd9c8ca092\") " pod="openshift-monitoring/thanos-querier-c5586d8c9-f4qc2"
Jan 26 15:41:35 crc kubenswrapper[4896]: I0126 15:41:35.109213 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/5ac5577e-a45b-4e15-aa54-d3bd9c8ca092-secret-thanos-querier-tls\") pod \"thanos-querier-c5586d8c9-f4qc2\" (UID: \"5ac5577e-a45b-4e15-aa54-d3bd9c8ca092\") " pod="openshift-monitoring/thanos-querier-c5586d8c9-f4qc2"
Jan 26 15:41:35 crc kubenswrapper[4896]: I0126 15:41:35.109240 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/5ac5577e-a45b-4e15-aa54-d3bd9c8ca092-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-c5586d8c9-f4qc2\" (UID: \"5ac5577e-a45b-4e15-aa54-d3bd9c8ca092\") " pod="openshift-monitoring/thanos-querier-c5586d8c9-f4qc2"
Jan 26 15:41:35 crc kubenswrapper[4896]: I0126 15:41:35.109263 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/5ac5577e-a45b-4e15-aa54-d3bd9c8ca092-secret-grpc-tls\") pod \"thanos-querier-c5586d8c9-f4qc2\" (UID: \"5ac5577e-a45b-4e15-aa54-d3bd9c8ca092\") " pod="openshift-monitoring/thanos-querier-c5586d8c9-f4qc2"
Jan 26 15:41:35 crc kubenswrapper[4896]: I0126 15:41:35.109301 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5ac5577e-a45b-4e15-aa54-d3bd9c8ca092-metrics-client-ca\") pod \"thanos-querier-c5586d8c9-f4qc2\" (UID: \"5ac5577e-a45b-4e15-aa54-d3bd9c8ca092\") " pod="openshift-monitoring/thanos-querier-c5586d8c9-f4qc2"
Jan 26 15:41:35 crc kubenswrapper[4896]: I0126 15:41:35.110936 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5ac5577e-a45b-4e15-aa54-d3bd9c8ca092-metrics-client-ca\") pod \"thanos-querier-c5586d8c9-f4qc2\" (UID: \"5ac5577e-a45b-4e15-aa54-d3bd9c8ca092\") " pod="openshift-monitoring/thanos-querier-c5586d8c9-f4qc2"
Jan 26 15:41:35 crc kubenswrapper[4896]: I0126 15:41:35.115492 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/5ac5577e-a45b-4e15-aa54-d3bd9c8ca092-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-c5586d8c9-f4qc2\" (UID: \"5ac5577e-a45b-4e15-aa54-d3bd9c8ca092\") " pod="openshift-monitoring/thanos-querier-c5586d8c9-f4qc2"
Jan 26 15:41:35 crc kubenswrapper[4896]: I0126 15:41:35.116318 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/5ac5577e-a45b-4e15-aa54-d3bd9c8ca092-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-c5586d8c9-f4qc2\" (UID: \"5ac5577e-a45b-4e15-aa54-d3bd9c8ca092\") " pod="openshift-monitoring/thanos-querier-c5586d8c9-f4qc2"
Jan 26 15:41:35 crc kubenswrapper[4896]: I0126 15:41:35.117248 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/5ac5577e-a45b-4e15-aa54-d3bd9c8ca092-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-c5586d8c9-f4qc2\" (UID: \"5ac5577e-a45b-4e15-aa54-d3bd9c8ca092\") " pod="openshift-monitoring/thanos-querier-c5586d8c9-f4qc2"
Jan 26 15:41:35 crc kubenswrapper[4896]: I0126 15:41:35.117873 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/5ac5577e-a45b-4e15-aa54-d3bd9c8ca092-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-c5586d8c9-f4qc2\" (UID: \"5ac5577e-a45b-4e15-aa54-d3bd9c8ca092\") " pod="openshift-monitoring/thanos-querier-c5586d8c9-f4qc2"
Jan 26 15:41:35 crc kubenswrapper[4896]: I0126 15:41:35.117946 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/5ac5577e-a45b-4e15-aa54-d3bd9c8ca092-secret-thanos-querier-tls\") pod \"thanos-querier-c5586d8c9-f4qc2\" (UID: \"5ac5577e-a45b-4e15-aa54-d3bd9c8ca092\") " pod="openshift-monitoring/thanos-querier-c5586d8c9-f4qc2"
Jan 26 15:41:35 crc kubenswrapper[4896]: I0126 15:41:35.124467 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/5ac5577e-a45b-4e15-aa54-d3bd9c8ca092-secret-grpc-tls\") pod \"thanos-querier-c5586d8c9-f4qc2\" (UID: \"5ac5577e-a45b-4e15-aa54-d3bd9c8ca092\") " pod="openshift-monitoring/thanos-querier-c5586d8c9-f4qc2"
Jan 26 15:41:35 crc kubenswrapper[4896]: I0126 15:41:35.128292 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db5vx\" (UniqueName: \"kubernetes.io/projected/5ac5577e-a45b-4e15-aa54-d3bd9c8ca092-kube-api-access-db5vx\") pod \"thanos-querier-c5586d8c9-f4qc2\" (UID: \"5ac5577e-a45b-4e15-aa54-d3bd9c8ca092\") " pod="openshift-monitoring/thanos-querier-c5586d8c9-f4qc2"
Jan 26 15:41:35 crc kubenswrapper[4896]: I0126 15:41:35.360622 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-c5586d8c9-f4qc2"
Jan 26 15:41:35 crc kubenswrapper[4896]: I0126 15:41:35.460914 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8fa08097-ed36-4af5-8e5b-1121ef06a34f","Type":"ContainerStarted","Data":"a1189f6dd20f9d92acb370a0c80b9fbb4f61b8cfa81bf0f9562aed500969d9ce"}
Jan 26 15:41:36 crc kubenswrapper[4896]: I0126 15:41:36.608093 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-c5586d8c9-f4qc2"]
Jan 26 15:41:37 crc kubenswrapper[4896]: I0126 15:41:37.478719 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-8rwmv" event={"ID":"e670540b-8b02-44e9-859d-3d792b5e4fda","Type":"ContainerStarted","Data":"fde4403233e271124ad572023d78da8fbf49466a21014c34c260750a5c7af841"}
Jan 26 15:41:37 crc kubenswrapper[4896]: I0126 15:41:37.480780 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-wvxb7" event={"ID":"a7a278b5-2fdb-43c6-bd13-c77d185db06f","Type":"ContainerStarted","Data":"4c0f55fb3a90b3fe36f2432e94b5570e168caedbe208c2e76403aa0754b606a8"}
Jan 26 15:41:37 crc kubenswrapper[4896]: I0126 15:41:37.481886 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-c5586d8c9-f4qc2" event={"ID":"5ac5577e-a45b-4e15-aa54-d3bd9c8ca092","Type":"ContainerStarted","Data":"5023bba92dddef1f5ea37eb10f8851767978b384ce71edded8211dc6335b820a"}
Jan 26 15:41:37 crc kubenswrapper[4896]: I0126 15:41:37.483892 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-x9wnt" event={"ID":"14531d98-96ef-4629-9f9f-4797c4480849","Type":"ContainerStarted","Data":"9aaa2c9ffa1f573b35cb1b3d63ad110c05d04a5b0b7d89803a29c4a669bb4862"}
Jan 26 15:41:37 crc kubenswrapper[4896]: I0126 15:41:37.485363 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8fa08097-ed36-4af5-8e5b-1121ef06a34f","Type":"ContainerStarted","Data":"0fcccc924a6a3e63c015de39525d41d41ed239db8221baa431949dac2658b934"}
Jan 26 15:41:37 crc kubenswrapper[4896]: I0126 15:41:37.520866 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-wvxb7" podStartSLOduration=2.8003392959999998 podStartE2EDuration="5.520845453s" podCreationTimestamp="2026-01-26 15:41:32 +0000 UTC" firstStartedPulling="2026-01-26 15:41:34.130904932 +0000 UTC m=+451.912785335" lastFinishedPulling="2026-01-26 15:41:36.851411099 +0000 UTC m=+454.633291492" observedRunningTime="2026-01-26 15:41:37.520180258 +0000 UTC m=+455.302060671" watchObservedRunningTime="2026-01-26 15:41:37.520845453 +0000 UTC m=+455.302725846"
Jan 26 15:41:37 crc kubenswrapper[4896]: I0126 15:41:37.732335 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5555c78d9f-4chpf"]
Jan 26 15:41:37 crc kubenswrapper[4896]: I0126 15:41:37.733299 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5555c78d9f-4chpf"
Jan 26 15:41:37 crc kubenswrapper[4896]: I0126 15:41:37.764568 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5555c78d9f-4chpf"]
Jan 26 15:41:37 crc kubenswrapper[4896]: I0126 15:41:37.961402 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2c22b97c-9abb-4b25-9831-daea2ae48af0-console-serving-cert\") pod \"console-5555c78d9f-4chpf\" (UID: \"2c22b97c-9abb-4b25-9831-daea2ae48af0\") " pod="openshift-console/console-5555c78d9f-4chpf"
Jan 26 15:41:37 crc kubenswrapper[4896]: I0126 15:41:37.961508 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2c22b97c-9abb-4b25-9831-daea2ae48af0-console-config\") pod \"console-5555c78d9f-4chpf\" (UID: \"2c22b97c-9abb-4b25-9831-daea2ae48af0\") " pod="openshift-console/console-5555c78d9f-4chpf"
Jan 26 15:41:37 crc kubenswrapper[4896]: I0126 15:41:37.961611 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c22b97c-9abb-4b25-9831-daea2ae48af0-trusted-ca-bundle\") pod \"console-5555c78d9f-4chpf\" (UID: \"2c22b97c-9abb-4b25-9831-daea2ae48af0\") " pod="openshift-console/console-5555c78d9f-4chpf"
Jan 26 15:41:37 crc kubenswrapper[4896]: I0126 15:41:37.961633 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nmdx\" (UniqueName: \"kubernetes.io/projected/2c22b97c-9abb-4b25-9831-daea2ae48af0-kube-api-access-5nmdx\") pod \"console-5555c78d9f-4chpf\" (UID: \"2c22b97c-9abb-4b25-9831-daea2ae48af0\") " pod="openshift-console/console-5555c78d9f-4chpf"
Jan 26 15:41:37 crc kubenswrapper[4896]: I0126 15:41:37.961680 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2c22b97c-9abb-4b25-9831-daea2ae48af0-service-ca\") pod \"console-5555c78d9f-4chpf\" (UID: \"2c22b97c-9abb-4b25-9831-daea2ae48af0\") " pod="openshift-console/console-5555c78d9f-4chpf"
Jan 26 15:41:37 crc kubenswrapper[4896]: I0126 15:41:37.961735 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2c22b97c-9abb-4b25-9831-daea2ae48af0-oauth-serving-cert\") pod \"console-5555c78d9f-4chpf\" (UID: \"2c22b97c-9abb-4b25-9831-daea2ae48af0\") " pod="openshift-console/console-5555c78d9f-4chpf"
Jan 26 15:41:37 crc kubenswrapper[4896]: I0126 15:41:37.961775 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2c22b97c-9abb-4b25-9831-daea2ae48af0-console-oauth-config\") pod \"console-5555c78d9f-4chpf\" (UID: \"2c22b97c-9abb-4b25-9831-daea2ae48af0\") " pod="openshift-console/console-5555c78d9f-4chpf"
Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.062895 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c22b97c-9abb-4b25-9831-daea2ae48af0-trusted-ca-bundle\") pod \"console-5555c78d9f-4chpf\" (UID: \"2c22b97c-9abb-4b25-9831-daea2ae48af0\") " pod="openshift-console/console-5555c78d9f-4chpf"
Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.063279 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nmdx\" (UniqueName: \"kubernetes.io/projected/2c22b97c-9abb-4b25-9831-daea2ae48af0-kube-api-access-5nmdx\") pod \"console-5555c78d9f-4chpf\" (UID: \"2c22b97c-9abb-4b25-9831-daea2ae48af0\") " pod="openshift-console/console-5555c78d9f-4chpf"
Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.063328
4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2c22b97c-9abb-4b25-9831-daea2ae48af0-service-ca\") pod \"console-5555c78d9f-4chpf\" (UID: \"2c22b97c-9abb-4b25-9831-daea2ae48af0\") " pod="openshift-console/console-5555c78d9f-4chpf" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.063374 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2c22b97c-9abb-4b25-9831-daea2ae48af0-oauth-serving-cert\") pod \"console-5555c78d9f-4chpf\" (UID: \"2c22b97c-9abb-4b25-9831-daea2ae48af0\") " pod="openshift-console/console-5555c78d9f-4chpf" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.063401 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2c22b97c-9abb-4b25-9831-daea2ae48af0-console-oauth-config\") pod \"console-5555c78d9f-4chpf\" (UID: \"2c22b97c-9abb-4b25-9831-daea2ae48af0\") " pod="openshift-console/console-5555c78d9f-4chpf" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.063460 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2c22b97c-9abb-4b25-9831-daea2ae48af0-console-serving-cert\") pod \"console-5555c78d9f-4chpf\" (UID: \"2c22b97c-9abb-4b25-9831-daea2ae48af0\") " pod="openshift-console/console-5555c78d9f-4chpf" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.063491 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2c22b97c-9abb-4b25-9831-daea2ae48af0-console-config\") pod \"console-5555c78d9f-4chpf\" (UID: \"2c22b97c-9abb-4b25-9831-daea2ae48af0\") " pod="openshift-console/console-5555c78d9f-4chpf" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.064418 4896 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2c22b97c-9abb-4b25-9831-daea2ae48af0-console-config\") pod \"console-5555c78d9f-4chpf\" (UID: \"2c22b97c-9abb-4b25-9831-daea2ae48af0\") " pod="openshift-console/console-5555c78d9f-4chpf" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.064463 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c22b97c-9abb-4b25-9831-daea2ae48af0-trusted-ca-bundle\") pod \"console-5555c78d9f-4chpf\" (UID: \"2c22b97c-9abb-4b25-9831-daea2ae48af0\") " pod="openshift-console/console-5555c78d9f-4chpf" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.064493 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2c22b97c-9abb-4b25-9831-daea2ae48af0-oauth-serving-cert\") pod \"console-5555c78d9f-4chpf\" (UID: \"2c22b97c-9abb-4b25-9831-daea2ae48af0\") " pod="openshift-console/console-5555c78d9f-4chpf" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.064625 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2c22b97c-9abb-4b25-9831-daea2ae48af0-service-ca\") pod \"console-5555c78d9f-4chpf\" (UID: \"2c22b97c-9abb-4b25-9831-daea2ae48af0\") " pod="openshift-console/console-5555c78d9f-4chpf" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.069823 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2c22b97c-9abb-4b25-9831-daea2ae48af0-console-oauth-config\") pod \"console-5555c78d9f-4chpf\" (UID: \"2c22b97c-9abb-4b25-9831-daea2ae48af0\") " pod="openshift-console/console-5555c78d9f-4chpf" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.070167 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2c22b97c-9abb-4b25-9831-daea2ae48af0-console-serving-cert\") pod \"console-5555c78d9f-4chpf\" (UID: \"2c22b97c-9abb-4b25-9831-daea2ae48af0\") " pod="openshift-console/console-5555c78d9f-4chpf" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.078295 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-b94dd49c-f92bj"] Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.079257 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-b94dd49c-f92bj" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.081693 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.082097 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.082328 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.082430 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.082555 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-mhx6s" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.085028 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-c8g6qj4sktq0j" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.096459 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nmdx\" (UniqueName: \"kubernetes.io/projected/2c22b97c-9abb-4b25-9831-daea2ae48af0-kube-api-access-5nmdx\") pod \"console-5555c78d9f-4chpf\" (UID: 
\"2c22b97c-9abb-4b25-9831-daea2ae48af0\") " pod="openshift-console/console-5555c78d9f-4chpf" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.097299 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-b94dd49c-f92bj"] Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.266110 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/1672fa36-cd09-47c9-bb88-ab33ef7e7e66-secret-metrics-server-tls\") pod \"metrics-server-b94dd49c-f92bj\" (UID: \"1672fa36-cd09-47c9-bb88-ab33ef7e7e66\") " pod="openshift-monitoring/metrics-server-b94dd49c-f92bj" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.266233 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwghn\" (UniqueName: \"kubernetes.io/projected/1672fa36-cd09-47c9-bb88-ab33ef7e7e66-kube-api-access-kwghn\") pod \"metrics-server-b94dd49c-f92bj\" (UID: \"1672fa36-cd09-47c9-bb88-ab33ef7e7e66\") " pod="openshift-monitoring/metrics-server-b94dd49c-f92bj" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.266272 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/1672fa36-cd09-47c9-bb88-ab33ef7e7e66-metrics-server-audit-profiles\") pod \"metrics-server-b94dd49c-f92bj\" (UID: \"1672fa36-cd09-47c9-bb88-ab33ef7e7e66\") " pod="openshift-monitoring/metrics-server-b94dd49c-f92bj" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.266302 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/1672fa36-cd09-47c9-bb88-ab33ef7e7e66-secret-metrics-client-certs\") pod \"metrics-server-b94dd49c-f92bj\" (UID: \"1672fa36-cd09-47c9-bb88-ab33ef7e7e66\") " 
pod="openshift-monitoring/metrics-server-b94dd49c-f92bj" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.266333 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1672fa36-cd09-47c9-bb88-ab33ef7e7e66-client-ca-bundle\") pod \"metrics-server-b94dd49c-f92bj\" (UID: \"1672fa36-cd09-47c9-bb88-ab33ef7e7e66\") " pod="openshift-monitoring/metrics-server-b94dd49c-f92bj" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.266870 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1672fa36-cd09-47c9-bb88-ab33ef7e7e66-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-b94dd49c-f92bj\" (UID: \"1672fa36-cd09-47c9-bb88-ab33ef7e7e66\") " pod="openshift-monitoring/metrics-server-b94dd49c-f92bj" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.266911 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/1672fa36-cd09-47c9-bb88-ab33ef7e7e66-audit-log\") pod \"metrics-server-b94dd49c-f92bj\" (UID: \"1672fa36-cd09-47c9-bb88-ab33ef7e7e66\") " pod="openshift-monitoring/metrics-server-b94dd49c-f92bj" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.359152 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5555c78d9f-4chpf" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.367548 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/1672fa36-cd09-47c9-bb88-ab33ef7e7e66-metrics-server-audit-profiles\") pod \"metrics-server-b94dd49c-f92bj\" (UID: \"1672fa36-cd09-47c9-bb88-ab33ef7e7e66\") " pod="openshift-monitoring/metrics-server-b94dd49c-f92bj" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.367613 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/1672fa36-cd09-47c9-bb88-ab33ef7e7e66-secret-metrics-client-certs\") pod \"metrics-server-b94dd49c-f92bj\" (UID: \"1672fa36-cd09-47c9-bb88-ab33ef7e7e66\") " pod="openshift-monitoring/metrics-server-b94dd49c-f92bj" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.367649 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1672fa36-cd09-47c9-bb88-ab33ef7e7e66-client-ca-bundle\") pod \"metrics-server-b94dd49c-f92bj\" (UID: \"1672fa36-cd09-47c9-bb88-ab33ef7e7e66\") " pod="openshift-monitoring/metrics-server-b94dd49c-f92bj" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.367679 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1672fa36-cd09-47c9-bb88-ab33ef7e7e66-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-b94dd49c-f92bj\" (UID: \"1672fa36-cd09-47c9-bb88-ab33ef7e7e66\") " pod="openshift-monitoring/metrics-server-b94dd49c-f92bj" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.367719 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: 
\"kubernetes.io/empty-dir/1672fa36-cd09-47c9-bb88-ab33ef7e7e66-audit-log\") pod \"metrics-server-b94dd49c-f92bj\" (UID: \"1672fa36-cd09-47c9-bb88-ab33ef7e7e66\") " pod="openshift-monitoring/metrics-server-b94dd49c-f92bj" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.367769 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/1672fa36-cd09-47c9-bb88-ab33ef7e7e66-secret-metrics-server-tls\") pod \"metrics-server-b94dd49c-f92bj\" (UID: \"1672fa36-cd09-47c9-bb88-ab33ef7e7e66\") " pod="openshift-monitoring/metrics-server-b94dd49c-f92bj" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.367849 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwghn\" (UniqueName: \"kubernetes.io/projected/1672fa36-cd09-47c9-bb88-ab33ef7e7e66-kube-api-access-kwghn\") pod \"metrics-server-b94dd49c-f92bj\" (UID: \"1672fa36-cd09-47c9-bb88-ab33ef7e7e66\") " pod="openshift-monitoring/metrics-server-b94dd49c-f92bj" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.368296 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/1672fa36-cd09-47c9-bb88-ab33ef7e7e66-audit-log\") pod \"metrics-server-b94dd49c-f92bj\" (UID: \"1672fa36-cd09-47c9-bb88-ab33ef7e7e66\") " pod="openshift-monitoring/metrics-server-b94dd49c-f92bj" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.368776 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1672fa36-cd09-47c9-bb88-ab33ef7e7e66-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-b94dd49c-f92bj\" (UID: \"1672fa36-cd09-47c9-bb88-ab33ef7e7e66\") " pod="openshift-monitoring/metrics-server-b94dd49c-f92bj" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.369112 4896 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/1672fa36-cd09-47c9-bb88-ab33ef7e7e66-metrics-server-audit-profiles\") pod \"metrics-server-b94dd49c-f92bj\" (UID: \"1672fa36-cd09-47c9-bb88-ab33ef7e7e66\") " pod="openshift-monitoring/metrics-server-b94dd49c-f92bj" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.372274 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/1672fa36-cd09-47c9-bb88-ab33ef7e7e66-secret-metrics-client-certs\") pod \"metrics-server-b94dd49c-f92bj\" (UID: \"1672fa36-cd09-47c9-bb88-ab33ef7e7e66\") " pod="openshift-monitoring/metrics-server-b94dd49c-f92bj" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.372893 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/1672fa36-cd09-47c9-bb88-ab33ef7e7e66-secret-metrics-server-tls\") pod \"metrics-server-b94dd49c-f92bj\" (UID: \"1672fa36-cd09-47c9-bb88-ab33ef7e7e66\") " pod="openshift-monitoring/metrics-server-b94dd49c-f92bj" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.373122 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1672fa36-cd09-47c9-bb88-ab33ef7e7e66-client-ca-bundle\") pod \"metrics-server-b94dd49c-f92bj\" (UID: \"1672fa36-cd09-47c9-bb88-ab33ef7e7e66\") " pod="openshift-monitoring/metrics-server-b94dd49c-f92bj" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.385387 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwghn\" (UniqueName: \"kubernetes.io/projected/1672fa36-cd09-47c9-bb88-ab33ef7e7e66-kube-api-access-kwghn\") pod \"metrics-server-b94dd49c-f92bj\" (UID: \"1672fa36-cd09-47c9-bb88-ab33ef7e7e66\") " pod="openshift-monitoring/metrics-server-b94dd49c-f92bj" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 
15:41:38.396877 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-b94dd49c-f92bj" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.504006 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-x9wnt" event={"ID":"14531d98-96ef-4629-9f9f-4797c4480849","Type":"ContainerStarted","Data":"808071355badbfc5d8e5a5b503c3edbbf03345a709136e17b327c45c4b1e2a35"} Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.504397 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-x9wnt" event={"ID":"14531d98-96ef-4629-9f9f-4797c4480849","Type":"ContainerStarted","Data":"5f8140fc513e2d47e9b6ebba4c32dbc5c45227e429d66f8ec88e72732509ac80"} Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.509655 4896 generic.go:334] "Generic (PLEG): container finished" podID="8fa08097-ed36-4af5-8e5b-1121ef06a34f" containerID="0fcccc924a6a3e63c015de39525d41d41ed239db8221baa431949dac2658b934" exitCode=0 Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.509774 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8fa08097-ed36-4af5-8e5b-1121ef06a34f","Type":"ContainerDied","Data":"0fcccc924a6a3e63c015de39525d41d41ed239db8221baa431949dac2658b934"} Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.518240 4896 generic.go:334] "Generic (PLEG): container finished" podID="e670540b-8b02-44e9-859d-3d792b5e4fda" containerID="fde4403233e271124ad572023d78da8fbf49466a21014c34c260750a5c7af841" exitCode=0 Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.518358 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-8rwmv" event={"ID":"e670540b-8b02-44e9-859d-3d792b5e4fda","Type":"ContainerDied","Data":"fde4403233e271124ad572023d78da8fbf49466a21014c34c260750a5c7af841"} Jan 26 15:41:38 crc kubenswrapper[4896]: 
I0126 15:41:38.527515 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-x9wnt" podStartSLOduration=3.382645233 podStartE2EDuration="6.527494692s" podCreationTimestamp="2026-01-26 15:41:32 +0000 UTC" firstStartedPulling="2026-01-26 15:41:33.708541344 +0000 UTC m=+451.490421737" lastFinishedPulling="2026-01-26 15:41:36.853390803 +0000 UTC m=+454.635271196" observedRunningTime="2026-01-26 15:41:38.524345298 +0000 UTC m=+456.306225691" watchObservedRunningTime="2026-01-26 15:41:38.527494692 +0000 UTC m=+456.309375105" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.686494 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-6779cbc9cd-ptgtl"] Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.693433 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-6779cbc9cd-ptgtl" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.698182 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-6779cbc9cd-ptgtl"] Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.698706 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.699116 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.779525 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/e38b1ee1-d5e1-470c-a326-90f20b2f9050-monitoring-plugin-cert\") pod \"monitoring-plugin-6779cbc9cd-ptgtl\" (UID: \"e38b1ee1-d5e1-470c-a326-90f20b2f9050\") " pod="openshift-monitoring/monitoring-plugin-6779cbc9cd-ptgtl" Jan 26 15:41:38 crc 
kubenswrapper[4896]: I0126 15:41:38.854215 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5555c78d9f-4chpf"] Jan 26 15:41:38 crc kubenswrapper[4896]: W0126 15:41:38.857674 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c22b97c_9abb_4b25_9831_daea2ae48af0.slice/crio-b6720c02a2c2c46f1da48d280ffffb82a114e41610057701d785d894aabc28e6 WatchSource:0}: Error finding container b6720c02a2c2c46f1da48d280ffffb82a114e41610057701d785d894aabc28e6: Status 404 returned error can't find the container with id b6720c02a2c2c46f1da48d280ffffb82a114e41610057701d785d894aabc28e6 Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.884817 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/e38b1ee1-d5e1-470c-a326-90f20b2f9050-monitoring-plugin-cert\") pod \"monitoring-plugin-6779cbc9cd-ptgtl\" (UID: \"e38b1ee1-d5e1-470c-a326-90f20b2f9050\") " pod="openshift-monitoring/monitoring-plugin-6779cbc9cd-ptgtl" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.891426 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/e38b1ee1-d5e1-470c-a326-90f20b2f9050-monitoring-plugin-cert\") pod \"monitoring-plugin-6779cbc9cd-ptgtl\" (UID: \"e38b1ee1-d5e1-470c-a326-90f20b2f9050\") " pod="openshift-monitoring/monitoring-plugin-6779cbc9cd-ptgtl" Jan 26 15:41:38 crc kubenswrapper[4896]: I0126 15:41:38.987142 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-b94dd49c-f92bj"] Jan 26 15:41:38 crc kubenswrapper[4896]: W0126 15:41:38.999333 4896 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1672fa36_cd09_47c9_bb88_ab33ef7e7e66.slice/crio-62e33a816da548d5872cee45f451043bb00050174017f7cddcfa1a0b39ff00d5 WatchSource:0}: Error finding container 62e33a816da548d5872cee45f451043bb00050174017f7cddcfa1a0b39ff00d5: Status 404 returned error can't find the container with id 62e33a816da548d5872cee45f451043bb00050174017f7cddcfa1a0b39ff00d5 Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.023972 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-6779cbc9cd-ptgtl" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.375079 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.377256 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.382515 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.382685 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.384556 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.384726 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.384816 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.385015 4896 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.384736 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.385289 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.385884 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-l4p2m" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.385922 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-2g0vur4oo4h92" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.389520 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.392388 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.392998 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.394491 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/2047fd01-a581-40bc-9865-476ac9d1fae6-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.394532 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" 
(UniqueName: \"kubernetes.io/empty-dir/2047fd01-a581-40bc-9865-476ac9d1fae6-config-out\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.394571 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2047fd01-a581-40bc-9865-476ac9d1fae6-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.394613 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qks7x\" (UniqueName: \"kubernetes.io/projected/2047fd01-a581-40bc-9865-476ac9d1fae6-kube-api-access-qks7x\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.394631 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2047fd01-a581-40bc-9865-476ac9d1fae6-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.394662 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2047fd01-a581-40bc-9865-476ac9d1fae6-config\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.394676 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" 
(UniqueName: \"kubernetes.io/projected/2047fd01-a581-40bc-9865-476ac9d1fae6-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.394691 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/2047fd01-a581-40bc-9865-476ac9d1fae6-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.394709 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2047fd01-a581-40bc-9865-476ac9d1fae6-web-config\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.394727 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/2047fd01-a581-40bc-9865-476ac9d1fae6-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.394746 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2047fd01-a581-40bc-9865-476ac9d1fae6-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.394761 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2047fd01-a581-40bc-9865-476ac9d1fae6-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.394777 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2047fd01-a581-40bc-9865-476ac9d1fae6-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.394792 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/2047fd01-a581-40bc-9865-476ac9d1fae6-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.394811 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/2047fd01-a581-40bc-9865-476ac9d1fae6-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.394826 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2047fd01-a581-40bc-9865-476ac9d1fae6-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc 
kubenswrapper[4896]: I0126 15:41:39.394846 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2047fd01-a581-40bc-9865-476ac9d1fae6-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.394864 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2047fd01-a581-40bc-9865-476ac9d1fae6-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.398294 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.495910 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2047fd01-a581-40bc-9865-476ac9d1fae6-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.495963 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qks7x\" (UniqueName: \"kubernetes.io/projected/2047fd01-a581-40bc-9865-476ac9d1fae6-kube-api-access-qks7x\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.495995 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/2047fd01-a581-40bc-9865-476ac9d1fae6-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.496040 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2047fd01-a581-40bc-9865-476ac9d1fae6-config\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.496062 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2047fd01-a581-40bc-9865-476ac9d1fae6-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.496084 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/2047fd01-a581-40bc-9865-476ac9d1fae6-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.496107 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2047fd01-a581-40bc-9865-476ac9d1fae6-web-config\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.496130 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/2047fd01-a581-40bc-9865-476ac9d1fae6-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: 
\"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.496160 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2047fd01-a581-40bc-9865-476ac9d1fae6-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.496188 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2047fd01-a581-40bc-9865-476ac9d1fae6-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.496213 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2047fd01-a581-40bc-9865-476ac9d1fae6-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.496237 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/2047fd01-a581-40bc-9865-476ac9d1fae6-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.496259 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: 
\"kubernetes.io/secret/2047fd01-a581-40bc-9865-476ac9d1fae6-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.496280 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2047fd01-a581-40bc-9865-476ac9d1fae6-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.496310 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2047fd01-a581-40bc-9865-476ac9d1fae6-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.496339 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2047fd01-a581-40bc-9865-476ac9d1fae6-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.496377 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/2047fd01-a581-40bc-9865-476ac9d1fae6-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.496413 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2047fd01-a581-40bc-9865-476ac9d1fae6-config-out\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.498299 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2047fd01-a581-40bc-9865-476ac9d1fae6-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.499670 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2047fd01-a581-40bc-9865-476ac9d1fae6-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.499710 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2047fd01-a581-40bc-9865-476ac9d1fae6-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.500651 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/2047fd01-a581-40bc-9865-476ac9d1fae6-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.501339 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: 
\"kubernetes.io/secret/2047fd01-a581-40bc-9865-476ac9d1fae6-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.501543 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2047fd01-a581-40bc-9865-476ac9d1fae6-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.502289 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2047fd01-a581-40bc-9865-476ac9d1fae6-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.502407 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2047fd01-a581-40bc-9865-476ac9d1fae6-config-out\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.502862 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2047fd01-a581-40bc-9865-476ac9d1fae6-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.503021 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/2047fd01-a581-40bc-9865-476ac9d1fae6-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.503597 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2047fd01-a581-40bc-9865-476ac9d1fae6-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.504028 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/2047fd01-a581-40bc-9865-476ac9d1fae6-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.505049 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/2047fd01-a581-40bc-9865-476ac9d1fae6-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.506385 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/2047fd01-a581-40bc-9865-476ac9d1fae6-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.513172 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/2047fd01-a581-40bc-9865-476ac9d1fae6-web-config\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.514156 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2047fd01-a581-40bc-9865-476ac9d1fae6-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.517385 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2047fd01-a581-40bc-9865-476ac9d1fae6-config\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.518440 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qks7x\" (UniqueName: \"kubernetes.io/projected/2047fd01-a581-40bc-9865-476ac9d1fae6-kube-api-access-qks7x\") pod \"prometheus-k8s-0\" (UID: \"2047fd01-a581-40bc-9865-476ac9d1fae6\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.526949 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-8rwmv" event={"ID":"e670540b-8b02-44e9-859d-3d792b5e4fda","Type":"ContainerStarted","Data":"83957085ab3a57d65506381bf04e3a5596703ccb9c382d95c74a40f8a9f0fa24"} Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.527171 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-8rwmv" event={"ID":"e670540b-8b02-44e9-859d-3d792b5e4fda","Type":"ContainerStarted","Data":"c0d0aca82cd692882845f43f152e0eebe7e022ce1885d6f0d3769ed31e695679"} Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 
15:41:39.528678 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5555c78d9f-4chpf" event={"ID":"2c22b97c-9abb-4b25-9831-daea2ae48af0","Type":"ContainerStarted","Data":"1f85e0f27dca829bb0908666538542513bb02b6aabef0609892f83afda66878f"} Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.528768 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5555c78d9f-4chpf" event={"ID":"2c22b97c-9abb-4b25-9831-daea2ae48af0","Type":"ContainerStarted","Data":"b6720c02a2c2c46f1da48d280ffffb82a114e41610057701d785d894aabc28e6"} Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.529920 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-b94dd49c-f92bj" event={"ID":"1672fa36-cd09-47c9-bb88-ab33ef7e7e66","Type":"ContainerStarted","Data":"62e33a816da548d5872cee45f451043bb00050174017f7cddcfa1a0b39ff00d5"} Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.638103 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-8rwmv" podStartSLOduration=4.091414263 podStartE2EDuration="7.638081279s" podCreationTimestamp="2026-01-26 15:41:32 +0000 UTC" firstStartedPulling="2026-01-26 15:41:33.305182814 +0000 UTC m=+451.087063207" lastFinishedPulling="2026-01-26 15:41:36.85184982 +0000 UTC m=+454.633730223" observedRunningTime="2026-01-26 15:41:39.585655092 +0000 UTC m=+457.367535505" watchObservedRunningTime="2026-01-26 15:41:39.638081279 +0000 UTC m=+457.419961672" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.640254 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5555c78d9f-4chpf" podStartSLOduration=2.64024239 podStartE2EDuration="2.64024239s" podCreationTimestamp="2026-01-26 15:41:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:41:39.634485314 
+0000 UTC m=+457.416365727" watchObservedRunningTime="2026-01-26 15:41:39.64024239 +0000 UTC m=+457.422122783" Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.647242 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-6779cbc9cd-ptgtl"] Jan 26 15:41:39 crc kubenswrapper[4896]: I0126 15:41:39.699791 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Jan 26 15:41:40 crc kubenswrapper[4896]: I0126 15:41:40.574878 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-6779cbc9cd-ptgtl" event={"ID":"e38b1ee1-d5e1-470c-a326-90f20b2f9050","Type":"ContainerStarted","Data":"fa63033c78c66d11a5b91d6b4f5f6b3b95749d19fa419abcc595d63fe1059858"} Jan 26 15:41:40 crc kubenswrapper[4896]: I0126 15:41:40.863209 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 26 15:41:41 crc kubenswrapper[4896]: I0126 15:41:41.585728 4896 generic.go:334] "Generic (PLEG): container finished" podID="2047fd01-a581-40bc-9865-476ac9d1fae6" containerID="d56db77cc8d8ea6f0504b23d1a935dfc46e798ef8ba236e22060c4c53d4a17fb" exitCode=0 Jan 26 15:41:41 crc kubenswrapper[4896]: I0126 15:41:41.587153 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2047fd01-a581-40bc-9865-476ac9d1fae6","Type":"ContainerDied","Data":"d56db77cc8d8ea6f0504b23d1a935dfc46e798ef8ba236e22060c4c53d4a17fb"} Jan 26 15:41:41 crc kubenswrapper[4896]: I0126 15:41:41.587188 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2047fd01-a581-40bc-9865-476ac9d1fae6","Type":"ContainerStarted","Data":"a4a4efec83ec5dddd6ebc6739144b5e5dc144a05d72687db60bba1c9f79ffc7c"} Jan 26 15:41:41 crc kubenswrapper[4896]: I0126 15:41:41.593838 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/thanos-querier-c5586d8c9-f4qc2" event={"ID":"5ac5577e-a45b-4e15-aa54-d3bd9c8ca092","Type":"ContainerStarted","Data":"76b7c01b8b2e5503aa4ae45867e512c536b89fe9ac8c4524c8c4862c2de46073"} Jan 26 15:41:41 crc kubenswrapper[4896]: I0126 15:41:41.593881 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-c5586d8c9-f4qc2" event={"ID":"5ac5577e-a45b-4e15-aa54-d3bd9c8ca092","Type":"ContainerStarted","Data":"31794f3939ed277d5829c64ccdb5a853801148a4e461cb6a10b5d0efd574acad"} Jan 26 15:41:41 crc kubenswrapper[4896]: I0126 15:41:41.593895 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-c5586d8c9-f4qc2" event={"ID":"5ac5577e-a45b-4e15-aa54-d3bd9c8ca092","Type":"ContainerStarted","Data":"ef76e8e9d43d5ee328592037027f08aca4b00d57e1194916858aa9df82aff2c0"} Jan 26 15:41:44 crc kubenswrapper[4896]: I0126 15:41:44.617825 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-c5586d8c9-f4qc2" event={"ID":"5ac5577e-a45b-4e15-aa54-d3bd9c8ca092","Type":"ContainerStarted","Data":"0e16595402285335dd0eeb0c9025bc06d1c65ad1afac7cb3d6a3737608f2a9bb"} Jan 26 15:41:44 crc kubenswrapper[4896]: I0126 15:41:44.618355 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-c5586d8c9-f4qc2" event={"ID":"5ac5577e-a45b-4e15-aa54-d3bd9c8ca092","Type":"ContainerStarted","Data":"365d2122cb93eab8d920cf666d2f410d9c38508b14e2fe7e7bf7eaf35c846f52"} Jan 26 15:41:44 crc kubenswrapper[4896]: I0126 15:41:44.618368 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-c5586d8c9-f4qc2" event={"ID":"5ac5577e-a45b-4e15-aa54-d3bd9c8ca092","Type":"ContainerStarted","Data":"07e75add28ef0df7b5942c39104f7ea9df7d2089f2f803091f827e4059c1e865"} Jan 26 15:41:44 crc kubenswrapper[4896]: I0126 15:41:44.618422 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-monitoring/thanos-querier-c5586d8c9-f4qc2" Jan 26 15:41:44 crc kubenswrapper[4896]: I0126 15:41:44.623215 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8fa08097-ed36-4af5-8e5b-1121ef06a34f","Type":"ContainerStarted","Data":"d20440fccb8cf56a77e893a04a997d768e52c3cf5a57205751936945c517021a"} Jan 26 15:41:44 crc kubenswrapper[4896]: I0126 15:41:44.623394 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8fa08097-ed36-4af5-8e5b-1121ef06a34f","Type":"ContainerStarted","Data":"b3ee68751cbabaaad82d146603d04f50ce8d5a53a9adeb619be5e11e4be239fa"} Jan 26 15:41:44 crc kubenswrapper[4896]: I0126 15:41:44.623408 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8fa08097-ed36-4af5-8e5b-1121ef06a34f","Type":"ContainerStarted","Data":"ec82734dc5c14e055479b1f98703723022d9b1839880eb92647b0d85f20072d5"} Jan 26 15:41:44 crc kubenswrapper[4896]: I0126 15:41:44.623418 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8fa08097-ed36-4af5-8e5b-1121ef06a34f","Type":"ContainerStarted","Data":"9e035b09af7cffa2eb69f770269fbfec31a9439a3a2698d80c38ef68aa0170fe"} Jan 26 15:41:44 crc kubenswrapper[4896]: I0126 15:41:44.634101 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-b94dd49c-f92bj" event={"ID":"1672fa36-cd09-47c9-bb88-ab33ef7e7e66","Type":"ContainerStarted","Data":"eb64f96f91b85852045573b3ec900f79c6f63a221e83f8c5b6dde3b2eff64d58"} Jan 26 15:41:44 crc kubenswrapper[4896]: I0126 15:41:44.636329 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-6779cbc9cd-ptgtl" event={"ID":"e38b1ee1-d5e1-470c-a326-90f20b2f9050","Type":"ContainerStarted","Data":"650fa03172a168c52002d58c0c156b215f6c624fbc809ab937ab8399e64435be"} Jan 26 
15:41:44 crc kubenswrapper[4896]: I0126 15:41:44.636856 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-6779cbc9cd-ptgtl" Jan 26 15:41:44 crc kubenswrapper[4896]: I0126 15:41:44.643941 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-c5586d8c9-f4qc2" podStartSLOduration=3.445887371 podStartE2EDuration="10.643924453s" podCreationTimestamp="2026-01-26 15:41:34 +0000 UTC" firstStartedPulling="2026-01-26 15:41:36.848332691 +0000 UTC m=+454.630213084" lastFinishedPulling="2026-01-26 15:41:44.046369773 +0000 UTC m=+461.828250166" observedRunningTime="2026-01-26 15:41:44.643194526 +0000 UTC m=+462.425074919" watchObservedRunningTime="2026-01-26 15:41:44.643924453 +0000 UTC m=+462.425804846" Jan 26 15:41:44 crc kubenswrapper[4896]: I0126 15:41:44.648707 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-6779cbc9cd-ptgtl" Jan 26 15:41:44 crc kubenswrapper[4896]: I0126 15:41:44.662250 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-6779cbc9cd-ptgtl" podStartSLOduration=2.359476361 podStartE2EDuration="6.662226245s" podCreationTimestamp="2026-01-26 15:41:38 +0000 UTC" firstStartedPulling="2026-01-26 15:41:39.647303466 +0000 UTC m=+457.429183859" lastFinishedPulling="2026-01-26 15:41:43.95005335 +0000 UTC m=+461.731933743" observedRunningTime="2026-01-26 15:41:44.661111949 +0000 UTC m=+462.442992352" watchObservedRunningTime="2026-01-26 15:41:44.662226245 +0000 UTC m=+462.444106638" Jan 26 15:41:44 crc kubenswrapper[4896]: I0126 15:41:44.687173 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-b94dd49c-f92bj" podStartSLOduration=1.7386190529999999 podStartE2EDuration="6.687152444s" podCreationTimestamp="2026-01-26 15:41:38 +0000 UTC" 
firstStartedPulling="2026-01-26 15:41:39.002511292 +0000 UTC m=+456.784391685" lastFinishedPulling="2026-01-26 15:41:43.951044683 +0000 UTC m=+461.732925076" observedRunningTime="2026-01-26 15:41:44.679724028 +0000 UTC m=+462.461604421" watchObservedRunningTime="2026-01-26 15:41:44.687152444 +0000 UTC m=+462.469032837" Jan 26 15:41:45 crc kubenswrapper[4896]: I0126 15:41:45.371811 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-c5586d8c9-f4qc2" Jan 26 15:41:45 crc kubenswrapper[4896]: I0126 15:41:45.645987 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8fa08097-ed36-4af5-8e5b-1121ef06a34f","Type":"ContainerStarted","Data":"1ab46061de3b1f29c35235c796cca3e70576d03c8e89677ab16437444739cfe6"} Jan 26 15:41:45 crc kubenswrapper[4896]: I0126 15:41:45.646067 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8fa08097-ed36-4af5-8e5b-1121ef06a34f","Type":"ContainerStarted","Data":"1bbbc730d02b7c6305834f21eeaee4db405c0822f2a3ea81801ed365873535f9"} Jan 26 15:41:45 crc kubenswrapper[4896]: I0126 15:41:45.690507 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=3.519090607 podStartE2EDuration="12.690477149s" podCreationTimestamp="2026-01-26 15:41:33 +0000 UTC" firstStartedPulling="2026-01-26 15:41:34.779063867 +0000 UTC m=+452.560944260" lastFinishedPulling="2026-01-26 15:41:43.950450409 +0000 UTC m=+461.732330802" observedRunningTime="2026-01-26 15:41:45.690258544 +0000 UTC m=+463.472138957" watchObservedRunningTime="2026-01-26 15:41:45.690477149 +0000 UTC m=+463.472357582" Jan 26 15:41:48 crc kubenswrapper[4896]: I0126 15:41:48.359452 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5555c78d9f-4chpf" Jan 26 15:41:48 crc 
kubenswrapper[4896]: I0126 15:41:48.359781 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5555c78d9f-4chpf" Jan 26 15:41:48 crc kubenswrapper[4896]: I0126 15:41:48.364175 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5555c78d9f-4chpf" Jan 26 15:41:48 crc kubenswrapper[4896]: I0126 15:41:48.820770 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5555c78d9f-4chpf" Jan 26 15:41:48 crc kubenswrapper[4896]: I0126 15:41:48.879080 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-z6479"] Jan 26 15:41:49 crc kubenswrapper[4896]: I0126 15:41:49.843430 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2047fd01-a581-40bc-9865-476ac9d1fae6","Type":"ContainerStarted","Data":"ac1a0db9f8f000e12769f2cc61b6eefc3275de25176fe59d30a0431942945582"} Jan 26 15:41:49 crc kubenswrapper[4896]: I0126 15:41:49.844004 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2047fd01-a581-40bc-9865-476ac9d1fae6","Type":"ContainerStarted","Data":"2c43dc1cc26aa70f0177a4ba9c98ad2a1facd1d688c5f4a331ef6a0bce2e4c74"} Jan 26 15:41:49 crc kubenswrapper[4896]: I0126 15:41:49.844032 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2047fd01-a581-40bc-9865-476ac9d1fae6","Type":"ContainerStarted","Data":"07cdc7f9615a214ee8f78b9438a3ecf5d2381ee56e7a31ad39f1f8823fd099ea"} Jan 26 15:41:49 crc kubenswrapper[4896]: I0126 15:41:49.844045 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2047fd01-a581-40bc-9865-476ac9d1fae6","Type":"ContainerStarted","Data":"6c33de043225ce5c40009a6154ac566c698e4df3108864c1b48cb3b5dd17b86a"} Jan 26 15:41:49 crc kubenswrapper[4896]: 
I0126 15:41:49.844058 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2047fd01-a581-40bc-9865-476ac9d1fae6","Type":"ContainerStarted","Data":"3e621ee37452a49f89c5d3dea088ea88518585a66040dae9f8117931abdcee6c"}
Jan 26 15:41:50 crc kubenswrapper[4896]: I0126 15:41:50.855208 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2047fd01-a581-40bc-9865-476ac9d1fae6","Type":"ContainerStarted","Data":"03417b7d6bd5b7ac2fb4d1fa4cce8956368f04af476efea5e89a58abc16a8ccb"}
Jan 26 15:41:50 crc kubenswrapper[4896]: I0126 15:41:50.891502 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=4.643384104 podStartE2EDuration="11.891484779s" podCreationTimestamp="2026-01-26 15:41:39 +0000 UTC" firstStartedPulling="2026-01-26 15:41:41.588972575 +0000 UTC m=+459.370852968" lastFinishedPulling="2026-01-26 15:41:48.83707325 +0000 UTC m=+466.618953643" observedRunningTime="2026-01-26 15:41:50.890358272 +0000 UTC m=+468.672238675" watchObservedRunningTime="2026-01-26 15:41:50.891484779 +0000 UTC m=+468.673365172"
Jan 26 15:41:54 crc kubenswrapper[4896]: I0126 15:41:54.700888 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0"
Jan 26 15:41:58 crc kubenswrapper[4896]: I0126 15:41:58.397442 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-b94dd49c-f92bj"
Jan 26 15:41:58 crc kubenswrapper[4896]: I0126 15:41:58.397815 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-b94dd49c-f92bj"
Jan 26 15:42:14 crc kubenswrapper[4896]: I0126 15:42:13.957902 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-z6479" podUID="09601473-06d9-4938-876d-ea6e1b9ffc91" containerName="console" containerID="cri-o://5324e5495ab2ca9a1b1918eed3dadf9fea6b440e68e439fe02b49850a9b1baa2" gracePeriod=15
Jan 26 15:42:14 crc kubenswrapper[4896]: I0126 15:42:14.431891 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-z6479_09601473-06d9-4938-876d-ea6e1b9ffc91/console/0.log"
Jan 26 15:42:14 crc kubenswrapper[4896]: I0126 15:42:14.432420 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-z6479"
Jan 26 15:42:14 crc kubenswrapper[4896]: I0126 15:42:14.467373 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/09601473-06d9-4938-876d-ea6e1b9ffc91-console-config\") pod \"09601473-06d9-4938-876d-ea6e1b9ffc91\" (UID: \"09601473-06d9-4938-876d-ea6e1b9ffc91\") "
Jan 26 15:42:14 crc kubenswrapper[4896]: I0126 15:42:14.467442 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/09601473-06d9-4938-876d-ea6e1b9ffc91-console-serving-cert\") pod \"09601473-06d9-4938-876d-ea6e1b9ffc91\" (UID: \"09601473-06d9-4938-876d-ea6e1b9ffc91\") "
Jan 26 15:42:14 crc kubenswrapper[4896]: I0126 15:42:14.467490 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/09601473-06d9-4938-876d-ea6e1b9ffc91-oauth-serving-cert\") pod \"09601473-06d9-4938-876d-ea6e1b9ffc91\" (UID: \"09601473-06d9-4938-876d-ea6e1b9ffc91\") "
Jan 26 15:42:14 crc kubenswrapper[4896]: I0126 15:42:14.468550 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09601473-06d9-4938-876d-ea6e1b9ffc91-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "09601473-06d9-4938-876d-ea6e1b9ffc91" (UID: "09601473-06d9-4938-876d-ea6e1b9ffc91"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:42:14 crc kubenswrapper[4896]: I0126 15:42:14.468685 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09601473-06d9-4938-876d-ea6e1b9ffc91-console-config" (OuterVolumeSpecName: "console-config") pod "09601473-06d9-4938-876d-ea6e1b9ffc91" (UID: "09601473-06d9-4938-876d-ea6e1b9ffc91"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:42:14 crc kubenswrapper[4896]: I0126 15:42:14.468746 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjrj6\" (UniqueName: \"kubernetes.io/projected/09601473-06d9-4938-876d-ea6e1b9ffc91-kube-api-access-bjrj6\") pod \"09601473-06d9-4938-876d-ea6e1b9ffc91\" (UID: \"09601473-06d9-4938-876d-ea6e1b9ffc91\") "
Jan 26 15:42:14 crc kubenswrapper[4896]: I0126 15:42:14.468789 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/09601473-06d9-4938-876d-ea6e1b9ffc91-console-oauth-config\") pod \"09601473-06d9-4938-876d-ea6e1b9ffc91\" (UID: \"09601473-06d9-4938-876d-ea6e1b9ffc91\") "
Jan 26 15:42:14 crc kubenswrapper[4896]: I0126 15:42:14.468894 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09601473-06d9-4938-876d-ea6e1b9ffc91-trusted-ca-bundle\") pod \"09601473-06d9-4938-876d-ea6e1b9ffc91\" (UID: \"09601473-06d9-4938-876d-ea6e1b9ffc91\") "
Jan 26 15:42:14 crc kubenswrapper[4896]: I0126 15:42:14.468954 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/09601473-06d9-4938-876d-ea6e1b9ffc91-service-ca\") pod \"09601473-06d9-4938-876d-ea6e1b9ffc91\" (UID: \"09601473-06d9-4938-876d-ea6e1b9ffc91\") "
Jan 26 15:42:14 crc kubenswrapper[4896]: I0126 15:42:14.469278 4896 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/09601473-06d9-4938-876d-ea6e1b9ffc91-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 15:42:14 crc kubenswrapper[4896]: I0126 15:42:14.469293 4896 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/09601473-06d9-4938-876d-ea6e1b9ffc91-console-config\") on node \"crc\" DevicePath \"\""
Jan 26 15:42:14 crc kubenswrapper[4896]: I0126 15:42:14.470256 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09601473-06d9-4938-876d-ea6e1b9ffc91-service-ca" (OuterVolumeSpecName: "service-ca") pod "09601473-06d9-4938-876d-ea6e1b9ffc91" (UID: "09601473-06d9-4938-876d-ea6e1b9ffc91"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:42:14 crc kubenswrapper[4896]: I0126 15:42:14.470479 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09601473-06d9-4938-876d-ea6e1b9ffc91-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09601473-06d9-4938-876d-ea6e1b9ffc91" (UID: "09601473-06d9-4938-876d-ea6e1b9ffc91"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:42:14 crc kubenswrapper[4896]: I0126 15:42:14.473944 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09601473-06d9-4938-876d-ea6e1b9ffc91-kube-api-access-bjrj6" (OuterVolumeSpecName: "kube-api-access-bjrj6") pod "09601473-06d9-4938-876d-ea6e1b9ffc91" (UID: "09601473-06d9-4938-876d-ea6e1b9ffc91"). InnerVolumeSpecName "kube-api-access-bjrj6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:42:14 crc kubenswrapper[4896]: I0126 15:42:14.474565 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09601473-06d9-4938-876d-ea6e1b9ffc91-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "09601473-06d9-4938-876d-ea6e1b9ffc91" (UID: "09601473-06d9-4938-876d-ea6e1b9ffc91"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:42:14 crc kubenswrapper[4896]: I0126 15:42:14.474642 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09601473-06d9-4938-876d-ea6e1b9ffc91-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "09601473-06d9-4938-876d-ea6e1b9ffc91" (UID: "09601473-06d9-4938-876d-ea6e1b9ffc91"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:42:14 crc kubenswrapper[4896]: I0126 15:42:14.571202 4896 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/09601473-06d9-4938-876d-ea6e1b9ffc91-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 15:42:14 crc kubenswrapper[4896]: I0126 15:42:14.571251 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjrj6\" (UniqueName: \"kubernetes.io/projected/09601473-06d9-4938-876d-ea6e1b9ffc91-kube-api-access-bjrj6\") on node \"crc\" DevicePath \"\""
Jan 26 15:42:14 crc kubenswrapper[4896]: I0126 15:42:14.571264 4896 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/09601473-06d9-4938-876d-ea6e1b9ffc91-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 26 15:42:14 crc kubenswrapper[4896]: I0126 15:42:14.571274 4896 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09601473-06d9-4938-876d-ea6e1b9ffc91-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 15:42:14 crc kubenswrapper[4896]: I0126 15:42:14.571284 4896 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/09601473-06d9-4938-876d-ea6e1b9ffc91-service-ca\") on node \"crc\" DevicePath \"\""
Jan 26 15:42:15 crc kubenswrapper[4896]: I0126 15:42:15.054142 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-z6479_09601473-06d9-4938-876d-ea6e1b9ffc91/console/0.log"
Jan 26 15:42:15 crc kubenswrapper[4896]: I0126 15:42:15.054198 4896 generic.go:334] "Generic (PLEG): container finished" podID="09601473-06d9-4938-876d-ea6e1b9ffc91" containerID="5324e5495ab2ca9a1b1918eed3dadf9fea6b440e68e439fe02b49850a9b1baa2" exitCode=2
Jan 26 15:42:15 crc kubenswrapper[4896]: I0126 15:42:15.054227 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-z6479" event={"ID":"09601473-06d9-4938-876d-ea6e1b9ffc91","Type":"ContainerDied","Data":"5324e5495ab2ca9a1b1918eed3dadf9fea6b440e68e439fe02b49850a9b1baa2"}
Jan 26 15:42:15 crc kubenswrapper[4896]: I0126 15:42:15.054256 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-z6479" event={"ID":"09601473-06d9-4938-876d-ea6e1b9ffc91","Type":"ContainerDied","Data":"3b3b9071820a43c5fc8026b13cf30b8c5c5e36a98bff55d6071d1d2eb574b967"}
Jan 26 15:42:15 crc kubenswrapper[4896]: I0126 15:42:15.054276 4896 scope.go:117] "RemoveContainer" containerID="5324e5495ab2ca9a1b1918eed3dadf9fea6b440e68e439fe02b49850a9b1baa2"
Jan 26 15:42:15 crc kubenswrapper[4896]: I0126 15:42:15.054405 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-z6479"
Jan 26 15:42:15 crc kubenswrapper[4896]: I0126 15:42:15.075792 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-z6479"]
Jan 26 15:42:15 crc kubenswrapper[4896]: I0126 15:42:15.087518 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-z6479"]
Jan 26 15:42:15 crc kubenswrapper[4896]: I0126 15:42:15.088538 4896 scope.go:117] "RemoveContainer" containerID="5324e5495ab2ca9a1b1918eed3dadf9fea6b440e68e439fe02b49850a9b1baa2"
Jan 26 15:42:15 crc kubenswrapper[4896]: E0126 15:42:15.089150 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5324e5495ab2ca9a1b1918eed3dadf9fea6b440e68e439fe02b49850a9b1baa2\": container with ID starting with 5324e5495ab2ca9a1b1918eed3dadf9fea6b440e68e439fe02b49850a9b1baa2 not found: ID does not exist" containerID="5324e5495ab2ca9a1b1918eed3dadf9fea6b440e68e439fe02b49850a9b1baa2"
Jan 26 15:42:15 crc kubenswrapper[4896]: I0126 15:42:15.089215 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5324e5495ab2ca9a1b1918eed3dadf9fea6b440e68e439fe02b49850a9b1baa2"} err="failed to get container status \"5324e5495ab2ca9a1b1918eed3dadf9fea6b440e68e439fe02b49850a9b1baa2\": rpc error: code = NotFound desc = could not find container \"5324e5495ab2ca9a1b1918eed3dadf9fea6b440e68e439fe02b49850a9b1baa2\": container with ID starting with 5324e5495ab2ca9a1b1918eed3dadf9fea6b440e68e439fe02b49850a9b1baa2 not found: ID does not exist"
Jan 26 15:42:15 crc kubenswrapper[4896]: I0126 15:42:15.426270 4896 patch_prober.go:28] interesting pod/console-f9d7485db-z6479 container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 26 15:42:15 crc kubenswrapper[4896]: I0126 15:42:15.426725 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-f9d7485db-z6479" podUID="09601473-06d9-4938-876d-ea6e1b9ffc91" containerName="console" probeResult="failure" output="Get \"https://10.217.0.32:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 26 15:42:16 crc kubenswrapper[4896]: I0126 15:42:16.768739 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09601473-06d9-4938-876d-ea6e1b9ffc91" path="/var/lib/kubelet/pods/09601473-06d9-4938-876d-ea6e1b9ffc91/volumes"
Jan 26 15:42:18 crc kubenswrapper[4896]: I0126 15:42:18.404935 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-b94dd49c-f92bj"
Jan 26 15:42:18 crc kubenswrapper[4896]: I0126 15:42:18.414313 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-b94dd49c-f92bj"
Jan 26 15:42:39 crc kubenswrapper[4896]: I0126 15:42:39.700056 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0"
Jan 26 15:42:39 crc kubenswrapper[4896]: I0126 15:42:39.735669 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0"
Jan 26 15:42:40 crc kubenswrapper[4896]: I0126 15:42:40.250000 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0"
Jan 26 15:42:53 crc kubenswrapper[4896]: I0126 15:42:53.771886 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-544466ff54-8zfqf"]
Jan 26 15:42:53 crc kubenswrapper[4896]: E0126 15:42:53.772762 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09601473-06d9-4938-876d-ea6e1b9ffc91" containerName="console"
Jan 26 15:42:53 crc kubenswrapper[4896]: I0126 15:42:53.772783 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="09601473-06d9-4938-876d-ea6e1b9ffc91" containerName="console"
Jan 26 15:42:53 crc kubenswrapper[4896]: I0126 15:42:53.772959 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="09601473-06d9-4938-876d-ea6e1b9ffc91" containerName="console"
Jan 26 15:42:53 crc kubenswrapper[4896]: I0126 15:42:53.773471 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-544466ff54-8zfqf"
Jan 26 15:42:53 crc kubenswrapper[4896]: I0126 15:42:53.775439 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f64ac1ba-c007-4df3-8952-e65b53e18d91-console-config\") pod \"console-544466ff54-8zfqf\" (UID: \"f64ac1ba-c007-4df3-8952-e65b53e18d91\") " pod="openshift-console/console-544466ff54-8zfqf"
Jan 26 15:42:53 crc kubenswrapper[4896]: I0126 15:42:53.775481 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f64ac1ba-c007-4df3-8952-e65b53e18d91-console-serving-cert\") pod \"console-544466ff54-8zfqf\" (UID: \"f64ac1ba-c007-4df3-8952-e65b53e18d91\") " pod="openshift-console/console-544466ff54-8zfqf"
Jan 26 15:42:53 crc kubenswrapper[4896]: I0126 15:42:53.775507 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f64ac1ba-c007-4df3-8952-e65b53e18d91-service-ca\") pod \"console-544466ff54-8zfqf\" (UID: \"f64ac1ba-c007-4df3-8952-e65b53e18d91\") " pod="openshift-console/console-544466ff54-8zfqf"
Jan 26 15:42:53 crc kubenswrapper[4896]: I0126 15:42:53.775534 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f64ac1ba-c007-4df3-8952-e65b53e18d91-trusted-ca-bundle\") pod \"console-544466ff54-8zfqf\" (UID: \"f64ac1ba-c007-4df3-8952-e65b53e18d91\") " pod="openshift-console/console-544466ff54-8zfqf"
Jan 26 15:42:53 crc kubenswrapper[4896]: I0126 15:42:53.775568 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f64ac1ba-c007-4df3-8952-e65b53e18d91-console-oauth-config\") pod \"console-544466ff54-8zfqf\" (UID: \"f64ac1ba-c007-4df3-8952-e65b53e18d91\") " pod="openshift-console/console-544466ff54-8zfqf"
Jan 26 15:42:53 crc kubenswrapper[4896]: I0126 15:42:53.775599 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f64ac1ba-c007-4df3-8952-e65b53e18d91-oauth-serving-cert\") pod \"console-544466ff54-8zfqf\" (UID: \"f64ac1ba-c007-4df3-8952-e65b53e18d91\") " pod="openshift-console/console-544466ff54-8zfqf"
Jan 26 15:42:53 crc kubenswrapper[4896]: I0126 15:42:53.775618 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l9z7\" (UniqueName: \"kubernetes.io/projected/f64ac1ba-c007-4df3-8952-e65b53e18d91-kube-api-access-5l9z7\") pod \"console-544466ff54-8zfqf\" (UID: \"f64ac1ba-c007-4df3-8952-e65b53e18d91\") " pod="openshift-console/console-544466ff54-8zfqf"
Jan 26 15:42:53 crc kubenswrapper[4896]: I0126 15:42:53.782026 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-544466ff54-8zfqf"]
Jan 26 15:42:53 crc kubenswrapper[4896]: I0126 15:42:53.876556 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l9z7\" (UniqueName: \"kubernetes.io/projected/f64ac1ba-c007-4df3-8952-e65b53e18d91-kube-api-access-5l9z7\") pod \"console-544466ff54-8zfqf\" (UID: \"f64ac1ba-c007-4df3-8952-e65b53e18d91\") " pod="openshift-console/console-544466ff54-8zfqf"
Jan 26 15:42:53 crc kubenswrapper[4896]: I0126 15:42:53.876964 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f64ac1ba-c007-4df3-8952-e65b53e18d91-console-serving-cert\") pod \"console-544466ff54-8zfqf\" (UID: \"f64ac1ba-c007-4df3-8952-e65b53e18d91\") " pod="openshift-console/console-544466ff54-8zfqf"
Jan 26 15:42:53 crc kubenswrapper[4896]: I0126 15:42:53.876992 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f64ac1ba-c007-4df3-8952-e65b53e18d91-console-config\") pod \"console-544466ff54-8zfqf\" (UID: \"f64ac1ba-c007-4df3-8952-e65b53e18d91\") " pod="openshift-console/console-544466ff54-8zfqf"
Jan 26 15:42:53 crc kubenswrapper[4896]: I0126 15:42:53.877025 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f64ac1ba-c007-4df3-8952-e65b53e18d91-service-ca\") pod \"console-544466ff54-8zfqf\" (UID: \"f64ac1ba-c007-4df3-8952-e65b53e18d91\") " pod="openshift-console/console-544466ff54-8zfqf"
Jan 26 15:42:53 crc kubenswrapper[4896]: I0126 15:42:53.877062 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f64ac1ba-c007-4df3-8952-e65b53e18d91-trusted-ca-bundle\") pod \"console-544466ff54-8zfqf\" (UID: \"f64ac1ba-c007-4df3-8952-e65b53e18d91\") " pod="openshift-console/console-544466ff54-8zfqf"
Jan 26 15:42:53 crc kubenswrapper[4896]: I0126 15:42:53.877098 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f64ac1ba-c007-4df3-8952-e65b53e18d91-console-oauth-config\") pod \"console-544466ff54-8zfqf\" (UID: \"f64ac1ba-c007-4df3-8952-e65b53e18d91\") " pod="openshift-console/console-544466ff54-8zfqf"
Jan 26 15:42:53 crc kubenswrapper[4896]: I0126 15:42:53.877121 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f64ac1ba-c007-4df3-8952-e65b53e18d91-oauth-serving-cert\") pod \"console-544466ff54-8zfqf\" (UID: \"f64ac1ba-c007-4df3-8952-e65b53e18d91\") " pod="openshift-console/console-544466ff54-8zfqf"
Jan 26 15:42:53 crc kubenswrapper[4896]: I0126 15:42:53.878056 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f64ac1ba-c007-4df3-8952-e65b53e18d91-console-config\") pod \"console-544466ff54-8zfqf\" (UID: \"f64ac1ba-c007-4df3-8952-e65b53e18d91\") " pod="openshift-console/console-544466ff54-8zfqf"
Jan 26 15:42:53 crc kubenswrapper[4896]: I0126 15:42:53.878175 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f64ac1ba-c007-4df3-8952-e65b53e18d91-oauth-serving-cert\") pod \"console-544466ff54-8zfqf\" (UID: \"f64ac1ba-c007-4df3-8952-e65b53e18d91\") " pod="openshift-console/console-544466ff54-8zfqf"
Jan 26 15:42:53 crc kubenswrapper[4896]: I0126 15:42:53.878862 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f64ac1ba-c007-4df3-8952-e65b53e18d91-service-ca\") pod \"console-544466ff54-8zfqf\" (UID: \"f64ac1ba-c007-4df3-8952-e65b53e18d91\") " pod="openshift-console/console-544466ff54-8zfqf"
Jan 26 15:42:53 crc kubenswrapper[4896]: I0126 15:42:53.882209 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f64ac1ba-c007-4df3-8952-e65b53e18d91-trusted-ca-bundle\") pod \"console-544466ff54-8zfqf\" (UID: \"f64ac1ba-c007-4df3-8952-e65b53e18d91\") " pod="openshift-console/console-544466ff54-8zfqf"
Jan 26 15:42:53 crc kubenswrapper[4896]: I0126 15:42:53.886777 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f64ac1ba-c007-4df3-8952-e65b53e18d91-console-serving-cert\") pod \"console-544466ff54-8zfqf\" (UID: \"f64ac1ba-c007-4df3-8952-e65b53e18d91\") " pod="openshift-console/console-544466ff54-8zfqf"
Jan 26 15:42:53 crc kubenswrapper[4896]: I0126 15:42:53.894486 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l9z7\" (UniqueName: \"kubernetes.io/projected/f64ac1ba-c007-4df3-8952-e65b53e18d91-kube-api-access-5l9z7\") pod \"console-544466ff54-8zfqf\" (UID: \"f64ac1ba-c007-4df3-8952-e65b53e18d91\") " pod="openshift-console/console-544466ff54-8zfqf"
Jan 26 15:42:53 crc kubenswrapper[4896]: I0126 15:42:53.900460 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f64ac1ba-c007-4df3-8952-e65b53e18d91-console-oauth-config\") pod \"console-544466ff54-8zfqf\" (UID: \"f64ac1ba-c007-4df3-8952-e65b53e18d91\") " pod="openshift-console/console-544466ff54-8zfqf"
Jan 26 15:42:54 crc kubenswrapper[4896]: I0126 15:42:54.118334 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-544466ff54-8zfqf"
Jan 26 15:42:54 crc kubenswrapper[4896]: I0126 15:42:54.997950 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-544466ff54-8zfqf"]
Jan 26 15:42:55 crc kubenswrapper[4896]: W0126 15:42:55.002084 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf64ac1ba_c007_4df3_8952_e65b53e18d91.slice/crio-d78eda112e6b0509dc203acb6f54bb432fba462e4d582514ca4a1012ca5abba2 WatchSource:0}: Error finding container d78eda112e6b0509dc203acb6f54bb432fba462e4d582514ca4a1012ca5abba2: Status 404 returned error can't find the container with id d78eda112e6b0509dc203acb6f54bb432fba462e4d582514ca4a1012ca5abba2
Jan 26 15:42:55 crc kubenswrapper[4896]: I0126 15:42:55.334150 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-544466ff54-8zfqf" event={"ID":"f64ac1ba-c007-4df3-8952-e65b53e18d91","Type":"ContainerStarted","Data":"045647bc0f5ad955b304c99ae90bfe7fa785c29aedd528f33472fe3559c0de06"}
Jan 26 15:42:55 crc kubenswrapper[4896]: I0126 15:42:55.334405 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-544466ff54-8zfqf" event={"ID":"f64ac1ba-c007-4df3-8952-e65b53e18d91","Type":"ContainerStarted","Data":"d78eda112e6b0509dc203acb6f54bb432fba462e4d582514ca4a1012ca5abba2"}
Jan 26 15:42:55 crc kubenswrapper[4896]: I0126 15:42:55.353934 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-544466ff54-8zfqf" podStartSLOduration=2.353911921 podStartE2EDuration="2.353911921s" podCreationTimestamp="2026-01-26 15:42:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:42:55.350165523 +0000 UTC m=+533.132045926" watchObservedRunningTime="2026-01-26 15:42:55.353911921 +0000 UTC m=+533.135792314"
Jan 26 15:43:04 crc kubenswrapper[4896]: I0126 15:43:04.118619 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-544466ff54-8zfqf"
Jan 26 15:43:04 crc kubenswrapper[4896]: I0126 15:43:04.119161 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-544466ff54-8zfqf"
Jan 26 15:43:04 crc kubenswrapper[4896]: I0126 15:43:04.123542 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-544466ff54-8zfqf"
Jan 26 15:43:04 crc kubenswrapper[4896]: I0126 15:43:04.406240 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-544466ff54-8zfqf"
Jan 26 15:43:04 crc kubenswrapper[4896]: I0126 15:43:04.489788 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5555c78d9f-4chpf"]
Jan 26 15:43:29 crc kubenswrapper[4896]: I0126 15:43:29.537504 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5555c78d9f-4chpf" podUID="2c22b97c-9abb-4b25-9831-daea2ae48af0" containerName="console" containerID="cri-o://1f85e0f27dca829bb0908666538542513bb02b6aabef0609892f83afda66878f" gracePeriod=15
Jan 26 15:43:29 crc kubenswrapper[4896]: E0126 15:43:29.732047 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c22b97c_9abb_4b25_9831_daea2ae48af0.slice/crio-conmon-1f85e0f27dca829bb0908666538542513bb02b6aabef0609892f83afda66878f.scope\": RecentStats: unable to find data in memory cache]"
Jan 26 15:43:29 crc kubenswrapper[4896]: I0126 15:43:29.989803 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5555c78d9f-4chpf_2c22b97c-9abb-4b25-9831-daea2ae48af0/console/0.log"
Jan 26 15:43:29 crc kubenswrapper[4896]: I0126 15:43:29.989881 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5555c78d9f-4chpf"
Jan 26 15:43:30 crc kubenswrapper[4896]: I0126 15:43:30.158162 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2c22b97c-9abb-4b25-9831-daea2ae48af0-console-oauth-config\") pod \"2c22b97c-9abb-4b25-9831-daea2ae48af0\" (UID: \"2c22b97c-9abb-4b25-9831-daea2ae48af0\") "
Jan 26 15:43:30 crc kubenswrapper[4896]: I0126 15:43:30.158231 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2c22b97c-9abb-4b25-9831-daea2ae48af0-oauth-serving-cert\") pod \"2c22b97c-9abb-4b25-9831-daea2ae48af0\" (UID: \"2c22b97c-9abb-4b25-9831-daea2ae48af0\") "
Jan 26 15:43:30 crc kubenswrapper[4896]: I0126 15:43:30.158297 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2c22b97c-9abb-4b25-9831-daea2ae48af0-console-serving-cert\") pod \"2c22b97c-9abb-4b25-9831-daea2ae48af0\" (UID: \"2c22b97c-9abb-4b25-9831-daea2ae48af0\") "
Jan 26 15:43:30 crc kubenswrapper[4896]: I0126 15:43:30.158322 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c22b97c-9abb-4b25-9831-daea2ae48af0-trusted-ca-bundle\") pod \"2c22b97c-9abb-4b25-9831-daea2ae48af0\" (UID: \"2c22b97c-9abb-4b25-9831-daea2ae48af0\") "
Jan 26 15:43:30 crc kubenswrapper[4896]: I0126 15:43:30.158416 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nmdx\" (UniqueName: \"kubernetes.io/projected/2c22b97c-9abb-4b25-9831-daea2ae48af0-kube-api-access-5nmdx\") pod \"2c22b97c-9abb-4b25-9831-daea2ae48af0\" (UID: \"2c22b97c-9abb-4b25-9831-daea2ae48af0\") "
Jan 26 15:43:30 crc kubenswrapper[4896]: I0126 15:43:30.158450 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2c22b97c-9abb-4b25-9831-daea2ae48af0-console-config\") pod \"2c22b97c-9abb-4b25-9831-daea2ae48af0\" (UID: \"2c22b97c-9abb-4b25-9831-daea2ae48af0\") "
Jan 26 15:43:30 crc kubenswrapper[4896]: I0126 15:43:30.158483 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2c22b97c-9abb-4b25-9831-daea2ae48af0-service-ca\") pod \"2c22b97c-9abb-4b25-9831-daea2ae48af0\" (UID: \"2c22b97c-9abb-4b25-9831-daea2ae48af0\") "
Jan 26 15:43:30 crc kubenswrapper[4896]: I0126 15:43:30.159302 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c22b97c-9abb-4b25-9831-daea2ae48af0-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "2c22b97c-9abb-4b25-9831-daea2ae48af0" (UID: "2c22b97c-9abb-4b25-9831-daea2ae48af0"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:43:30 crc kubenswrapper[4896]: I0126 15:43:30.159390 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c22b97c-9abb-4b25-9831-daea2ae48af0-console-config" (OuterVolumeSpecName: "console-config") pod "2c22b97c-9abb-4b25-9831-daea2ae48af0" (UID: "2c22b97c-9abb-4b25-9831-daea2ae48af0"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:43:30 crc kubenswrapper[4896]: I0126 15:43:30.159409 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c22b97c-9abb-4b25-9831-daea2ae48af0-service-ca" (OuterVolumeSpecName: "service-ca") pod "2c22b97c-9abb-4b25-9831-daea2ae48af0" (UID: "2c22b97c-9abb-4b25-9831-daea2ae48af0"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:43:30 crc kubenswrapper[4896]: I0126 15:43:30.159432 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c22b97c-9abb-4b25-9831-daea2ae48af0-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "2c22b97c-9abb-4b25-9831-daea2ae48af0" (UID: "2c22b97c-9abb-4b25-9831-daea2ae48af0"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:43:30 crc kubenswrapper[4896]: I0126 15:43:30.164238 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c22b97c-9abb-4b25-9831-daea2ae48af0-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "2c22b97c-9abb-4b25-9831-daea2ae48af0" (UID: "2c22b97c-9abb-4b25-9831-daea2ae48af0"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:43:30 crc kubenswrapper[4896]: I0126 15:43:30.164417 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c22b97c-9abb-4b25-9831-daea2ae48af0-kube-api-access-5nmdx" (OuterVolumeSpecName: "kube-api-access-5nmdx") pod "2c22b97c-9abb-4b25-9831-daea2ae48af0" (UID: "2c22b97c-9abb-4b25-9831-daea2ae48af0"). InnerVolumeSpecName "kube-api-access-5nmdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:43:30 crc kubenswrapper[4896]: I0126 15:43:30.171756 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c22b97c-9abb-4b25-9831-daea2ae48af0-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "2c22b97c-9abb-4b25-9831-daea2ae48af0" (UID: "2c22b97c-9abb-4b25-9831-daea2ae48af0"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:43:30 crc kubenswrapper[4896]: I0126 15:43:30.260010 4896 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2c22b97c-9abb-4b25-9831-daea2ae48af0-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 15:43:30 crc kubenswrapper[4896]: I0126 15:43:30.260398 4896 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c22b97c-9abb-4b25-9831-daea2ae48af0-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 15:43:30 crc kubenswrapper[4896]: I0126 15:43:30.260412 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nmdx\" (UniqueName: \"kubernetes.io/projected/2c22b97c-9abb-4b25-9831-daea2ae48af0-kube-api-access-5nmdx\") on node \"crc\" DevicePath \"\""
Jan 26 15:43:30 crc kubenswrapper[4896]: I0126 15:43:30.260458 4896 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2c22b97c-9abb-4b25-9831-daea2ae48af0-console-config\") on node \"crc\" DevicePath \"\""
Jan 26 15:43:30 crc kubenswrapper[4896]: I0126 15:43:30.260471 4896 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2c22b97c-9abb-4b25-9831-daea2ae48af0-service-ca\") on node \"crc\" DevicePath \"\""
Jan 26 15:43:30 crc kubenswrapper[4896]: I0126 15:43:30.260482 4896 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2c22b97c-9abb-4b25-9831-daea2ae48af0-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 26 15:43:30 crc kubenswrapper[4896]: I0126 15:43:30.260493 4896 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2c22b97c-9abb-4b25-9831-daea2ae48af0-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 15:43:30 crc kubenswrapper[4896]: I0126 15:43:30.659097 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5555c78d9f-4chpf_2c22b97c-9abb-4b25-9831-daea2ae48af0/console/0.log"
Jan 26 15:43:30 crc kubenswrapper[4896]: I0126 15:43:30.659157 4896 generic.go:334] "Generic (PLEG): container finished" podID="2c22b97c-9abb-4b25-9831-daea2ae48af0" containerID="1f85e0f27dca829bb0908666538542513bb02b6aabef0609892f83afda66878f" exitCode=2
Jan 26 15:43:30 crc kubenswrapper[4896]: I0126 15:43:30.659192 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5555c78d9f-4chpf" event={"ID":"2c22b97c-9abb-4b25-9831-daea2ae48af0","Type":"ContainerDied","Data":"1f85e0f27dca829bb0908666538542513bb02b6aabef0609892f83afda66878f"}
Jan 26 15:43:30 crc kubenswrapper[4896]: I0126 15:43:30.659229 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5555c78d9f-4chpf" event={"ID":"2c22b97c-9abb-4b25-9831-daea2ae48af0","Type":"ContainerDied","Data":"b6720c02a2c2c46f1da48d280ffffb82a114e41610057701d785d894aabc28e6"}
Jan 26 15:43:30 crc kubenswrapper[4896]: I0126 15:43:30.659231 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5555c78d9f-4chpf"
Jan 26 15:43:30 crc kubenswrapper[4896]: I0126 15:43:30.659251 4896 scope.go:117] "RemoveContainer" containerID="1f85e0f27dca829bb0908666538542513bb02b6aabef0609892f83afda66878f"
Jan 26 15:43:30 crc kubenswrapper[4896]: I0126 15:43:30.677709 4896 scope.go:117] "RemoveContainer" containerID="1f85e0f27dca829bb0908666538542513bb02b6aabef0609892f83afda66878f"
Jan 26 15:43:30 crc kubenswrapper[4896]: E0126 15:43:30.678220 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f85e0f27dca829bb0908666538542513bb02b6aabef0609892f83afda66878f\": container with ID starting with 1f85e0f27dca829bb0908666538542513bb02b6aabef0609892f83afda66878f not found: ID does not exist" containerID="1f85e0f27dca829bb0908666538542513bb02b6aabef0609892f83afda66878f"
Jan 26 15:43:30 crc kubenswrapper[4896]: I0126 15:43:30.678305 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f85e0f27dca829bb0908666538542513bb02b6aabef0609892f83afda66878f"} err="failed to get container status \"1f85e0f27dca829bb0908666538542513bb02b6aabef0609892f83afda66878f\": rpc error: code = NotFound desc = could not find container \"1f85e0f27dca829bb0908666538542513bb02b6aabef0609892f83afda66878f\": container with ID starting with 1f85e0f27dca829bb0908666538542513bb02b6aabef0609892f83afda66878f not found: ID does not exist"
Jan 26 15:43:30 crc kubenswrapper[4896]: I0126 15:43:30.686724 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5555c78d9f-4chpf"]
Jan 26 15:43:30 crc kubenswrapper[4896]: I0126 15:43:30.691347 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5555c78d9f-4chpf"]
Jan 26 15:43:30 crc kubenswrapper[4896]: I0126 15:43:30.767553 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir"
podUID="2c22b97c-9abb-4b25-9831-daea2ae48af0" path="/var/lib/kubelet/pods/2c22b97c-9abb-4b25-9831-daea2ae48af0/volumes" Jan 26 15:43:48 crc kubenswrapper[4896]: I0126 15:43:48.814407 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:43:48 crc kubenswrapper[4896]: I0126 15:43:48.815049 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:44:18 crc kubenswrapper[4896]: I0126 15:44:18.814082 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:44:18 crc kubenswrapper[4896]: I0126 15:44:18.814642 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:44:48 crc kubenswrapper[4896]: I0126 15:44:48.813933 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:44:48 crc 
kubenswrapper[4896]: I0126 15:44:48.814950 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:44:48 crc kubenswrapper[4896]: I0126 15:44:48.815030 4896 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" Jan 26 15:44:48 crc kubenswrapper[4896]: I0126 15:44:48.816237 4896 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"496057d39cfd4c97dbde27dcc7921f95da5628ae998305077952ca62cba7a8c1"} pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 15:44:48 crc kubenswrapper[4896]: I0126 15:44:48.816347 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" containerID="cri-o://496057d39cfd4c97dbde27dcc7921f95da5628ae998305077952ca62cba7a8c1" gracePeriod=600 Jan 26 15:44:49 crc kubenswrapper[4896]: I0126 15:44:49.192263 4896 generic.go:334] "Generic (PLEG): container finished" podID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerID="496057d39cfd4c97dbde27dcc7921f95da5628ae998305077952ca62cba7a8c1" exitCode=0 Jan 26 15:44:49 crc kubenswrapper[4896]: I0126 15:44:49.192333 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerDied","Data":"496057d39cfd4c97dbde27dcc7921f95da5628ae998305077952ca62cba7a8c1"} 
Jan 26 15:44:49 crc kubenswrapper[4896]: I0126 15:44:49.192634 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerStarted","Data":"d035dac015ae616fca26b6fccf99abfd2065d00fccd1bbdf0c5140ab65f83775"} Jan 26 15:44:49 crc kubenswrapper[4896]: I0126 15:44:49.192655 4896 scope.go:117] "RemoveContainer" containerID="1da793905c9eeaa4f3946b7eeade08fb2161dbfe4af7683b808a647a5dfa8236" Jan 26 15:45:00 crc kubenswrapper[4896]: I0126 15:45:00.186622 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490705-gjjgd"] Jan 26 15:45:00 crc kubenswrapper[4896]: E0126 15:45:00.187453 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c22b97c-9abb-4b25-9831-daea2ae48af0" containerName="console" Jan 26 15:45:00 crc kubenswrapper[4896]: I0126 15:45:00.187467 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c22b97c-9abb-4b25-9831-daea2ae48af0" containerName="console" Jan 26 15:45:00 crc kubenswrapper[4896]: I0126 15:45:00.187677 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c22b97c-9abb-4b25-9831-daea2ae48af0" containerName="console" Jan 26 15:45:00 crc kubenswrapper[4896]: I0126 15:45:00.188129 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-gjjgd" Jan 26 15:45:00 crc kubenswrapper[4896]: I0126 15:45:00.190569 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 15:45:00 crc kubenswrapper[4896]: I0126 15:45:00.190860 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 15:45:00 crc kubenswrapper[4896]: I0126 15:45:00.200110 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490705-gjjgd"] Jan 26 15:45:00 crc kubenswrapper[4896]: I0126 15:45:00.281069 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fffef558-48ba-43a0-81e5-a8c5801b3e8e-secret-volume\") pod \"collect-profiles-29490705-gjjgd\" (UID: \"fffef558-48ba-43a0-81e5-a8c5801b3e8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-gjjgd" Jan 26 15:45:00 crc kubenswrapper[4896]: I0126 15:45:00.281178 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nd2s\" (UniqueName: \"kubernetes.io/projected/fffef558-48ba-43a0-81e5-a8c5801b3e8e-kube-api-access-6nd2s\") pod \"collect-profiles-29490705-gjjgd\" (UID: \"fffef558-48ba-43a0-81e5-a8c5801b3e8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-gjjgd" Jan 26 15:45:00 crc kubenswrapper[4896]: I0126 15:45:00.281218 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fffef558-48ba-43a0-81e5-a8c5801b3e8e-config-volume\") pod \"collect-profiles-29490705-gjjgd\" (UID: \"fffef558-48ba-43a0-81e5-a8c5801b3e8e\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-gjjgd" Jan 26 15:45:00 crc kubenswrapper[4896]: I0126 15:45:00.382848 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fffef558-48ba-43a0-81e5-a8c5801b3e8e-secret-volume\") pod \"collect-profiles-29490705-gjjgd\" (UID: \"fffef558-48ba-43a0-81e5-a8c5801b3e8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-gjjgd" Jan 26 15:45:00 crc kubenswrapper[4896]: I0126 15:45:00.382954 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nd2s\" (UniqueName: \"kubernetes.io/projected/fffef558-48ba-43a0-81e5-a8c5801b3e8e-kube-api-access-6nd2s\") pod \"collect-profiles-29490705-gjjgd\" (UID: \"fffef558-48ba-43a0-81e5-a8c5801b3e8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-gjjgd" Jan 26 15:45:00 crc kubenswrapper[4896]: I0126 15:45:00.382987 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fffef558-48ba-43a0-81e5-a8c5801b3e8e-config-volume\") pod \"collect-profiles-29490705-gjjgd\" (UID: \"fffef558-48ba-43a0-81e5-a8c5801b3e8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-gjjgd" Jan 26 15:45:00 crc kubenswrapper[4896]: I0126 15:45:00.383964 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fffef558-48ba-43a0-81e5-a8c5801b3e8e-config-volume\") pod \"collect-profiles-29490705-gjjgd\" (UID: \"fffef558-48ba-43a0-81e5-a8c5801b3e8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-gjjgd" Jan 26 15:45:00 crc kubenswrapper[4896]: I0126 15:45:00.390022 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/fffef558-48ba-43a0-81e5-a8c5801b3e8e-secret-volume\") pod \"collect-profiles-29490705-gjjgd\" (UID: \"fffef558-48ba-43a0-81e5-a8c5801b3e8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-gjjgd" Jan 26 15:45:00 crc kubenswrapper[4896]: I0126 15:45:00.398208 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nd2s\" (UniqueName: \"kubernetes.io/projected/fffef558-48ba-43a0-81e5-a8c5801b3e8e-kube-api-access-6nd2s\") pod \"collect-profiles-29490705-gjjgd\" (UID: \"fffef558-48ba-43a0-81e5-a8c5801b3e8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-gjjgd" Jan 26 15:45:00 crc kubenswrapper[4896]: I0126 15:45:00.508888 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-gjjgd" Jan 26 15:45:01 crc kubenswrapper[4896]: I0126 15:45:01.528603 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490705-gjjgd"] Jan 26 15:45:02 crc kubenswrapper[4896]: I0126 15:45:02.292227 4896 generic.go:334] "Generic (PLEG): container finished" podID="fffef558-48ba-43a0-81e5-a8c5801b3e8e" containerID="3ec4d1c59ef9fe1d61badad9d57be3f5924ed0bee5bf98e8fe2854e3c64aa652" exitCode=0 Jan 26 15:45:02 crc kubenswrapper[4896]: I0126 15:45:02.292631 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-gjjgd" event={"ID":"fffef558-48ba-43a0-81e5-a8c5801b3e8e","Type":"ContainerDied","Data":"3ec4d1c59ef9fe1d61badad9d57be3f5924ed0bee5bf98e8fe2854e3c64aa652"} Jan 26 15:45:02 crc kubenswrapper[4896]: I0126 15:45:02.292667 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-gjjgd" 
event={"ID":"fffef558-48ba-43a0-81e5-a8c5801b3e8e","Type":"ContainerStarted","Data":"84528697feca20d130c881bd47f41c9451d0ffe7d8af2e18e82073a10bdc0d9d"} Jan 26 15:45:03 crc kubenswrapper[4896]: I0126 15:45:03.817210 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-gjjgd" Jan 26 15:45:03 crc kubenswrapper[4896]: I0126 15:45:03.895148 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fffef558-48ba-43a0-81e5-a8c5801b3e8e-secret-volume\") pod \"fffef558-48ba-43a0-81e5-a8c5801b3e8e\" (UID: \"fffef558-48ba-43a0-81e5-a8c5801b3e8e\") " Jan 26 15:45:03 crc kubenswrapper[4896]: I0126 15:45:03.895188 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fffef558-48ba-43a0-81e5-a8c5801b3e8e-config-volume\") pod \"fffef558-48ba-43a0-81e5-a8c5801b3e8e\" (UID: \"fffef558-48ba-43a0-81e5-a8c5801b3e8e\") " Jan 26 15:45:03 crc kubenswrapper[4896]: I0126 15:45:03.895215 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nd2s\" (UniqueName: \"kubernetes.io/projected/fffef558-48ba-43a0-81e5-a8c5801b3e8e-kube-api-access-6nd2s\") pod \"fffef558-48ba-43a0-81e5-a8c5801b3e8e\" (UID: \"fffef558-48ba-43a0-81e5-a8c5801b3e8e\") " Jan 26 15:45:03 crc kubenswrapper[4896]: I0126 15:45:03.896017 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fffef558-48ba-43a0-81e5-a8c5801b3e8e-config-volume" (OuterVolumeSpecName: "config-volume") pod "fffef558-48ba-43a0-81e5-a8c5801b3e8e" (UID: "fffef558-48ba-43a0-81e5-a8c5801b3e8e"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:45:03 crc kubenswrapper[4896]: I0126 15:45:03.900009 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fffef558-48ba-43a0-81e5-a8c5801b3e8e-kube-api-access-6nd2s" (OuterVolumeSpecName: "kube-api-access-6nd2s") pod "fffef558-48ba-43a0-81e5-a8c5801b3e8e" (UID: "fffef558-48ba-43a0-81e5-a8c5801b3e8e"). InnerVolumeSpecName "kube-api-access-6nd2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:45:03 crc kubenswrapper[4896]: I0126 15:45:03.900180 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fffef558-48ba-43a0-81e5-a8c5801b3e8e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fffef558-48ba-43a0-81e5-a8c5801b3e8e" (UID: "fffef558-48ba-43a0-81e5-a8c5801b3e8e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:45:03 crc kubenswrapper[4896]: I0126 15:45:03.996871 4896 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fffef558-48ba-43a0-81e5-a8c5801b3e8e-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 15:45:03 crc kubenswrapper[4896]: I0126 15:45:03.996903 4896 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fffef558-48ba-43a0-81e5-a8c5801b3e8e-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 15:45:03 crc kubenswrapper[4896]: I0126 15:45:03.996916 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nd2s\" (UniqueName: \"kubernetes.io/projected/fffef558-48ba-43a0-81e5-a8c5801b3e8e-kube-api-access-6nd2s\") on node \"crc\" DevicePath \"\"" Jan 26 15:45:04 crc kubenswrapper[4896]: I0126 15:45:04.307424 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-gjjgd" 
event={"ID":"fffef558-48ba-43a0-81e5-a8c5801b3e8e","Type":"ContainerDied","Data":"84528697feca20d130c881bd47f41c9451d0ffe7d8af2e18e82073a10bdc0d9d"} Jan 26 15:45:04 crc kubenswrapper[4896]: I0126 15:45:04.307481 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84528697feca20d130c881bd47f41c9451d0ffe7d8af2e18e82073a10bdc0d9d" Jan 26 15:45:04 crc kubenswrapper[4896]: I0126 15:45:04.307547 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-gjjgd" Jan 26 15:46:49 crc kubenswrapper[4896]: I0126 15:46:49.777279 4896 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 26 15:47:04 crc kubenswrapper[4896]: I0126 15:47:04.800037 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt8bs"] Jan 26 15:47:04 crc kubenswrapper[4896]: E0126 15:47:04.801062 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fffef558-48ba-43a0-81e5-a8c5801b3e8e" containerName="collect-profiles" Jan 26 15:47:04 crc kubenswrapper[4896]: I0126 15:47:04.801079 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="fffef558-48ba-43a0-81e5-a8c5801b3e8e" containerName="collect-profiles" Jan 26 15:47:04 crc kubenswrapper[4896]: I0126 15:47:04.801210 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="fffef558-48ba-43a0-81e5-a8c5801b3e8e" containerName="collect-profiles" Jan 26 15:47:04 crc kubenswrapper[4896]: I0126 15:47:04.802060 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt8bs" Jan 26 15:47:04 crc kubenswrapper[4896]: I0126 15:47:04.804440 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 15:47:04 crc kubenswrapper[4896]: I0126 15:47:04.822468 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt8bs"] Jan 26 15:47:04 crc kubenswrapper[4896]: I0126 15:47:04.904387 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b5a125e8-a2db-49bf-b882-8c26600a229b-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt8bs\" (UID: \"b5a125e8-a2db-49bf-b882-8c26600a229b\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt8bs" Jan 26 15:47:04 crc kubenswrapper[4896]: I0126 15:47:04.904460 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b5a125e8-a2db-49bf-b882-8c26600a229b-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt8bs\" (UID: \"b5a125e8-a2db-49bf-b882-8c26600a229b\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt8bs" Jan 26 15:47:04 crc kubenswrapper[4896]: I0126 15:47:04.904720 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9vlv\" (UniqueName: \"kubernetes.io/projected/b5a125e8-a2db-49bf-b882-8c26600a229b-kube-api-access-b9vlv\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt8bs\" (UID: \"b5a125e8-a2db-49bf-b882-8c26600a229b\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt8bs" Jan 26 15:47:05 crc kubenswrapper[4896]: 
I0126 15:47:05.006765 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b5a125e8-a2db-49bf-b882-8c26600a229b-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt8bs\" (UID: \"b5a125e8-a2db-49bf-b882-8c26600a229b\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt8bs" Jan 26 15:47:05 crc kubenswrapper[4896]: I0126 15:47:05.006877 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b5a125e8-a2db-49bf-b882-8c26600a229b-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt8bs\" (UID: \"b5a125e8-a2db-49bf-b882-8c26600a229b\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt8bs" Jan 26 15:47:05 crc kubenswrapper[4896]: I0126 15:47:05.007008 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9vlv\" (UniqueName: \"kubernetes.io/projected/b5a125e8-a2db-49bf-b882-8c26600a229b-kube-api-access-b9vlv\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt8bs\" (UID: \"b5a125e8-a2db-49bf-b882-8c26600a229b\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt8bs" Jan 26 15:47:05 crc kubenswrapper[4896]: I0126 15:47:05.008411 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b5a125e8-a2db-49bf-b882-8c26600a229b-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt8bs\" (UID: \"b5a125e8-a2db-49bf-b882-8c26600a229b\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt8bs" Jan 26 15:47:05 crc kubenswrapper[4896]: I0126 15:47:05.008535 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/b5a125e8-a2db-49bf-b882-8c26600a229b-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt8bs\" (UID: \"b5a125e8-a2db-49bf-b882-8c26600a229b\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt8bs" Jan 26 15:47:05 crc kubenswrapper[4896]: I0126 15:47:05.038930 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9vlv\" (UniqueName: \"kubernetes.io/projected/b5a125e8-a2db-49bf-b882-8c26600a229b-kube-api-access-b9vlv\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt8bs\" (UID: \"b5a125e8-a2db-49bf-b882-8c26600a229b\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt8bs" Jan 26 15:47:05 crc kubenswrapper[4896]: I0126 15:47:05.120542 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt8bs" Jan 26 15:47:05 crc kubenswrapper[4896]: I0126 15:47:05.566955 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt8bs"] Jan 26 15:47:06 crc kubenswrapper[4896]: I0126 15:47:06.545394 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt8bs" event={"ID":"b5a125e8-a2db-49bf-b882-8c26600a229b","Type":"ContainerStarted","Data":"bc9243f9f2e666927b5289f2d667ddea57c59038abca88569563cf829de1c1ab"} Jan 26 15:47:06 crc kubenswrapper[4896]: I0126 15:47:06.545466 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt8bs" event={"ID":"b5a125e8-a2db-49bf-b882-8c26600a229b","Type":"ContainerStarted","Data":"e073c9cac07a4098a40f34c29dd0df4b1ab88262e51042fc6c8c5e744e512292"} Jan 26 15:47:07 crc kubenswrapper[4896]: I0126 15:47:07.153488 4896 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t5m2z"] Jan 26 15:47:07 crc kubenswrapper[4896]: I0126 15:47:07.155113 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t5m2z" Jan 26 15:47:07 crc kubenswrapper[4896]: I0126 15:47:07.165631 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t5m2z"] Jan 26 15:47:07 crc kubenswrapper[4896]: I0126 15:47:07.289208 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb173889-a078-4f05-b1d5-1a805ee8336e-catalog-content\") pod \"redhat-operators-t5m2z\" (UID: \"fb173889-a078-4f05-b1d5-1a805ee8336e\") " pod="openshift-marketplace/redhat-operators-t5m2z" Jan 26 15:47:07 crc kubenswrapper[4896]: I0126 15:47:07.289418 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxcwb\" (UniqueName: \"kubernetes.io/projected/fb173889-a078-4f05-b1d5-1a805ee8336e-kube-api-access-pxcwb\") pod \"redhat-operators-t5m2z\" (UID: \"fb173889-a078-4f05-b1d5-1a805ee8336e\") " pod="openshift-marketplace/redhat-operators-t5m2z" Jan 26 15:47:07 crc kubenswrapper[4896]: I0126 15:47:07.289463 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb173889-a078-4f05-b1d5-1a805ee8336e-utilities\") pod \"redhat-operators-t5m2z\" (UID: \"fb173889-a078-4f05-b1d5-1a805ee8336e\") " pod="openshift-marketplace/redhat-operators-t5m2z" Jan 26 15:47:07 crc kubenswrapper[4896]: I0126 15:47:07.390897 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxcwb\" (UniqueName: \"kubernetes.io/projected/fb173889-a078-4f05-b1d5-1a805ee8336e-kube-api-access-pxcwb\") pod \"redhat-operators-t5m2z\" (UID: 
\"fb173889-a078-4f05-b1d5-1a805ee8336e\") " pod="openshift-marketplace/redhat-operators-t5m2z" Jan 26 15:47:07 crc kubenswrapper[4896]: I0126 15:47:07.390956 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb173889-a078-4f05-b1d5-1a805ee8336e-utilities\") pod \"redhat-operators-t5m2z\" (UID: \"fb173889-a078-4f05-b1d5-1a805ee8336e\") " pod="openshift-marketplace/redhat-operators-t5m2z" Jan 26 15:47:07 crc kubenswrapper[4896]: I0126 15:47:07.391068 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb173889-a078-4f05-b1d5-1a805ee8336e-catalog-content\") pod \"redhat-operators-t5m2z\" (UID: \"fb173889-a078-4f05-b1d5-1a805ee8336e\") " pod="openshift-marketplace/redhat-operators-t5m2z" Jan 26 15:47:07 crc kubenswrapper[4896]: I0126 15:47:07.391737 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb173889-a078-4f05-b1d5-1a805ee8336e-utilities\") pod \"redhat-operators-t5m2z\" (UID: \"fb173889-a078-4f05-b1d5-1a805ee8336e\") " pod="openshift-marketplace/redhat-operators-t5m2z" Jan 26 15:47:07 crc kubenswrapper[4896]: I0126 15:47:07.391759 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb173889-a078-4f05-b1d5-1a805ee8336e-catalog-content\") pod \"redhat-operators-t5m2z\" (UID: \"fb173889-a078-4f05-b1d5-1a805ee8336e\") " pod="openshift-marketplace/redhat-operators-t5m2z" Jan 26 15:47:07 crc kubenswrapper[4896]: I0126 15:47:07.415058 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxcwb\" (UniqueName: \"kubernetes.io/projected/fb173889-a078-4f05-b1d5-1a805ee8336e-kube-api-access-pxcwb\") pod \"redhat-operators-t5m2z\" (UID: \"fb173889-a078-4f05-b1d5-1a805ee8336e\") " 
pod="openshift-marketplace/redhat-operators-t5m2z"
Jan 26 15:47:07 crc kubenswrapper[4896]: I0126 15:47:07.477056 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t5m2z"
Jan 26 15:47:07 crc kubenswrapper[4896]: I0126 15:47:07.562425 4896 generic.go:334] "Generic (PLEG): container finished" podID="b5a125e8-a2db-49bf-b882-8c26600a229b" containerID="bc9243f9f2e666927b5289f2d667ddea57c59038abca88569563cf829de1c1ab" exitCode=0
Jan 26 15:47:07 crc kubenswrapper[4896]: I0126 15:47:07.562465 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt8bs" event={"ID":"b5a125e8-a2db-49bf-b882-8c26600a229b","Type":"ContainerDied","Data":"bc9243f9f2e666927b5289f2d667ddea57c59038abca88569563cf829de1c1ab"}
Jan 26 15:47:07 crc kubenswrapper[4896]: I0126 15:47:07.563931 4896 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 15:47:07 crc kubenswrapper[4896]: I0126 15:47:07.790119 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t5m2z"]
Jan 26 15:47:08 crc kubenswrapper[4896]: I0126 15:47:08.571552 4896 generic.go:334] "Generic (PLEG): container finished" podID="fb173889-a078-4f05-b1d5-1a805ee8336e" containerID="2259bad46d23330fbad39805fb4417e0a17c442ae22a83bc75435ba69ed4e5b3" exitCode=0
Jan 26 15:47:08 crc kubenswrapper[4896]: I0126 15:47:08.571652 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5m2z" event={"ID":"fb173889-a078-4f05-b1d5-1a805ee8336e","Type":"ContainerDied","Data":"2259bad46d23330fbad39805fb4417e0a17c442ae22a83bc75435ba69ed4e5b3"}
Jan 26 15:47:08 crc kubenswrapper[4896]: I0126 15:47:08.571682 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5m2z" event={"ID":"fb173889-a078-4f05-b1d5-1a805ee8336e","Type":"ContainerStarted","Data":"5b5e8d96c9ab83ab541c05f239fd5eb0819372180f06e7fa0a62a6ea05901e3f"}
Jan 26 15:47:09 crc kubenswrapper[4896]: I0126 15:47:09.579723 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5m2z" event={"ID":"fb173889-a078-4f05-b1d5-1a805ee8336e","Type":"ContainerStarted","Data":"262ed9aabaa08f334825a5dcb853fbc54e6e50ced4420fef8c994761f8026cdd"}
Jan 26 15:47:09 crc kubenswrapper[4896]: I0126 15:47:09.585321 4896 generic.go:334] "Generic (PLEG): container finished" podID="b5a125e8-a2db-49bf-b882-8c26600a229b" containerID="f5c1bc0136ee9f0589f58abe7ec688d745bd28091879cd324ca08a8f062844fb" exitCode=0
Jan 26 15:47:09 crc kubenswrapper[4896]: I0126 15:47:09.585387 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt8bs" event={"ID":"b5a125e8-a2db-49bf-b882-8c26600a229b","Type":"ContainerDied","Data":"f5c1bc0136ee9f0589f58abe7ec688d745bd28091879cd324ca08a8f062844fb"}
Jan 26 15:47:10 crc kubenswrapper[4896]: I0126 15:47:10.610179 4896 generic.go:334] "Generic (PLEG): container finished" podID="b5a125e8-a2db-49bf-b882-8c26600a229b" containerID="43d302b87092064c998cde84763e63b27beaaa35807484b0fd486e5f4c181aec" exitCode=0
Jan 26 15:47:10 crc kubenswrapper[4896]: I0126 15:47:10.610382 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt8bs" event={"ID":"b5a125e8-a2db-49bf-b882-8c26600a229b","Type":"ContainerDied","Data":"43d302b87092064c998cde84763e63b27beaaa35807484b0fd486e5f4c181aec"}
Jan 26 15:47:12 crc kubenswrapper[4896]: I0126 15:47:12.362076 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt8bs"
Jan 26 15:47:12 crc kubenswrapper[4896]: I0126 15:47:12.429316 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b5a125e8-a2db-49bf-b882-8c26600a229b-util\") pod \"b5a125e8-a2db-49bf-b882-8c26600a229b\" (UID: \"b5a125e8-a2db-49bf-b882-8c26600a229b\") "
Jan 26 15:47:12 crc kubenswrapper[4896]: I0126 15:47:12.429702 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b5a125e8-a2db-49bf-b882-8c26600a229b-bundle\") pod \"b5a125e8-a2db-49bf-b882-8c26600a229b\" (UID: \"b5a125e8-a2db-49bf-b882-8c26600a229b\") "
Jan 26 15:47:12 crc kubenswrapper[4896]: I0126 15:47:12.429839 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9vlv\" (UniqueName: \"kubernetes.io/projected/b5a125e8-a2db-49bf-b882-8c26600a229b-kube-api-access-b9vlv\") pod \"b5a125e8-a2db-49bf-b882-8c26600a229b\" (UID: \"b5a125e8-a2db-49bf-b882-8c26600a229b\") "
Jan 26 15:47:12 crc kubenswrapper[4896]: I0126 15:47:12.431773 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5a125e8-a2db-49bf-b882-8c26600a229b-bundle" (OuterVolumeSpecName: "bundle") pod "b5a125e8-a2db-49bf-b882-8c26600a229b" (UID: "b5a125e8-a2db-49bf-b882-8c26600a229b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 15:47:12 crc kubenswrapper[4896]: I0126 15:47:12.440367 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5a125e8-a2db-49bf-b882-8c26600a229b-util" (OuterVolumeSpecName: "util") pod "b5a125e8-a2db-49bf-b882-8c26600a229b" (UID: "b5a125e8-a2db-49bf-b882-8c26600a229b"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 15:47:12 crc kubenswrapper[4896]: I0126 15:47:12.482830 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5a125e8-a2db-49bf-b882-8c26600a229b-kube-api-access-b9vlv" (OuterVolumeSpecName: "kube-api-access-b9vlv") pod "b5a125e8-a2db-49bf-b882-8c26600a229b" (UID: "b5a125e8-a2db-49bf-b882-8c26600a229b"). InnerVolumeSpecName "kube-api-access-b9vlv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:47:12 crc kubenswrapper[4896]: I0126 15:47:12.532861 4896 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b5a125e8-a2db-49bf-b882-8c26600a229b-util\") on node \"crc\" DevicePath \"\""
Jan 26 15:47:12 crc kubenswrapper[4896]: I0126 15:47:12.532913 4896 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b5a125e8-a2db-49bf-b882-8c26600a229b-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 15:47:12 crc kubenswrapper[4896]: I0126 15:47:12.532929 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9vlv\" (UniqueName: \"kubernetes.io/projected/b5a125e8-a2db-49bf-b882-8c26600a229b-kube-api-access-b9vlv\") on node \"crc\" DevicePath \"\""
Jan 26 15:47:12 crc kubenswrapper[4896]: I0126 15:47:12.636064 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt8bs" event={"ID":"b5a125e8-a2db-49bf-b882-8c26600a229b","Type":"ContainerDied","Data":"e073c9cac07a4098a40f34c29dd0df4b1ab88262e51042fc6c8c5e744e512292"}
Jan 26 15:47:12 crc kubenswrapper[4896]: I0126 15:47:12.636112 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e073c9cac07a4098a40f34c29dd0df4b1ab88262e51042fc6c8c5e744e512292"
Jan 26 15:47:12 crc kubenswrapper[4896]: I0126 15:47:12.636184 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt8bs"
Jan 26 15:47:13 crc kubenswrapper[4896]: I0126 15:47:13.644800 4896 generic.go:334] "Generic (PLEG): container finished" podID="fb173889-a078-4f05-b1d5-1a805ee8336e" containerID="262ed9aabaa08f334825a5dcb853fbc54e6e50ced4420fef8c994761f8026cdd" exitCode=0
Jan 26 15:47:13 crc kubenswrapper[4896]: I0126 15:47:13.644845 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5m2z" event={"ID":"fb173889-a078-4f05-b1d5-1a805ee8336e","Type":"ContainerDied","Data":"262ed9aabaa08f334825a5dcb853fbc54e6e50ced4420fef8c994761f8026cdd"}
Jan 26 15:47:14 crc kubenswrapper[4896]: I0126 15:47:14.655757 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5m2z" event={"ID":"fb173889-a078-4f05-b1d5-1a805ee8336e","Type":"ContainerStarted","Data":"a7abdc153305f022b1a311f9af2dd4c6639fb0e00b796af0fe0dfde16ce67f19"}
Jan 26 15:47:14 crc kubenswrapper[4896]: I0126 15:47:14.677490 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t5m2z" podStartSLOduration=2.154641393 podStartE2EDuration="7.67745732s" podCreationTimestamp="2026-01-26 15:47:07 +0000 UTC" firstStartedPulling="2026-01-26 15:47:08.573372008 +0000 UTC m=+786.355252401" lastFinishedPulling="2026-01-26 15:47:14.096187935 +0000 UTC m=+791.878068328" observedRunningTime="2026-01-26 15:47:14.675742458 +0000 UTC m=+792.457622861" watchObservedRunningTime="2026-01-26 15:47:14.67745732 +0000 UTC m=+792.459337713"
Jan 26 15:47:15 crc kubenswrapper[4896]: I0126 15:47:15.450397 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gdszn"]
Jan 26 15:47:15 crc kubenswrapper[4896]: I0126 15:47:15.451529 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="ovn-controller" containerID="cri-o://f957437952e418fe12314db00c66884b604eaf77dbee831de77ee2a4e085c803" gracePeriod=30
Jan 26 15:47:15 crc kubenswrapper[4896]: I0126 15:47:15.451644 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="nbdb" containerID="cri-o://a7bb5d0fd3d779d1861fdd69f46697e53173c508525fb96bb7c8825505e05e1d" gracePeriod=30
Jan 26 15:47:15 crc kubenswrapper[4896]: I0126 15:47:15.451708 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://13e5f096fb36bb92606a247123774c6155ae2811324579470faf1c04456da53f" gracePeriod=30
Jan 26 15:47:15 crc kubenswrapper[4896]: I0126 15:47:15.451761 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="kube-rbac-proxy-node" containerID="cri-o://406b020065f8bf0ba4a4cccd4acff46627b58f12033ca230665dbbf3a2a1e195" gracePeriod=30
Jan 26 15:47:15 crc kubenswrapper[4896]: I0126 15:47:15.451692 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="northd" containerID="cri-o://67feca97cda454cd70acfad46a99dd5696618f8d1f1e3d887a0c32ae9b6a475f" gracePeriod=30
Jan 26 15:47:15 crc kubenswrapper[4896]: I0126 15:47:15.451806 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="ovn-acl-logging" containerID="cri-o://75a326550b388ea7e5eea65a62c945fe87ba4ee09b82f0ca590226d51db74a91" gracePeriod=30
Jan 26 15:47:15 crc kubenswrapper[4896]: I0126 15:47:15.451941 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="sbdb" containerID="cri-o://d3d3b4d4d136ea02114fd816ba32cc0a4d38c1b2d8df7968e426c038ae37dbd1" gracePeriod=30
Jan 26 15:47:15 crc kubenswrapper[4896]: I0126 15:47:15.534153 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="ovnkube-controller" containerID="cri-o://e44e909f11df7d386f4426e644f72e40396ab0c1f0135682fa60da8c9dc8468f" gracePeriod=30
Jan 26 15:47:15 crc kubenswrapper[4896]: E0126 15:47:15.835889 4896 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d3d3b4d4d136ea02114fd816ba32cc0a4d38c1b2d8df7968e426c038ae37dbd1 is running failed: container process not found" containerID="d3d3b4d4d136ea02114fd816ba32cc0a4d38c1b2d8df7968e426c038ae37dbd1" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Jan 26 15:47:15 crc kubenswrapper[4896]: E0126 15:47:15.836032 4896 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a7bb5d0fd3d779d1861fdd69f46697e53173c508525fb96bb7c8825505e05e1d is running failed: container process not found" containerID="a7bb5d0fd3d779d1861fdd69f46697e53173c508525fb96bb7c8825505e05e1d" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Jan 26 15:47:15 crc kubenswrapper[4896]: E0126 15:47:15.836345 4896 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a7bb5d0fd3d779d1861fdd69f46697e53173c508525fb96bb7c8825505e05e1d is running failed: container process not found" containerID="a7bb5d0fd3d779d1861fdd69f46697e53173c508525fb96bb7c8825505e05e1d" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Jan 26 15:47:15 crc kubenswrapper[4896]: E0126 15:47:15.836438 4896 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d3d3b4d4d136ea02114fd816ba32cc0a4d38c1b2d8df7968e426c038ae37dbd1 is running failed: container process not found" containerID="d3d3b4d4d136ea02114fd816ba32cc0a4d38c1b2d8df7968e426c038ae37dbd1" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Jan 26 15:47:15 crc kubenswrapper[4896]: E0126 15:47:15.836795 4896 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d3d3b4d4d136ea02114fd816ba32cc0a4d38c1b2d8df7968e426c038ae37dbd1 is running failed: container process not found" containerID="d3d3b4d4d136ea02114fd816ba32cc0a4d38c1b2d8df7968e426c038ae37dbd1" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Jan 26 15:47:15 crc kubenswrapper[4896]: E0126 15:47:15.836835 4896 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d3d3b4d4d136ea02114fd816ba32cc0a4d38c1b2d8df7968e426c038ae37dbd1 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="sbdb"
Jan 26 15:47:15 crc kubenswrapper[4896]: E0126 15:47:15.836817 4896 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a7bb5d0fd3d779d1861fdd69f46697e53173c508525fb96bb7c8825505e05e1d is running failed: container process not found" containerID="a7bb5d0fd3d779d1861fdd69f46697e53173c508525fb96bb7c8825505e05e1d" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Jan 26 15:47:15 crc kubenswrapper[4896]: E0126 15:47:15.837044 4896 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a7bb5d0fd3d779d1861fdd69f46697e53173c508525fb96bb7c8825505e05e1d is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="nbdb"
Jan 26 15:47:16 crc kubenswrapper[4896]: I0126 15:47:16.669572 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9nd8b_8c4023ce-9d03-491a-bbc6-d5afffb92f34/kube-multus/2.log"
Jan 26 15:47:16 crc kubenswrapper[4896]: I0126 15:47:16.670458 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9nd8b_8c4023ce-9d03-491a-bbc6-d5afffb92f34/kube-multus/1.log"
Jan 26 15:47:16 crc kubenswrapper[4896]: I0126 15:47:16.670542 4896 generic.go:334] "Generic (PLEG): container finished" podID="8c4023ce-9d03-491a-bbc6-d5afffb92f34" containerID="a2cf36ac3c72179e799a5212ae24d33ce99cd4f0f8a6e255eabc6bb2e8182ab6" exitCode=2
Jan 26 15:47:16 crc kubenswrapper[4896]: I0126 15:47:16.670655 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9nd8b" event={"ID":"8c4023ce-9d03-491a-bbc6-d5afffb92f34","Type":"ContainerDied","Data":"a2cf36ac3c72179e799a5212ae24d33ce99cd4f0f8a6e255eabc6bb2e8182ab6"}
Jan 26 15:47:16 crc kubenswrapper[4896]: I0126 15:47:16.670754 4896 scope.go:117] "RemoveContainer" containerID="a96dbd35e9bd29cc89ad9d1102bb1649492ceb1f340573ebb153accc49bb967b"
Jan 26 15:47:16 crc kubenswrapper[4896]: I0126 15:47:16.671286 4896 scope.go:117] "RemoveContainer" containerID="a2cf36ac3c72179e799a5212ae24d33ce99cd4f0f8a6e255eabc6bb2e8182ab6"
Jan 26 15:47:16 crc kubenswrapper[4896]: I0126 15:47:16.675110 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gdszn_e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8/ovnkube-controller/3.log"
Jan 26 15:47:16 crc kubenswrapper[4896]: I0126 15:47:16.679801 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gdszn_e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8/ovn-acl-logging/0.log"
Jan 26 15:47:16 crc kubenswrapper[4896]: I0126 15:47:16.681933 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gdszn_e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8/ovn-controller/0.log"
Jan 26 15:47:16 crc kubenswrapper[4896]: I0126 15:47:16.683059 4896 generic.go:334] "Generic (PLEG): container finished" podID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerID="e44e909f11df7d386f4426e644f72e40396ab0c1f0135682fa60da8c9dc8468f" exitCode=0
Jan 26 15:47:16 crc kubenswrapper[4896]: I0126 15:47:16.683152 4896 generic.go:334] "Generic (PLEG): container finished" podID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerID="d3d3b4d4d136ea02114fd816ba32cc0a4d38c1b2d8df7968e426c038ae37dbd1" exitCode=0
Jan 26 15:47:16 crc kubenswrapper[4896]: I0126 15:47:16.683220 4896 generic.go:334] "Generic (PLEG): container finished" podID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerID="a7bb5d0fd3d779d1861fdd69f46697e53173c508525fb96bb7c8825505e05e1d" exitCode=0
Jan 26 15:47:16 crc kubenswrapper[4896]: I0126 15:47:16.683282 4896 generic.go:334] "Generic (PLEG): container finished" podID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerID="67feca97cda454cd70acfad46a99dd5696618f8d1f1e3d887a0c32ae9b6a475f" exitCode=0
Jan 26 15:47:16 crc kubenswrapper[4896]: I0126 15:47:16.683349 4896 generic.go:334] "Generic (PLEG): container finished" podID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerID="13e5f096fb36bb92606a247123774c6155ae2811324579470faf1c04456da53f" exitCode=0
Jan 26 15:47:16 crc kubenswrapper[4896]: I0126 15:47:16.683402 4896 generic.go:334] "Generic (PLEG): container finished" podID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerID="406b020065f8bf0ba4a4cccd4acff46627b58f12033ca230665dbbf3a2a1e195" exitCode=0
Jan 26 15:47:16 crc kubenswrapper[4896]: I0126 15:47:16.683451 4896 generic.go:334] "Generic (PLEG): container finished" podID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerID="75a326550b388ea7e5eea65a62c945fe87ba4ee09b82f0ca590226d51db74a91" exitCode=143
Jan 26 15:47:16 crc kubenswrapper[4896]: I0126 15:47:16.683504 4896 generic.go:334] "Generic (PLEG): container finished" podID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerID="f957437952e418fe12314db00c66884b604eaf77dbee831de77ee2a4e085c803" exitCode=143
Jan 26 15:47:16 crc kubenswrapper[4896]: I0126 15:47:16.683604 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" event={"ID":"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8","Type":"ContainerDied","Data":"e44e909f11df7d386f4426e644f72e40396ab0c1f0135682fa60da8c9dc8468f"}
Jan 26 15:47:16 crc kubenswrapper[4896]: I0126 15:47:16.683698 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" event={"ID":"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8","Type":"ContainerDied","Data":"d3d3b4d4d136ea02114fd816ba32cc0a4d38c1b2d8df7968e426c038ae37dbd1"}
Jan 26 15:47:16 crc kubenswrapper[4896]: I0126 15:47:16.683786 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" event={"ID":"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8","Type":"ContainerDied","Data":"a7bb5d0fd3d779d1861fdd69f46697e53173c508525fb96bb7c8825505e05e1d"}
Jan 26 15:47:16 crc kubenswrapper[4896]: I0126 15:47:16.683866 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" event={"ID":"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8","Type":"ContainerDied","Data":"67feca97cda454cd70acfad46a99dd5696618f8d1f1e3d887a0c32ae9b6a475f"}
Jan 26 15:47:16 crc kubenswrapper[4896]: I0126 15:47:16.683944 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" event={"ID":"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8","Type":"ContainerDied","Data":"13e5f096fb36bb92606a247123774c6155ae2811324579470faf1c04456da53f"}
Jan 26 15:47:16 crc kubenswrapper[4896]: I0126 15:47:16.684020 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" event={"ID":"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8","Type":"ContainerDied","Data":"406b020065f8bf0ba4a4cccd4acff46627b58f12033ca230665dbbf3a2a1e195"}
Jan 26 15:47:16 crc kubenswrapper[4896]: I0126 15:47:16.684097 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" event={"ID":"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8","Type":"ContainerDied","Data":"75a326550b388ea7e5eea65a62c945fe87ba4ee09b82f0ca590226d51db74a91"}
Jan 26 15:47:16 crc kubenswrapper[4896]: I0126 15:47:16.684170 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" event={"ID":"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8","Type":"ContainerDied","Data":"f957437952e418fe12314db00c66884b604eaf77dbee831de77ee2a4e085c803"}
Jan 26 15:47:16 crc kubenswrapper[4896]: I0126 15:47:16.721947 4896 scope.go:117] "RemoveContainer" containerID="aaa886cbf9a7cfded4ea830a53ecfacb4587bab5647c878d5ba8047b69c9fbe9"
Jan 26 15:47:16 crc kubenswrapper[4896]: I0126 15:47:16.947157 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gdszn_e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8/ovn-acl-logging/0.log"
Jan 26 15:47:16 crc kubenswrapper[4896]: I0126 15:47:16.947554 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gdszn_e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8/ovn-controller/0.log"
Jan 26 15:47:16 crc kubenswrapper[4896]: I0126 15:47:16.947926 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn"
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.125735 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5pvz8"]
Jan 26 15:47:17 crc kubenswrapper[4896]: E0126 15:47:17.126091 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5a125e8-a2db-49bf-b882-8c26600a229b" containerName="extract"
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.126118 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5a125e8-a2db-49bf-b882-8c26600a229b" containerName="extract"
Jan 26 15:47:17 crc kubenswrapper[4896]: E0126 15:47:17.126141 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="ovnkube-controller"
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.126154 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="ovnkube-controller"
Jan 26 15:47:17 crc kubenswrapper[4896]: E0126 15:47:17.126169 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="sbdb"
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.126185 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="sbdb"
Jan 26 15:47:17 crc kubenswrapper[4896]: E0126 15:47:17.126207 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5a125e8-a2db-49bf-b882-8c26600a229b" containerName="pull"
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.126222 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5a125e8-a2db-49bf-b882-8c26600a229b" containerName="pull"
Jan 26 15:47:17 crc kubenswrapper[4896]: E0126 15:47:17.126243 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="ovn-acl-logging"
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.126255 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="ovn-acl-logging"
Jan 26 15:47:17 crc kubenswrapper[4896]: E0126 15:47:17.126271 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="ovnkube-controller"
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.126283 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="ovnkube-controller"
Jan 26 15:47:17 crc kubenswrapper[4896]: E0126 15:47:17.126302 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="kubecfg-setup"
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.126316 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="kubecfg-setup"
Jan 26 15:47:17 crc kubenswrapper[4896]: E0126 15:47:17.126333 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="northd"
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.126344 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="northd"
Jan 26 15:47:17 crc kubenswrapper[4896]: E0126 15:47:17.126358 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="ovnkube-controller"
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.126370 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="ovnkube-controller"
Jan 26 15:47:17 crc kubenswrapper[4896]: E0126 15:47:17.126386 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5a125e8-a2db-49bf-b882-8c26600a229b" containerName="util"
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.126397 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5a125e8-a2db-49bf-b882-8c26600a229b" containerName="util"
Jan 26 15:47:17 crc kubenswrapper[4896]: E0126 15:47:17.126420 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="kube-rbac-proxy-ovn-metrics"
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.126433 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="kube-rbac-proxy-ovn-metrics"
Jan 26 15:47:17 crc kubenswrapper[4896]: E0126 15:47:17.127078 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="ovn-controller"
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.127092 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="ovn-controller"
Jan 26 15:47:17 crc kubenswrapper[4896]: E0126 15:47:17.127112 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="kube-rbac-proxy-node"
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.127125 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="kube-rbac-proxy-node"
Jan 26 15:47:17 crc kubenswrapper[4896]: E0126 15:47:17.127143 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="nbdb"
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.127154 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="nbdb"
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.127339 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="ovnkube-controller"
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.127360 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="nbdb"
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.127377 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="ovnkube-controller"
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.127391 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="ovnkube-controller"
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.127408 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="northd"
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.127429 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="ovnkube-controller"
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.127446 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="sbdb"
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.127465 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="ovn-controller"
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.127482 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5a125e8-a2db-49bf-b882-8c26600a229b" containerName="extract"
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.127499 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="ovn-acl-logging"
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.127516 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="kube-rbac-proxy-ovn-metrics"
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.127535 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="kube-rbac-proxy-node"
Jan 26 15:47:17 crc kubenswrapper[4896]: E0126 15:47:17.127731 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="ovnkube-controller"
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.127750 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="ovnkube-controller"
Jan 26 15:47:17 crc kubenswrapper[4896]: E0126 15:47:17.127767 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="ovnkube-controller"
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.127781 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="ovnkube-controller"
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.127964 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" containerName="ovnkube-controller"
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.126923 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-log-socket\") pod \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") "
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.129830 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-run-openvswitch\") pod \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") "
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.129898 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-host-run-ovn-kubernetes\") pod \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") "
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.129967 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-var-lib-openvswitch\") pod \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") "
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.130030 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-ovn-node-metrics-cert\") pod \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") "
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.130094 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") "
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.130155 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-run-ovn\") pod \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") "
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.130215 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-ovnkube-script-lib\") pod \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") "
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.130275 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-node-log\") pod \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") "
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.130358 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-ovnkube-config\") pod \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") "
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.130430 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-host-run-netns\") pod \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") "
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.130492 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-env-overrides\") pod \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") "
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.130595 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-systemd-units\") pod \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") "
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.130672 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-host-cni-netd\") pod \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") "
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.130754 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-etc-openvswitch\") pod \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") "
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.130816 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5jvk\" (UniqueName: \"kubernetes.io/projected/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-kube-api-access-b5jvk\") pod \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") "
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.130879 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-host-kubelet\") pod \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") "
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.130954 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-host-cni-bin\") pod \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") "
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.131012 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-run-systemd\") pod \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") "
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.131067 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-host-slash\") pod \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\" (UID: \"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8\") "
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.130444 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8"
Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.127031 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-log-socket" (OuterVolumeSpecName: "log-socket") pod "e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" (UID: "e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8"). InnerVolumeSpecName "log-socket".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.130496 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" (UID: "e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.130521 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" (UID: "e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.130540 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" (UID: "e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.131488 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-host-slash" (OuterVolumeSpecName: "host-slash") pod "e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" (UID: "e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.131504 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" (UID: "e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.131517 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" (UID: "e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.131895 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" (UID: "e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.131913 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-node-log" (OuterVolumeSpecName: "node-log") pod "e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" (UID: "e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.132246 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" (UID: "e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.132270 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" (UID: "e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.133149 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" (UID: "e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.133597 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" (UID: "e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.133765 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" (UID: "e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.133789 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" (UID: "e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.133812 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" (UID: "e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.133836 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" (UID: "e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.237244 4896 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.237717 4896 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.237774 4896 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.237826 4896 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.237933 4896 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.237998 4896 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.238052 4896 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.238102 4896 
reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-host-slash\") on node \"crc\" DevicePath \"\"" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.238157 4896 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-log-socket\") on node \"crc\" DevicePath \"\"" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.238206 4896 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.238260 4896 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.238341 4896 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.238397 4896 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.238449 4896 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.238499 4896 reconciler_common.go:293] "Volume detached for volume 
\"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.238549 4896 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-node-log\") on node \"crc\" DevicePath \"\"" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.238623 4896 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.327244 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" (UID: "e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.327942 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" (UID: "e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.331648 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-kube-api-access-b5jvk" (OuterVolumeSpecName: "kube-api-access-b5jvk") pod "e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" (UID: "e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8"). InnerVolumeSpecName "kube-api-access-b5jvk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.340686 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-systemd-units\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.340914 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-log-socket\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.341011 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-host-cni-bin\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.341145 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d6c34102-f8ab-41e8-9e2d-68b843ee4976-ovnkube-config\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.341236 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-run-ovn\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.341361 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d6c34102-f8ab-41e8-9e2d-68b843ee4976-env-overrides\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.341492 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-host-cni-netd\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.341730 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d6c34102-f8ab-41e8-9e2d-68b843ee4976-ovn-node-metrics-cert\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.341849 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-run-openvswitch\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.341962 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-host-slash\") pod \"ovnkube-node-5pvz8\" (UID: 
\"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.342039 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-host-run-netns\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.342113 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-host-run-ovn-kubernetes\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.342190 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.342267 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24n8b\" (UniqueName: \"kubernetes.io/projected/d6c34102-f8ab-41e8-9e2d-68b843ee4976-kube-api-access-24n8b\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.342339 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-run-systemd\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.342416 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-var-lib-openvswitch\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.342536 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-etc-openvswitch\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.342689 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-node-log\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.342775 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-host-kubelet\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.342869 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" 
(UniqueName: \"kubernetes.io/configmap/d6c34102-f8ab-41e8-9e2d-68b843ee4976-ovnkube-script-lib\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.343003 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5jvk\" (UniqueName: \"kubernetes.io/projected/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-kube-api-access-b5jvk\") on node \"crc\" DevicePath \"\"" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.343069 4896 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.343128 4896 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.444178 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d6c34102-f8ab-41e8-9e2d-68b843ee4976-ovnkube-config\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.444445 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-run-ovn\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.444523 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/d6c34102-f8ab-41e8-9e2d-68b843ee4976-env-overrides\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.444606 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-host-cni-netd\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.444679 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d6c34102-f8ab-41e8-9e2d-68b843ee4976-ovn-node-metrics-cert\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.444751 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-run-openvswitch\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.444814 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-host-slash\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.444873 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-host-run-netns\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.444937 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-host-run-ovn-kubernetes\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.445003 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.445076 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24n8b\" (UniqueName: \"kubernetes.io/projected/d6c34102-f8ab-41e8-9e2d-68b843ee4976-kube-api-access-24n8b\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.445135 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-run-ovn\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.445137 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-run-systemd\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.445184 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-host-cni-netd\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.445213 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-var-lib-openvswitch\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.445272 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-etc-openvswitch\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.445300 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-node-log\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.445317 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-host-kubelet\") pod 
\"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.445345 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d6c34102-f8ab-41e8-9e2d-68b843ee4976-ovnkube-script-lib\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.445388 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-systemd-units\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.445405 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-log-socket\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.445421 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-host-cni-bin\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.445494 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-host-cni-bin\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.445504 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-run-systemd\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.445621 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d6c34102-f8ab-41e8-9e2d-68b843ee4976-env-overrides\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.445677 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-node-log\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.445149 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d6c34102-f8ab-41e8-9e2d-68b843ee4976-ovnkube-config\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.445723 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-var-lib-openvswitch\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.445770 4896 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-etc-openvswitch\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.445842 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-host-slash\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.445111 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-run-openvswitch\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.445921 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-host-run-netns\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.445949 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-host-run-ovn-kubernetes\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.445974 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-systemd-units\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.445997 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-host-kubelet\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.446021 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-log-socket\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.446154 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6c34102-f8ab-41e8-9e2d-68b843ee4976-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.446556 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d6c34102-f8ab-41e8-9e2d-68b843ee4976-ovnkube-script-lib\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.448840 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/d6c34102-f8ab-41e8-9e2d-68b843ee4976-ovn-node-metrics-cert\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.477731 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t5m2z" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.477804 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t5m2z" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.739712 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9nd8b_8c4023ce-9d03-491a-bbc6-d5afffb92f34/kube-multus/2.log" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.739790 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9nd8b" event={"ID":"8c4023ce-9d03-491a-bbc6-d5afffb92f34","Type":"ContainerStarted","Data":"a3a2a324cb63689388cc86c9dcc80446144efd777b61e4bb6a1346aae3921c51"} Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.745156 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24n8b\" (UniqueName: \"kubernetes.io/projected/d6c34102-f8ab-41e8-9e2d-68b843ee4976-kube-api-access-24n8b\") pod \"ovnkube-node-5pvz8\" (UID: \"d6c34102-f8ab-41e8-9e2d-68b843ee4976\") " pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.758184 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gdszn_e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8/ovn-acl-logging/0.log" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.758593 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gdszn_e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8/ovn-controller/0.log" Jan 26 15:47:17 crc kubenswrapper[4896]: 
I0126 15:47:17.758911 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" event={"ID":"e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8","Type":"ContainerDied","Data":"b69775f7b723ba75176fb53988b385d82913cef3d27db601904ac4035de2ee74"} Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.758944 4896 scope.go:117] "RemoveContainer" containerID="e44e909f11df7d386f4426e644f72e40396ab0c1f0135682fa60da8c9dc8468f" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.759058 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gdszn" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.884189 4896 scope.go:117] "RemoveContainer" containerID="d3d3b4d4d136ea02114fd816ba32cc0a4d38c1b2d8df7968e426c038ae37dbd1" Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.885284 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gdszn"] Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.896062 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gdszn"] Jan 26 15:47:17 crc kubenswrapper[4896]: I0126 15:47:17.901951 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:18 crc kubenswrapper[4896]: I0126 15:47:17.959710 4896 scope.go:117] "RemoveContainer" containerID="a7bb5d0fd3d779d1861fdd69f46697e53173c508525fb96bb7c8825505e05e1d" Jan 26 15:47:18 crc kubenswrapper[4896]: I0126 15:47:18.160766 4896 scope.go:117] "RemoveContainer" containerID="67feca97cda454cd70acfad46a99dd5696618f8d1f1e3d887a0c32ae9b6a475f" Jan 26 15:47:18 crc kubenswrapper[4896]: I0126 15:47:18.177956 4896 scope.go:117] "RemoveContainer" containerID="13e5f096fb36bb92606a247123774c6155ae2811324579470faf1c04456da53f" Jan 26 15:47:18 crc kubenswrapper[4896]: I0126 15:47:18.229797 4896 scope.go:117] "RemoveContainer" containerID="406b020065f8bf0ba4a4cccd4acff46627b58f12033ca230665dbbf3a2a1e195" Jan 26 15:47:18 crc kubenswrapper[4896]: I0126 15:47:18.302789 4896 scope.go:117] "RemoveContainer" containerID="75a326550b388ea7e5eea65a62c945fe87ba4ee09b82f0ca590226d51db74a91" Jan 26 15:47:18 crc kubenswrapper[4896]: I0126 15:47:18.323143 4896 scope.go:117] "RemoveContainer" containerID="f957437952e418fe12314db00c66884b604eaf77dbee831de77ee2a4e085c803" Jan 26 15:47:18 crc kubenswrapper[4896]: I0126 15:47:18.364274 4896 scope.go:117] "RemoveContainer" containerID="8fb07b1cf9f2952471806ce850eb52887d8e91fd418efb8de8aad1f617e753a8" Jan 26 15:47:18 crc kubenswrapper[4896]: I0126 15:47:18.699931 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t5m2z" podUID="fb173889-a078-4f05-b1d5-1a805ee8336e" containerName="registry-server" probeResult="failure" output=< Jan 26 15:47:18 crc kubenswrapper[4896]: timeout: failed to connect service ":50051" within 1s Jan 26 15:47:18 crc kubenswrapper[4896]: > Jan 26 15:47:18 crc kubenswrapper[4896]: I0126 15:47:18.773935 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8" path="/var/lib/kubelet/pods/e1a5b0ee-af46-4fbc-92cc-4a63974fe8d8/volumes" Jan 
26 15:47:18 crc kubenswrapper[4896]: I0126 15:47:18.779291 4896 generic.go:334] "Generic (PLEG): container finished" podID="d6c34102-f8ab-41e8-9e2d-68b843ee4976" containerID="85e0154e7cdd337297753b8621aee386c3d345d4d98fdd173bbd350fc336eb0b" exitCode=0 Jan 26 15:47:18 crc kubenswrapper[4896]: I0126 15:47:18.779328 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" event={"ID":"d6c34102-f8ab-41e8-9e2d-68b843ee4976","Type":"ContainerDied","Data":"85e0154e7cdd337297753b8621aee386c3d345d4d98fdd173bbd350fc336eb0b"} Jan 26 15:47:18 crc kubenswrapper[4896]: I0126 15:47:18.779350 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" event={"ID":"d6c34102-f8ab-41e8-9e2d-68b843ee4976","Type":"ContainerStarted","Data":"000c3d65f7dd006ee737a2b94500d284fb1c3cab2dce0ed2776704398d308210"} Jan 26 15:47:18 crc kubenswrapper[4896]: I0126 15:47:18.813726 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:47:18 crc kubenswrapper[4896]: I0126 15:47:18.813782 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:47:19 crc kubenswrapper[4896]: I0126 15:47:19.794046 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" event={"ID":"d6c34102-f8ab-41e8-9e2d-68b843ee4976","Type":"ContainerStarted","Data":"9bde6b2cc3d5964a8f272a4b15fffa8503fa2940d40288d3a22237b9b97ad0e9"} Jan 26 15:47:19 crc kubenswrapper[4896]: 
I0126 15:47:19.794378 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" event={"ID":"d6c34102-f8ab-41e8-9e2d-68b843ee4976","Type":"ContainerStarted","Data":"2a7d528c157b64c18ecc8e3432902edf8f0d304bd722721321a3f5dcbb3098ed"} Jan 26 15:47:20 crc kubenswrapper[4896]: I0126 15:47:20.805463 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" event={"ID":"d6c34102-f8ab-41e8-9e2d-68b843ee4976","Type":"ContainerStarted","Data":"7a937b389a2844827c17caa62f08dfea5fc543800796d5bab2d9cd3419f697d1"} Jan 26 15:47:20 crc kubenswrapper[4896]: I0126 15:47:20.805887 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" event={"ID":"d6c34102-f8ab-41e8-9e2d-68b843ee4976","Type":"ContainerStarted","Data":"39971fca68bb60fc9f1e7fac970fdc493464afdf93baaad1eb305d58551c99e1"} Jan 26 15:47:20 crc kubenswrapper[4896]: I0126 15:47:20.805899 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" event={"ID":"d6c34102-f8ab-41e8-9e2d-68b843ee4976","Type":"ContainerStarted","Data":"6817c13042f5fe3f1e48f6141ba9e31cd5e2073299f921bdca11f5f41907dcc9"} Jan 26 15:47:20 crc kubenswrapper[4896]: I0126 15:47:20.805908 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" event={"ID":"d6c34102-f8ab-41e8-9e2d-68b843ee4976","Type":"ContainerStarted","Data":"19490593f6157db67e47818f42e6dcfe3324fb595417e1c82aefb95e60f9909c"} Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.234420 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-sjfrn"] Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.235958 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sjfrn" Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.240410 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.242170 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.242170 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-97xs8" Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.291080 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-hlpnm"] Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.292760 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-hlpnm" Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.294920 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.294943 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-72x7m" Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.303765 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-mr4tb"] Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.305266 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-mr4tb" Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.345610 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm4w9\" (UniqueName: \"kubernetes.io/projected/d6db188e-bd3c-49e5-800c-2f6706ca8b45-kube-api-access-rm4w9\") pod \"obo-prometheus-operator-68bc856cb9-sjfrn\" (UID: \"d6db188e-bd3c-49e5-800c-2f6706ca8b45\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sjfrn" Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.447551 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/482717fd-6a21-44de-a4d1-e08d5324552b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-c9d575b6d-hlpnm\" (UID: \"482717fd-6a21-44de-a4d1-e08d5324552b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-hlpnm" Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.447731 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1d1c2692-33ed-45a8-9fed-a2c9eb1b5212-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-c9d575b6d-mr4tb\" (UID: \"1d1c2692-33ed-45a8-9fed-a2c9eb1b5212\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-mr4tb" Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.447871 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1d1c2692-33ed-45a8-9fed-a2c9eb1b5212-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-c9d575b6d-mr4tb\" (UID: \"1d1c2692-33ed-45a8-9fed-a2c9eb1b5212\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-mr4tb" Jan 26 15:47:23 crc 
kubenswrapper[4896]: I0126 15:47:23.447986 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/482717fd-6a21-44de-a4d1-e08d5324552b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-c9d575b6d-hlpnm\" (UID: \"482717fd-6a21-44de-a4d1-e08d5324552b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-hlpnm" Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.448048 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm4w9\" (UniqueName: \"kubernetes.io/projected/d6db188e-bd3c-49e5-800c-2f6706ca8b45-kube-api-access-rm4w9\") pod \"obo-prometheus-operator-68bc856cb9-sjfrn\" (UID: \"d6db188e-bd3c-49e5-800c-2f6706ca8b45\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sjfrn" Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.456513 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-b58h7"] Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.457300 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-b58h7" Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.463239 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-9x7bf" Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.463968 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.488497 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm4w9\" (UniqueName: \"kubernetes.io/projected/d6db188e-bd3c-49e5-800c-2f6706ca8b45-kube-api-access-rm4w9\") pod \"obo-prometheus-operator-68bc856cb9-sjfrn\" (UID: \"d6db188e-bd3c-49e5-800c-2f6706ca8b45\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sjfrn" Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.549592 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1d1c2692-33ed-45a8-9fed-a2c9eb1b5212-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-c9d575b6d-mr4tb\" (UID: \"1d1c2692-33ed-45a8-9fed-a2c9eb1b5212\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-mr4tb" Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.549697 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1d1c2692-33ed-45a8-9fed-a2c9eb1b5212-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-c9d575b6d-mr4tb\" (UID: \"1d1c2692-33ed-45a8-9fed-a2c9eb1b5212\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-mr4tb" Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.549760 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/482717fd-6a21-44de-a4d1-e08d5324552b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-c9d575b6d-hlpnm\" (UID: \"482717fd-6a21-44de-a4d1-e08d5324552b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-hlpnm" Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.549826 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/482717fd-6a21-44de-a4d1-e08d5324552b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-c9d575b6d-hlpnm\" (UID: \"482717fd-6a21-44de-a4d1-e08d5324552b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-hlpnm" Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.553112 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sjfrn" Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.555224 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1d1c2692-33ed-45a8-9fed-a2c9eb1b5212-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-c9d575b6d-mr4tb\" (UID: \"1d1c2692-33ed-45a8-9fed-a2c9eb1b5212\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-mr4tb" Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.555391 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/482717fd-6a21-44de-a4d1-e08d5324552b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-c9d575b6d-hlpnm\" (UID: \"482717fd-6a21-44de-a4d1-e08d5324552b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-hlpnm" Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.558747 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/482717fd-6a21-44de-a4d1-e08d5324552b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-c9d575b6d-hlpnm\" (UID: \"482717fd-6a21-44de-a4d1-e08d5324552b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-hlpnm" Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.564995 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1d1c2692-33ed-45a8-9fed-a2c9eb1b5212-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-c9d575b6d-mr4tb\" (UID: \"1d1c2692-33ed-45a8-9fed-a2c9eb1b5212\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-mr4tb" Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.566922 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-s6xcx"] Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.567962 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-s6xcx" Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.570179 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-zjppl" Jan 26 15:47:23 crc kubenswrapper[4896]: E0126 15:47:23.596027 4896 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-sjfrn_openshift-operators_d6db188e-bd3c-49e5-800c-2f6706ca8b45_0(72cac5f26ecb4e1cff638e5fcc08f82c7d50400488fdb93756f1b5da08aa1f8f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 26 15:47:23 crc kubenswrapper[4896]: E0126 15:47:23.596098 4896 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-sjfrn_openshift-operators_d6db188e-bd3c-49e5-800c-2f6706ca8b45_0(72cac5f26ecb4e1cff638e5fcc08f82c7d50400488fdb93756f1b5da08aa1f8f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sjfrn" Jan 26 15:47:23 crc kubenswrapper[4896]: E0126 15:47:23.596118 4896 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-sjfrn_openshift-operators_d6db188e-bd3c-49e5-800c-2f6706ca8b45_0(72cac5f26ecb4e1cff638e5fcc08f82c7d50400488fdb93756f1b5da08aa1f8f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sjfrn" Jan 26 15:47:23 crc kubenswrapper[4896]: E0126 15:47:23.596160 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-sjfrn_openshift-operators(d6db188e-bd3c-49e5-800c-2f6706ca8b45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-sjfrn_openshift-operators(d6db188e-bd3c-49e5-800c-2f6706ca8b45)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-sjfrn_openshift-operators_d6db188e-bd3c-49e5-800c-2f6706ca8b45_0(72cac5f26ecb4e1cff638e5fcc08f82c7d50400488fdb93756f1b5da08aa1f8f): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sjfrn" podUID="d6db188e-bd3c-49e5-800c-2f6706ca8b45" Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.631662 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-hlpnm" Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.648141 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-mr4tb" Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.650952 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njqbj\" (UniqueName: \"kubernetes.io/projected/39e44697-1997-402b-939f-641cb2f74176-kube-api-access-njqbj\") pod \"observability-operator-59bdc8b94-b58h7\" (UID: \"39e44697-1997-402b-939f-641cb2f74176\") " pod="openshift-operators/observability-operator-59bdc8b94-b58h7" Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.651059 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/39e44697-1997-402b-939f-641cb2f74176-observability-operator-tls\") pod \"observability-operator-59bdc8b94-b58h7\" (UID: \"39e44697-1997-402b-939f-641cb2f74176\") " pod="openshift-operators/observability-operator-59bdc8b94-b58h7" Jan 26 15:47:23 crc kubenswrapper[4896]: E0126 15:47:23.676492 4896 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-c9d575b6d-hlpnm_openshift-operators_482717fd-6a21-44de-a4d1-e08d5324552b_0(98fb4738779245a859f2c9518875ab3c40fef28f70a74da0017de93775c00376): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 26 15:47:23 crc kubenswrapper[4896]: E0126 15:47:23.676617 4896 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-c9d575b6d-hlpnm_openshift-operators_482717fd-6a21-44de-a4d1-e08d5324552b_0(98fb4738779245a859f2c9518875ab3c40fef28f70a74da0017de93775c00376): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-hlpnm" Jan 26 15:47:23 crc kubenswrapper[4896]: E0126 15:47:23.676639 4896 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-c9d575b6d-hlpnm_openshift-operators_482717fd-6a21-44de-a4d1-e08d5324552b_0(98fb4738779245a859f2c9518875ab3c40fef28f70a74da0017de93775c00376): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-hlpnm" Jan 26 15:47:23 crc kubenswrapper[4896]: E0126 15:47:23.676708 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-c9d575b6d-hlpnm_openshift-operators(482717fd-6a21-44de-a4d1-e08d5324552b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-c9d575b6d-hlpnm_openshift-operators(482717fd-6a21-44de-a4d1-e08d5324552b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-c9d575b6d-hlpnm_openshift-operators_482717fd-6a21-44de-a4d1-e08d5324552b_0(98fb4738779245a859f2c9518875ab3c40fef28f70a74da0017de93775c00376): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-hlpnm" podUID="482717fd-6a21-44de-a4d1-e08d5324552b" Jan 26 15:47:23 crc kubenswrapper[4896]: E0126 15:47:23.684289 4896 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-c9d575b6d-mr4tb_openshift-operators_1d1c2692-33ed-45a8-9fed-a2c9eb1b5212_0(f02da8fd5475fe5ad4499feaaa3371d040fa862d2a10eb353fa47d8a86574b3f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 15:47:23 crc kubenswrapper[4896]: E0126 15:47:23.684360 4896 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-c9d575b6d-mr4tb_openshift-operators_1d1c2692-33ed-45a8-9fed-a2c9eb1b5212_0(f02da8fd5475fe5ad4499feaaa3371d040fa862d2a10eb353fa47d8a86574b3f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-mr4tb" Jan 26 15:47:23 crc kubenswrapper[4896]: E0126 15:47:23.684382 4896 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-c9d575b6d-mr4tb_openshift-operators_1d1c2692-33ed-45a8-9fed-a2c9eb1b5212_0(f02da8fd5475fe5ad4499feaaa3371d040fa862d2a10eb353fa47d8a86574b3f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-mr4tb" Jan 26 15:47:23 crc kubenswrapper[4896]: E0126 15:47:23.684429 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-c9d575b6d-mr4tb_openshift-operators(1d1c2692-33ed-45a8-9fed-a2c9eb1b5212)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-c9d575b6d-mr4tb_openshift-operators(1d1c2692-33ed-45a8-9fed-a2c9eb1b5212)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-c9d575b6d-mr4tb_openshift-operators_1d1c2692-33ed-45a8-9fed-a2c9eb1b5212_0(f02da8fd5475fe5ad4499feaaa3371d040fa862d2a10eb353fa47d8a86574b3f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-mr4tb" podUID="1d1c2692-33ed-45a8-9fed-a2c9eb1b5212" Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.754927 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/4d40f6e6-fe99-4a02-b499-83c0b6a61706-openshift-service-ca\") pod \"perses-operator-5bf474d74f-s6xcx\" (UID: \"4d40f6e6-fe99-4a02-b499-83c0b6a61706\") " pod="openshift-operators/perses-operator-5bf474d74f-s6xcx" Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.754979 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc6wb\" (UniqueName: \"kubernetes.io/projected/4d40f6e6-fe99-4a02-b499-83c0b6a61706-kube-api-access-qc6wb\") pod \"perses-operator-5bf474d74f-s6xcx\" (UID: \"4d40f6e6-fe99-4a02-b499-83c0b6a61706\") " pod="openshift-operators/perses-operator-5bf474d74f-s6xcx" Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.755017 4896 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/39e44697-1997-402b-939f-641cb2f74176-observability-operator-tls\") pod \"observability-operator-59bdc8b94-b58h7\" (UID: \"39e44697-1997-402b-939f-641cb2f74176\") " pod="openshift-operators/observability-operator-59bdc8b94-b58h7" Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.755093 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njqbj\" (UniqueName: \"kubernetes.io/projected/39e44697-1997-402b-939f-641cb2f74176-kube-api-access-njqbj\") pod \"observability-operator-59bdc8b94-b58h7\" (UID: \"39e44697-1997-402b-939f-641cb2f74176\") " pod="openshift-operators/observability-operator-59bdc8b94-b58h7" Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.761470 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/39e44697-1997-402b-939f-641cb2f74176-observability-operator-tls\") pod \"observability-operator-59bdc8b94-b58h7\" (UID: \"39e44697-1997-402b-939f-641cb2f74176\") " pod="openshift-operators/observability-operator-59bdc8b94-b58h7" Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.783645 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njqbj\" (UniqueName: \"kubernetes.io/projected/39e44697-1997-402b-939f-641cb2f74176-kube-api-access-njqbj\") pod \"observability-operator-59bdc8b94-b58h7\" (UID: \"39e44697-1997-402b-939f-641cb2f74176\") " pod="openshift-operators/observability-operator-59bdc8b94-b58h7" Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.856018 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/4d40f6e6-fe99-4a02-b499-83c0b6a61706-openshift-service-ca\") pod \"perses-operator-5bf474d74f-s6xcx\" (UID: 
\"4d40f6e6-fe99-4a02-b499-83c0b6a61706\") " pod="openshift-operators/perses-operator-5bf474d74f-s6xcx" Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.857189 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc6wb\" (UniqueName: \"kubernetes.io/projected/4d40f6e6-fe99-4a02-b499-83c0b6a61706-kube-api-access-qc6wb\") pod \"perses-operator-5bf474d74f-s6xcx\" (UID: \"4d40f6e6-fe99-4a02-b499-83c0b6a61706\") " pod="openshift-operators/perses-operator-5bf474d74f-s6xcx" Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.857131 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/4d40f6e6-fe99-4a02-b499-83c0b6a61706-openshift-service-ca\") pod \"perses-operator-5bf474d74f-s6xcx\" (UID: \"4d40f6e6-fe99-4a02-b499-83c0b6a61706\") " pod="openshift-operators/perses-operator-5bf474d74f-s6xcx" Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.864167 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" event={"ID":"d6c34102-f8ab-41e8-9e2d-68b843ee4976","Type":"ContainerStarted","Data":"f5020932c5e5295186acb9826af3d6ff3bcea8aa7d98b8fd38fb0f8c014843fd"} Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.882648 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc6wb\" (UniqueName: \"kubernetes.io/projected/4d40f6e6-fe99-4a02-b499-83c0b6a61706-kube-api-access-qc6wb\") pod \"perses-operator-5bf474d74f-s6xcx\" (UID: \"4d40f6e6-fe99-4a02-b499-83c0b6a61706\") " pod="openshift-operators/perses-operator-5bf474d74f-s6xcx" Jan 26 15:47:23 crc kubenswrapper[4896]: I0126 15:47:23.938795 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-s6xcx" Jan 26 15:47:23 crc kubenswrapper[4896]: E0126 15:47:23.970466 4896 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-s6xcx_openshift-operators_4d40f6e6-fe99-4a02-b499-83c0b6a61706_0(cd66f6648c984eea05a5458fb16ed4a05a26baabf77ce66eb39d8458ef4ecdd8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 15:47:23 crc kubenswrapper[4896]: E0126 15:47:23.970515 4896 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-s6xcx_openshift-operators_4d40f6e6-fe99-4a02-b499-83c0b6a61706_0(cd66f6648c984eea05a5458fb16ed4a05a26baabf77ce66eb39d8458ef4ecdd8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-s6xcx" Jan 26 15:47:23 crc kubenswrapper[4896]: E0126 15:47:23.970535 4896 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-s6xcx_openshift-operators_4d40f6e6-fe99-4a02-b499-83c0b6a61706_0(cd66f6648c984eea05a5458fb16ed4a05a26baabf77ce66eb39d8458ef4ecdd8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-s6xcx" Jan 26 15:47:23 crc kubenswrapper[4896]: E0126 15:47:23.970571 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-s6xcx_openshift-operators(4d40f6e6-fe99-4a02-b499-83c0b6a61706)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-s6xcx_openshift-operators(4d40f6e6-fe99-4a02-b499-83c0b6a61706)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-s6xcx_openshift-operators_4d40f6e6-fe99-4a02-b499-83c0b6a61706_0(cd66f6648c984eea05a5458fb16ed4a05a26baabf77ce66eb39d8458ef4ecdd8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-s6xcx" podUID="4d40f6e6-fe99-4a02-b499-83c0b6a61706" Jan 26 15:47:24 crc kubenswrapper[4896]: I0126 15:47:24.071917 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-b58h7" Jan 26 15:47:24 crc kubenswrapper[4896]: E0126 15:47:24.096190 4896 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-b58h7_openshift-operators_39e44697-1997-402b-939f-641cb2f74176_0(80efb7aac9be917e81fb511d6025a2c2f79728642bf1f2bfe95f8ffcac11b2bc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 26 15:47:24 crc kubenswrapper[4896]: E0126 15:47:24.096644 4896 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-b58h7_openshift-operators_39e44697-1997-402b-939f-641cb2f74176_0(80efb7aac9be917e81fb511d6025a2c2f79728642bf1f2bfe95f8ffcac11b2bc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-b58h7" Jan 26 15:47:24 crc kubenswrapper[4896]: E0126 15:47:24.096706 4896 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-b58h7_openshift-operators_39e44697-1997-402b-939f-641cb2f74176_0(80efb7aac9be917e81fb511d6025a2c2f79728642bf1f2bfe95f8ffcac11b2bc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-b58h7" Jan 26 15:47:24 crc kubenswrapper[4896]: E0126 15:47:24.096764 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-b58h7_openshift-operators(39e44697-1997-402b-939f-641cb2f74176)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-b58h7_openshift-operators(39e44697-1997-402b-939f-641cb2f74176)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-b58h7_openshift-operators_39e44697-1997-402b-939f-641cb2f74176_0(80efb7aac9be917e81fb511d6025a2c2f79728642bf1f2bfe95f8ffcac11b2bc): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-b58h7" podUID="39e44697-1997-402b-939f-641cb2f74176" Jan 26 15:47:24 crc kubenswrapper[4896]: I0126 15:47:24.876641 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" event={"ID":"d6c34102-f8ab-41e8-9e2d-68b843ee4976","Type":"ContainerStarted","Data":"b13edf6950a7397cb1b076f01ec6415f55632d71a8ddd9e62472a114cbdb5075"} Jan 26 15:47:24 crc kubenswrapper[4896]: I0126 15:47:24.878313 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:24 crc kubenswrapper[4896]: I0126 15:47:24.878373 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:24 crc kubenswrapper[4896]: I0126 15:47:24.878386 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:24 crc kubenswrapper[4896]: I0126 15:47:24.919838 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" podStartSLOduration=7.91981418 podStartE2EDuration="7.91981418s" podCreationTimestamp="2026-01-26 15:47:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:47:24.914801876 +0000 UTC m=+802.696682279" watchObservedRunningTime="2026-01-26 15:47:24.91981418 +0000 UTC m=+802.701694583" Jan 26 15:47:24 crc kubenswrapper[4896]: I0126 15:47:24.922119 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:24 crc kubenswrapper[4896]: I0126 15:47:24.950911 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:26 crc kubenswrapper[4896]: 
I0126 15:47:26.507078 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-s6xcx"] Jan 26 15:47:26 crc kubenswrapper[4896]: I0126 15:47:26.507200 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-s6xcx" Jan 26 15:47:26 crc kubenswrapper[4896]: I0126 15:47:26.507839 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-s6xcx" Jan 26 15:47:26 crc kubenswrapper[4896]: I0126 15:47:26.514307 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-sjfrn"] Jan 26 15:47:26 crc kubenswrapper[4896]: I0126 15:47:26.514447 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sjfrn" Jan 26 15:47:26 crc kubenswrapper[4896]: I0126 15:47:26.515021 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sjfrn" Jan 26 15:47:26 crc kubenswrapper[4896]: I0126 15:47:26.520612 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-hlpnm"] Jan 26 15:47:26 crc kubenswrapper[4896]: I0126 15:47:26.520716 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-hlpnm" Jan 26 15:47:26 crc kubenswrapper[4896]: I0126 15:47:26.521189 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-hlpnm" Jan 26 15:47:26 crc kubenswrapper[4896]: E0126 15:47:26.833853 4896 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-s6xcx_openshift-operators_4d40f6e6-fe99-4a02-b499-83c0b6a61706_0(921736f566668b6fa9ef42db0d8baf10930a3d051663eafda1d05fa1d2bb15d0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 15:47:26 crc kubenswrapper[4896]: E0126 15:47:26.834242 4896 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-s6xcx_openshift-operators_4d40f6e6-fe99-4a02-b499-83c0b6a61706_0(921736f566668b6fa9ef42db0d8baf10930a3d051663eafda1d05fa1d2bb15d0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-s6xcx" Jan 26 15:47:26 crc kubenswrapper[4896]: E0126 15:47:26.834359 4896 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-s6xcx_openshift-operators_4d40f6e6-fe99-4a02-b499-83c0b6a61706_0(921736f566668b6fa9ef42db0d8baf10930a3d051663eafda1d05fa1d2bb15d0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-s6xcx" Jan 26 15:47:26 crc kubenswrapper[4896]: E0126 15:47:26.835140 4896 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-c9d575b6d-hlpnm_openshift-operators_482717fd-6a21-44de-a4d1-e08d5324552b_0(d98bfa7643bd3e7690ffc1e06df68672e32ddf8dfba73a2f882a029870a43f39): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 15:47:26 crc kubenswrapper[4896]: E0126 15:47:26.836774 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-s6xcx_openshift-operators(4d40f6e6-fe99-4a02-b499-83c0b6a61706)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-s6xcx_openshift-operators(4d40f6e6-fe99-4a02-b499-83c0b6a61706)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-s6xcx_openshift-operators_4d40f6e6-fe99-4a02-b499-83c0b6a61706_0(921736f566668b6fa9ef42db0d8baf10930a3d051663eafda1d05fa1d2bb15d0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-s6xcx" podUID="4d40f6e6-fe99-4a02-b499-83c0b6a61706" Jan 26 15:47:26 crc kubenswrapper[4896]: E0126 15:47:26.836921 4896 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-c9d575b6d-hlpnm_openshift-operators_482717fd-6a21-44de-a4d1-e08d5324552b_0(d98bfa7643bd3e7690ffc1e06df68672e32ddf8dfba73a2f882a029870a43f39): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-hlpnm" Jan 26 15:47:26 crc kubenswrapper[4896]: E0126 15:47:26.836995 4896 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-c9d575b6d-hlpnm_openshift-operators_482717fd-6a21-44de-a4d1-e08d5324552b_0(d98bfa7643bd3e7690ffc1e06df68672e32ddf8dfba73a2f882a029870a43f39): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-hlpnm" Jan 26 15:47:26 crc kubenswrapper[4896]: E0126 15:47:26.837274 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-c9d575b6d-hlpnm_openshift-operators(482717fd-6a21-44de-a4d1-e08d5324552b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-c9d575b6d-hlpnm_openshift-operators(482717fd-6a21-44de-a4d1-e08d5324552b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-c9d575b6d-hlpnm_openshift-operators_482717fd-6a21-44de-a4d1-e08d5324552b_0(d98bfa7643bd3e7690ffc1e06df68672e32ddf8dfba73a2f882a029870a43f39): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-hlpnm" podUID="482717fd-6a21-44de-a4d1-e08d5324552b" Jan 26 15:47:26 crc kubenswrapper[4896]: E0126 15:47:26.856108 4896 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-sjfrn_openshift-operators_d6db188e-bd3c-49e5-800c-2f6706ca8b45_0(1116f94513288d8a0e722a019fec9f0785e103e9b627ebab0d7ae32e51ef0f66): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 15:47:26 crc kubenswrapper[4896]: E0126 15:47:26.856200 4896 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-sjfrn_openshift-operators_d6db188e-bd3c-49e5-800c-2f6706ca8b45_0(1116f94513288d8a0e722a019fec9f0785e103e9b627ebab0d7ae32e51ef0f66): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sjfrn" Jan 26 15:47:26 crc kubenswrapper[4896]: E0126 15:47:26.856230 4896 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-sjfrn_openshift-operators_d6db188e-bd3c-49e5-800c-2f6706ca8b45_0(1116f94513288d8a0e722a019fec9f0785e103e9b627ebab0d7ae32e51ef0f66): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sjfrn" Jan 26 15:47:26 crc kubenswrapper[4896]: E0126 15:47:26.856296 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-sjfrn_openshift-operators(d6db188e-bd3c-49e5-800c-2f6706ca8b45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-sjfrn_openshift-operators(d6db188e-bd3c-49e5-800c-2f6706ca8b45)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-sjfrn_openshift-operators_d6db188e-bd3c-49e5-800c-2f6706ca8b45_0(1116f94513288d8a0e722a019fec9f0785e103e9b627ebab0d7ae32e51ef0f66): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sjfrn" podUID="d6db188e-bd3c-49e5-800c-2f6706ca8b45" Jan 26 15:47:26 crc kubenswrapper[4896]: I0126 15:47:26.859634 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-mr4tb"] Jan 26 15:47:26 crc kubenswrapper[4896]: I0126 15:47:26.859675 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-b58h7"] Jan 26 15:47:26 crc kubenswrapper[4896]: I0126 15:47:26.859753 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-b58h7" Jan 26 15:47:26 crc kubenswrapper[4896]: I0126 15:47:26.860282 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-b58h7" Jan 26 15:47:26 crc kubenswrapper[4896]: I0126 15:47:26.860686 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-mr4tb" Jan 26 15:47:26 crc kubenswrapper[4896]: I0126 15:47:26.861000 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-mr4tb" Jan 26 15:47:26 crc kubenswrapper[4896]: E0126 15:47:26.889827 4896 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-b58h7_openshift-operators_39e44697-1997-402b-939f-641cb2f74176_0(c4c2a5c9c809882b32d7bb1019c5ae1742e9f86173634b06108be71c58a787bf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 15:47:26 crc kubenswrapper[4896]: E0126 15:47:26.889873 4896 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-b58h7_openshift-operators_39e44697-1997-402b-939f-641cb2f74176_0(c4c2a5c9c809882b32d7bb1019c5ae1742e9f86173634b06108be71c58a787bf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-b58h7" Jan 26 15:47:26 crc kubenswrapper[4896]: E0126 15:47:26.889890 4896 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-b58h7_openshift-operators_39e44697-1997-402b-939f-641cb2f74176_0(c4c2a5c9c809882b32d7bb1019c5ae1742e9f86173634b06108be71c58a787bf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-b58h7" Jan 26 15:47:26 crc kubenswrapper[4896]: E0126 15:47:26.889924 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-b58h7_openshift-operators(39e44697-1997-402b-939f-641cb2f74176)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-b58h7_openshift-operators(39e44697-1997-402b-939f-641cb2f74176)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-b58h7_openshift-operators_39e44697-1997-402b-939f-641cb2f74176_0(c4c2a5c9c809882b32d7bb1019c5ae1742e9f86173634b06108be71c58a787bf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-b58h7" podUID="39e44697-1997-402b-939f-641cb2f74176" Jan 26 15:47:26 crc kubenswrapper[4896]: E0126 15:47:26.905686 4896 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-c9d575b6d-mr4tb_openshift-operators_1d1c2692-33ed-45a8-9fed-a2c9eb1b5212_0(8306b4d6354c1d3ee86737e0248a1b2128bc019434cd87611913723455858e62): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 15:47:26 crc kubenswrapper[4896]: E0126 15:47:26.905755 4896 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-c9d575b6d-mr4tb_openshift-operators_1d1c2692-33ed-45a8-9fed-a2c9eb1b5212_0(8306b4d6354c1d3ee86737e0248a1b2128bc019434cd87611913723455858e62): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-mr4tb" Jan 26 15:47:26 crc kubenswrapper[4896]: E0126 15:47:26.905775 4896 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-c9d575b6d-mr4tb_openshift-operators_1d1c2692-33ed-45a8-9fed-a2c9eb1b5212_0(8306b4d6354c1d3ee86737e0248a1b2128bc019434cd87611913723455858e62): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-mr4tb" Jan 26 15:47:26 crc kubenswrapper[4896]: E0126 15:47:26.905828 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-c9d575b6d-mr4tb_openshift-operators(1d1c2692-33ed-45a8-9fed-a2c9eb1b5212)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-c9d575b6d-mr4tb_openshift-operators(1d1c2692-33ed-45a8-9fed-a2c9eb1b5212)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-c9d575b6d-mr4tb_openshift-operators_1d1c2692-33ed-45a8-9fed-a2c9eb1b5212_0(8306b4d6354c1d3ee86737e0248a1b2128bc019434cd87611913723455858e62): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-mr4tb" podUID="1d1c2692-33ed-45a8-9fed-a2c9eb1b5212" Jan 26 15:47:27 crc kubenswrapper[4896]: I0126 15:47:27.628606 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t5m2z" Jan 26 15:47:27 crc kubenswrapper[4896]: I0126 15:47:27.749541 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t5m2z" Jan 26 15:47:27 crc kubenswrapper[4896]: I0126 15:47:27.876347 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t5m2z"] Jan 26 15:47:28 crc kubenswrapper[4896]: I0126 15:47:28.898342 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t5m2z" podUID="fb173889-a078-4f05-b1d5-1a805ee8336e" containerName="registry-server" containerID="cri-o://a7abdc153305f022b1a311f9af2dd4c6639fb0e00b796af0fe0dfde16ce67f19" gracePeriod=2 Jan 26 15:47:29 crc kubenswrapper[4896]: I0126 15:47:29.687513 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t5m2z" Jan 26 15:47:29 crc kubenswrapper[4896]: I0126 15:47:29.817996 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb173889-a078-4f05-b1d5-1a805ee8336e-utilities\") pod \"fb173889-a078-4f05-b1d5-1a805ee8336e\" (UID: \"fb173889-a078-4f05-b1d5-1a805ee8336e\") " Jan 26 15:47:29 crc kubenswrapper[4896]: I0126 15:47:29.818072 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb173889-a078-4f05-b1d5-1a805ee8336e-catalog-content\") pod \"fb173889-a078-4f05-b1d5-1a805ee8336e\" (UID: \"fb173889-a078-4f05-b1d5-1a805ee8336e\") " Jan 26 15:47:29 crc kubenswrapper[4896]: I0126 15:47:29.818241 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxcwb\" (UniqueName: \"kubernetes.io/projected/fb173889-a078-4f05-b1d5-1a805ee8336e-kube-api-access-pxcwb\") pod \"fb173889-a078-4f05-b1d5-1a805ee8336e\" (UID: \"fb173889-a078-4f05-b1d5-1a805ee8336e\") " Jan 26 15:47:29 crc kubenswrapper[4896]: I0126 15:47:29.819202 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb173889-a078-4f05-b1d5-1a805ee8336e-utilities" (OuterVolumeSpecName: "utilities") pod "fb173889-a078-4f05-b1d5-1a805ee8336e" (UID: "fb173889-a078-4f05-b1d5-1a805ee8336e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:47:29 crc kubenswrapper[4896]: I0126 15:47:29.924804 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb173889-a078-4f05-b1d5-1a805ee8336e-kube-api-access-pxcwb" (OuterVolumeSpecName: "kube-api-access-pxcwb") pod "fb173889-a078-4f05-b1d5-1a805ee8336e" (UID: "fb173889-a078-4f05-b1d5-1a805ee8336e"). InnerVolumeSpecName "kube-api-access-pxcwb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:47:29 crc kubenswrapper[4896]: I0126 15:47:29.925109 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxcwb\" (UniqueName: \"kubernetes.io/projected/fb173889-a078-4f05-b1d5-1a805ee8336e-kube-api-access-pxcwb\") pod \"fb173889-a078-4f05-b1d5-1a805ee8336e\" (UID: \"fb173889-a078-4f05-b1d5-1a805ee8336e\") " Jan 26 15:47:29 crc kubenswrapper[4896]: W0126 15:47:29.925369 4896 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/fb173889-a078-4f05-b1d5-1a805ee8336e/volumes/kubernetes.io~projected/kube-api-access-pxcwb Jan 26 15:47:29 crc kubenswrapper[4896]: I0126 15:47:29.925389 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb173889-a078-4f05-b1d5-1a805ee8336e-kube-api-access-pxcwb" (OuterVolumeSpecName: "kube-api-access-pxcwb") pod "fb173889-a078-4f05-b1d5-1a805ee8336e" (UID: "fb173889-a078-4f05-b1d5-1a805ee8336e"). InnerVolumeSpecName "kube-api-access-pxcwb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:47:29 crc kubenswrapper[4896]: I0126 15:47:29.925773 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxcwb\" (UniqueName: \"kubernetes.io/projected/fb173889-a078-4f05-b1d5-1a805ee8336e-kube-api-access-pxcwb\") on node \"crc\" DevicePath \"\"" Jan 26 15:47:29 crc kubenswrapper[4896]: I0126 15:47:29.925822 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb173889-a078-4f05-b1d5-1a805ee8336e-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:47:29 crc kubenswrapper[4896]: I0126 15:47:29.982814 4896 generic.go:334] "Generic (PLEG): container finished" podID="fb173889-a078-4f05-b1d5-1a805ee8336e" containerID="a7abdc153305f022b1a311f9af2dd4c6639fb0e00b796af0fe0dfde16ce67f19" exitCode=0 Jan 26 15:47:29 crc kubenswrapper[4896]: I0126 15:47:29.982856 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5m2z" event={"ID":"fb173889-a078-4f05-b1d5-1a805ee8336e","Type":"ContainerDied","Data":"a7abdc153305f022b1a311f9af2dd4c6639fb0e00b796af0fe0dfde16ce67f19"} Jan 26 15:47:29 crc kubenswrapper[4896]: I0126 15:47:29.982883 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5m2z" event={"ID":"fb173889-a078-4f05-b1d5-1a805ee8336e","Type":"ContainerDied","Data":"5b5e8d96c9ab83ab541c05f239fd5eb0819372180f06e7fa0a62a6ea05901e3f"} Jan 26 15:47:29 crc kubenswrapper[4896]: I0126 15:47:29.982904 4896 scope.go:117] "RemoveContainer" containerID="a7abdc153305f022b1a311f9af2dd4c6639fb0e00b796af0fe0dfde16ce67f19" Jan 26 15:47:29 crc kubenswrapper[4896]: I0126 15:47:29.983055 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t5m2z" Jan 26 15:47:30 crc kubenswrapper[4896]: I0126 15:47:30.006407 4896 scope.go:117] "RemoveContainer" containerID="262ed9aabaa08f334825a5dcb853fbc54e6e50ced4420fef8c994761f8026cdd" Jan 26 15:47:30 crc kubenswrapper[4896]: I0126 15:47:30.052444 4896 scope.go:117] "RemoveContainer" containerID="2259bad46d23330fbad39805fb4417e0a17c442ae22a83bc75435ba69ed4e5b3" Jan 26 15:47:30 crc kubenswrapper[4896]: I0126 15:47:30.076366 4896 scope.go:117] "RemoveContainer" containerID="a7abdc153305f022b1a311f9af2dd4c6639fb0e00b796af0fe0dfde16ce67f19" Jan 26 15:47:30 crc kubenswrapper[4896]: I0126 15:47:30.079218 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb173889-a078-4f05-b1d5-1a805ee8336e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fb173889-a078-4f05-b1d5-1a805ee8336e" (UID: "fb173889-a078-4f05-b1d5-1a805ee8336e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:47:30 crc kubenswrapper[4896]: E0126 15:47:30.082686 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7abdc153305f022b1a311f9af2dd4c6639fb0e00b796af0fe0dfde16ce67f19\": container with ID starting with a7abdc153305f022b1a311f9af2dd4c6639fb0e00b796af0fe0dfde16ce67f19 not found: ID does not exist" containerID="a7abdc153305f022b1a311f9af2dd4c6639fb0e00b796af0fe0dfde16ce67f19" Jan 26 15:47:30 crc kubenswrapper[4896]: I0126 15:47:30.082723 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7abdc153305f022b1a311f9af2dd4c6639fb0e00b796af0fe0dfde16ce67f19"} err="failed to get container status \"a7abdc153305f022b1a311f9af2dd4c6639fb0e00b796af0fe0dfde16ce67f19\": rpc error: code = NotFound desc = could not find container \"a7abdc153305f022b1a311f9af2dd4c6639fb0e00b796af0fe0dfde16ce67f19\": container with ID starting with a7abdc153305f022b1a311f9af2dd4c6639fb0e00b796af0fe0dfde16ce67f19 not found: ID does not exist" Jan 26 15:47:30 crc kubenswrapper[4896]: I0126 15:47:30.082749 4896 scope.go:117] "RemoveContainer" containerID="262ed9aabaa08f334825a5dcb853fbc54e6e50ced4420fef8c994761f8026cdd" Jan 26 15:47:30 crc kubenswrapper[4896]: E0126 15:47:30.082961 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"262ed9aabaa08f334825a5dcb853fbc54e6e50ced4420fef8c994761f8026cdd\": container with ID starting with 262ed9aabaa08f334825a5dcb853fbc54e6e50ced4420fef8c994761f8026cdd not found: ID does not exist" containerID="262ed9aabaa08f334825a5dcb853fbc54e6e50ced4420fef8c994761f8026cdd" Jan 26 15:47:30 crc kubenswrapper[4896]: I0126 15:47:30.082981 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"262ed9aabaa08f334825a5dcb853fbc54e6e50ced4420fef8c994761f8026cdd"} 
err="failed to get container status \"262ed9aabaa08f334825a5dcb853fbc54e6e50ced4420fef8c994761f8026cdd\": rpc error: code = NotFound desc = could not find container \"262ed9aabaa08f334825a5dcb853fbc54e6e50ced4420fef8c994761f8026cdd\": container with ID starting with 262ed9aabaa08f334825a5dcb853fbc54e6e50ced4420fef8c994761f8026cdd not found: ID does not exist" Jan 26 15:47:30 crc kubenswrapper[4896]: I0126 15:47:30.083005 4896 scope.go:117] "RemoveContainer" containerID="2259bad46d23330fbad39805fb4417e0a17c442ae22a83bc75435ba69ed4e5b3" Jan 26 15:47:30 crc kubenswrapper[4896]: E0126 15:47:30.083208 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2259bad46d23330fbad39805fb4417e0a17c442ae22a83bc75435ba69ed4e5b3\": container with ID starting with 2259bad46d23330fbad39805fb4417e0a17c442ae22a83bc75435ba69ed4e5b3 not found: ID does not exist" containerID="2259bad46d23330fbad39805fb4417e0a17c442ae22a83bc75435ba69ed4e5b3" Jan 26 15:47:30 crc kubenswrapper[4896]: I0126 15:47:30.083230 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2259bad46d23330fbad39805fb4417e0a17c442ae22a83bc75435ba69ed4e5b3"} err="failed to get container status \"2259bad46d23330fbad39805fb4417e0a17c442ae22a83bc75435ba69ed4e5b3\": rpc error: code = NotFound desc = could not find container \"2259bad46d23330fbad39805fb4417e0a17c442ae22a83bc75435ba69ed4e5b3\": container with ID starting with 2259bad46d23330fbad39805fb4417e0a17c442ae22a83bc75435ba69ed4e5b3 not found: ID does not exist" Jan 26 15:47:30 crc kubenswrapper[4896]: I0126 15:47:30.128710 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb173889-a078-4f05-b1d5-1a805ee8336e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:47:30 crc kubenswrapper[4896]: I0126 15:47:30.484203 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-t5m2z"] Jan 26 15:47:30 crc kubenswrapper[4896]: I0126 15:47:30.489480 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t5m2z"] Jan 26 15:47:30 crc kubenswrapper[4896]: I0126 15:47:30.766476 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb173889-a078-4f05-b1d5-1a805ee8336e" path="/var/lib/kubelet/pods/fb173889-a078-4f05-b1d5-1a805ee8336e/volumes" Jan 26 15:47:38 crc kubenswrapper[4896]: I0126 15:47:38.758910 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-b58h7" Jan 26 15:47:38 crc kubenswrapper[4896]: I0126 15:47:38.761097 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-b58h7" Jan 26 15:47:38 crc kubenswrapper[4896]: I0126 15:47:38.760774 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sjfrn" Jan 26 15:47:38 crc kubenswrapper[4896]: I0126 15:47:38.761869 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sjfrn" Jan 26 15:47:38 crc kubenswrapper[4896]: I0126 15:47:38.762264 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-hlpnm" Jan 26 15:47:38 crc kubenswrapper[4896]: I0126 15:47:38.763399 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-hlpnm" Jan 26 15:47:38 crc kubenswrapper[4896]: I0126 15:47:38.984738 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-b58h7"] Jan 26 15:47:38 crc kubenswrapper[4896]: W0126 15:47:38.989217 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39e44697_1997_402b_939f_641cb2f74176.slice/crio-56e1a81eb28a82ef895c8c9f82293286a5aab250d86341b85fe0e450cfb31a71 WatchSource:0}: Error finding container 56e1a81eb28a82ef895c8c9f82293286a5aab250d86341b85fe0e450cfb31a71: Status 404 returned error can't find the container with id 56e1a81eb28a82ef895c8c9f82293286a5aab250d86341b85fe0e450cfb31a71 Jan 26 15:47:39 crc kubenswrapper[4896]: I0126 15:47:39.035055 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-b58h7" event={"ID":"39e44697-1997-402b-939f-641cb2f74176","Type":"ContainerStarted","Data":"56e1a81eb28a82ef895c8c9f82293286a5aab250d86341b85fe0e450cfb31a71"} Jan 26 15:47:39 crc kubenswrapper[4896]: I0126 15:47:39.254234 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-hlpnm"] Jan 26 15:47:39 crc kubenswrapper[4896]: I0126 15:47:39.266247 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-sjfrn"] Jan 26 15:47:39 crc kubenswrapper[4896]: W0126 15:47:39.273492 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6db188e_bd3c_49e5_800c_2f6706ca8b45.slice/crio-d9ad5d1db9a185dda21b5d78f4179affad9b39c6e0725d557b2161a7d6fc9172 WatchSource:0}: Error finding container d9ad5d1db9a185dda21b5d78f4179affad9b39c6e0725d557b2161a7d6fc9172: Status 404 returned error 
can't find the container with id d9ad5d1db9a185dda21b5d78f4179affad9b39c6e0725d557b2161a7d6fc9172 Jan 26 15:47:39 crc kubenswrapper[4896]: W0126 15:47:39.296904 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod482717fd_6a21_44de_a4d1_e08d5324552b.slice/crio-a948c0d07f9b10cdc9476101e94d9231ec07a2f13123199506fa8c45517c6c3e WatchSource:0}: Error finding container a948c0d07f9b10cdc9476101e94d9231ec07a2f13123199506fa8c45517c6c3e: Status 404 returned error can't find the container with id a948c0d07f9b10cdc9476101e94d9231ec07a2f13123199506fa8c45517c6c3e Jan 26 15:47:40 crc kubenswrapper[4896]: I0126 15:47:40.043887 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-hlpnm" event={"ID":"482717fd-6a21-44de-a4d1-e08d5324552b","Type":"ContainerStarted","Data":"a948c0d07f9b10cdc9476101e94d9231ec07a2f13123199506fa8c45517c6c3e"} Jan 26 15:47:40 crc kubenswrapper[4896]: I0126 15:47:40.045492 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sjfrn" event={"ID":"d6db188e-bd3c-49e5-800c-2f6706ca8b45","Type":"ContainerStarted","Data":"d9ad5d1db9a185dda21b5d78f4179affad9b39c6e0725d557b2161a7d6fc9172"} Jan 26 15:47:40 crc kubenswrapper[4896]: I0126 15:47:40.758489 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-s6xcx" Jan 26 15:47:40 crc kubenswrapper[4896]: I0126 15:47:40.759058 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-s6xcx" Jan 26 15:47:41 crc kubenswrapper[4896]: I0126 15:47:41.250369 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-s6xcx"] Jan 26 15:47:41 crc kubenswrapper[4896]: I0126 15:47:41.759071 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-mr4tb" Jan 26 15:47:41 crc kubenswrapper[4896]: I0126 15:47:41.759908 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-mr4tb" Jan 26 15:47:45 crc kubenswrapper[4896]: W0126 15:47:45.086663 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d40f6e6_fe99_4a02_b499_83c0b6a61706.slice/crio-78a535ea036a21460996d51ef6c51c5e6c6b47563baa085c0799c23eb3d7bbfb WatchSource:0}: Error finding container 78a535ea036a21460996d51ef6c51c5e6c6b47563baa085c0799c23eb3d7bbfb: Status 404 returned error can't find the container with id 78a535ea036a21460996d51ef6c51c5e6c6b47563baa085c0799c23eb3d7bbfb Jan 26 15:47:46 crc kubenswrapper[4896]: I0126 15:47:46.093523 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-s6xcx" event={"ID":"4d40f6e6-fe99-4a02-b499-83c0b6a61706","Type":"ContainerStarted","Data":"78a535ea036a21460996d51ef6c51c5e6c6b47563baa085c0799c23eb3d7bbfb"} Jan 26 15:47:47 crc kubenswrapper[4896]: I0126 15:47:47.714705 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-mr4tb"] Jan 26 15:47:47 crc kubenswrapper[4896]: W0126 15:47:47.735017 4896 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d1c2692_33ed_45a8_9fed_a2c9eb1b5212.slice/crio-921b0d4ade99f1d2cb4d427652683c388755cf0944f3efa7eb797055b6a96244 WatchSource:0}: Error finding container 921b0d4ade99f1d2cb4d427652683c388755cf0944f3efa7eb797055b6a96244: Status 404 returned error can't find the container with id 921b0d4ade99f1d2cb4d427652683c388755cf0944f3efa7eb797055b6a96244 Jan 26 15:47:47 crc kubenswrapper[4896]: I0126 15:47:47.940285 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5pvz8" Jan 26 15:47:48 crc kubenswrapper[4896]: I0126 15:47:48.334363 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-hlpnm" event={"ID":"482717fd-6a21-44de-a4d1-e08d5324552b","Type":"ContainerStarted","Data":"dcf0b9ba0f9f4f20bac03a0be4c76ba57abc92a32e7fb519756c726ad514c282"} Jan 26 15:47:48 crc kubenswrapper[4896]: I0126 15:47:48.336391 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sjfrn" event={"ID":"d6db188e-bd3c-49e5-800c-2f6706ca8b45","Type":"ContainerStarted","Data":"c4db3bd25b9e30b6c49361ea1ca4405b2f1f54b1a08f99a1ac9783a59ce7a08e"} Jan 26 15:47:48 crc kubenswrapper[4896]: I0126 15:47:48.338816 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-mr4tb" event={"ID":"1d1c2692-33ed-45a8-9fed-a2c9eb1b5212","Type":"ContainerStarted","Data":"c069d0e3000f90edf2e86d655d3ae9a88111e03130a6571c04e3aeadb1da7d06"} Jan 26 15:47:48 crc kubenswrapper[4896]: I0126 15:47:48.339040 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-mr4tb" event={"ID":"1d1c2692-33ed-45a8-9fed-a2c9eb1b5212","Type":"ContainerStarted","Data":"921b0d4ade99f1d2cb4d427652683c388755cf0944f3efa7eb797055b6a96244"} 
Jan 26 15:47:48 crc kubenswrapper[4896]: I0126 15:47:48.340386 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-b58h7" event={"ID":"39e44697-1997-402b-939f-641cb2f74176","Type":"ContainerStarted","Data":"814c6b78b38190e603d344901004fec9f1e69a100f29dc4c069a7cc9bf067af5"} Jan 26 15:47:48 crc kubenswrapper[4896]: I0126 15:47:48.340659 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-b58h7" Jan 26 15:47:48 crc kubenswrapper[4896]: I0126 15:47:48.360761 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-hlpnm" podStartSLOduration=17.453240388 podStartE2EDuration="25.360721133s" podCreationTimestamp="2026-01-26 15:47:23 +0000 UTC" firstStartedPulling="2026-01-26 15:47:39.300771683 +0000 UTC m=+817.082652076" lastFinishedPulling="2026-01-26 15:47:47.208252428 +0000 UTC m=+824.990132821" observedRunningTime="2026-01-26 15:47:48.352102451 +0000 UTC m=+826.133982844" watchObservedRunningTime="2026-01-26 15:47:48.360721133 +0000 UTC m=+826.142601526" Jan 26 15:47:48 crc kubenswrapper[4896]: I0126 15:47:48.413135 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c9d575b6d-mr4tb" podStartSLOduration=25.411944542 podStartE2EDuration="25.411944542s" podCreationTimestamp="2026-01-26 15:47:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:47:48.37968514 +0000 UTC m=+826.161565533" watchObservedRunningTime="2026-01-26 15:47:48.411944542 +0000 UTC m=+826.193824935" Jan 26 15:47:48 crc kubenswrapper[4896]: I0126 15:47:48.416357 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sjfrn" podStartSLOduration=17.498673004 podStartE2EDuration="25.41634438s" podCreationTimestamp="2026-01-26 15:47:23 +0000 UTC" firstStartedPulling="2026-01-26 15:47:39.27623604 +0000 UTC m=+817.058116433" lastFinishedPulling="2026-01-26 15:47:47.193907416 +0000 UTC m=+824.975787809" observedRunningTime="2026-01-26 15:47:48.411166763 +0000 UTC m=+826.193047156" watchObservedRunningTime="2026-01-26 15:47:48.41634438 +0000 UTC m=+826.198224773" Jan 26 15:47:48 crc kubenswrapper[4896]: I0126 15:47:48.440136 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-b58h7" podStartSLOduration=17.243648927 podStartE2EDuration="25.440117375s" podCreationTimestamp="2026-01-26 15:47:23 +0000 UTC" firstStartedPulling="2026-01-26 15:47:38.995923521 +0000 UTC m=+816.777803914" lastFinishedPulling="2026-01-26 15:47:47.192391969 +0000 UTC m=+824.974272362" observedRunningTime="2026-01-26 15:47:48.440019652 +0000 UTC m=+826.221900055" watchObservedRunningTime="2026-01-26 15:47:48.440117375 +0000 UTC m=+826.221997768" Jan 26 15:47:48 crc kubenswrapper[4896]: I0126 15:47:48.772611 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-b58h7" Jan 26 15:47:48 crc kubenswrapper[4896]: I0126 15:47:48.814322 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:47:48 crc kubenswrapper[4896]: I0126 15:47:48.814390 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:47:50 crc kubenswrapper[4896]: I0126 15:47:50.358380 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-s6xcx" event={"ID":"4d40f6e6-fe99-4a02-b499-83c0b6a61706","Type":"ContainerStarted","Data":"6c9249bf8ad86b52d3ff7e1dd71dcfdf4b85e8c36870d391b7405e17147df9f1"} Jan 26 15:47:50 crc kubenswrapper[4896]: I0126 15:47:50.359808 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-s6xcx" Jan 26 15:47:50 crc kubenswrapper[4896]: I0126 15:47:50.386386 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-s6xcx" podStartSLOduration=23.278893629 podStartE2EDuration="27.386352788s" podCreationTimestamp="2026-01-26 15:47:23 +0000 UTC" firstStartedPulling="2026-01-26 15:47:45.093413782 +0000 UTC m=+822.875294175" lastFinishedPulling="2026-01-26 15:47:49.200872941 +0000 UTC m=+826.982753334" observedRunningTime="2026-01-26 15:47:50.380167336 +0000 UTC m=+828.162047749" watchObservedRunningTime="2026-01-26 15:47:50.386352788 +0000 UTC m=+828.168233181" Jan 26 15:47:59 crc kubenswrapper[4896]: I0126 15:47:59.975339 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-5rx5s"] Jan 26 15:47:59 crc kubenswrapper[4896]: E0126 15:47:59.976213 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb173889-a078-4f05-b1d5-1a805ee8336e" containerName="extract-content" Jan 26 15:47:59 crc kubenswrapper[4896]: I0126 15:47:59.976236 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb173889-a078-4f05-b1d5-1a805ee8336e" containerName="extract-content" Jan 26 15:47:59 crc kubenswrapper[4896]: E0126 15:47:59.976255 4896 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="fb173889-a078-4f05-b1d5-1a805ee8336e" containerName="extract-utilities" Jan 26 15:47:59 crc kubenswrapper[4896]: I0126 15:47:59.976267 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb173889-a078-4f05-b1d5-1a805ee8336e" containerName="extract-utilities" Jan 26 15:47:59 crc kubenswrapper[4896]: E0126 15:47:59.976277 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb173889-a078-4f05-b1d5-1a805ee8336e" containerName="registry-server" Jan 26 15:47:59 crc kubenswrapper[4896]: I0126 15:47:59.976285 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb173889-a078-4f05-b1d5-1a805ee8336e" containerName="registry-server" Jan 26 15:47:59 crc kubenswrapper[4896]: I0126 15:47:59.976408 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb173889-a078-4f05-b1d5-1a805ee8336e" containerName="registry-server" Jan 26 15:47:59 crc kubenswrapper[4896]: I0126 15:47:59.976951 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-5rx5s" Jan 26 15:47:59 crc kubenswrapper[4896]: I0126 15:47:59.983405 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-8drkb"] Jan 26 15:47:59 crc kubenswrapper[4896]: I0126 15:47:59.984137 4896 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-psd85" Jan 26 15:47:59 crc kubenswrapper[4896]: I0126 15:47:59.984296 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 26 15:47:59 crc kubenswrapper[4896]: I0126 15:47:59.984480 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 26 15:47:59 crc kubenswrapper[4896]: I0126 15:47:59.987926 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-8drkb" Jan 26 15:47:59 crc kubenswrapper[4896]: I0126 15:47:59.992057 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-5rx5s"] Jan 26 15:47:59 crc kubenswrapper[4896]: I0126 15:47:59.994110 4896 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-td9gh" Jan 26 15:48:00 crc kubenswrapper[4896]: I0126 15:47:59.999966 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-8drkb"] Jan 26 15:48:00 crc kubenswrapper[4896]: I0126 15:48:00.017674 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-k7ctr"] Jan 26 15:48:00 crc kubenswrapper[4896]: I0126 15:48:00.018649 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-k7ctr" Jan 26 15:48:00 crc kubenswrapper[4896]: I0126 15:48:00.026609 4896 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-9b24x" Jan 26 15:48:00 crc kubenswrapper[4896]: I0126 15:48:00.041643 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-k7ctr"] Jan 26 15:48:00 crc kubenswrapper[4896]: I0126 15:48:00.091537 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zxdh\" (UniqueName: \"kubernetes.io/projected/6b19c675-ac2e-4855-8368-79f9812f6a86-kube-api-access-6zxdh\") pod \"cert-manager-webhook-687f57d79b-k7ctr\" (UID: \"6b19c675-ac2e-4855-8368-79f9812f6a86\") " pod="cert-manager/cert-manager-webhook-687f57d79b-k7ctr" Jan 26 15:48:00 crc kubenswrapper[4896]: I0126 15:48:00.091677 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvrp5\" (UniqueName: 
\"kubernetes.io/projected/d971d7ab-8017-45d5-9802-17b6b699464e-kube-api-access-kvrp5\") pod \"cert-manager-858654f9db-8drkb\" (UID: \"d971d7ab-8017-45d5-9802-17b6b699464e\") " pod="cert-manager/cert-manager-858654f9db-8drkb" Jan 26 15:48:00 crc kubenswrapper[4896]: I0126 15:48:00.091744 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs2mx\" (UniqueName: \"kubernetes.io/projected/8e6d49e2-282c-476d-8ce8-8bff3b7fbc6c-kube-api-access-bs2mx\") pod \"cert-manager-cainjector-cf98fcc89-5rx5s\" (UID: \"8e6d49e2-282c-476d-8ce8-8bff3b7fbc6c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-5rx5s" Jan 26 15:48:00 crc kubenswrapper[4896]: I0126 15:48:00.195947 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs2mx\" (UniqueName: \"kubernetes.io/projected/8e6d49e2-282c-476d-8ce8-8bff3b7fbc6c-kube-api-access-bs2mx\") pod \"cert-manager-cainjector-cf98fcc89-5rx5s\" (UID: \"8e6d49e2-282c-476d-8ce8-8bff3b7fbc6c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-5rx5s" Jan 26 15:48:00 crc kubenswrapper[4896]: I0126 15:48:00.196327 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zxdh\" (UniqueName: \"kubernetes.io/projected/6b19c675-ac2e-4855-8368-79f9812f6a86-kube-api-access-6zxdh\") pod \"cert-manager-webhook-687f57d79b-k7ctr\" (UID: \"6b19c675-ac2e-4855-8368-79f9812f6a86\") " pod="cert-manager/cert-manager-webhook-687f57d79b-k7ctr" Jan 26 15:48:00 crc kubenswrapper[4896]: I0126 15:48:00.196530 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvrp5\" (UniqueName: \"kubernetes.io/projected/d971d7ab-8017-45d5-9802-17b6b699464e-kube-api-access-kvrp5\") pod \"cert-manager-858654f9db-8drkb\" (UID: \"d971d7ab-8017-45d5-9802-17b6b699464e\") " pod="cert-manager/cert-manager-858654f9db-8drkb" Jan 26 15:48:00 crc kubenswrapper[4896]: I0126 15:48:00.226766 4896 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zxdh\" (UniqueName: \"kubernetes.io/projected/6b19c675-ac2e-4855-8368-79f9812f6a86-kube-api-access-6zxdh\") pod \"cert-manager-webhook-687f57d79b-k7ctr\" (UID: \"6b19c675-ac2e-4855-8368-79f9812f6a86\") " pod="cert-manager/cert-manager-webhook-687f57d79b-k7ctr" Jan 26 15:48:00 crc kubenswrapper[4896]: I0126 15:48:00.229545 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvrp5\" (UniqueName: \"kubernetes.io/projected/d971d7ab-8017-45d5-9802-17b6b699464e-kube-api-access-kvrp5\") pod \"cert-manager-858654f9db-8drkb\" (UID: \"d971d7ab-8017-45d5-9802-17b6b699464e\") " pod="cert-manager/cert-manager-858654f9db-8drkb" Jan 26 15:48:00 crc kubenswrapper[4896]: I0126 15:48:00.233421 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs2mx\" (UniqueName: \"kubernetes.io/projected/8e6d49e2-282c-476d-8ce8-8bff3b7fbc6c-kube-api-access-bs2mx\") pod \"cert-manager-cainjector-cf98fcc89-5rx5s\" (UID: \"8e6d49e2-282c-476d-8ce8-8bff3b7fbc6c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-5rx5s" Jan 26 15:48:00 crc kubenswrapper[4896]: I0126 15:48:00.306164 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-5rx5s" Jan 26 15:48:00 crc kubenswrapper[4896]: I0126 15:48:00.316907 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-8drkb" Jan 26 15:48:00 crc kubenswrapper[4896]: I0126 15:48:00.343692 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-k7ctr" Jan 26 15:48:00 crc kubenswrapper[4896]: I0126 15:48:00.935677 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-k7ctr"] Jan 26 15:48:00 crc kubenswrapper[4896]: W0126 15:48:00.936630 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b19c675_ac2e_4855_8368_79f9812f6a86.slice/crio-4158d2598c0cccccb67b5924f13b79993cd8bc77b065b121bd94d4e04eafb1df WatchSource:0}: Error finding container 4158d2598c0cccccb67b5924f13b79993cd8bc77b065b121bd94d4e04eafb1df: Status 404 returned error can't find the container with id 4158d2598c0cccccb67b5924f13b79993cd8bc77b065b121bd94d4e04eafb1df Jan 26 15:48:01 crc kubenswrapper[4896]: I0126 15:48:01.100656 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-5rx5s"] Jan 26 15:48:01 crc kubenswrapper[4896]: I0126 15:48:01.102863 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-8drkb"] Jan 26 15:48:01 crc kubenswrapper[4896]: W0126 15:48:01.103446 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e6d49e2_282c_476d_8ce8_8bff3b7fbc6c.slice/crio-a4e0d46ea5bdb1b13e9cb002c60f88920658c6f33fb402828634bdb22d364fd6 WatchSource:0}: Error finding container a4e0d46ea5bdb1b13e9cb002c60f88920658c6f33fb402828634bdb22d364fd6: Status 404 returned error can't find the container with id a4e0d46ea5bdb1b13e9cb002c60f88920658c6f33fb402828634bdb22d364fd6 Jan 26 15:48:01 crc kubenswrapper[4896]: W0126 15:48:01.104213 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd971d7ab_8017_45d5_9802_17b6b699464e.slice/crio-f4865937a426c33cc766b4e5be017a3f5110953d7850fa2f08a2f7f0499da5f2 
WatchSource:0}: Error finding container f4865937a426c33cc766b4e5be017a3f5110953d7850fa2f08a2f7f0499da5f2: Status 404 returned error can't find the container with id f4865937a426c33cc766b4e5be017a3f5110953d7850fa2f08a2f7f0499da5f2 Jan 26 15:48:01 crc kubenswrapper[4896]: I0126 15:48:01.425884 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-8drkb" event={"ID":"d971d7ab-8017-45d5-9802-17b6b699464e","Type":"ContainerStarted","Data":"f4865937a426c33cc766b4e5be017a3f5110953d7850fa2f08a2f7f0499da5f2"} Jan 26 15:48:01 crc kubenswrapper[4896]: I0126 15:48:01.426933 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-5rx5s" event={"ID":"8e6d49e2-282c-476d-8ce8-8bff3b7fbc6c","Type":"ContainerStarted","Data":"a4e0d46ea5bdb1b13e9cb002c60f88920658c6f33fb402828634bdb22d364fd6"} Jan 26 15:48:01 crc kubenswrapper[4896]: I0126 15:48:01.428075 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-k7ctr" event={"ID":"6b19c675-ac2e-4855-8368-79f9812f6a86","Type":"ContainerStarted","Data":"4158d2598c0cccccb67b5924f13b79993cd8bc77b065b121bd94d4e04eafb1df"} Jan 26 15:48:03 crc kubenswrapper[4896]: I0126 15:48:03.947087 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-s6xcx" Jan 26 15:48:13 crc kubenswrapper[4896]: I0126 15:48:13.571543 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-k7ctr" event={"ID":"6b19c675-ac2e-4855-8368-79f9812f6a86","Type":"ContainerStarted","Data":"33020a028f49a3b25522f5d6c80bb0ea159897bde06cb6960211b4caa5cf79aa"} Jan 26 15:48:13 crc kubenswrapper[4896]: I0126 15:48:13.573305 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-k7ctr" Jan 26 15:48:13 crc kubenswrapper[4896]: I0126 15:48:13.575170 4896 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-8drkb" event={"ID":"d971d7ab-8017-45d5-9802-17b6b699464e","Type":"ContainerStarted","Data":"baff8114a1d720ac992e14192954d8addb238a44f9d2277769220b11742c3195"} Jan 26 15:48:13 crc kubenswrapper[4896]: I0126 15:48:13.577623 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-5rx5s" event={"ID":"8e6d49e2-282c-476d-8ce8-8bff3b7fbc6c","Type":"ContainerStarted","Data":"f69cda1b09dbf8a2969df6b181e8ad86c2a95b3e40305b75a629666ece690a68"} Jan 26 15:48:13 crc kubenswrapper[4896]: I0126 15:48:13.589626 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-k7ctr" podStartSLOduration=2.178148871 podStartE2EDuration="14.589608155s" podCreationTimestamp="2026-01-26 15:47:59 +0000 UTC" firstStartedPulling="2026-01-26 15:48:00.938547872 +0000 UTC m=+838.720428265" lastFinishedPulling="2026-01-26 15:48:13.350007156 +0000 UTC m=+851.131887549" observedRunningTime="2026-01-26 15:48:13.588534078 +0000 UTC m=+851.370414481" watchObservedRunningTime="2026-01-26 15:48:13.589608155 +0000 UTC m=+851.371488548" Jan 26 15:48:13 crc kubenswrapper[4896]: I0126 15:48:13.609687 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-8drkb" podStartSLOduration=2.361046484 podStartE2EDuration="14.609662026s" podCreationTimestamp="2026-01-26 15:47:59 +0000 UTC" firstStartedPulling="2026-01-26 15:48:01.109835701 +0000 UTC m=+838.891716094" lastFinishedPulling="2026-01-26 15:48:13.358451243 +0000 UTC m=+851.140331636" observedRunningTime="2026-01-26 15:48:13.60370937 +0000 UTC m=+851.385589773" watchObservedRunningTime="2026-01-26 15:48:13.609662026 +0000 UTC m=+851.391542429" Jan 26 15:48:13 crc kubenswrapper[4896]: I0126 15:48:13.624638 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-5rx5s" podStartSLOduration=2.376176794 podStartE2EDuration="14.624616602s" podCreationTimestamp="2026-01-26 15:47:59 +0000 UTC" firstStartedPulling="2026-01-26 15:48:01.105659118 +0000 UTC m=+838.887539512" lastFinishedPulling="2026-01-26 15:48:13.354098927 +0000 UTC m=+851.135979320" observedRunningTime="2026-01-26 15:48:13.622655714 +0000 UTC m=+851.404536107" watchObservedRunningTime="2026-01-26 15:48:13.624616602 +0000 UTC m=+851.406497005" Jan 26 15:48:18 crc kubenswrapper[4896]: I0126 15:48:18.814161 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:48:18 crc kubenswrapper[4896]: I0126 15:48:18.816515 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:48:18 crc kubenswrapper[4896]: I0126 15:48:18.816699 4896 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" Jan 26 15:48:18 crc kubenswrapper[4896]: I0126 15:48:18.817598 4896 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d035dac015ae616fca26b6fccf99abfd2065d00fccd1bbdf0c5140ab65f83775"} pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 15:48:18 crc kubenswrapper[4896]: I0126 15:48:18.817737 4896 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" containerID="cri-o://d035dac015ae616fca26b6fccf99abfd2065d00fccd1bbdf0c5140ab65f83775" gracePeriod=600 Jan 26 15:48:19 crc kubenswrapper[4896]: I0126 15:48:19.637438 4896 generic.go:334] "Generic (PLEG): container finished" podID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerID="d035dac015ae616fca26b6fccf99abfd2065d00fccd1bbdf0c5140ab65f83775" exitCode=0 Jan 26 15:48:19 crc kubenswrapper[4896]: I0126 15:48:19.637527 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerDied","Data":"d035dac015ae616fca26b6fccf99abfd2065d00fccd1bbdf0c5140ab65f83775"} Jan 26 15:48:19 crc kubenswrapper[4896]: I0126 15:48:19.638033 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerStarted","Data":"bc80bae02fc3586e032a25fdf0a87292a0c0b3c2653785eb94241ee6654c386f"} Jan 26 15:48:19 crc kubenswrapper[4896]: I0126 15:48:19.638068 4896 scope.go:117] "RemoveContainer" containerID="496057d39cfd4c97dbde27dcc7921f95da5628ae998305077952ca62cba7a8c1" Jan 26 15:48:20 crc kubenswrapper[4896]: I0126 15:48:20.347240 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-k7ctr" Jan 26 15:48:49 crc kubenswrapper[4896]: I0126 15:48:49.579508 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bqwv6q"] Jan 26 15:48:49 crc kubenswrapper[4896]: I0126 15:48:49.581294 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bqwv6q" Jan 26 15:48:49 crc kubenswrapper[4896]: I0126 15:48:49.584438 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 15:48:49 crc kubenswrapper[4896]: I0126 15:48:49.589703 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bqwv6q"] Jan 26 15:48:49 crc kubenswrapper[4896]: I0126 15:48:49.686628 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kwq2\" (UniqueName: \"kubernetes.io/projected/c5c39f79-1a4d-45f3-96b2-e409562cdf14-kube-api-access-8kwq2\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bqwv6q\" (UID: \"c5c39f79-1a4d-45f3-96b2-e409562cdf14\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bqwv6q" Jan 26 15:48:49 crc kubenswrapper[4896]: I0126 15:48:49.686698 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5c39f79-1a4d-45f3-96b2-e409562cdf14-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bqwv6q\" (UID: \"c5c39f79-1a4d-45f3-96b2-e409562cdf14\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bqwv6q" Jan 26 15:48:49 crc kubenswrapper[4896]: I0126 15:48:49.686721 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5c39f79-1a4d-45f3-96b2-e409562cdf14-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bqwv6q\" (UID: \"c5c39f79-1a4d-45f3-96b2-e409562cdf14\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bqwv6q" Jan 26 15:48:49 crc kubenswrapper[4896]: 
I0126 15:48:49.784746 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhwf7"] Jan 26 15:48:49 crc kubenswrapper[4896]: I0126 15:48:49.788935 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhwf7" Jan 26 15:48:49 crc kubenswrapper[4896]: I0126 15:48:49.789066 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kwq2\" (UniqueName: \"kubernetes.io/projected/c5c39f79-1a4d-45f3-96b2-e409562cdf14-kube-api-access-8kwq2\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bqwv6q\" (UID: \"c5c39f79-1a4d-45f3-96b2-e409562cdf14\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bqwv6q" Jan 26 15:48:49 crc kubenswrapper[4896]: I0126 15:48:49.789623 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5c39f79-1a4d-45f3-96b2-e409562cdf14-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bqwv6q\" (UID: \"c5c39f79-1a4d-45f3-96b2-e409562cdf14\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bqwv6q" Jan 26 15:48:49 crc kubenswrapper[4896]: I0126 15:48:49.789693 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5c39f79-1a4d-45f3-96b2-e409562cdf14-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bqwv6q\" (UID: \"c5c39f79-1a4d-45f3-96b2-e409562cdf14\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bqwv6q" Jan 26 15:48:49 crc kubenswrapper[4896]: I0126 15:48:49.789735 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/c5c39f79-1a4d-45f3-96b2-e409562cdf14-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bqwv6q\" (UID: \"c5c39f79-1a4d-45f3-96b2-e409562cdf14\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bqwv6q" Jan 26 15:48:49 crc kubenswrapper[4896]: I0126 15:48:49.790063 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5c39f79-1a4d-45f3-96b2-e409562cdf14-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bqwv6q\" (UID: \"c5c39f79-1a4d-45f3-96b2-e409562cdf14\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bqwv6q" Jan 26 15:48:49 crc kubenswrapper[4896]: I0126 15:48:49.809017 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhwf7"] Jan 26 15:48:49 crc kubenswrapper[4896]: I0126 15:48:49.824214 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kwq2\" (UniqueName: \"kubernetes.io/projected/c5c39f79-1a4d-45f3-96b2-e409562cdf14-kube-api-access-8kwq2\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bqwv6q\" (UID: \"c5c39f79-1a4d-45f3-96b2-e409562cdf14\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bqwv6q" Jan 26 15:48:49 crc kubenswrapper[4896]: I0126 15:48:49.890795 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/78b1f724-4f81-4571-8f81-9170eb54e5d1-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhwf7\" (UID: \"78b1f724-4f81-4571-8f81-9170eb54e5d1\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhwf7" Jan 26 15:48:49 crc kubenswrapper[4896]: I0126 15:48:49.890934 4896 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-799hr\" (UniqueName: \"kubernetes.io/projected/78b1f724-4f81-4571-8f81-9170eb54e5d1-kube-api-access-799hr\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhwf7\" (UID: \"78b1f724-4f81-4571-8f81-9170eb54e5d1\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhwf7" Jan 26 15:48:49 crc kubenswrapper[4896]: I0126 15:48:49.891079 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/78b1f724-4f81-4571-8f81-9170eb54e5d1-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhwf7\" (UID: \"78b1f724-4f81-4571-8f81-9170eb54e5d1\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhwf7" Jan 26 15:48:49 crc kubenswrapper[4896]: I0126 15:48:49.909829 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bqwv6q" Jan 26 15:48:49 crc kubenswrapper[4896]: I0126 15:48:49.993065 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-799hr\" (UniqueName: \"kubernetes.io/projected/78b1f724-4f81-4571-8f81-9170eb54e5d1-kube-api-access-799hr\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhwf7\" (UID: \"78b1f724-4f81-4571-8f81-9170eb54e5d1\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhwf7" Jan 26 15:48:49 crc kubenswrapper[4896]: I0126 15:48:49.993153 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/78b1f724-4f81-4571-8f81-9170eb54e5d1-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhwf7\" (UID: \"78b1f724-4f81-4571-8f81-9170eb54e5d1\") " 
pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhwf7" Jan 26 15:48:49 crc kubenswrapper[4896]: I0126 15:48:49.993194 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/78b1f724-4f81-4571-8f81-9170eb54e5d1-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhwf7\" (UID: \"78b1f724-4f81-4571-8f81-9170eb54e5d1\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhwf7" Jan 26 15:48:49 crc kubenswrapper[4896]: I0126 15:48:49.993623 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/78b1f724-4f81-4571-8f81-9170eb54e5d1-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhwf7\" (UID: \"78b1f724-4f81-4571-8f81-9170eb54e5d1\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhwf7" Jan 26 15:48:49 crc kubenswrapper[4896]: I0126 15:48:49.993871 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/78b1f724-4f81-4571-8f81-9170eb54e5d1-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhwf7\" (UID: \"78b1f724-4f81-4571-8f81-9170eb54e5d1\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhwf7" Jan 26 15:48:50 crc kubenswrapper[4896]: I0126 15:48:50.010602 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-799hr\" (UniqueName: \"kubernetes.io/projected/78b1f724-4f81-4571-8f81-9170eb54e5d1-kube-api-access-799hr\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhwf7\" (UID: \"78b1f724-4f81-4571-8f81-9170eb54e5d1\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhwf7" Jan 26 15:48:50 crc kubenswrapper[4896]: I0126 15:48:50.125360 4896 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhwf7" Jan 26 15:48:50 crc kubenswrapper[4896]: I0126 15:48:50.186884 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bqwv6q"] Jan 26 15:48:50 crc kubenswrapper[4896]: I0126 15:48:50.676024 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhwf7"] Jan 26 15:48:50 crc kubenswrapper[4896]: W0126 15:48:50.684092 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78b1f724_4f81_4571_8f81_9170eb54e5d1.slice/crio-8658c81137751aa0bcdf157cc8b2b212f397409dc1863abae688dfda7e0b34c6 WatchSource:0}: Error finding container 8658c81137751aa0bcdf157cc8b2b212f397409dc1863abae688dfda7e0b34c6: Status 404 returned error can't find the container with id 8658c81137751aa0bcdf157cc8b2b212f397409dc1863abae688dfda7e0b34c6 Jan 26 15:48:50 crc kubenswrapper[4896]: I0126 15:48:50.853698 4896 generic.go:334] "Generic (PLEG): container finished" podID="c5c39f79-1a4d-45f3-96b2-e409562cdf14" containerID="288ebe01c57b338a4a12598f87edd1fbcd6eb888033c5d9aa9bc1f888a83a885" exitCode=0 Jan 26 15:48:50 crc kubenswrapper[4896]: I0126 15:48:50.853782 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bqwv6q" event={"ID":"c5c39f79-1a4d-45f3-96b2-e409562cdf14","Type":"ContainerDied","Data":"288ebe01c57b338a4a12598f87edd1fbcd6eb888033c5d9aa9bc1f888a83a885"} Jan 26 15:48:50 crc kubenswrapper[4896]: I0126 15:48:50.853816 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bqwv6q" 
event={"ID":"c5c39f79-1a4d-45f3-96b2-e409562cdf14","Type":"ContainerStarted","Data":"63c18bbab4c3fca1825629ea3d7d13fe0f0a0bb8bedd5efdd0e1de8828a638dc"} Jan 26 15:48:50 crc kubenswrapper[4896]: I0126 15:48:50.856391 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhwf7" event={"ID":"78b1f724-4f81-4571-8f81-9170eb54e5d1","Type":"ContainerStarted","Data":"7da3c1a1e242c85335d57dc762c4b6c0a518eefa2ee976ddc4cbc235874c408f"} Jan 26 15:48:50 crc kubenswrapper[4896]: I0126 15:48:50.856420 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhwf7" event={"ID":"78b1f724-4f81-4571-8f81-9170eb54e5d1","Type":"ContainerStarted","Data":"8658c81137751aa0bcdf157cc8b2b212f397409dc1863abae688dfda7e0b34c6"} Jan 26 15:48:51 crc kubenswrapper[4896]: I0126 15:48:51.866530 4896 generic.go:334] "Generic (PLEG): container finished" podID="78b1f724-4f81-4571-8f81-9170eb54e5d1" containerID="7da3c1a1e242c85335d57dc762c4b6c0a518eefa2ee976ddc4cbc235874c408f" exitCode=0 Jan 26 15:48:51 crc kubenswrapper[4896]: I0126 15:48:51.866624 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhwf7" event={"ID":"78b1f724-4f81-4571-8f81-9170eb54e5d1","Type":"ContainerDied","Data":"7da3c1a1e242c85335d57dc762c4b6c0a518eefa2ee976ddc4cbc235874c408f"} Jan 26 15:48:53 crc kubenswrapper[4896]: I0126 15:48:53.887667 4896 generic.go:334] "Generic (PLEG): container finished" podID="c5c39f79-1a4d-45f3-96b2-e409562cdf14" containerID="c8358213090e6804975a022f11d9e6b086b5e105d7b44c8ed5d8e19d6347657f" exitCode=0 Jan 26 15:48:53 crc kubenswrapper[4896]: I0126 15:48:53.887728 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bqwv6q" 
event={"ID":"c5c39f79-1a4d-45f3-96b2-e409562cdf14","Type":"ContainerDied","Data":"c8358213090e6804975a022f11d9e6b086b5e105d7b44c8ed5d8e19d6347657f"} Jan 26 15:48:53 crc kubenswrapper[4896]: I0126 15:48:53.890300 4896 generic.go:334] "Generic (PLEG): container finished" podID="78b1f724-4f81-4571-8f81-9170eb54e5d1" containerID="2345c4509ace9e14164659cc89aa7c3be5638321525af49a17c37bebb844fcbf" exitCode=0 Jan 26 15:48:53 crc kubenswrapper[4896]: I0126 15:48:53.890329 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhwf7" event={"ID":"78b1f724-4f81-4571-8f81-9170eb54e5d1","Type":"ContainerDied","Data":"2345c4509ace9e14164659cc89aa7c3be5638321525af49a17c37bebb844fcbf"} Jan 26 15:48:54 crc kubenswrapper[4896]: I0126 15:48:54.899145 4896 generic.go:334] "Generic (PLEG): container finished" podID="78b1f724-4f81-4571-8f81-9170eb54e5d1" containerID="cd1ae3b63b86d047ed92e7ef175ef4004db7890862ca8f8a73a4d61b2b86ce40" exitCode=0 Jan 26 15:48:54 crc kubenswrapper[4896]: I0126 15:48:54.899250 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhwf7" event={"ID":"78b1f724-4f81-4571-8f81-9170eb54e5d1","Type":"ContainerDied","Data":"cd1ae3b63b86d047ed92e7ef175ef4004db7890862ca8f8a73a4d61b2b86ce40"} Jan 26 15:48:54 crc kubenswrapper[4896]: I0126 15:48:54.901836 4896 generic.go:334] "Generic (PLEG): container finished" podID="c5c39f79-1a4d-45f3-96b2-e409562cdf14" containerID="5854e98366ce34404361b6b257c753c227cb896ce41a72c5e50163372e17551b" exitCode=0 Jan 26 15:48:54 crc kubenswrapper[4896]: I0126 15:48:54.901866 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bqwv6q" event={"ID":"c5c39f79-1a4d-45f3-96b2-e409562cdf14","Type":"ContainerDied","Data":"5854e98366ce34404361b6b257c753c227cb896ce41a72c5e50163372e17551b"} Jan 
26 15:48:56 crc kubenswrapper[4896]: I0126 15:48:56.343795 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhwf7" Jan 26 15:48:56 crc kubenswrapper[4896]: I0126 15:48:56.424289 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bqwv6q" Jan 26 15:48:56 crc kubenswrapper[4896]: I0126 15:48:56.455278 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-799hr\" (UniqueName: \"kubernetes.io/projected/78b1f724-4f81-4571-8f81-9170eb54e5d1-kube-api-access-799hr\") pod \"78b1f724-4f81-4571-8f81-9170eb54e5d1\" (UID: \"78b1f724-4f81-4571-8f81-9170eb54e5d1\") " Jan 26 15:48:56 crc kubenswrapper[4896]: I0126 15:48:56.455362 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/78b1f724-4f81-4571-8f81-9170eb54e5d1-util\") pod \"78b1f724-4f81-4571-8f81-9170eb54e5d1\" (UID: \"78b1f724-4f81-4571-8f81-9170eb54e5d1\") " Jan 26 15:48:56 crc kubenswrapper[4896]: I0126 15:48:56.455395 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/78b1f724-4f81-4571-8f81-9170eb54e5d1-bundle\") pod \"78b1f724-4f81-4571-8f81-9170eb54e5d1\" (UID: \"78b1f724-4f81-4571-8f81-9170eb54e5d1\") " Jan 26 15:48:56 crc kubenswrapper[4896]: I0126 15:48:56.455431 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5c39f79-1a4d-45f3-96b2-e409562cdf14-util\") pod \"c5c39f79-1a4d-45f3-96b2-e409562cdf14\" (UID: \"c5c39f79-1a4d-45f3-96b2-e409562cdf14\") " Jan 26 15:48:56 crc kubenswrapper[4896]: I0126 15:48:56.455509 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5c39f79-1a4d-45f3-96b2-e409562cdf14-bundle\") pod \"c5c39f79-1a4d-45f3-96b2-e409562cdf14\" (UID: \"c5c39f79-1a4d-45f3-96b2-e409562cdf14\") " Jan 26 15:48:56 crc kubenswrapper[4896]: I0126 15:48:56.455541 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kwq2\" (UniqueName: \"kubernetes.io/projected/c5c39f79-1a4d-45f3-96b2-e409562cdf14-kube-api-access-8kwq2\") pod \"c5c39f79-1a4d-45f3-96b2-e409562cdf14\" (UID: \"c5c39f79-1a4d-45f3-96b2-e409562cdf14\") " Jan 26 15:48:56 crc kubenswrapper[4896]: I0126 15:48:56.456827 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5c39f79-1a4d-45f3-96b2-e409562cdf14-bundle" (OuterVolumeSpecName: "bundle") pod "c5c39f79-1a4d-45f3-96b2-e409562cdf14" (UID: "c5c39f79-1a4d-45f3-96b2-e409562cdf14"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:48:56 crc kubenswrapper[4896]: I0126 15:48:56.457596 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78b1f724-4f81-4571-8f81-9170eb54e5d1-bundle" (OuterVolumeSpecName: "bundle") pod "78b1f724-4f81-4571-8f81-9170eb54e5d1" (UID: "78b1f724-4f81-4571-8f81-9170eb54e5d1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:48:56 crc kubenswrapper[4896]: I0126 15:48:56.461483 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5c39f79-1a4d-45f3-96b2-e409562cdf14-kube-api-access-8kwq2" (OuterVolumeSpecName: "kube-api-access-8kwq2") pod "c5c39f79-1a4d-45f3-96b2-e409562cdf14" (UID: "c5c39f79-1a4d-45f3-96b2-e409562cdf14"). InnerVolumeSpecName "kube-api-access-8kwq2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:48:56 crc kubenswrapper[4896]: I0126 15:48:56.464506 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78b1f724-4f81-4571-8f81-9170eb54e5d1-kube-api-access-799hr" (OuterVolumeSpecName: "kube-api-access-799hr") pod "78b1f724-4f81-4571-8f81-9170eb54e5d1" (UID: "78b1f724-4f81-4571-8f81-9170eb54e5d1"). InnerVolumeSpecName "kube-api-access-799hr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:48:56 crc kubenswrapper[4896]: I0126 15:48:56.467013 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5c39f79-1a4d-45f3-96b2-e409562cdf14-util" (OuterVolumeSpecName: "util") pod "c5c39f79-1a4d-45f3-96b2-e409562cdf14" (UID: "c5c39f79-1a4d-45f3-96b2-e409562cdf14"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:48:56 crc kubenswrapper[4896]: I0126 15:48:56.476878 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78b1f724-4f81-4571-8f81-9170eb54e5d1-util" (OuterVolumeSpecName: "util") pod "78b1f724-4f81-4571-8f81-9170eb54e5d1" (UID: "78b1f724-4f81-4571-8f81-9170eb54e5d1"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:48:56 crc kubenswrapper[4896]: I0126 15:48:56.557813 4896 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/78b1f724-4f81-4571-8f81-9170eb54e5d1-util\") on node \"crc\" DevicePath \"\"" Jan 26 15:48:56 crc kubenswrapper[4896]: I0126 15:48:56.557862 4896 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/78b1f724-4f81-4571-8f81-9170eb54e5d1-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:48:56 crc kubenswrapper[4896]: I0126 15:48:56.557873 4896 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5c39f79-1a4d-45f3-96b2-e409562cdf14-util\") on node \"crc\" DevicePath \"\"" Jan 26 15:48:56 crc kubenswrapper[4896]: I0126 15:48:56.557885 4896 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5c39f79-1a4d-45f3-96b2-e409562cdf14-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:48:56 crc kubenswrapper[4896]: I0126 15:48:56.557900 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kwq2\" (UniqueName: \"kubernetes.io/projected/c5c39f79-1a4d-45f3-96b2-e409562cdf14-kube-api-access-8kwq2\") on node \"crc\" DevicePath \"\"" Jan 26 15:48:56 crc kubenswrapper[4896]: I0126 15:48:56.557914 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-799hr\" (UniqueName: \"kubernetes.io/projected/78b1f724-4f81-4571-8f81-9170eb54e5d1-kube-api-access-799hr\") on node \"crc\" DevicePath \"\"" Jan 26 15:48:56 crc kubenswrapper[4896]: I0126 15:48:56.921328 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bqwv6q" event={"ID":"c5c39f79-1a4d-45f3-96b2-e409562cdf14","Type":"ContainerDied","Data":"63c18bbab4c3fca1825629ea3d7d13fe0f0a0bb8bedd5efdd0e1de8828a638dc"} Jan 26 15:48:56 crc 
kubenswrapper[4896]: I0126 15:48:56.921375 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63c18bbab4c3fca1825629ea3d7d13fe0f0a0bb8bedd5efdd0e1de8828a638dc"
Jan 26 15:48:56 crc kubenswrapper[4896]: I0126 15:48:56.921386 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bqwv6q"
Jan 26 15:48:56 crc kubenswrapper[4896]: I0126 15:48:56.923608 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhwf7" event={"ID":"78b1f724-4f81-4571-8f81-9170eb54e5d1","Type":"ContainerDied","Data":"8658c81137751aa0bcdf157cc8b2b212f397409dc1863abae688dfda7e0b34c6"}
Jan 26 15:48:56 crc kubenswrapper[4896]: I0126 15:48:56.923631 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8658c81137751aa0bcdf157cc8b2b212f397409dc1863abae688dfda7e0b34c6"
Jan 26 15:48:56 crc kubenswrapper[4896]: I0126 15:48:56.923661 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhwf7"
Jan 26 15:49:05 crc kubenswrapper[4896]: I0126 15:49:05.402034 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-6575bc9f47-rkmnv"]
Jan 26 15:49:05 crc kubenswrapper[4896]: E0126 15:49:05.402787 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5c39f79-1a4d-45f3-96b2-e409562cdf14" containerName="extract"
Jan 26 15:49:05 crc kubenswrapper[4896]: I0126 15:49:05.402799 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5c39f79-1a4d-45f3-96b2-e409562cdf14" containerName="extract"
Jan 26 15:49:05 crc kubenswrapper[4896]: E0126 15:49:05.402814 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78b1f724-4f81-4571-8f81-9170eb54e5d1" containerName="util"
Jan 26 15:49:05 crc kubenswrapper[4896]: I0126 15:49:05.402820 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="78b1f724-4f81-4571-8f81-9170eb54e5d1" containerName="util"
Jan 26 15:49:05 crc kubenswrapper[4896]: E0126 15:49:05.402829 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78b1f724-4f81-4571-8f81-9170eb54e5d1" containerName="pull"
Jan 26 15:49:05 crc kubenswrapper[4896]: I0126 15:49:05.402835 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="78b1f724-4f81-4571-8f81-9170eb54e5d1" containerName="pull"
Jan 26 15:49:05 crc kubenswrapper[4896]: E0126 15:49:05.402844 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78b1f724-4f81-4571-8f81-9170eb54e5d1" containerName="extract"
Jan 26 15:49:05 crc kubenswrapper[4896]: I0126 15:49:05.402849 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="78b1f724-4f81-4571-8f81-9170eb54e5d1" containerName="extract"
Jan 26 15:49:05 crc kubenswrapper[4896]: E0126 15:49:05.402863 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5c39f79-1a4d-45f3-96b2-e409562cdf14" containerName="pull"
Jan 26 15:49:05 crc kubenswrapper[4896]: I0126 15:49:05.402870 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5c39f79-1a4d-45f3-96b2-e409562cdf14" containerName="pull"
Jan 26 15:49:05 crc kubenswrapper[4896]: E0126 15:49:05.402880 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5c39f79-1a4d-45f3-96b2-e409562cdf14" containerName="util"
Jan 26 15:49:05 crc kubenswrapper[4896]: I0126 15:49:05.402887 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5c39f79-1a4d-45f3-96b2-e409562cdf14" containerName="util"
Jan 26 15:49:05 crc kubenswrapper[4896]: I0126 15:49:05.402981 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="78b1f724-4f81-4571-8f81-9170eb54e5d1" containerName="extract"
Jan 26 15:49:05 crc kubenswrapper[4896]: I0126 15:49:05.402998 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5c39f79-1a4d-45f3-96b2-e409562cdf14" containerName="extract"
Jan 26 15:49:05 crc kubenswrapper[4896]: I0126 15:49:05.403650 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-6575bc9f47-rkmnv"
Jan 26 15:49:05 crc kubenswrapper[4896]: I0126 15:49:05.405938 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert"
Jan 26 15:49:05 crc kubenswrapper[4896]: I0126 15:49:05.406288 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt"
Jan 26 15:49:05 crc kubenswrapper[4896]: I0126 15:49:05.406322 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-g5zfb"
Jan 26 15:49:05 crc kubenswrapper[4896]: I0126 15:49:05.406935 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt"
Jan 26 15:49:05 crc kubenswrapper[4896]: I0126 15:49:05.410117 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics"
Jan 26 15:49:05 crc kubenswrapper[4896]: I0126 15:49:05.410204 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config"
Jan 26 15:49:05 crc kubenswrapper[4896]: I0126 15:49:05.432881 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-6575bc9f47-rkmnv"]
Jan 26 15:49:05 crc kubenswrapper[4896]: I0126 15:49:05.478651 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dce71be2-915b-4c8e-9a4e-ebe6c278ddcf-apiservice-cert\") pod \"loki-operator-controller-manager-6575bc9f47-rkmnv\" (UID: \"dce71be2-915b-4c8e-9a4e-ebe6c278ddcf\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6575bc9f47-rkmnv"
Jan 26 15:49:05 crc kubenswrapper[4896]: I0126 15:49:05.478755 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dce71be2-915b-4c8e-9a4e-ebe6c278ddcf-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-6575bc9f47-rkmnv\" (UID: \"dce71be2-915b-4c8e-9a4e-ebe6c278ddcf\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6575bc9f47-rkmnv"
Jan 26 15:49:05 crc kubenswrapper[4896]: I0126 15:49:05.478860 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/dce71be2-915b-4c8e-9a4e-ebe6c278ddcf-manager-config\") pod \"loki-operator-controller-manager-6575bc9f47-rkmnv\" (UID: \"dce71be2-915b-4c8e-9a4e-ebe6c278ddcf\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6575bc9f47-rkmnv"
Jan 26 15:49:05 crc kubenswrapper[4896]: I0126 15:49:05.479056 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lk72\" (UniqueName: \"kubernetes.io/projected/dce71be2-915b-4c8e-9a4e-ebe6c278ddcf-kube-api-access-9lk72\") pod \"loki-operator-controller-manager-6575bc9f47-rkmnv\" (UID: \"dce71be2-915b-4c8e-9a4e-ebe6c278ddcf\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6575bc9f47-rkmnv"
Jan 26 15:49:05 crc kubenswrapper[4896]: I0126 15:49:05.479099 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dce71be2-915b-4c8e-9a4e-ebe6c278ddcf-webhook-cert\") pod \"loki-operator-controller-manager-6575bc9f47-rkmnv\" (UID: \"dce71be2-915b-4c8e-9a4e-ebe6c278ddcf\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6575bc9f47-rkmnv"
Jan 26 15:49:05 crc kubenswrapper[4896]: I0126 15:49:05.580840 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dce71be2-915b-4c8e-9a4e-ebe6c278ddcf-webhook-cert\") pod \"loki-operator-controller-manager-6575bc9f47-rkmnv\" (UID: \"dce71be2-915b-4c8e-9a4e-ebe6c278ddcf\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6575bc9f47-rkmnv"
Jan 26 15:49:05 crc kubenswrapper[4896]: I0126 15:49:05.580922 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dce71be2-915b-4c8e-9a4e-ebe6c278ddcf-apiservice-cert\") pod \"loki-operator-controller-manager-6575bc9f47-rkmnv\" (UID: \"dce71be2-915b-4c8e-9a4e-ebe6c278ddcf\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6575bc9f47-rkmnv"
Jan 26 15:49:05 crc kubenswrapper[4896]: I0126 15:49:05.580967 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dce71be2-915b-4c8e-9a4e-ebe6c278ddcf-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-6575bc9f47-rkmnv\" (UID: \"dce71be2-915b-4c8e-9a4e-ebe6c278ddcf\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6575bc9f47-rkmnv"
Jan 26 15:49:05 crc kubenswrapper[4896]: I0126 15:49:05.580990 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/dce71be2-915b-4c8e-9a4e-ebe6c278ddcf-manager-config\") pod \"loki-operator-controller-manager-6575bc9f47-rkmnv\" (UID: \"dce71be2-915b-4c8e-9a4e-ebe6c278ddcf\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6575bc9f47-rkmnv"
Jan 26 15:49:05 crc kubenswrapper[4896]: I0126 15:49:05.581052 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lk72\" (UniqueName: \"kubernetes.io/projected/dce71be2-915b-4c8e-9a4e-ebe6c278ddcf-kube-api-access-9lk72\") pod \"loki-operator-controller-manager-6575bc9f47-rkmnv\" (UID: \"dce71be2-915b-4c8e-9a4e-ebe6c278ddcf\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6575bc9f47-rkmnv"
Jan 26 15:49:05 crc kubenswrapper[4896]: I0126 15:49:05.582312 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/dce71be2-915b-4c8e-9a4e-ebe6c278ddcf-manager-config\") pod \"loki-operator-controller-manager-6575bc9f47-rkmnv\" (UID: \"dce71be2-915b-4c8e-9a4e-ebe6c278ddcf\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6575bc9f47-rkmnv"
Jan 26 15:49:05 crc kubenswrapper[4896]: I0126 15:49:05.593684 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dce71be2-915b-4c8e-9a4e-ebe6c278ddcf-apiservice-cert\") pod \"loki-operator-controller-manager-6575bc9f47-rkmnv\" (UID: \"dce71be2-915b-4c8e-9a4e-ebe6c278ddcf\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6575bc9f47-rkmnv"
Jan 26 15:49:05 crc kubenswrapper[4896]: I0126 15:49:05.602415 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dce71be2-915b-4c8e-9a4e-ebe6c278ddcf-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-6575bc9f47-rkmnv\" (UID: \"dce71be2-915b-4c8e-9a4e-ebe6c278ddcf\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6575bc9f47-rkmnv"
Jan 26 15:49:05 crc kubenswrapper[4896]: I0126 15:49:05.608235 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dce71be2-915b-4c8e-9a4e-ebe6c278ddcf-webhook-cert\") pod \"loki-operator-controller-manager-6575bc9f47-rkmnv\" (UID: \"dce71be2-915b-4c8e-9a4e-ebe6c278ddcf\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6575bc9f47-rkmnv"
Jan 26 15:49:05 crc kubenswrapper[4896]: I0126 15:49:05.635602 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lk72\" (UniqueName: \"kubernetes.io/projected/dce71be2-915b-4c8e-9a4e-ebe6c278ddcf-kube-api-access-9lk72\") pod \"loki-operator-controller-manager-6575bc9f47-rkmnv\" (UID: \"dce71be2-915b-4c8e-9a4e-ebe6c278ddcf\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6575bc9f47-rkmnv"
Jan 26 15:49:05 crc kubenswrapper[4896]: I0126 15:49:05.723082 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-6575bc9f47-rkmnv"
Jan 26 15:49:06 crc kubenswrapper[4896]: I0126 15:49:06.301351 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-6575bc9f47-rkmnv"]
Jan 26 15:49:06 crc kubenswrapper[4896]: I0126 15:49:06.986727 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-6575bc9f47-rkmnv" event={"ID":"dce71be2-915b-4c8e-9a4e-ebe6c278ddcf","Type":"ContainerStarted","Data":"f7f6bf534fe3048e815b6d4760c6893b8973e19b07222e8bb634ed54b22a9747"}
Jan 26 15:49:09 crc kubenswrapper[4896]: I0126 15:49:09.969431 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-mllkq"]
Jan 26 15:49:09 crc kubenswrapper[4896]: I0126 15:49:09.970763 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-mllkq"
Jan 26 15:49:09 crc kubenswrapper[4896]: I0126 15:49:09.972847 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-bpw6z"
Jan 26 15:49:09 crc kubenswrapper[4896]: I0126 15:49:09.974057 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt"
Jan 26 15:49:09 crc kubenswrapper[4896]: I0126 15:49:09.975866 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt"
Jan 26 15:49:09 crc kubenswrapper[4896]: I0126 15:49:09.986198 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-mllkq"]
Jan 26 15:49:10 crc kubenswrapper[4896]: I0126 15:49:10.172211 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6w8z\" (UniqueName: \"kubernetes.io/projected/248dd691-612f-4480-8673-4446257df703-kube-api-access-z6w8z\") pod \"cluster-logging-operator-79cf69ddc8-mllkq\" (UID: \"248dd691-612f-4480-8673-4446257df703\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-mllkq"
Jan 26 15:49:10 crc kubenswrapper[4896]: I0126 15:49:10.273345 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6w8z\" (UniqueName: \"kubernetes.io/projected/248dd691-612f-4480-8673-4446257df703-kube-api-access-z6w8z\") pod \"cluster-logging-operator-79cf69ddc8-mllkq\" (UID: \"248dd691-612f-4480-8673-4446257df703\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-mllkq"
Jan 26 15:49:10 crc kubenswrapper[4896]: I0126 15:49:10.294464 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6w8z\" (UniqueName: \"kubernetes.io/projected/248dd691-612f-4480-8673-4446257df703-kube-api-access-z6w8z\") pod \"cluster-logging-operator-79cf69ddc8-mllkq\" (UID: \"248dd691-612f-4480-8673-4446257df703\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-mllkq"
Jan 26 15:49:10 crc kubenswrapper[4896]: I0126 15:49:10.592024 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-mllkq"
Jan 26 15:49:12 crc kubenswrapper[4896]: I0126 15:49:12.493402 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-mllkq"]
Jan 26 15:49:12 crc kubenswrapper[4896]: W0126 15:49:12.506564 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod248dd691_612f_4480_8673_4446257df703.slice/crio-6eed92be7ed4635cfbb5c8f65e9c1861e8a372b4963fd61ddd39ec34f3d95479 WatchSource:0}: Error finding container 6eed92be7ed4635cfbb5c8f65e9c1861e8a372b4963fd61ddd39ec34f3d95479: Status 404 returned error can't find the container with id 6eed92be7ed4635cfbb5c8f65e9c1861e8a372b4963fd61ddd39ec34f3d95479
Jan 26 15:49:13 crc kubenswrapper[4896]: I0126 15:49:13.042672 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-6575bc9f47-rkmnv" event={"ID":"dce71be2-915b-4c8e-9a4e-ebe6c278ddcf","Type":"ContainerStarted","Data":"28e222335ae74dc624fde17d726dc3b340c81aacc42bb147ce45a19f33d9dd80"}
Jan 26 15:49:13 crc kubenswrapper[4896]: I0126 15:49:13.043899 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-mllkq" event={"ID":"248dd691-612f-4480-8673-4446257df703","Type":"ContainerStarted","Data":"6eed92be7ed4635cfbb5c8f65e9c1861e8a372b4963fd61ddd39ec34f3d95479"}
Jan 26 15:49:26 crc kubenswrapper[4896]: E0126 15:49:26.917854 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift-logging/cluster-logging-rhel9-operator@sha256:80cac88d8ff5b40036e5983f5dacfc08702afe9c7a66b48d1c88bcb149c285b3"
Jan 26 15:49:26 crc kubenswrapper[4896]: E0126 15:49:26.918653 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cluster-logging-operator,Image:registry.redhat.io/openshift-logging/cluster-logging-rhel9-operator@sha256:80cac88d8ff5b40036e5983f5dacfc08702afe9c7a66b48d1c88bcb149c285b3,Command:[cluster-logging-operator],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.annotations['olm.targetNamespaces'],},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:cluster-logging-operator,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_VECTOR,Value:registry.redhat.io/openshift-logging/vector-rhel9@sha256:fa2cfa2ed336ce105c8dea5bfe0825407e37ef296193ae162f515213fe43c8d5,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_LOG_FILE_METRIC_EXPORTER,Value:registry.redhat.io/openshift-logging/log-file-metric-exporter-rhel9@sha256:0d2edaf37f5e25155f9a3086e81d40686b102a78c3ae35b07e0c5992d3a7fb40,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-logging.v6.2.7,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z6w8z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000690000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cluster-logging-operator-79cf69ddc8-mllkq_openshift-logging(248dd691-612f-4480-8673-4446257df703): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 26 15:49:26 crc kubenswrapper[4896]: E0126 15:49:26.920774 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-logging-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-mllkq" podUID="248dd691-612f-4480-8673-4446257df703"
Jan 26 15:49:27 crc kubenswrapper[4896]: I0126 15:49:27.409427 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-6575bc9f47-rkmnv" event={"ID":"dce71be2-915b-4c8e-9a4e-ebe6c278ddcf","Type":"ContainerStarted","Data":"c59a24ec882d35c1ed6d9cadd28d8c41914984ea2b62df32270a084941a73464"}
Jan 26 15:49:27 crc kubenswrapper[4896]: I0126 15:49:27.410088 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-6575bc9f47-rkmnv"
Jan 26 15:49:27 crc kubenswrapper[4896]: E0126 15:49:27.410097 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-logging-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/cluster-logging-rhel9-operator@sha256:80cac88d8ff5b40036e5983f5dacfc08702afe9c7a66b48d1c88bcb149c285b3\\\"\"" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-mllkq" podUID="248dd691-612f-4480-8673-4446257df703"
Jan 26 15:49:27 crc kubenswrapper[4896]: I0126 15:49:27.415742 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-6575bc9f47-rkmnv"
Jan 26 15:49:27 crc kubenswrapper[4896]: I0126 15:49:27.452315 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-6575bc9f47-rkmnv" podStartSLOduration=1.803793945 podStartE2EDuration="22.452296929s" podCreationTimestamp="2026-01-26 15:49:05 +0000 UTC" firstStartedPulling="2026-01-26 15:49:06.310764124 +0000 UTC m=+904.092644517" lastFinishedPulling="2026-01-26 15:49:26.959267108 +0000 UTC m=+924.741147501" observedRunningTime="2026-01-26 15:49:27.446682973 +0000 UTC m=+925.228563376" watchObservedRunningTime="2026-01-26 15:49:27.452296929 +0000 UTC m=+925.234177322"
Jan 26 15:49:33 crc kubenswrapper[4896]: I0126 15:49:33.450208 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-882dt"]
Jan 26 15:49:33 crc kubenswrapper[4896]: I0126 15:49:33.451971 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-882dt"
Jan 26 15:49:33 crc kubenswrapper[4896]: I0126 15:49:33.473554 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-882dt"]
Jan 26 15:49:33 crc kubenswrapper[4896]: I0126 15:49:33.476371 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2ebd2db-7ace-493d-a124-dc82c7bb5d97-utilities\") pod \"community-operators-882dt\" (UID: \"b2ebd2db-7ace-493d-a124-dc82c7bb5d97\") " pod="openshift-marketplace/community-operators-882dt"
Jan 26 15:49:33 crc kubenswrapper[4896]: I0126 15:49:33.476404 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7lrm\" (UniqueName: \"kubernetes.io/projected/b2ebd2db-7ace-493d-a124-dc82c7bb5d97-kube-api-access-k7lrm\") pod \"community-operators-882dt\" (UID: \"b2ebd2db-7ace-493d-a124-dc82c7bb5d97\") " pod="openshift-marketplace/community-operators-882dt"
Jan 26 15:49:33 crc kubenswrapper[4896]: I0126 15:49:33.476560 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2ebd2db-7ace-493d-a124-dc82c7bb5d97-catalog-content\") pod \"community-operators-882dt\" (UID: \"b2ebd2db-7ace-493d-a124-dc82c7bb5d97\") " pod="openshift-marketplace/community-operators-882dt"
Jan 26 15:49:33 crc kubenswrapper[4896]: I0126 15:49:33.578041 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2ebd2db-7ace-493d-a124-dc82c7bb5d97-utilities\") pod \"community-operators-882dt\" (UID: \"b2ebd2db-7ace-493d-a124-dc82c7bb5d97\") " pod="openshift-marketplace/community-operators-882dt"
Jan 26 15:49:33 crc kubenswrapper[4896]: I0126 15:49:33.578111 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7lrm\" (UniqueName: \"kubernetes.io/projected/b2ebd2db-7ace-493d-a124-dc82c7bb5d97-kube-api-access-k7lrm\") pod \"community-operators-882dt\" (UID: \"b2ebd2db-7ace-493d-a124-dc82c7bb5d97\") " pod="openshift-marketplace/community-operators-882dt"
Jan 26 15:49:33 crc kubenswrapper[4896]: I0126 15:49:33.578167 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2ebd2db-7ace-493d-a124-dc82c7bb5d97-catalog-content\") pod \"community-operators-882dt\" (UID: \"b2ebd2db-7ace-493d-a124-dc82c7bb5d97\") " pod="openshift-marketplace/community-operators-882dt"
Jan 26 15:49:33 crc kubenswrapper[4896]: I0126 15:49:33.578630 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2ebd2db-7ace-493d-a124-dc82c7bb5d97-utilities\") pod \"community-operators-882dt\" (UID: \"b2ebd2db-7ace-493d-a124-dc82c7bb5d97\") " pod="openshift-marketplace/community-operators-882dt"
Jan 26 15:49:33 crc kubenswrapper[4896]: I0126 15:49:33.578755 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2ebd2db-7ace-493d-a124-dc82c7bb5d97-catalog-content\") pod \"community-operators-882dt\" (UID: \"b2ebd2db-7ace-493d-a124-dc82c7bb5d97\") " pod="openshift-marketplace/community-operators-882dt"
Jan 26 15:49:33 crc kubenswrapper[4896]: I0126 15:49:33.602926 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7lrm\" (UniqueName: \"kubernetes.io/projected/b2ebd2db-7ace-493d-a124-dc82c7bb5d97-kube-api-access-k7lrm\") pod \"community-operators-882dt\" (UID: \"b2ebd2db-7ace-493d-a124-dc82c7bb5d97\") " pod="openshift-marketplace/community-operators-882dt"
Jan 26 15:49:33 crc kubenswrapper[4896]: I0126 15:49:33.772052 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-882dt"
Jan 26 15:49:34 crc kubenswrapper[4896]: I0126 15:49:34.422315 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-882dt"]
Jan 26 15:49:34 crc kubenswrapper[4896]: I0126 15:49:34.455534 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-882dt" event={"ID":"b2ebd2db-7ace-493d-a124-dc82c7bb5d97","Type":"ContainerStarted","Data":"354d90954139e850170223ed73aa48717709b9fc4e9c35cafca227a1242d249d"}
Jan 26 15:49:35 crc kubenswrapper[4896]: I0126 15:49:35.466903 4896 generic.go:334] "Generic (PLEG): container finished" podID="b2ebd2db-7ace-493d-a124-dc82c7bb5d97" containerID="3338fbdd2d134fbc19191fc0bcbc173574501aab9b0e8f4f7092eab7d7b9590a" exitCode=0
Jan 26 15:49:35 crc kubenswrapper[4896]: I0126 15:49:35.466958 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-882dt" event={"ID":"b2ebd2db-7ace-493d-a124-dc82c7bb5d97","Type":"ContainerDied","Data":"3338fbdd2d134fbc19191fc0bcbc173574501aab9b0e8f4f7092eab7d7b9590a"}
Jan 26 15:49:37 crc kubenswrapper[4896]: I0126 15:49:37.484305 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-882dt" event={"ID":"b2ebd2db-7ace-493d-a124-dc82c7bb5d97","Type":"ContainerStarted","Data":"e4a58f6ea26f070ae57903fc4a1f2a94d55388a968fc0b940ca2f9fb51f5096f"}
Jan 26 15:49:38 crc kubenswrapper[4896]: I0126 15:49:38.499257 4896 generic.go:334] "Generic (PLEG): container finished" podID="b2ebd2db-7ace-493d-a124-dc82c7bb5d97" containerID="e4a58f6ea26f070ae57903fc4a1f2a94d55388a968fc0b940ca2f9fb51f5096f" exitCode=0
Jan 26 15:49:38 crc kubenswrapper[4896]: I0126 15:49:38.499347 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-882dt" event={"ID":"b2ebd2db-7ace-493d-a124-dc82c7bb5d97","Type":"ContainerDied","Data":"e4a58f6ea26f070ae57903fc4a1f2a94d55388a968fc0b940ca2f9fb51f5096f"}
Jan 26 15:49:39 crc kubenswrapper[4896]: I0126 15:49:39.509977 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-882dt" event={"ID":"b2ebd2db-7ace-493d-a124-dc82c7bb5d97","Type":"ContainerStarted","Data":"1d40240243ebf221557fa1ae3e862c7ce3fc2086b736d3b7e6dc4e91d96c5e23"}
Jan 26 15:49:39 crc kubenswrapper[4896]: I0126 15:49:39.695213 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-882dt" podStartSLOduration=3.254558612 podStartE2EDuration="6.695195658s" podCreationTimestamp="2026-01-26 15:49:33 +0000 UTC" firstStartedPulling="2026-01-26 15:49:35.469065101 +0000 UTC m=+933.250945494" lastFinishedPulling="2026-01-26 15:49:38.909702147 +0000 UTC m=+936.691582540" observedRunningTime="2026-01-26 15:49:39.692440892 +0000 UTC m=+937.474321305" watchObservedRunningTime="2026-01-26 15:49:39.695195658 +0000 UTC m=+937.477076051"
Jan 26 15:49:42 crc kubenswrapper[4896]: I0126 15:49:42.585117 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-mllkq" event={"ID":"248dd691-612f-4480-8673-4446257df703","Type":"ContainerStarted","Data":"c3c0c591c87ed72ff3ecbaee47fbe770a6637540e6acef78315254f65d8c9ed8"}
Jan 26 15:49:42 crc kubenswrapper[4896]: I0126 15:49:42.614766 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-mllkq" podStartSLOduration=4.043966996 podStartE2EDuration="33.614744665s" podCreationTimestamp="2026-01-26 15:49:09 +0000 UTC" firstStartedPulling="2026-01-26 15:49:12.508513221 +0000 UTC m=+910.290393614" lastFinishedPulling="2026-01-26 15:49:42.07929089 +0000 UTC m=+939.861171283" observedRunningTime="2026-01-26 15:49:42.608013632 +0000 UTC m=+940.389894025" watchObservedRunningTime="2026-01-26 15:49:42.614744665 +0000 UTC m=+940.396625068"
Jan 26 15:49:43 crc kubenswrapper[4896]: I0126 15:49:43.772705 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-882dt"
Jan 26 15:49:43 crc kubenswrapper[4896]: I0126 15:49:43.773039 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-882dt"
Jan 26 15:49:43 crc kubenswrapper[4896]: I0126 15:49:43.929979 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-882dt"
Jan 26 15:49:44 crc kubenswrapper[4896]: I0126 15:49:44.656079 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-882dt"
Jan 26 15:49:44 crc kubenswrapper[4896]: I0126 15:49:44.704436 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-882dt"]
Jan 26 15:49:46 crc kubenswrapper[4896]: I0126 15:49:46.616797 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-882dt" podUID="b2ebd2db-7ace-493d-a124-dc82c7bb5d97" containerName="registry-server" containerID="cri-o://1d40240243ebf221557fa1ae3e862c7ce3fc2086b736d3b7e6dc4e91d96c5e23" gracePeriod=2
Jan 26 15:49:47 crc kubenswrapper[4896]: I0126 15:49:47.641179 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-882dt"
Jan 26 15:49:47 crc kubenswrapper[4896]: I0126 15:49:47.643767 4896 generic.go:334] "Generic (PLEG): container finished" podID="b2ebd2db-7ace-493d-a124-dc82c7bb5d97" containerID="1d40240243ebf221557fa1ae3e862c7ce3fc2086b736d3b7e6dc4e91d96c5e23" exitCode=0
Jan 26 15:49:47 crc kubenswrapper[4896]: I0126 15:49:47.643818 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-882dt" event={"ID":"b2ebd2db-7ace-493d-a124-dc82c7bb5d97","Type":"ContainerDied","Data":"1d40240243ebf221557fa1ae3e862c7ce3fc2086b736d3b7e6dc4e91d96c5e23"}
Jan 26 15:49:47 crc kubenswrapper[4896]: I0126 15:49:47.643855 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-882dt" event={"ID":"b2ebd2db-7ace-493d-a124-dc82c7bb5d97","Type":"ContainerDied","Data":"354d90954139e850170223ed73aa48717709b9fc4e9c35cafca227a1242d249d"}
Jan 26 15:49:47 crc kubenswrapper[4896]: I0126 15:49:47.643875 4896 scope.go:117] "RemoveContainer" containerID="1d40240243ebf221557fa1ae3e862c7ce3fc2086b736d3b7e6dc4e91d96c5e23"
Jan 26 15:49:47 crc kubenswrapper[4896]: I0126 15:49:47.691934 4896 scope.go:117] "RemoveContainer" containerID="e4a58f6ea26f070ae57903fc4a1f2a94d55388a968fc0b940ca2f9fb51f5096f"
Jan 26 15:49:47 crc kubenswrapper[4896]: I0126 15:49:47.789779 4896 scope.go:117] "RemoveContainer" containerID="3338fbdd2d134fbc19191fc0bcbc173574501aab9b0e8f4f7092eab7d7b9590a"
Jan 26 15:49:47 crc kubenswrapper[4896]: I0126 15:49:47.824517 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2ebd2db-7ace-493d-a124-dc82c7bb5d97-catalog-content\") pod \"b2ebd2db-7ace-493d-a124-dc82c7bb5d97\" (UID: \"b2ebd2db-7ace-493d-a124-dc82c7bb5d97\") "
Jan 26 15:49:47 crc kubenswrapper[4896]: I0126 15:49:47.824708 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2ebd2db-7ace-493d-a124-dc82c7bb5d97-utilities\") pod \"b2ebd2db-7ace-493d-a124-dc82c7bb5d97\" (UID: \"b2ebd2db-7ace-493d-a124-dc82c7bb5d97\") "
Jan 26 15:49:47 crc kubenswrapper[4896]: I0126 15:49:47.824771 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7lrm\" (UniqueName: \"kubernetes.io/projected/b2ebd2db-7ace-493d-a124-dc82c7bb5d97-kube-api-access-k7lrm\") pod \"b2ebd2db-7ace-493d-a124-dc82c7bb5d97\" (UID: \"b2ebd2db-7ace-493d-a124-dc82c7bb5d97\") "
Jan 26 15:49:47 crc kubenswrapper[4896]: I0126 15:49:47.826609 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2ebd2db-7ace-493d-a124-dc82c7bb5d97-utilities" (OuterVolumeSpecName: "utilities") pod "b2ebd2db-7ace-493d-a124-dc82c7bb5d97" (UID: "b2ebd2db-7ace-493d-a124-dc82c7bb5d97"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 15:49:47 crc kubenswrapper[4896]: I0126 15:49:47.831618 4896 scope.go:117] "RemoveContainer" containerID="1d40240243ebf221557fa1ae3e862c7ce3fc2086b736d3b7e6dc4e91d96c5e23"
Jan 26 15:49:47 crc kubenswrapper[4896]: E0126 15:49:47.832094 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d40240243ebf221557fa1ae3e862c7ce3fc2086b736d3b7e6dc4e91d96c5e23\": container with ID starting with 1d40240243ebf221557fa1ae3e862c7ce3fc2086b736d3b7e6dc4e91d96c5e23 not found: ID does not exist" containerID="1d40240243ebf221557fa1ae3e862c7ce3fc2086b736d3b7e6dc4e91d96c5e23"
Jan 26 15:49:47 crc kubenswrapper[4896]: I0126 15:49:47.832127 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2ebd2db-7ace-493d-a124-dc82c7bb5d97-kube-api-access-k7lrm" (OuterVolumeSpecName: "kube-api-access-k7lrm") pod "b2ebd2db-7ace-493d-a124-dc82c7bb5d97" (UID: "b2ebd2db-7ace-493d-a124-dc82c7bb5d97"). InnerVolumeSpecName "kube-api-access-k7lrm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:49:47 crc kubenswrapper[4896]: I0126 15:49:47.832142 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d40240243ebf221557fa1ae3e862c7ce3fc2086b736d3b7e6dc4e91d96c5e23"} err="failed to get container status \"1d40240243ebf221557fa1ae3e862c7ce3fc2086b736d3b7e6dc4e91d96c5e23\": rpc error: code = NotFound desc = could not find container \"1d40240243ebf221557fa1ae3e862c7ce3fc2086b736d3b7e6dc4e91d96c5e23\": container with ID starting with 1d40240243ebf221557fa1ae3e862c7ce3fc2086b736d3b7e6dc4e91d96c5e23 not found: ID does not exist"
Jan 26 15:49:47 crc kubenswrapper[4896]: I0126 15:49:47.832210 4896 scope.go:117] "RemoveContainer" containerID="e4a58f6ea26f070ae57903fc4a1f2a94d55388a968fc0b940ca2f9fb51f5096f"
Jan 26 15:49:47 crc kubenswrapper[4896]: E0126 15:49:47.832478 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4a58f6ea26f070ae57903fc4a1f2a94d55388a968fc0b940ca2f9fb51f5096f\": container with ID starting with e4a58f6ea26f070ae57903fc4a1f2a94d55388a968fc0b940ca2f9fb51f5096f not found: ID does not exist" containerID="e4a58f6ea26f070ae57903fc4a1f2a94d55388a968fc0b940ca2f9fb51f5096f"
Jan 26 15:49:47 crc kubenswrapper[4896]: I0126 15:49:47.832507 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4a58f6ea26f070ae57903fc4a1f2a94d55388a968fc0b940ca2f9fb51f5096f"} err="failed to get container status \"e4a58f6ea26f070ae57903fc4a1f2a94d55388a968fc0b940ca2f9fb51f5096f\": rpc error: code = NotFound desc = could not find container \"e4a58f6ea26f070ae57903fc4a1f2a94d55388a968fc0b940ca2f9fb51f5096f\": container with ID starting with e4a58f6ea26f070ae57903fc4a1f2a94d55388a968fc0b940ca2f9fb51f5096f not found: ID does not exist"
Jan 26 15:49:47 crc kubenswrapper[4896]: I0126 15:49:47.832525 4896 scope.go:117] "RemoveContainer" containerID="3338fbdd2d134fbc19191fc0bcbc173574501aab9b0e8f4f7092eab7d7b9590a"
Jan 26 15:49:47 crc kubenswrapper[4896]: E0126 15:49:47.832871 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3338fbdd2d134fbc19191fc0bcbc173574501aab9b0e8f4f7092eab7d7b9590a\": container with ID starting with 3338fbdd2d134fbc19191fc0bcbc173574501aab9b0e8f4f7092eab7d7b9590a not found: ID does not exist" containerID="3338fbdd2d134fbc19191fc0bcbc173574501aab9b0e8f4f7092eab7d7b9590a"
Jan 26 15:49:47 crc kubenswrapper[4896]: I0126 15:49:47.832908 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3338fbdd2d134fbc19191fc0bcbc173574501aab9b0e8f4f7092eab7d7b9590a"} err="failed to get container status \"3338fbdd2d134fbc19191fc0bcbc173574501aab9b0e8f4f7092eab7d7b9590a\": rpc error: code = NotFound desc = could not find container \"3338fbdd2d134fbc19191fc0bcbc173574501aab9b0e8f4f7092eab7d7b9590a\": container with ID starting with 3338fbdd2d134fbc19191fc0bcbc173574501aab9b0e8f4f7092eab7d7b9590a not found: ID does not exist"
Jan 26 15:49:47 crc kubenswrapper[4896]: I0126 15:49:47.875507 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2ebd2db-7ace-493d-a124-dc82c7bb5d97-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b2ebd2db-7ace-493d-a124-dc82c7bb5d97" (UID: "b2ebd2db-7ace-493d-a124-dc82c7bb5d97"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:49:47 crc kubenswrapper[4896]: I0126 15:49:47.926641 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2ebd2db-7ace-493d-a124-dc82c7bb5d97-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:49:47 crc kubenswrapper[4896]: I0126 15:49:47.926686 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7lrm\" (UniqueName: \"kubernetes.io/projected/b2ebd2db-7ace-493d-a124-dc82c7bb5d97-kube-api-access-k7lrm\") on node \"crc\" DevicePath \"\"" Jan 26 15:49:47 crc kubenswrapper[4896]: I0126 15:49:47.926699 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2ebd2db-7ace-493d-a124-dc82c7bb5d97-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:49:48 crc kubenswrapper[4896]: I0126 15:49:48.258779 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Jan 26 15:49:48 crc kubenswrapper[4896]: E0126 15:49:48.259100 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2ebd2db-7ace-493d-a124-dc82c7bb5d97" containerName="registry-server" Jan 26 15:49:48 crc kubenswrapper[4896]: I0126 15:49:48.259126 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2ebd2db-7ace-493d-a124-dc82c7bb5d97" containerName="registry-server" Jan 26 15:49:48 crc kubenswrapper[4896]: E0126 15:49:48.259145 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2ebd2db-7ace-493d-a124-dc82c7bb5d97" containerName="extract-content" Jan 26 15:49:48 crc kubenswrapper[4896]: I0126 15:49:48.259153 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2ebd2db-7ace-493d-a124-dc82c7bb5d97" containerName="extract-content" Jan 26 15:49:48 crc kubenswrapper[4896]: E0126 15:49:48.259175 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2ebd2db-7ace-493d-a124-dc82c7bb5d97" 
containerName="extract-utilities" Jan 26 15:49:48 crc kubenswrapper[4896]: I0126 15:49:48.259182 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2ebd2db-7ace-493d-a124-dc82c7bb5d97" containerName="extract-utilities" Jan 26 15:49:48 crc kubenswrapper[4896]: I0126 15:49:48.259386 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2ebd2db-7ace-493d-a124-dc82c7bb5d97" containerName="registry-server" Jan 26 15:49:48 crc kubenswrapper[4896]: I0126 15:49:48.259842 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Jan 26 15:49:48 crc kubenswrapper[4896]: I0126 15:49:48.263513 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Jan 26 15:49:48 crc kubenswrapper[4896]: I0126 15:49:48.263937 4896 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-2zn8w" Jan 26 15:49:48 crc kubenswrapper[4896]: I0126 15:49:48.264158 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Jan 26 15:49:48 crc kubenswrapper[4896]: I0126 15:49:48.269083 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Jan 26 15:49:48 crc kubenswrapper[4896]: I0126 15:49:48.433482 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8wlp\" (UniqueName: \"kubernetes.io/projected/1c18ff31-91bb-45ed-8c7d-0c2fbfab7827-kube-api-access-q8wlp\") pod \"minio\" (UID: \"1c18ff31-91bb-45ed-8c7d-0c2fbfab7827\") " pod="minio-dev/minio" Jan 26 15:49:48 crc kubenswrapper[4896]: I0126 15:49:48.433748 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-090c5d3f-01d4-43e9-9ac2-a19874600947\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-090c5d3f-01d4-43e9-9ac2-a19874600947\") pod \"minio\" (UID: \"1c18ff31-91bb-45ed-8c7d-0c2fbfab7827\") " 
pod="minio-dev/minio" Jan 26 15:49:48 crc kubenswrapper[4896]: I0126 15:49:48.534703 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8wlp\" (UniqueName: \"kubernetes.io/projected/1c18ff31-91bb-45ed-8c7d-0c2fbfab7827-kube-api-access-q8wlp\") pod \"minio\" (UID: \"1c18ff31-91bb-45ed-8c7d-0c2fbfab7827\") " pod="minio-dev/minio" Jan 26 15:49:48 crc kubenswrapper[4896]: I0126 15:49:48.534863 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-090c5d3f-01d4-43e9-9ac2-a19874600947\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-090c5d3f-01d4-43e9-9ac2-a19874600947\") pod \"minio\" (UID: \"1c18ff31-91bb-45ed-8c7d-0c2fbfab7827\") " pod="minio-dev/minio" Jan 26 15:49:48 crc kubenswrapper[4896]: I0126 15:49:48.538085 4896 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 15:49:48 crc kubenswrapper[4896]: I0126 15:49:48.538134 4896 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-090c5d3f-01d4-43e9-9ac2-a19874600947\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-090c5d3f-01d4-43e9-9ac2-a19874600947\") pod \"minio\" (UID: \"1c18ff31-91bb-45ed-8c7d-0c2fbfab7827\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d82a458ecb352330e00c1e4a78be43b43f70f66b7e2ce3de8322f568d571b0eb/globalmount\"" pod="minio-dev/minio" Jan 26 15:49:48 crc kubenswrapper[4896]: I0126 15:49:48.554604 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8wlp\" (UniqueName: \"kubernetes.io/projected/1c18ff31-91bb-45ed-8c7d-0c2fbfab7827-kube-api-access-q8wlp\") pod \"minio\" (UID: \"1c18ff31-91bb-45ed-8c7d-0c2fbfab7827\") " pod="minio-dev/minio" Jan 26 15:49:48 crc kubenswrapper[4896]: I0126 15:49:48.570087 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-090c5d3f-01d4-43e9-9ac2-a19874600947\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-090c5d3f-01d4-43e9-9ac2-a19874600947\") pod \"minio\" (UID: \"1c18ff31-91bb-45ed-8c7d-0c2fbfab7827\") " pod="minio-dev/minio" Jan 26 15:49:48 crc kubenswrapper[4896]: I0126 15:49:48.581617 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Jan 26 15:49:48 crc kubenswrapper[4896]: I0126 15:49:48.658374 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-882dt" Jan 26 15:49:48 crc kubenswrapper[4896]: I0126 15:49:48.697682 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-882dt"] Jan 26 15:49:48 crc kubenswrapper[4896]: I0126 15:49:48.704518 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-882dt"] Jan 26 15:49:48 crc kubenswrapper[4896]: I0126 15:49:48.769323 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2ebd2db-7ace-493d-a124-dc82c7bb5d97" path="/var/lib/kubelet/pods/b2ebd2db-7ace-493d-a124-dc82c7bb5d97/volumes" Jan 26 15:49:48 crc kubenswrapper[4896]: I0126 15:49:48.802893 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Jan 26 15:49:49 crc kubenswrapper[4896]: I0126 15:49:49.665742 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"1c18ff31-91bb-45ed-8c7d-0c2fbfab7827","Type":"ContainerStarted","Data":"3686e87b62057009a32d34764cfe3c72e0c92571b81b774511a75afa67235fd2"} Jan 26 15:49:56 crc kubenswrapper[4896]: I0126 15:49:56.751985 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"1c18ff31-91bb-45ed-8c7d-0c2fbfab7827","Type":"ContainerStarted","Data":"362c92f945fc426ed001583d4714d3115f2830452f3ad9485ae99cf032440422"} Jan 26 15:49:56 crc kubenswrapper[4896]: I0126 15:49:56.773319 4896 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.780096013 podStartE2EDuration="11.773295107s" podCreationTimestamp="2026-01-26 15:49:45 +0000 UTC" firstStartedPulling="2026-01-26 15:49:48.820238893 +0000 UTC m=+946.602119286" lastFinishedPulling="2026-01-26 15:49:55.813437987 +0000 UTC m=+953.595318380" observedRunningTime="2026-01-26 15:49:56.769711041 +0000 UTC m=+954.551591434" watchObservedRunningTime="2026-01-26 15:49:56.773295107 +0000 UTC m=+954.555175500" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.436430 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-wxx4s"] Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.460367 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-wxx4s" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.463990 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.464273 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.466632 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-wxx4s"] Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.466852 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.468304 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-jcc99" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.471834 4896 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-logging"/"logging-loki-ca-bundle" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.567154 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkg5n\" (UniqueName: \"kubernetes.io/projected/790beb3d-3eed-4fef-849d-84a13c17f4a7-kube-api-access-pkg5n\") pod \"logging-loki-distributor-5f678c8dd6-wxx4s\" (UID: \"790beb3d-3eed-4fef-849d-84a13c17f4a7\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-wxx4s" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.567209 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/790beb3d-3eed-4fef-849d-84a13c17f4a7-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-wxx4s\" (UID: \"790beb3d-3eed-4fef-849d-84a13c17f4a7\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-wxx4s" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.567235 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/790beb3d-3eed-4fef-849d-84a13c17f4a7-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-wxx4s\" (UID: \"790beb3d-3eed-4fef-849d-84a13c17f4a7\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-wxx4s" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.567281 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/790beb3d-3eed-4fef-849d-84a13c17f4a7-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-wxx4s\" (UID: \"790beb3d-3eed-4fef-849d-84a13c17f4a7\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-wxx4s" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.567301 4896 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/790beb3d-3eed-4fef-849d-84a13c17f4a7-config\") pod \"logging-loki-distributor-5f678c8dd6-wxx4s\" (UID: \"790beb3d-3eed-4fef-849d-84a13c17f4a7\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-wxx4s" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.588462 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-lxv2v"] Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.589913 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76788598db-lxv2v" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.592661 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.592843 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.593000 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.612444 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-lxv2v"] Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.668742 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xgss\" (UniqueName: \"kubernetes.io/projected/39d4db55-bf77-4948-a36b-4e0d4bf056e8-kube-api-access-6xgss\") pod \"logging-loki-querier-76788598db-lxv2v\" (UID: \"39d4db55-bf77-4948-a36b-4e0d4bf056e8\") " pod="openshift-logging/logging-loki-querier-76788598db-lxv2v" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.668798 4896 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/790beb3d-3eed-4fef-849d-84a13c17f4a7-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-wxx4s\" (UID: \"790beb3d-3eed-4fef-849d-84a13c17f4a7\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-wxx4s" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.668825 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/790beb3d-3eed-4fef-849d-84a13c17f4a7-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-wxx4s\" (UID: \"790beb3d-3eed-4fef-849d-84a13c17f4a7\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-wxx4s" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.668849 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/39d4db55-bf77-4948-a36b-4e0d4bf056e8-logging-loki-s3\") pod \"logging-loki-querier-76788598db-lxv2v\" (UID: \"39d4db55-bf77-4948-a36b-4e0d4bf056e8\") " pod="openshift-logging/logging-loki-querier-76788598db-lxv2v" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.668888 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/790beb3d-3eed-4fef-849d-84a13c17f4a7-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-wxx4s\" (UID: \"790beb3d-3eed-4fef-849d-84a13c17f4a7\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-wxx4s" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.668906 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/790beb3d-3eed-4fef-849d-84a13c17f4a7-config\") pod \"logging-loki-distributor-5f678c8dd6-wxx4s\" (UID: \"790beb3d-3eed-4fef-849d-84a13c17f4a7\") " 
pod="openshift-logging/logging-loki-distributor-5f678c8dd6-wxx4s" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.668928 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/39d4db55-bf77-4948-a36b-4e0d4bf056e8-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-lxv2v\" (UID: \"39d4db55-bf77-4948-a36b-4e0d4bf056e8\") " pod="openshift-logging/logging-loki-querier-76788598db-lxv2v" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.668958 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/39d4db55-bf77-4948-a36b-4e0d4bf056e8-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-lxv2v\" (UID: \"39d4db55-bf77-4948-a36b-4e0d4bf056e8\") " pod="openshift-logging/logging-loki-querier-76788598db-lxv2v" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.668995 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39d4db55-bf77-4948-a36b-4e0d4bf056e8-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-lxv2v\" (UID: \"39d4db55-bf77-4948-a36b-4e0d4bf056e8\") " pod="openshift-logging/logging-loki-querier-76788598db-lxv2v" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.669014 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39d4db55-bf77-4948-a36b-4e0d4bf056e8-config\") pod \"logging-loki-querier-76788598db-lxv2v\" (UID: \"39d4db55-bf77-4948-a36b-4e0d4bf056e8\") " pod="openshift-logging/logging-loki-querier-76788598db-lxv2v" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.669037 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-pkg5n\" (UniqueName: \"kubernetes.io/projected/790beb3d-3eed-4fef-849d-84a13c17f4a7-kube-api-access-pkg5n\") pod \"logging-loki-distributor-5f678c8dd6-wxx4s\" (UID: \"790beb3d-3eed-4fef-849d-84a13c17f4a7\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-wxx4s" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.671714 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.671890 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.673273 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.673658 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.681053 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/790beb3d-3eed-4fef-849d-84a13c17f4a7-config\") pod \"logging-loki-distributor-5f678c8dd6-wxx4s\" (UID: \"790beb3d-3eed-4fef-849d-84a13c17f4a7\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-wxx4s" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.684934 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/790beb3d-3eed-4fef-849d-84a13c17f4a7-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-wxx4s\" (UID: \"790beb3d-3eed-4fef-849d-84a13c17f4a7\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-wxx4s" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.685332 4896 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-logging/logging-loki-query-frontend-69d9546745-ds2pd"] Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.686223 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-69d9546745-ds2pd" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.694091 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.694534 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-ds2pd"] Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.696153 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.706607 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkg5n\" (UniqueName: \"kubernetes.io/projected/790beb3d-3eed-4fef-849d-84a13c17f4a7-kube-api-access-pkg5n\") pod \"logging-loki-distributor-5f678c8dd6-wxx4s\" (UID: \"790beb3d-3eed-4fef-849d-84a13c17f4a7\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-wxx4s" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.709454 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/790beb3d-3eed-4fef-849d-84a13c17f4a7-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-wxx4s\" (UID: \"790beb3d-3eed-4fef-849d-84a13c17f4a7\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-wxx4s" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.715513 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/790beb3d-3eed-4fef-849d-84a13c17f4a7-logging-loki-distributor-http\") pod 
\"logging-loki-distributor-5f678c8dd6-wxx4s\" (UID: \"790beb3d-3eed-4fef-849d-84a13c17f4a7\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-wxx4s" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.784444 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39d4db55-bf77-4948-a36b-4e0d4bf056e8-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-lxv2v\" (UID: \"39d4db55-bf77-4948-a36b-4e0d4bf056e8\") " pod="openshift-logging/logging-loki-querier-76788598db-lxv2v" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.784516 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39d4db55-bf77-4948-a36b-4e0d4bf056e8-config\") pod \"logging-loki-querier-76788598db-lxv2v\" (UID: \"39d4db55-bf77-4948-a36b-4e0d4bf056e8\") " pod="openshift-logging/logging-loki-querier-76788598db-lxv2v" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.784558 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/b0989bb6-640e-4e54-9dc7-940798b9847f-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-ds2pd\" (UID: \"b0989bb6-640e-4e54-9dc7-940798b9847f\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-ds2pd" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.785841 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39d4db55-bf77-4948-a36b-4e0d4bf056e8-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-lxv2v\" (UID: \"39d4db55-bf77-4948-a36b-4e0d4bf056e8\") " pod="openshift-logging/logging-loki-querier-76788598db-lxv2v" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.786245 4896 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0989bb6-640e-4e54-9dc7-940798b9847f-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-ds2pd\" (UID: \"b0989bb6-640e-4e54-9dc7-940798b9847f\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-ds2pd" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.794690 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39d4db55-bf77-4948-a36b-4e0d4bf056e8-config\") pod \"logging-loki-querier-76788598db-lxv2v\" (UID: \"39d4db55-bf77-4948-a36b-4e0d4bf056e8\") " pod="openshift-logging/logging-loki-querier-76788598db-lxv2v" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.804553 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-jcc99" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.808185 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-wxx4s" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.811287 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xgss\" (UniqueName: \"kubernetes.io/projected/39d4db55-bf77-4948-a36b-4e0d4bf056e8-kube-api-access-6xgss\") pod \"logging-loki-querier-76788598db-lxv2v\" (UID: \"39d4db55-bf77-4948-a36b-4e0d4bf056e8\") " pod="openshift-logging/logging-loki-querier-76788598db-lxv2v" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.811432 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0989bb6-640e-4e54-9dc7-940798b9847f-config\") pod \"logging-loki-query-frontend-69d9546745-ds2pd\" (UID: \"b0989bb6-640e-4e54-9dc7-940798b9847f\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-ds2pd" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.811492 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/39d4db55-bf77-4948-a36b-4e0d4bf056e8-logging-loki-s3\") pod \"logging-loki-querier-76788598db-lxv2v\" (UID: \"39d4db55-bf77-4948-a36b-4e0d4bf056e8\") " pod="openshift-logging/logging-loki-querier-76788598db-lxv2v" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.811634 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/39d4db55-bf77-4948-a36b-4e0d4bf056e8-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-lxv2v\" (UID: \"39d4db55-bf77-4948-a36b-4e0d4bf056e8\") " pod="openshift-logging/logging-loki-querier-76788598db-lxv2v" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.811731 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: 
\"kubernetes.io/secret/39d4db55-bf77-4948-a36b-4e0d4bf056e8-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-lxv2v\" (UID: \"39d4db55-bf77-4948-a36b-4e0d4bf056e8\") " pod="openshift-logging/logging-loki-querier-76788598db-lxv2v" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.811831 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/b0989bb6-640e-4e54-9dc7-940798b9847f-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-ds2pd\" (UID: \"b0989bb6-640e-4e54-9dc7-940798b9847f\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-ds2pd" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.811857 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8vjn\" (UniqueName: \"kubernetes.io/projected/b0989bb6-640e-4e54-9dc7-940798b9847f-kube-api-access-m8vjn\") pod \"logging-loki-query-frontend-69d9546745-ds2pd\" (UID: \"b0989bb6-640e-4e54-9dc7-940798b9847f\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-ds2pd" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.825438 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/39d4db55-bf77-4948-a36b-4e0d4bf056e8-logging-loki-s3\") pod \"logging-loki-querier-76788598db-lxv2v\" (UID: \"39d4db55-bf77-4948-a36b-4e0d4bf056e8\") " pod="openshift-logging/logging-loki-querier-76788598db-lxv2v" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.826468 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/39d4db55-bf77-4948-a36b-4e0d4bf056e8-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-lxv2v\" (UID: \"39d4db55-bf77-4948-a36b-4e0d4bf056e8\") " 
pod="openshift-logging/logging-loki-querier-76788598db-lxv2v" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.830468 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-785c7cc549-thnm6"] Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.855780 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/39d4db55-bf77-4948-a36b-4e0d4bf056e8-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-lxv2v\" (UID: \"39d4db55-bf77-4948-a36b-4e0d4bf056e8\") " pod="openshift-logging/logging-loki-querier-76788598db-lxv2v" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.861788 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-785c7cc549-thnm6" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.872618 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.872832 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.872946 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.873043 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.873180 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.875244 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xgss\" (UniqueName: 
\"kubernetes.io/projected/39d4db55-bf77-4948-a36b-4e0d4bf056e8-kube-api-access-6xgss\") pod \"logging-loki-querier-76788598db-lxv2v\" (UID: \"39d4db55-bf77-4948-a36b-4e0d4bf056e8\") " pod="openshift-logging/logging-loki-querier-76788598db-lxv2v" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.887036 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-785c7cc549-thnm6"] Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.897373 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-785c7cc549-fc8ss"] Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.898527 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-785c7cc549-fc8ss" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.903143 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-qnrct" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.910987 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-785c7cc549-fc8ss"] Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.912664 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-76788598db-lxv2v" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.915016 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/9ef5e225-61d8-4ca8-9bc1-43e583ad71be-lokistack-gateway\") pod \"logging-loki-gateway-785c7cc549-thnm6\" (UID: \"9ef5e225-61d8-4ca8-9bc1-43e583ad71be\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-thnm6" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.915079 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/9ef5e225-61d8-4ca8-9bc1-43e583ad71be-rbac\") pod \"logging-loki-gateway-785c7cc549-thnm6\" (UID: \"9ef5e225-61d8-4ca8-9bc1-43e583ad71be\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-thnm6" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.915863 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ef5e225-61d8-4ca8-9bc1-43e583ad71be-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-785c7cc549-thnm6\" (UID: \"9ef5e225-61d8-4ca8-9bc1-43e583ad71be\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-thnm6" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.915936 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/b0989bb6-640e-4e54-9dc7-940798b9847f-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-ds2pd\" (UID: \"b0989bb6-640e-4e54-9dc7-940798b9847f\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-ds2pd" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.915955 4896 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-m8vjn\" (UniqueName: \"kubernetes.io/projected/b0989bb6-640e-4e54-9dc7-940798b9847f-kube-api-access-m8vjn\") pod \"logging-loki-query-frontend-69d9546745-ds2pd\" (UID: \"b0989bb6-640e-4e54-9dc7-940798b9847f\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-ds2pd" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.915976 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/9ef5e225-61d8-4ca8-9bc1-43e583ad71be-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-785c7cc549-thnm6\" (UID: \"9ef5e225-61d8-4ca8-9bc1-43e583ad71be\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-thnm6" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.916000 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ef5e225-61d8-4ca8-9bc1-43e583ad71be-logging-loki-ca-bundle\") pod \"logging-loki-gateway-785c7cc549-thnm6\" (UID: \"9ef5e225-61d8-4ca8-9bc1-43e583ad71be\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-thnm6" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.916022 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/b0989bb6-640e-4e54-9dc7-940798b9847f-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-ds2pd\" (UID: \"b0989bb6-640e-4e54-9dc7-940798b9847f\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-ds2pd" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.916039 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0989bb6-640e-4e54-9dc7-940798b9847f-logging-loki-ca-bundle\") pod 
\"logging-loki-query-frontend-69d9546745-ds2pd\" (UID: \"b0989bb6-640e-4e54-9dc7-940798b9847f\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-ds2pd" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.919640 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0989bb6-640e-4e54-9dc7-940798b9847f-config\") pod \"logging-loki-query-frontend-69d9546745-ds2pd\" (UID: \"b0989bb6-640e-4e54-9dc7-940798b9847f\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-ds2pd" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.919725 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2vnr\" (UniqueName: \"kubernetes.io/projected/9ef5e225-61d8-4ca8-9bc1-43e583ad71be-kube-api-access-q2vnr\") pod \"logging-loki-gateway-785c7cc549-thnm6\" (UID: \"9ef5e225-61d8-4ca8-9bc1-43e583ad71be\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-thnm6" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.919770 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/9ef5e225-61d8-4ca8-9bc1-43e583ad71be-tls-secret\") pod \"logging-loki-gateway-785c7cc549-thnm6\" (UID: \"9ef5e225-61d8-4ca8-9bc1-43e583ad71be\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-thnm6" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.919802 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/9ef5e225-61d8-4ca8-9bc1-43e583ad71be-tenants\") pod \"logging-loki-gateway-785c7cc549-thnm6\" (UID: \"9ef5e225-61d8-4ca8-9bc1-43e583ad71be\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-thnm6" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.922059 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0989bb6-640e-4e54-9dc7-940798b9847f-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-ds2pd\" (UID: \"b0989bb6-640e-4e54-9dc7-940798b9847f\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-ds2pd" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.922311 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/b0989bb6-640e-4e54-9dc7-940798b9847f-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-ds2pd\" (UID: \"b0989bb6-640e-4e54-9dc7-940798b9847f\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-ds2pd" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.922496 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0989bb6-640e-4e54-9dc7-940798b9847f-config\") pod \"logging-loki-query-frontend-69d9546745-ds2pd\" (UID: \"b0989bb6-640e-4e54-9dc7-940798b9847f\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-ds2pd" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.928753 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/b0989bb6-640e-4e54-9dc7-940798b9847f-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-ds2pd\" (UID: \"b0989bb6-640e-4e54-9dc7-940798b9847f\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-ds2pd" Jan 26 15:50:02 crc kubenswrapper[4896]: I0126 15:50:02.933425 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8vjn\" (UniqueName: \"kubernetes.io/projected/b0989bb6-640e-4e54-9dc7-940798b9847f-kube-api-access-m8vjn\") pod \"logging-loki-query-frontend-69d9546745-ds2pd\" (UID: \"b0989bb6-640e-4e54-9dc7-940798b9847f\") " 
pod="openshift-logging/logging-loki-query-frontend-69d9546745-ds2pd" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.021507 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/9ef5e225-61d8-4ca8-9bc1-43e583ad71be-tls-secret\") pod \"logging-loki-gateway-785c7cc549-thnm6\" (UID: \"9ef5e225-61d8-4ca8-9bc1-43e583ad71be\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-thnm6" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.021541 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/9ef5e225-61d8-4ca8-9bc1-43e583ad71be-tenants\") pod \"logging-loki-gateway-785c7cc549-thnm6\" (UID: \"9ef5e225-61d8-4ca8-9bc1-43e583ad71be\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-thnm6" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.021565 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92bc77c6-54c0-4ab0-8abf-71fef00ec66d-logging-loki-ca-bundle\") pod \"logging-loki-gateway-785c7cc549-fc8ss\" (UID: \"92bc77c6-54c0-4ab0-8abf-71fef00ec66d\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-fc8ss" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.021619 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xg4q\" (UniqueName: \"kubernetes.io/projected/92bc77c6-54c0-4ab0-8abf-71fef00ec66d-kube-api-access-6xg4q\") pod \"logging-loki-gateway-785c7cc549-fc8ss\" (UID: \"92bc77c6-54c0-4ab0-8abf-71fef00ec66d\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-fc8ss" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.021646 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: 
\"kubernetes.io/configmap/9ef5e225-61d8-4ca8-9bc1-43e583ad71be-lokistack-gateway\") pod \"logging-loki-gateway-785c7cc549-thnm6\" (UID: \"9ef5e225-61d8-4ca8-9bc1-43e583ad71be\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-thnm6" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.021662 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/92bc77c6-54c0-4ab0-8abf-71fef00ec66d-rbac\") pod \"logging-loki-gateway-785c7cc549-fc8ss\" (UID: \"92bc77c6-54c0-4ab0-8abf-71fef00ec66d\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-fc8ss" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.021694 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/92bc77c6-54c0-4ab0-8abf-71fef00ec66d-tls-secret\") pod \"logging-loki-gateway-785c7cc549-fc8ss\" (UID: \"92bc77c6-54c0-4ab0-8abf-71fef00ec66d\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-fc8ss" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.021728 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/9ef5e225-61d8-4ca8-9bc1-43e583ad71be-rbac\") pod \"logging-loki-gateway-785c7cc549-thnm6\" (UID: \"9ef5e225-61d8-4ca8-9bc1-43e583ad71be\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-thnm6" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.021746 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/92bc77c6-54c0-4ab0-8abf-71fef00ec66d-tenants\") pod \"logging-loki-gateway-785c7cc549-fc8ss\" (UID: \"92bc77c6-54c0-4ab0-8abf-71fef00ec66d\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-fc8ss" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.021768 4896 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ef5e225-61d8-4ca8-9bc1-43e583ad71be-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-785c7cc549-thnm6\" (UID: \"9ef5e225-61d8-4ca8-9bc1-43e583ad71be\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-thnm6" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.021797 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/92bc77c6-54c0-4ab0-8abf-71fef00ec66d-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-785c7cc549-fc8ss\" (UID: \"92bc77c6-54c0-4ab0-8abf-71fef00ec66d\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-fc8ss" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.021823 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/9ef5e225-61d8-4ca8-9bc1-43e583ad71be-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-785c7cc549-thnm6\" (UID: \"9ef5e225-61d8-4ca8-9bc1-43e583ad71be\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-thnm6" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.021844 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ef5e225-61d8-4ca8-9bc1-43e583ad71be-logging-loki-ca-bundle\") pod \"logging-loki-gateway-785c7cc549-thnm6\" (UID: \"9ef5e225-61d8-4ca8-9bc1-43e583ad71be\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-thnm6" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.021880 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/92bc77c6-54c0-4ab0-8abf-71fef00ec66d-lokistack-gateway\") pod 
\"logging-loki-gateway-785c7cc549-fc8ss\" (UID: \"92bc77c6-54c0-4ab0-8abf-71fef00ec66d\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-fc8ss" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.021905 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92bc77c6-54c0-4ab0-8abf-71fef00ec66d-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-785c7cc549-fc8ss\" (UID: \"92bc77c6-54c0-4ab0-8abf-71fef00ec66d\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-fc8ss" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.021921 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2vnr\" (UniqueName: \"kubernetes.io/projected/9ef5e225-61d8-4ca8-9bc1-43e583ad71be-kube-api-access-q2vnr\") pod \"logging-loki-gateway-785c7cc549-thnm6\" (UID: \"9ef5e225-61d8-4ca8-9bc1-43e583ad71be\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-thnm6" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.037478 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/9ef5e225-61d8-4ca8-9bc1-43e583ad71be-tls-secret\") pod \"logging-loki-gateway-785c7cc549-thnm6\" (UID: \"9ef5e225-61d8-4ca8-9bc1-43e583ad71be\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-thnm6" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.042708 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/9ef5e225-61d8-4ca8-9bc1-43e583ad71be-tenants\") pod \"logging-loki-gateway-785c7cc549-thnm6\" (UID: \"9ef5e225-61d8-4ca8-9bc1-43e583ad71be\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-thnm6" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.043898 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" 
(UniqueName: \"kubernetes.io/configmap/9ef5e225-61d8-4ca8-9bc1-43e583ad71be-lokistack-gateway\") pod \"logging-loki-gateway-785c7cc549-thnm6\" (UID: \"9ef5e225-61d8-4ca8-9bc1-43e583ad71be\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-thnm6" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.044693 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/9ef5e225-61d8-4ca8-9bc1-43e583ad71be-rbac\") pod \"logging-loki-gateway-785c7cc549-thnm6\" (UID: \"9ef5e225-61d8-4ca8-9bc1-43e583ad71be\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-thnm6" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.045476 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ef5e225-61d8-4ca8-9bc1-43e583ad71be-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-785c7cc549-thnm6\" (UID: \"9ef5e225-61d8-4ca8-9bc1-43e583ad71be\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-thnm6" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.049923 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/9ef5e225-61d8-4ca8-9bc1-43e583ad71be-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-785c7cc549-thnm6\" (UID: \"9ef5e225-61d8-4ca8-9bc1-43e583ad71be\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-thnm6" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.050823 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ef5e225-61d8-4ca8-9bc1-43e583ad71be-logging-loki-ca-bundle\") pod \"logging-loki-gateway-785c7cc549-thnm6\" (UID: \"9ef5e225-61d8-4ca8-9bc1-43e583ad71be\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-thnm6" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 
15:50:03.054633 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2vnr\" (UniqueName: \"kubernetes.io/projected/9ef5e225-61d8-4ca8-9bc1-43e583ad71be-kube-api-access-q2vnr\") pod \"logging-loki-gateway-785c7cc549-thnm6\" (UID: \"9ef5e225-61d8-4ca8-9bc1-43e583ad71be\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-thnm6" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.094187 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-69d9546745-ds2pd" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.123055 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/92bc77c6-54c0-4ab0-8abf-71fef00ec66d-lokistack-gateway\") pod \"logging-loki-gateway-785c7cc549-fc8ss\" (UID: \"92bc77c6-54c0-4ab0-8abf-71fef00ec66d\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-fc8ss" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.123134 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92bc77c6-54c0-4ab0-8abf-71fef00ec66d-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-785c7cc549-fc8ss\" (UID: \"92bc77c6-54c0-4ab0-8abf-71fef00ec66d\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-fc8ss" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.123191 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92bc77c6-54c0-4ab0-8abf-71fef00ec66d-logging-loki-ca-bundle\") pod \"logging-loki-gateway-785c7cc549-fc8ss\" (UID: \"92bc77c6-54c0-4ab0-8abf-71fef00ec66d\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-fc8ss" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.123780 4896 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6xg4q\" (UniqueName: \"kubernetes.io/projected/92bc77c6-54c0-4ab0-8abf-71fef00ec66d-kube-api-access-6xg4q\") pod \"logging-loki-gateway-785c7cc549-fc8ss\" (UID: \"92bc77c6-54c0-4ab0-8abf-71fef00ec66d\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-fc8ss" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.123833 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/92bc77c6-54c0-4ab0-8abf-71fef00ec66d-rbac\") pod \"logging-loki-gateway-785c7cc549-fc8ss\" (UID: \"92bc77c6-54c0-4ab0-8abf-71fef00ec66d\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-fc8ss" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.123882 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/92bc77c6-54c0-4ab0-8abf-71fef00ec66d-tls-secret\") pod \"logging-loki-gateway-785c7cc549-fc8ss\" (UID: \"92bc77c6-54c0-4ab0-8abf-71fef00ec66d\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-fc8ss" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.123924 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/92bc77c6-54c0-4ab0-8abf-71fef00ec66d-tenants\") pod \"logging-loki-gateway-785c7cc549-fc8ss\" (UID: \"92bc77c6-54c0-4ab0-8abf-71fef00ec66d\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-fc8ss" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.123967 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/92bc77c6-54c0-4ab0-8abf-71fef00ec66d-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-785c7cc549-fc8ss\" (UID: \"92bc77c6-54c0-4ab0-8abf-71fef00ec66d\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-fc8ss" Jan 26 15:50:03 crc 
kubenswrapper[4896]: I0126 15:50:03.128552 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/92bc77c6-54c0-4ab0-8abf-71fef00ec66d-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-785c7cc549-fc8ss\" (UID: \"92bc77c6-54c0-4ab0-8abf-71fef00ec66d\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-fc8ss" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.130005 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/92bc77c6-54c0-4ab0-8abf-71fef00ec66d-lokistack-gateway\") pod \"logging-loki-gateway-785c7cc549-fc8ss\" (UID: \"92bc77c6-54c0-4ab0-8abf-71fef00ec66d\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-fc8ss" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.131603 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92bc77c6-54c0-4ab0-8abf-71fef00ec66d-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-785c7cc549-fc8ss\" (UID: \"92bc77c6-54c0-4ab0-8abf-71fef00ec66d\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-fc8ss" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.132321 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92bc77c6-54c0-4ab0-8abf-71fef00ec66d-logging-loki-ca-bundle\") pod \"logging-loki-gateway-785c7cc549-fc8ss\" (UID: \"92bc77c6-54c0-4ab0-8abf-71fef00ec66d\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-fc8ss" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.133817 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/92bc77c6-54c0-4ab0-8abf-71fef00ec66d-rbac\") pod \"logging-loki-gateway-785c7cc549-fc8ss\" (UID: 
\"92bc77c6-54c0-4ab0-8abf-71fef00ec66d\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-fc8ss" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.137706 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/92bc77c6-54c0-4ab0-8abf-71fef00ec66d-tls-secret\") pod \"logging-loki-gateway-785c7cc549-fc8ss\" (UID: \"92bc77c6-54c0-4ab0-8abf-71fef00ec66d\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-fc8ss" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.143206 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/92bc77c6-54c0-4ab0-8abf-71fef00ec66d-tenants\") pod \"logging-loki-gateway-785c7cc549-fc8ss\" (UID: \"92bc77c6-54c0-4ab0-8abf-71fef00ec66d\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-fc8ss" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.149643 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xg4q\" (UniqueName: \"kubernetes.io/projected/92bc77c6-54c0-4ab0-8abf-71fef00ec66d-kube-api-access-6xg4q\") pod \"logging-loki-gateway-785c7cc549-fc8ss\" (UID: \"92bc77c6-54c0-4ab0-8abf-71fef00ec66d\") " pod="openshift-logging/logging-loki-gateway-785c7cc549-fc8ss" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.197465 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-785c7cc549-thnm6" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.220506 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-785c7cc549-fc8ss" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.547126 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-wxx4s"] Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.577503 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.585190 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.596308 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.596645 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.604902 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Jan 26 15:50:03 crc kubenswrapper[4896]: W0126 15:50:03.606711 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod790beb3d_3eed_4fef_849d_84a13c17f4a7.slice/crio-2f2f300a217f65c1ed0c1cace4c31bb19684215b952a523b4bafb81f2a2f4dd2 WatchSource:0}: Error finding container 2f2f300a217f65c1ed0c1cace4c31bb19684215b952a523b4bafb81f2a2f4dd2: Status 404 returned error can't find the container with id 2f2f300a217f65c1ed0c1cace4c31bb19684215b952a523b4bafb81f2a2f4dd2 Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.628785 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7543c543-6d8e-426c-b60c-873ef9c222b4\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7543c543-6d8e-426c-b60c-873ef9c222b4\") pod \"logging-loki-ingester-0\" (UID: \"5002bb81-4c92-43b5-93a3-e0986702b713\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.738087 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxjmh\" (UniqueName: \"kubernetes.io/projected/5002bb81-4c92-43b5-93a3-e0986702b713-kube-api-access-mxjmh\") pod \"logging-loki-ingester-0\" (UID: \"5002bb81-4c92-43b5-93a3-e0986702b713\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.738206 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/5002bb81-4c92-43b5-93a3-e0986702b713-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"5002bb81-4c92-43b5-93a3-e0986702b713\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.738373 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/5002bb81-4c92-43b5-93a3-e0986702b713-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"5002bb81-4c92-43b5-93a3-e0986702b713\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.739142 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.740419 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.740606 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5002bb81-4c92-43b5-93a3-e0986702b713-config\") pod \"logging-loki-ingester-0\" (UID: \"5002bb81-4c92-43b5-93a3-e0986702b713\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.740657 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5002bb81-4c92-43b5-93a3-e0986702b713-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"5002bb81-4c92-43b5-93a3-e0986702b713\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.740700 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3a4e7a1b-e72a-45a8-ae09-4d6263e9b29a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3a4e7a1b-e72a-45a8-ae09-4d6263e9b29a\") pod \"logging-loki-ingester-0\" (UID: \"5002bb81-4c92-43b5-93a3-e0986702b713\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.740732 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/5002bb81-4c92-43b5-93a3-e0986702b713-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"5002bb81-4c92-43b5-93a3-e0986702b713\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.740829 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7543c543-6d8e-426c-b60c-873ef9c222b4\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7543c543-6d8e-426c-b60c-873ef9c222b4\") pod \"logging-loki-ingester-0\" (UID: \"5002bb81-4c92-43b5-93a3-e0986702b713\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.749474 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.750228 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.750943 4896 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.750982 4896 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7543c543-6d8e-426c-b60c-873ef9c222b4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7543c543-6d8e-426c-b60c-873ef9c222b4\") pod \"logging-loki-ingester-0\" (UID: \"5002bb81-4c92-43b5-93a3-e0986702b713\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a409475107a540bc7c672a23db2becba516948b0cb5fe0c48892d77cb1286244/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.751422 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.792658 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.794083 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.810311 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.810535 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.842923 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/5002bb81-4c92-43b5-93a3-e0986702b713-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"5002bb81-4c92-43b5-93a3-e0986702b713\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.842980 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/94b020cc-3ced-46e2-89c9-1294e89989da-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"94b020cc-3ced-46e2-89c9-1294e89989da\") " pod="openshift-logging/logging-loki-compactor-0" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.843016 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/94b020cc-3ced-46e2-89c9-1294e89989da-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"94b020cc-3ced-46e2-89c9-1294e89989da\") " pod="openshift-logging/logging-loki-compactor-0" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.843050 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e0645b0f-75d1-4a0f-b83e-1d09a112fc35\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e0645b0f-75d1-4a0f-b83e-1d09a112fc35\") pod 
\"logging-loki-compactor-0\" (UID: \"94b020cc-3ced-46e2-89c9-1294e89989da\") " pod="openshift-logging/logging-loki-compactor-0" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.843100 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5002bb81-4c92-43b5-93a3-e0986702b713-config\") pod \"logging-loki-ingester-0\" (UID: \"5002bb81-4c92-43b5-93a3-e0986702b713\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.843136 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5002bb81-4c92-43b5-93a3-e0986702b713-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"5002bb81-4c92-43b5-93a3-e0986702b713\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.843173 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3a4e7a1b-e72a-45a8-ae09-4d6263e9b29a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3a4e7a1b-e72a-45a8-ae09-4d6263e9b29a\") pod \"logging-loki-ingester-0\" (UID: \"5002bb81-4c92-43b5-93a3-e0986702b713\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.843205 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/5002bb81-4c92-43b5-93a3-e0986702b713-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"5002bb81-4c92-43b5-93a3-e0986702b713\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.843274 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/94b020cc-3ced-46e2-89c9-1294e89989da-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"94b020cc-3ced-46e2-89c9-1294e89989da\") " pod="openshift-logging/logging-loki-compactor-0" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.843300 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxjmh\" (UniqueName: \"kubernetes.io/projected/5002bb81-4c92-43b5-93a3-e0986702b713-kube-api-access-mxjmh\") pod \"logging-loki-ingester-0\" (UID: \"5002bb81-4c92-43b5-93a3-e0986702b713\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.843332 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/94b020cc-3ced-46e2-89c9-1294e89989da-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"94b020cc-3ced-46e2-89c9-1294e89989da\") " pod="openshift-logging/logging-loki-compactor-0" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.843359 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5m96\" (UniqueName: \"kubernetes.io/projected/94b020cc-3ced-46e2-89c9-1294e89989da-kube-api-access-n5m96\") pod \"logging-loki-compactor-0\" (UID: \"94b020cc-3ced-46e2-89c9-1294e89989da\") " pod="openshift-logging/logging-loki-compactor-0" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.843412 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94b020cc-3ced-46e2-89c9-1294e89989da-config\") pod \"logging-loki-compactor-0\" (UID: \"94b020cc-3ced-46e2-89c9-1294e89989da\") " pod="openshift-logging/logging-loki-compactor-0" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.843450 4896 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/5002bb81-4c92-43b5-93a3-e0986702b713-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"5002bb81-4c92-43b5-93a3-e0986702b713\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.845944 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5002bb81-4c92-43b5-93a3-e0986702b713-config\") pod \"logging-loki-ingester-0\" (UID: \"5002bb81-4c92-43b5-93a3-e0986702b713\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 15:50:03 crc kubenswrapper[4896]: I0126 15:50:03.847721 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5002bb81-4c92-43b5-93a3-e0986702b713-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"5002bb81-4c92-43b5-93a3-e0986702b713\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.016383 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94b020cc-3ced-46e2-89c9-1294e89989da-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"94b020cc-3ced-46e2-89c9-1294e89989da\") " pod="openshift-logging/logging-loki-compactor-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.016485 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/94b020cc-3ced-46e2-89c9-1294e89989da-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"94b020cc-3ced-46e2-89c9-1294e89989da\") " pod="openshift-logging/logging-loki-compactor-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.016525 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-n5m96\" (UniqueName: \"kubernetes.io/projected/94b020cc-3ced-46e2-89c9-1294e89989da-kube-api-access-n5m96\") pod \"logging-loki-compactor-0\" (UID: \"94b020cc-3ced-46e2-89c9-1294e89989da\") " pod="openshift-logging/logging-loki-compactor-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.016688 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/5002bb81-4c92-43b5-93a3-e0986702b713-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"5002bb81-4c92-43b5-93a3-e0986702b713\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.018122 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/5002bb81-4c92-43b5-93a3-e0986702b713-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"5002bb81-4c92-43b5-93a3-e0986702b713\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.018149 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxjmh\" (UniqueName: \"kubernetes.io/projected/5002bb81-4c92-43b5-93a3-e0986702b713-kube-api-access-mxjmh\") pod \"logging-loki-ingester-0\" (UID: \"5002bb81-4c92-43b5-93a3-e0986702b713\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.019258 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.019290 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-wxx4s" event={"ID":"790beb3d-3eed-4fef-849d-84a13c17f4a7","Type":"ContainerStarted","Data":"2f2f300a217f65c1ed0c1cace4c31bb19684215b952a523b4bafb81f2a2f4dd2"} Jan 26 15:50:04 crc 
kubenswrapper[4896]: I0126 15:50:04.020958 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94b020cc-3ced-46e2-89c9-1294e89989da-config\") pod \"logging-loki-compactor-0\" (UID: \"94b020cc-3ced-46e2-89c9-1294e89989da\") " pod="openshift-logging/logging-loki-compactor-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.021141 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/94b020cc-3ced-46e2-89c9-1294e89989da-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"94b020cc-3ced-46e2-89c9-1294e89989da\") " pod="openshift-logging/logging-loki-compactor-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.021290 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/94b020cc-3ced-46e2-89c9-1294e89989da-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"94b020cc-3ced-46e2-89c9-1294e89989da\") " pod="openshift-logging/logging-loki-compactor-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.021333 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e0645b0f-75d1-4a0f-b83e-1d09a112fc35\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e0645b0f-75d1-4a0f-b83e-1d09a112fc35\") pod \"logging-loki-compactor-0\" (UID: \"94b020cc-3ced-46e2-89c9-1294e89989da\") " pod="openshift-logging/logging-loki-compactor-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.022088 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94b020cc-3ced-46e2-89c9-1294e89989da-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"94b020cc-3ced-46e2-89c9-1294e89989da\") " pod="openshift-logging/logging-loki-compactor-0" Jan 26 15:50:04 
crc kubenswrapper[4896]: I0126 15:50:04.027990 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/5002bb81-4c92-43b5-93a3-e0986702b713-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"5002bb81-4c92-43b5-93a3-e0986702b713\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.028243 4896 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.028270 4896 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e0645b0f-75d1-4a0f-b83e-1d09a112fc35\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e0645b0f-75d1-4a0f-b83e-1d09a112fc35\") pod \"logging-loki-compactor-0\" (UID: \"94b020cc-3ced-46e2-89c9-1294e89989da\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1e1e23ed0e2d4bf4e3f2be2da8aee09ec0cb2e100879d94429e21e8aab774e1a/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.031024 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/94b020cc-3ced-46e2-89c9-1294e89989da-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"94b020cc-3ced-46e2-89c9-1294e89989da\") " pod="openshift-logging/logging-loki-compactor-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.032438 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7543c543-6d8e-426c-b60c-873ef9c222b4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7543c543-6d8e-426c-b60c-873ef9c222b4\") pod \"logging-loki-ingester-0\" (UID: \"5002bb81-4c92-43b5-93a3-e0986702b713\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 15:50:04 
crc kubenswrapper[4896]: I0126 15:50:04.034748 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/94b020cc-3ced-46e2-89c9-1294e89989da-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"94b020cc-3ced-46e2-89c9-1294e89989da\") " pod="openshift-logging/logging-loki-compactor-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.052796 4896 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.052885 4896 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3a4e7a1b-e72a-45a8-ae09-4d6263e9b29a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3a4e7a1b-e72a-45a8-ae09-4d6263e9b29a\") pod \"logging-loki-ingester-0\" (UID: \"5002bb81-4c92-43b5-93a3-e0986702b713\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9ccaea495117c7f931aff57c693c9bda6ce64ecc1dda50a3f0382df66870630a/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.053725 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94b020cc-3ced-46e2-89c9-1294e89989da-config\") pod \"logging-loki-compactor-0\" (UID: \"94b020cc-3ced-46e2-89c9-1294e89989da\") " pod="openshift-logging/logging-loki-compactor-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.069283 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/94b020cc-3ced-46e2-89c9-1294e89989da-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"94b020cc-3ced-46e2-89c9-1294e89989da\") " pod="openshift-logging/logging-loki-compactor-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.076491 4896 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5m96\" (UniqueName: \"kubernetes.io/projected/94b020cc-3ced-46e2-89c9-1294e89989da-kube-api-access-n5m96\") pod \"logging-loki-compactor-0\" (UID: \"94b020cc-3ced-46e2-89c9-1294e89989da\") " pod="openshift-logging/logging-loki-compactor-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.129529 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7efe8082-4b9b-49e6-a79c-0ca2e0f5bc24-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"7efe8082-4b9b-49e6-a79c-0ca2e0f5bc24\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.129846 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7efe8082-4b9b-49e6-a79c-0ca2e0f5bc24-config\") pod \"logging-loki-index-gateway-0\" (UID: \"7efe8082-4b9b-49e6-a79c-0ca2e0f5bc24\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.229521 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e0645b0f-75d1-4a0f-b83e-1d09a112fc35\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e0645b0f-75d1-4a0f-b83e-1d09a112fc35\") pod \"logging-loki-compactor-0\" (UID: \"94b020cc-3ced-46e2-89c9-1294e89989da\") " pod="openshift-logging/logging-loki-compactor-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.231105 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/7efe8082-4b9b-49e6-a79c-0ca2e0f5bc24-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"7efe8082-4b9b-49e6-a79c-0ca2e0f5bc24\") " 
pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.231133 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/7efe8082-4b9b-49e6-a79c-0ca2e0f5bc24-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"7efe8082-4b9b-49e6-a79c-0ca2e0f5bc24\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.232398 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5ba318b7-8ed5-448b-aa19-2f3b9e593fb6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5ba318b7-8ed5-448b-aa19-2f3b9e593fb6\") pod \"logging-loki-index-gateway-0\" (UID: \"7efe8082-4b9b-49e6-a79c-0ca2e0f5bc24\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.233057 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3a4e7a1b-e72a-45a8-ae09-4d6263e9b29a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3a4e7a1b-e72a-45a8-ae09-4d6263e9b29a\") pod \"logging-loki-ingester-0\" (UID: \"5002bb81-4c92-43b5-93a3-e0986702b713\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.240968 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wlxp\" (UniqueName: \"kubernetes.io/projected/7efe8082-4b9b-49e6-a79c-0ca2e0f5bc24-kube-api-access-8wlxp\") pod \"logging-loki-index-gateway-0\" (UID: \"7efe8082-4b9b-49e6-a79c-0ca2e0f5bc24\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.241048 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: 
\"kubernetes.io/secret/7efe8082-4b9b-49e6-a79c-0ca2e0f5bc24-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"7efe8082-4b9b-49e6-a79c-0ca2e0f5bc24\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.257278 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-785c7cc549-fc8ss"] Jan 26 15:50:04 crc kubenswrapper[4896]: W0126 15:50:04.261656 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92bc77c6_54c0_4ab0_8abf_71fef00ec66d.slice/crio-65e838ab2c7f8d188bcd7538a775d6661478678edab3c62be9e7e768fb5d4a15 WatchSource:0}: Error finding container 65e838ab2c7f8d188bcd7538a775d6661478678edab3c62be9e7e768fb5d4a15: Status 404 returned error can't find the container with id 65e838ab2c7f8d188bcd7538a775d6661478678edab3c62be9e7e768fb5d4a15 Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.306932 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-ds2pd"] Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.344122 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5ba318b7-8ed5-448b-aa19-2f3b9e593fb6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5ba318b7-8ed5-448b-aa19-2f3b9e593fb6\") pod \"logging-loki-index-gateway-0\" (UID: \"7efe8082-4b9b-49e6-a79c-0ca2e0f5bc24\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.344221 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wlxp\" (UniqueName: \"kubernetes.io/projected/7efe8082-4b9b-49e6-a79c-0ca2e0f5bc24-kube-api-access-8wlxp\") pod \"logging-loki-index-gateway-0\" (UID: \"7efe8082-4b9b-49e6-a79c-0ca2e0f5bc24\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 
26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.344261 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/7efe8082-4b9b-49e6-a79c-0ca2e0f5bc24-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"7efe8082-4b9b-49e6-a79c-0ca2e0f5bc24\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.345072 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7efe8082-4b9b-49e6-a79c-0ca2e0f5bc24-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"7efe8082-4b9b-49e6-a79c-0ca2e0f5bc24\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.345117 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7efe8082-4b9b-49e6-a79c-0ca2e0f5bc24-config\") pod \"logging-loki-index-gateway-0\" (UID: \"7efe8082-4b9b-49e6-a79c-0ca2e0f5bc24\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.345204 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/7efe8082-4b9b-49e6-a79c-0ca2e0f5bc24-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"7efe8082-4b9b-49e6-a79c-0ca2e0f5bc24\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.345251 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/7efe8082-4b9b-49e6-a79c-0ca2e0f5bc24-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"7efe8082-4b9b-49e6-a79c-0ca2e0f5bc24\") " 
pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.345883 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7efe8082-4b9b-49e6-a79c-0ca2e0f5bc24-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"7efe8082-4b9b-49e6-a79c-0ca2e0f5bc24\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.346451 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7efe8082-4b9b-49e6-a79c-0ca2e0f5bc24-config\") pod \"logging-loki-index-gateway-0\" (UID: \"7efe8082-4b9b-49e6-a79c-0ca2e0f5bc24\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.348195 4896 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.348236 4896 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5ba318b7-8ed5-448b-aa19-2f3b9e593fb6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5ba318b7-8ed5-448b-aa19-2f3b9e593fb6\") pod \"logging-loki-index-gateway-0\" (UID: \"7efe8082-4b9b-49e6-a79c-0ca2e0f5bc24\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d735f5351fba00195eba9bc936e08b5136d75e5170b8fa2b126e25e7d75d3116/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.348259 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/7efe8082-4b9b-49e6-a79c-0ca2e0f5bc24-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"7efe8082-4b9b-49e6-a79c-0ca2e0f5bc24\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.348526 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/7efe8082-4b9b-49e6-a79c-0ca2e0f5bc24-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"7efe8082-4b9b-49e6-a79c-0ca2e0f5bc24\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.351231 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/7efe8082-4b9b-49e6-a79c-0ca2e0f5bc24-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"7efe8082-4b9b-49e6-a79c-0ca2e0f5bc24\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.362989 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8wlxp\" (UniqueName: \"kubernetes.io/projected/7efe8082-4b9b-49e6-a79c-0ca2e0f5bc24-kube-api-access-8wlxp\") pod \"logging-loki-index-gateway-0\" (UID: \"7efe8082-4b9b-49e6-a79c-0ca2e0f5bc24\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.378690 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5ba318b7-8ed5-448b-aa19-2f3b9e593fb6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5ba318b7-8ed5-448b-aa19-2f3b9e593fb6\") pod \"logging-loki-index-gateway-0\" (UID: \"7efe8082-4b9b-49e6-a79c-0ca2e0f5bc24\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.412316 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.532166 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.575240 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-785c7cc549-thnm6"] Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.586804 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-lxv2v"] Jan 26 15:50:04 crc kubenswrapper[4896]: W0126 15:50:04.603795 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39d4db55_bf77_4948_a36b_4e0d4bf056e8.slice/crio-d51193e39252074de10575bafbc3e1be3c5a8415abc34755df5c5a7e3eac1bd9 WatchSource:0}: Error finding container d51193e39252074de10575bafbc3e1be3c5a8415abc34755df5c5a7e3eac1bd9: Status 404 returned error can't find the container with id d51193e39252074de10575bafbc3e1be3c5a8415abc34755df5c5a7e3eac1bd9 Jan 26 15:50:04 crc 
kubenswrapper[4896]: I0126 15:50:04.617557 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.881952 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Jan 26 15:50:04 crc kubenswrapper[4896]: I0126 15:50:04.925769 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Jan 26 15:50:04 crc kubenswrapper[4896]: W0126 15:50:04.928648 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7efe8082_4b9b_49e6_a79c_0ca2e0f5bc24.slice/crio-b766059b8bacc44904e9d8ca12a5373ddcaca51542aa88b25c4c77d3d239d4b4 WatchSource:0}: Error finding container b766059b8bacc44904e9d8ca12a5373ddcaca51542aa88b25c4c77d3d239d4b4: Status 404 returned error can't find the container with id b766059b8bacc44904e9d8ca12a5373ddcaca51542aa88b25c4c77d3d239d4b4 Jan 26 15:50:05 crc kubenswrapper[4896]: I0126 15:50:05.028032 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-69d9546745-ds2pd" event={"ID":"b0989bb6-640e-4e54-9dc7-940798b9847f","Type":"ContainerStarted","Data":"e7f35d7c9a47d063fb5a050be917415897945137497713359f669a75b2b401c5"} Jan 26 15:50:05 crc kubenswrapper[4896]: I0126 15:50:05.028550 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Jan 26 15:50:05 crc kubenswrapper[4896]: I0126 15:50:05.030361 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-785c7cc549-fc8ss" event={"ID":"92bc77c6-54c0-4ab0-8abf-71fef00ec66d","Type":"ContainerStarted","Data":"65e838ab2c7f8d188bcd7538a775d6661478678edab3c62be9e7e768fb5d4a15"} Jan 26 15:50:05 crc kubenswrapper[4896]: I0126 15:50:05.031470 4896 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-logging/logging-loki-querier-76788598db-lxv2v" event={"ID":"39d4db55-bf77-4948-a36b-4e0d4bf056e8","Type":"ContainerStarted","Data":"d51193e39252074de10575bafbc3e1be3c5a8415abc34755df5c5a7e3eac1bd9"} Jan 26 15:50:05 crc kubenswrapper[4896]: I0126 15:50:05.033282 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"7efe8082-4b9b-49e6-a79c-0ca2e0f5bc24","Type":"ContainerStarted","Data":"b766059b8bacc44904e9d8ca12a5373ddcaca51542aa88b25c4c77d3d239d4b4"} Jan 26 15:50:05 crc kubenswrapper[4896]: I0126 15:50:05.034500 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-785c7cc549-thnm6" event={"ID":"9ef5e225-61d8-4ca8-9bc1-43e583ad71be","Type":"ContainerStarted","Data":"426d3289859e5bb7575dc68b1223b2090fc7ba902f0f6c7e2bd35b163e9b086b"} Jan 26 15:50:05 crc kubenswrapper[4896]: I0126 15:50:05.036605 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"94b020cc-3ced-46e2-89c9-1294e89989da","Type":"ContainerStarted","Data":"39638b499dc3ca5a908f65226818dfe6bb450c707d1211377274c63e8fb52e57"} Jan 26 15:50:05 crc kubenswrapper[4896]: I0126 15:50:05.363487 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tpsp2"] Jan 26 15:50:05 crc kubenswrapper[4896]: I0126 15:50:05.368305 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tpsp2" Jan 26 15:50:05 crc kubenswrapper[4896]: I0126 15:50:05.373238 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tpsp2"] Jan 26 15:50:05 crc kubenswrapper[4896]: I0126 15:50:05.472203 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aab6f894-d91f-4200-9b58-3c0ddb9ca85b-utilities\") pod \"certified-operators-tpsp2\" (UID: \"aab6f894-d91f-4200-9b58-3c0ddb9ca85b\") " pod="openshift-marketplace/certified-operators-tpsp2" Jan 26 15:50:05 crc kubenswrapper[4896]: I0126 15:50:05.472258 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6rnf\" (UniqueName: \"kubernetes.io/projected/aab6f894-d91f-4200-9b58-3c0ddb9ca85b-kube-api-access-m6rnf\") pod \"certified-operators-tpsp2\" (UID: \"aab6f894-d91f-4200-9b58-3c0ddb9ca85b\") " pod="openshift-marketplace/certified-operators-tpsp2" Jan 26 15:50:05 crc kubenswrapper[4896]: I0126 15:50:05.472307 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aab6f894-d91f-4200-9b58-3c0ddb9ca85b-catalog-content\") pod \"certified-operators-tpsp2\" (UID: \"aab6f894-d91f-4200-9b58-3c0ddb9ca85b\") " pod="openshift-marketplace/certified-operators-tpsp2" Jan 26 15:50:05 crc kubenswrapper[4896]: I0126 15:50:05.587063 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aab6f894-d91f-4200-9b58-3c0ddb9ca85b-utilities\") pod \"certified-operators-tpsp2\" (UID: \"aab6f894-d91f-4200-9b58-3c0ddb9ca85b\") " pod="openshift-marketplace/certified-operators-tpsp2" Jan 26 15:50:05 crc kubenswrapper[4896]: I0126 15:50:05.587136 4896 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-m6rnf\" (UniqueName: \"kubernetes.io/projected/aab6f894-d91f-4200-9b58-3c0ddb9ca85b-kube-api-access-m6rnf\") pod \"certified-operators-tpsp2\" (UID: \"aab6f894-d91f-4200-9b58-3c0ddb9ca85b\") " pod="openshift-marketplace/certified-operators-tpsp2" Jan 26 15:50:05 crc kubenswrapper[4896]: I0126 15:50:05.587184 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aab6f894-d91f-4200-9b58-3c0ddb9ca85b-catalog-content\") pod \"certified-operators-tpsp2\" (UID: \"aab6f894-d91f-4200-9b58-3c0ddb9ca85b\") " pod="openshift-marketplace/certified-operators-tpsp2" Jan 26 15:50:05 crc kubenswrapper[4896]: I0126 15:50:05.587752 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aab6f894-d91f-4200-9b58-3c0ddb9ca85b-utilities\") pod \"certified-operators-tpsp2\" (UID: \"aab6f894-d91f-4200-9b58-3c0ddb9ca85b\") " pod="openshift-marketplace/certified-operators-tpsp2" Jan 26 15:50:05 crc kubenswrapper[4896]: I0126 15:50:05.588508 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aab6f894-d91f-4200-9b58-3c0ddb9ca85b-catalog-content\") pod \"certified-operators-tpsp2\" (UID: \"aab6f894-d91f-4200-9b58-3c0ddb9ca85b\") " pod="openshift-marketplace/certified-operators-tpsp2" Jan 26 15:50:05 crc kubenswrapper[4896]: I0126 15:50:05.615162 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6rnf\" (UniqueName: \"kubernetes.io/projected/aab6f894-d91f-4200-9b58-3c0ddb9ca85b-kube-api-access-m6rnf\") pod \"certified-operators-tpsp2\" (UID: \"aab6f894-d91f-4200-9b58-3c0ddb9ca85b\") " pod="openshift-marketplace/certified-operators-tpsp2" Jan 26 15:50:05 crc kubenswrapper[4896]: I0126 15:50:05.700556 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tpsp2" Jan 26 15:50:06 crc kubenswrapper[4896]: I0126 15:50:06.113572 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"5002bb81-4c92-43b5-93a3-e0986702b713","Type":"ContainerStarted","Data":"1e07c3d86f01e5c060707e7f5c0ae2c045296b5d29bf56c2d6004a42a11a61ce"} Jan 26 15:50:06 crc kubenswrapper[4896]: I0126 15:50:06.235893 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tpsp2"] Jan 26 15:50:07 crc kubenswrapper[4896]: I0126 15:50:07.121715 4896 generic.go:334] "Generic (PLEG): container finished" podID="aab6f894-d91f-4200-9b58-3c0ddb9ca85b" containerID="f02de049d962d7464101b3878d2ace83527d8da5120d451a0c0c78e1a0e22f3d" exitCode=0 Jan 26 15:50:07 crc kubenswrapper[4896]: I0126 15:50:07.122004 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tpsp2" event={"ID":"aab6f894-d91f-4200-9b58-3c0ddb9ca85b","Type":"ContainerDied","Data":"f02de049d962d7464101b3878d2ace83527d8da5120d451a0c0c78e1a0e22f3d"} Jan 26 15:50:07 crc kubenswrapper[4896]: I0126 15:50:07.122032 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tpsp2" event={"ID":"aab6f894-d91f-4200-9b58-3c0ddb9ca85b","Type":"ContainerStarted","Data":"e8d973697909ef7be40d4d76cc26dfff5e791c978aabf285d1bd58b5f3299c95"} Jan 26 15:50:12 crc kubenswrapper[4896]: I0126 15:50:12.291421 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-785c7cc549-thnm6" event={"ID":"9ef5e225-61d8-4ca8-9bc1-43e583ad71be","Type":"ContainerStarted","Data":"053eab92e01ac83b99e6c2fdf65a6fe28bf002c3ce50cf9fbbba5745b3e7b968"} Jan 26 15:50:12 crc kubenswrapper[4896]: I0126 15:50:12.294325 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" 
event={"ID":"5002bb81-4c92-43b5-93a3-e0986702b713","Type":"ContainerStarted","Data":"93469271a0ec78d161aebf8af900caef444a9393f9c8e8a58c1cc4cad437dec8"} Jan 26 15:50:12 crc kubenswrapper[4896]: I0126 15:50:12.295423 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Jan 26 15:50:12 crc kubenswrapper[4896]: I0126 15:50:12.300153 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-wxx4s" event={"ID":"790beb3d-3eed-4fef-849d-84a13c17f4a7","Type":"ContainerStarted","Data":"6af6056bbe1979c9fb76162c3f2ee9d797afaa7dc2be09a7faef296dee81ddba"} Jan 26 15:50:12 crc kubenswrapper[4896]: I0126 15:50:12.300950 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-wxx4s" Jan 26 15:50:12 crc kubenswrapper[4896]: I0126 15:50:12.302795 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-69d9546745-ds2pd" event={"ID":"b0989bb6-640e-4e54-9dc7-940798b9847f","Type":"ContainerStarted","Data":"ee6e7889ddcdaf40bc9e140f49207b84710c054e724c34c46d9799c209388f6b"} Jan 26 15:50:12 crc kubenswrapper[4896]: I0126 15:50:12.303247 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-69d9546745-ds2pd" Jan 26 15:50:12 crc kubenswrapper[4896]: I0126 15:50:12.304602 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-785c7cc549-fc8ss" event={"ID":"92bc77c6-54c0-4ab0-8abf-71fef00ec66d","Type":"ContainerStarted","Data":"8b1eb1df6a6bf932f436495fb4f02e0519f8c0aafad679342e2855cfebf57713"} Jan 26 15:50:12 crc kubenswrapper[4896]: I0126 15:50:12.305965 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" 
event={"ID":"7efe8082-4b9b-49e6-a79c-0ca2e0f5bc24","Type":"ContainerStarted","Data":"f6171acfd4fbca36c3636a895064c6f11f490e498aca746a2584d26884aca7ad"} Jan 26 15:50:12 crc kubenswrapper[4896]: I0126 15:50:12.306501 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 15:50:12 crc kubenswrapper[4896]: I0126 15:50:12.307804 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"94b020cc-3ced-46e2-89c9-1294e89989da","Type":"ContainerStarted","Data":"ffb58515317641eb33075048d6f42570b63dd1009cd60b8eae7c456f81b15734"} Jan 26 15:50:12 crc kubenswrapper[4896]: I0126 15:50:12.308277 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Jan 26 15:50:12 crc kubenswrapper[4896]: I0126 15:50:12.311225 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tpsp2" event={"ID":"aab6f894-d91f-4200-9b58-3c0ddb9ca85b","Type":"ContainerStarted","Data":"99b563cd9dc85ea14f768a52133222982ae720aa6c938f4ec5c8d068f351480d"} Jan 26 15:50:12 crc kubenswrapper[4896]: I0126 15:50:12.320110 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=3.62194597 podStartE2EDuration="10.320091111s" podCreationTimestamp="2026-01-26 15:50:02 +0000 UTC" firstStartedPulling="2026-01-26 15:50:05.056942872 +0000 UTC m=+962.838823265" lastFinishedPulling="2026-01-26 15:50:11.755088013 +0000 UTC m=+969.536968406" observedRunningTime="2026-01-26 15:50:12.313758329 +0000 UTC m=+970.095638722" watchObservedRunningTime="2026-01-26 15:50:12.320091111 +0000 UTC m=+970.101971494" Jan 26 15:50:12 crc kubenswrapper[4896]: I0126 15:50:12.323544 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76788598db-lxv2v" 
event={"ID":"39d4db55-bf77-4948-a36b-4e0d4bf056e8","Type":"ContainerStarted","Data":"8560d910bc4a8c2be89aad28bf2cb9d9e6f2c47ca9967e6ab2a1d5d4a35b7aee"} Jan 26 15:50:12 crc kubenswrapper[4896]: I0126 15:50:12.323767 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76788598db-lxv2v" Jan 26 15:50:12 crc kubenswrapper[4896]: I0126 15:50:12.356319 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-69d9546745-ds2pd" podStartSLOduration=2.895442403 podStartE2EDuration="10.356296276s" podCreationTimestamp="2026-01-26 15:50:02 +0000 UTC" firstStartedPulling="2026-01-26 15:50:04.312934521 +0000 UTC m=+962.094814914" lastFinishedPulling="2026-01-26 15:50:11.773788394 +0000 UTC m=+969.555668787" observedRunningTime="2026-01-26 15:50:12.337738608 +0000 UTC m=+970.119619001" watchObservedRunningTime="2026-01-26 15:50:12.356296276 +0000 UTC m=+970.138176669" Jan 26 15:50:12 crc kubenswrapper[4896]: I0126 15:50:12.360605 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-wxx4s" podStartSLOduration=2.24491545 podStartE2EDuration="10.360588719s" podCreationTimestamp="2026-01-26 15:50:02 +0000 UTC" firstStartedPulling="2026-01-26 15:50:03.63885118 +0000 UTC m=+961.420731563" lastFinishedPulling="2026-01-26 15:50:11.754524449 +0000 UTC m=+969.536404832" observedRunningTime="2026-01-26 15:50:12.353785865 +0000 UTC m=+970.135666258" watchObservedRunningTime="2026-01-26 15:50:12.360588719 +0000 UTC m=+970.142469122" Jan 26 15:50:12 crc kubenswrapper[4896]: I0126 15:50:12.376132 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=3.552965526 podStartE2EDuration="10.376112424s" podCreationTimestamp="2026-01-26 15:50:02 +0000 UTC" firstStartedPulling="2026-01-26 15:50:04.931376411 
+0000 UTC m=+962.713256804" lastFinishedPulling="2026-01-26 15:50:11.754523299 +0000 UTC m=+969.536403702" observedRunningTime="2026-01-26 15:50:12.369767171 +0000 UTC m=+970.151647564" watchObservedRunningTime="2026-01-26 15:50:12.376112424 +0000 UTC m=+970.157992817" Jan 26 15:50:12 crc kubenswrapper[4896]: I0126 15:50:12.425017 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.575118609 podStartE2EDuration="10.424974443s" podCreationTimestamp="2026-01-26 15:50:02 +0000 UTC" firstStartedPulling="2026-01-26 15:50:04.912066064 +0000 UTC m=+962.693946457" lastFinishedPulling="2026-01-26 15:50:11.761921898 +0000 UTC m=+969.543802291" observedRunningTime="2026-01-26 15:50:12.422864173 +0000 UTC m=+970.204744576" watchObservedRunningTime="2026-01-26 15:50:12.424974443 +0000 UTC m=+970.206854846" Jan 26 15:50:12 crc kubenswrapper[4896]: I0126 15:50:12.464181 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76788598db-lxv2v" podStartSLOduration=3.40109711 podStartE2EDuration="10.464160509s" podCreationTimestamp="2026-01-26 15:50:02 +0000 UTC" firstStartedPulling="2026-01-26 15:50:04.606016187 +0000 UTC m=+962.387896580" lastFinishedPulling="2026-01-26 15:50:11.669079586 +0000 UTC m=+969.450959979" observedRunningTime="2026-01-26 15:50:12.461294 +0000 UTC m=+970.243174393" watchObservedRunningTime="2026-01-26 15:50:12.464160509 +0000 UTC m=+970.246040902" Jan 26 15:50:13 crc kubenswrapper[4896]: I0126 15:50:13.332770 4896 generic.go:334] "Generic (PLEG): container finished" podID="aab6f894-d91f-4200-9b58-3c0ddb9ca85b" containerID="99b563cd9dc85ea14f768a52133222982ae720aa6c938f4ec5c8d068f351480d" exitCode=0 Jan 26 15:50:13 crc kubenswrapper[4896]: I0126 15:50:13.332927 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tpsp2" 
event={"ID":"aab6f894-d91f-4200-9b58-3c0ddb9ca85b","Type":"ContainerDied","Data":"99b563cd9dc85ea14f768a52133222982ae720aa6c938f4ec5c8d068f351480d"} Jan 26 15:50:14 crc kubenswrapper[4896]: I0126 15:50:14.403880 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tpsp2" event={"ID":"aab6f894-d91f-4200-9b58-3c0ddb9ca85b","Type":"ContainerStarted","Data":"c9411c7dd0b13c29ce8fb0ce4a1f9cee410ed24731f1af543a456947be6cbb96"} Jan 26 15:50:14 crc kubenswrapper[4896]: I0126 15:50:14.434148 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tpsp2" podStartSLOduration=2.830553008 podStartE2EDuration="9.434123306s" podCreationTimestamp="2026-01-26 15:50:05 +0000 UTC" firstStartedPulling="2026-01-26 15:50:07.123473557 +0000 UTC m=+964.905353950" lastFinishedPulling="2026-01-26 15:50:13.727043855 +0000 UTC m=+971.508924248" observedRunningTime="2026-01-26 15:50:14.430105028 +0000 UTC m=+972.211985431" watchObservedRunningTime="2026-01-26 15:50:14.434123306 +0000 UTC m=+972.216003699" Jan 26 15:50:15 crc kubenswrapper[4896]: I0126 15:50:15.701224 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tpsp2" Jan 26 15:50:15 crc kubenswrapper[4896]: I0126 15:50:15.701628 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tpsp2" Jan 26 15:50:16 crc kubenswrapper[4896]: I0126 15:50:16.425619 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-785c7cc549-fc8ss" event={"ID":"92bc77c6-54c0-4ab0-8abf-71fef00ec66d","Type":"ContainerStarted","Data":"72b8c66ccea370793abc0099ecfff236a025f49a9bf1a97f76b61c9c029ca256"} Jan 26 15:50:16 crc kubenswrapper[4896]: I0126 15:50:16.425831 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-logging/logging-loki-gateway-785c7cc549-fc8ss" Jan 26 15:50:16 crc kubenswrapper[4896]: I0126 15:50:16.425879 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-785c7cc549-fc8ss" Jan 26 15:50:16 crc kubenswrapper[4896]: I0126 15:50:16.437699 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-785c7cc549-fc8ss" Jan 26 15:50:16 crc kubenswrapper[4896]: I0126 15:50:16.452922 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-785c7cc549-fc8ss" podStartSLOduration=3.437499161 podStartE2EDuration="14.452903603s" podCreationTimestamp="2026-01-26 15:50:02 +0000 UTC" firstStartedPulling="2026-01-26 15:50:04.274379221 +0000 UTC m=+962.056259614" lastFinishedPulling="2026-01-26 15:50:15.289783663 +0000 UTC m=+973.071664056" observedRunningTime="2026-01-26 15:50:16.444263955 +0000 UTC m=+974.226144378" watchObservedRunningTime="2026-01-26 15:50:16.452903603 +0000 UTC m=+974.234783996" Jan 26 15:50:16 crc kubenswrapper[4896]: I0126 15:50:16.455199 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-785c7cc549-fc8ss" Jan 26 15:50:16 crc kubenswrapper[4896]: I0126 15:50:16.747301 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-tpsp2" podUID="aab6f894-d91f-4200-9b58-3c0ddb9ca85b" containerName="registry-server" probeResult="failure" output=< Jan 26 15:50:16 crc kubenswrapper[4896]: timeout: failed to connect service ":50051" within 1s Jan 26 15:50:16 crc kubenswrapper[4896]: > Jan 26 15:50:22 crc kubenswrapper[4896]: I0126 15:50:22.465789 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-785c7cc549-thnm6" 
event={"ID":"9ef5e225-61d8-4ca8-9bc1-43e583ad71be","Type":"ContainerStarted","Data":"031d40bbd811a10495ca32bf91cd05e8aee46ae8e8bb3e4c1fde5c845864784e"} Jan 26 15:50:22 crc kubenswrapper[4896]: I0126 15:50:22.467740 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-785c7cc549-thnm6" Jan 26 15:50:22 crc kubenswrapper[4896]: I0126 15:50:22.467763 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-785c7cc549-thnm6" Jan 26 15:50:22 crc kubenswrapper[4896]: I0126 15:50:22.480269 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-785c7cc549-thnm6" Jan 26 15:50:22 crc kubenswrapper[4896]: I0126 15:50:22.481045 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-785c7cc549-thnm6" Jan 26 15:50:22 crc kubenswrapper[4896]: I0126 15:50:22.512748 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-785c7cc549-thnm6" podStartSLOduration=2.99550872 podStartE2EDuration="20.512726161s" podCreationTimestamp="2026-01-26 15:50:02 +0000 UTC" firstStartedPulling="2026-01-26 15:50:04.583114543 +0000 UTC m=+962.364994936" lastFinishedPulling="2026-01-26 15:50:22.100331984 +0000 UTC m=+979.882212377" observedRunningTime="2026-01-26 15:50:22.489779287 +0000 UTC m=+980.271659690" watchObservedRunningTime="2026-01-26 15:50:22.512726161 +0000 UTC m=+980.294606564" Jan 26 15:50:25 crc kubenswrapper[4896]: I0126 15:50:25.747603 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tpsp2" Jan 26 15:50:25 crc kubenswrapper[4896]: I0126 15:50:25.802101 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tpsp2" Jan 26 15:50:25 crc kubenswrapper[4896]: 
I0126 15:50:25.984078 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tpsp2"] Jan 26 15:50:27 crc kubenswrapper[4896]: I0126 15:50:27.500572 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tpsp2" podUID="aab6f894-d91f-4200-9b58-3c0ddb9ca85b" containerName="registry-server" containerID="cri-o://c9411c7dd0b13c29ce8fb0ce4a1f9cee410ed24731f1af543a456947be6cbb96" gracePeriod=2 Jan 26 15:50:28 crc kubenswrapper[4896]: I0126 15:50:28.391313 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tpsp2" Jan 26 15:50:28 crc kubenswrapper[4896]: I0126 15:50:28.504706 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aab6f894-d91f-4200-9b58-3c0ddb9ca85b-catalog-content\") pod \"aab6f894-d91f-4200-9b58-3c0ddb9ca85b\" (UID: \"aab6f894-d91f-4200-9b58-3c0ddb9ca85b\") " Jan 26 15:50:28 crc kubenswrapper[4896]: I0126 15:50:28.504784 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aab6f894-d91f-4200-9b58-3c0ddb9ca85b-utilities\") pod \"aab6f894-d91f-4200-9b58-3c0ddb9ca85b\" (UID: \"aab6f894-d91f-4200-9b58-3c0ddb9ca85b\") " Jan 26 15:50:28 crc kubenswrapper[4896]: I0126 15:50:28.504894 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6rnf\" (UniqueName: \"kubernetes.io/projected/aab6f894-d91f-4200-9b58-3c0ddb9ca85b-kube-api-access-m6rnf\") pod \"aab6f894-d91f-4200-9b58-3c0ddb9ca85b\" (UID: \"aab6f894-d91f-4200-9b58-3c0ddb9ca85b\") " Jan 26 15:50:28 crc kubenswrapper[4896]: I0126 15:50:28.506159 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aab6f894-d91f-4200-9b58-3c0ddb9ca85b-utilities" 
(OuterVolumeSpecName: "utilities") pod "aab6f894-d91f-4200-9b58-3c0ddb9ca85b" (UID: "aab6f894-d91f-4200-9b58-3c0ddb9ca85b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:50:28 crc kubenswrapper[4896]: I0126 15:50:28.509542 4896 generic.go:334] "Generic (PLEG): container finished" podID="aab6f894-d91f-4200-9b58-3c0ddb9ca85b" containerID="c9411c7dd0b13c29ce8fb0ce4a1f9cee410ed24731f1af543a456947be6cbb96" exitCode=0 Jan 26 15:50:28 crc kubenswrapper[4896]: I0126 15:50:28.509606 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tpsp2" event={"ID":"aab6f894-d91f-4200-9b58-3c0ddb9ca85b","Type":"ContainerDied","Data":"c9411c7dd0b13c29ce8fb0ce4a1f9cee410ed24731f1af543a456947be6cbb96"} Jan 26 15:50:28 crc kubenswrapper[4896]: I0126 15:50:28.509640 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tpsp2" event={"ID":"aab6f894-d91f-4200-9b58-3c0ddb9ca85b","Type":"ContainerDied","Data":"e8d973697909ef7be40d4d76cc26dfff5e791c978aabf285d1bd58b5f3299c95"} Jan 26 15:50:28 crc kubenswrapper[4896]: I0126 15:50:28.509660 4896 scope.go:117] "RemoveContainer" containerID="c9411c7dd0b13c29ce8fb0ce4a1f9cee410ed24731f1af543a456947be6cbb96" Jan 26 15:50:28 crc kubenswrapper[4896]: I0126 15:50:28.509759 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tpsp2" Jan 26 15:50:28 crc kubenswrapper[4896]: I0126 15:50:28.515158 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aab6f894-d91f-4200-9b58-3c0ddb9ca85b-kube-api-access-m6rnf" (OuterVolumeSpecName: "kube-api-access-m6rnf") pod "aab6f894-d91f-4200-9b58-3c0ddb9ca85b" (UID: "aab6f894-d91f-4200-9b58-3c0ddb9ca85b"). InnerVolumeSpecName "kube-api-access-m6rnf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:50:28 crc kubenswrapper[4896]: I0126 15:50:28.557326 4896 scope.go:117] "RemoveContainer" containerID="99b563cd9dc85ea14f768a52133222982ae720aa6c938f4ec5c8d068f351480d" Jan 26 15:50:28 crc kubenswrapper[4896]: I0126 15:50:28.565252 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aab6f894-d91f-4200-9b58-3c0ddb9ca85b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aab6f894-d91f-4200-9b58-3c0ddb9ca85b" (UID: "aab6f894-d91f-4200-9b58-3c0ddb9ca85b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:50:28 crc kubenswrapper[4896]: I0126 15:50:28.585796 4896 scope.go:117] "RemoveContainer" containerID="f02de049d962d7464101b3878d2ace83527d8da5120d451a0c0c78e1a0e22f3d" Jan 26 15:50:28 crc kubenswrapper[4896]: I0126 15:50:28.603211 4896 scope.go:117] "RemoveContainer" containerID="c9411c7dd0b13c29ce8fb0ce4a1f9cee410ed24731f1af543a456947be6cbb96" Jan 26 15:50:28 crc kubenswrapper[4896]: E0126 15:50:28.603932 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9411c7dd0b13c29ce8fb0ce4a1f9cee410ed24731f1af543a456947be6cbb96\": container with ID starting with c9411c7dd0b13c29ce8fb0ce4a1f9cee410ed24731f1af543a456947be6cbb96 not found: ID does not exist" containerID="c9411c7dd0b13c29ce8fb0ce4a1f9cee410ed24731f1af543a456947be6cbb96" Jan 26 15:50:28 crc kubenswrapper[4896]: I0126 15:50:28.603985 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9411c7dd0b13c29ce8fb0ce4a1f9cee410ed24731f1af543a456947be6cbb96"} err="failed to get container status \"c9411c7dd0b13c29ce8fb0ce4a1f9cee410ed24731f1af543a456947be6cbb96\": rpc error: code = NotFound desc = could not find container \"c9411c7dd0b13c29ce8fb0ce4a1f9cee410ed24731f1af543a456947be6cbb96\": container with ID starting 
with c9411c7dd0b13c29ce8fb0ce4a1f9cee410ed24731f1af543a456947be6cbb96 not found: ID does not exist" Jan 26 15:50:28 crc kubenswrapper[4896]: I0126 15:50:28.604012 4896 scope.go:117] "RemoveContainer" containerID="99b563cd9dc85ea14f768a52133222982ae720aa6c938f4ec5c8d068f351480d" Jan 26 15:50:28 crc kubenswrapper[4896]: E0126 15:50:28.604368 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99b563cd9dc85ea14f768a52133222982ae720aa6c938f4ec5c8d068f351480d\": container with ID starting with 99b563cd9dc85ea14f768a52133222982ae720aa6c938f4ec5c8d068f351480d not found: ID does not exist" containerID="99b563cd9dc85ea14f768a52133222982ae720aa6c938f4ec5c8d068f351480d" Jan 26 15:50:28 crc kubenswrapper[4896]: I0126 15:50:28.604423 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99b563cd9dc85ea14f768a52133222982ae720aa6c938f4ec5c8d068f351480d"} err="failed to get container status \"99b563cd9dc85ea14f768a52133222982ae720aa6c938f4ec5c8d068f351480d\": rpc error: code = NotFound desc = could not find container \"99b563cd9dc85ea14f768a52133222982ae720aa6c938f4ec5c8d068f351480d\": container with ID starting with 99b563cd9dc85ea14f768a52133222982ae720aa6c938f4ec5c8d068f351480d not found: ID does not exist" Jan 26 15:50:28 crc kubenswrapper[4896]: I0126 15:50:28.604456 4896 scope.go:117] "RemoveContainer" containerID="f02de049d962d7464101b3878d2ace83527d8da5120d451a0c0c78e1a0e22f3d" Jan 26 15:50:28 crc kubenswrapper[4896]: E0126 15:50:28.604783 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f02de049d962d7464101b3878d2ace83527d8da5120d451a0c0c78e1a0e22f3d\": container with ID starting with f02de049d962d7464101b3878d2ace83527d8da5120d451a0c0c78e1a0e22f3d not found: ID does not exist" containerID="f02de049d962d7464101b3878d2ace83527d8da5120d451a0c0c78e1a0e22f3d" Jan 26 15:50:28 
crc kubenswrapper[4896]: I0126 15:50:28.604811 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f02de049d962d7464101b3878d2ace83527d8da5120d451a0c0c78e1a0e22f3d"} err="failed to get container status \"f02de049d962d7464101b3878d2ace83527d8da5120d451a0c0c78e1a0e22f3d\": rpc error: code = NotFound desc = could not find container \"f02de049d962d7464101b3878d2ace83527d8da5120d451a0c0c78e1a0e22f3d\": container with ID starting with f02de049d962d7464101b3878d2ace83527d8da5120d451a0c0c78e1a0e22f3d not found: ID does not exist"
Jan 26 15:50:28 crc kubenswrapper[4896]: I0126 15:50:28.606302 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6rnf\" (UniqueName: \"kubernetes.io/projected/aab6f894-d91f-4200-9b58-3c0ddb9ca85b-kube-api-access-m6rnf\") on node \"crc\" DevicePath \"\""
Jan 26 15:50:28 crc kubenswrapper[4896]: I0126 15:50:28.606331 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aab6f894-d91f-4200-9b58-3c0ddb9ca85b-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 15:50:28 crc kubenswrapper[4896]: I0126 15:50:28.606344 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aab6f894-d91f-4200-9b58-3c0ddb9ca85b-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 15:50:28 crc kubenswrapper[4896]: I0126 15:50:28.830992 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tpsp2"]
Jan 26 15:50:28 crc kubenswrapper[4896]: I0126 15:50:28.836565 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tpsp2"]
Jan 26 15:50:30 crc kubenswrapper[4896]: I0126 15:50:30.768035 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aab6f894-d91f-4200-9b58-3c0ddb9ca85b" path="/var/lib/kubelet/pods/aab6f894-d91f-4200-9b58-3c0ddb9ca85b/volumes"
Jan 26 15:50:32 crc kubenswrapper[4896]: I0126 15:50:32.817834 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-wxx4s"
Jan 26 15:50:32 crc kubenswrapper[4896]: I0126 15:50:32.921185 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76788598db-lxv2v"
Jan 26 15:50:33 crc kubenswrapper[4896]: I0126 15:50:33.102769 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-69d9546745-ds2pd"
Jan 26 15:50:34 crc kubenswrapper[4896]: I0126 15:50:34.418081 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0"
Jan 26 15:50:34 crc kubenswrapper[4896]: I0126 15:50:34.551297 4896 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens
Jan 26 15:50:34 crc kubenswrapper[4896]: I0126 15:50:34.551354 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="5002bb81-4c92-43b5-93a3-e0986702b713" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 26 15:50:34 crc kubenswrapper[4896]: I0126 15:50:34.626201 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0"
Jan 26 15:50:37 crc kubenswrapper[4896]: I0126 15:50:37.831676 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xk7mn"]
Jan 26 15:50:37 crc kubenswrapper[4896]: E0126 15:50:37.832251 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aab6f894-d91f-4200-9b58-3c0ddb9ca85b" containerName="extract-utilities"
Jan 26 15:50:37 crc kubenswrapper[4896]: I0126 15:50:37.832271 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="aab6f894-d91f-4200-9b58-3c0ddb9ca85b" containerName="extract-utilities"
Jan 26 15:50:37 crc kubenswrapper[4896]: E0126 15:50:37.832287 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aab6f894-d91f-4200-9b58-3c0ddb9ca85b" containerName="registry-server"
Jan 26 15:50:37 crc kubenswrapper[4896]: I0126 15:50:37.832301 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="aab6f894-d91f-4200-9b58-3c0ddb9ca85b" containerName="registry-server"
Jan 26 15:50:37 crc kubenswrapper[4896]: E0126 15:50:37.832310 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aab6f894-d91f-4200-9b58-3c0ddb9ca85b" containerName="extract-content"
Jan 26 15:50:37 crc kubenswrapper[4896]: I0126 15:50:37.832318 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="aab6f894-d91f-4200-9b58-3c0ddb9ca85b" containerName="extract-content"
Jan 26 15:50:37 crc kubenswrapper[4896]: I0126 15:50:37.832487 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="aab6f894-d91f-4200-9b58-3c0ddb9ca85b" containerName="registry-server"
Jan 26 15:50:37 crc kubenswrapper[4896]: I0126 15:50:37.833479 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xk7mn"
Jan 26 15:50:37 crc kubenswrapper[4896]: I0126 15:50:37.846195 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xk7mn"]
Jan 26 15:50:37 crc kubenswrapper[4896]: I0126 15:50:37.975409 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3598e59-0ddc-471f-a11e-26194e22976f-catalog-content\") pod \"redhat-marketplace-xk7mn\" (UID: \"b3598e59-0ddc-471f-a11e-26194e22976f\") " pod="openshift-marketplace/redhat-marketplace-xk7mn"
Jan 26 15:50:37 crc kubenswrapper[4896]: I0126 15:50:37.975979 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3598e59-0ddc-471f-a11e-26194e22976f-utilities\") pod \"redhat-marketplace-xk7mn\" (UID: \"b3598e59-0ddc-471f-a11e-26194e22976f\") " pod="openshift-marketplace/redhat-marketplace-xk7mn"
Jan 26 15:50:37 crc kubenswrapper[4896]: I0126 15:50:37.976017 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l94cf\" (UniqueName: \"kubernetes.io/projected/b3598e59-0ddc-471f-a11e-26194e22976f-kube-api-access-l94cf\") pod \"redhat-marketplace-xk7mn\" (UID: \"b3598e59-0ddc-471f-a11e-26194e22976f\") " pod="openshift-marketplace/redhat-marketplace-xk7mn"
Jan 26 15:50:38 crc kubenswrapper[4896]: I0126 15:50:38.077281 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3598e59-0ddc-471f-a11e-26194e22976f-catalog-content\") pod \"redhat-marketplace-xk7mn\" (UID: \"b3598e59-0ddc-471f-a11e-26194e22976f\") " pod="openshift-marketplace/redhat-marketplace-xk7mn"
Jan 26 15:50:38 crc kubenswrapper[4896]: I0126 15:50:38.077797 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3598e59-0ddc-471f-a11e-26194e22976f-catalog-content\") pod \"redhat-marketplace-xk7mn\" (UID: \"b3598e59-0ddc-471f-a11e-26194e22976f\") " pod="openshift-marketplace/redhat-marketplace-xk7mn"
Jan 26 15:50:38 crc kubenswrapper[4896]: I0126 15:50:38.077944 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3598e59-0ddc-471f-a11e-26194e22976f-utilities\") pod \"redhat-marketplace-xk7mn\" (UID: \"b3598e59-0ddc-471f-a11e-26194e22976f\") " pod="openshift-marketplace/redhat-marketplace-xk7mn"
Jan 26 15:50:38 crc kubenswrapper[4896]: I0126 15:50:38.077967 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l94cf\" (UniqueName: \"kubernetes.io/projected/b3598e59-0ddc-471f-a11e-26194e22976f-kube-api-access-l94cf\") pod \"redhat-marketplace-xk7mn\" (UID: \"b3598e59-0ddc-471f-a11e-26194e22976f\") " pod="openshift-marketplace/redhat-marketplace-xk7mn"
Jan 26 15:50:38 crc kubenswrapper[4896]: I0126 15:50:38.078477 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3598e59-0ddc-471f-a11e-26194e22976f-utilities\") pod \"redhat-marketplace-xk7mn\" (UID: \"b3598e59-0ddc-471f-a11e-26194e22976f\") " pod="openshift-marketplace/redhat-marketplace-xk7mn"
Jan 26 15:50:38 crc kubenswrapper[4896]: I0126 15:50:38.100093 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l94cf\" (UniqueName: \"kubernetes.io/projected/b3598e59-0ddc-471f-a11e-26194e22976f-kube-api-access-l94cf\") pod \"redhat-marketplace-xk7mn\" (UID: \"b3598e59-0ddc-471f-a11e-26194e22976f\") " pod="openshift-marketplace/redhat-marketplace-xk7mn"
Jan 26 15:50:38 crc kubenswrapper[4896]: I0126 15:50:38.157958 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xk7mn"
Jan 26 15:50:38 crc kubenswrapper[4896]: I0126 15:50:38.586702 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xk7mn"]
Jan 26 15:50:39 crc kubenswrapper[4896]: I0126 15:50:39.591090 4896 generic.go:334] "Generic (PLEG): container finished" podID="b3598e59-0ddc-471f-a11e-26194e22976f" containerID="c4ff38f1a2691bae21451294c4b4f4481b7840819d18eac4559010fc7f0cfc5b" exitCode=0
Jan 26 15:50:39 crc kubenswrapper[4896]: I0126 15:50:39.591152 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xk7mn" event={"ID":"b3598e59-0ddc-471f-a11e-26194e22976f","Type":"ContainerDied","Data":"c4ff38f1a2691bae21451294c4b4f4481b7840819d18eac4559010fc7f0cfc5b"}
Jan 26 15:50:39 crc kubenswrapper[4896]: I0126 15:50:39.591435 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xk7mn" event={"ID":"b3598e59-0ddc-471f-a11e-26194e22976f","Type":"ContainerStarted","Data":"4a256653e3fa544cff91cd44dfa2e3a4cf4e092715ccd246cb59a37be94e179f"}
Jan 26 15:50:41 crc kubenswrapper[4896]: I0126 15:50:41.613680 4896 generic.go:334] "Generic (PLEG): container finished" podID="b3598e59-0ddc-471f-a11e-26194e22976f" containerID="acdcee0618fd090e8ab23d81784cd3f9d412fb26dc919cd296173702a8fcd1d0" exitCode=0
Jan 26 15:50:41 crc kubenswrapper[4896]: I0126 15:50:41.613795 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xk7mn" event={"ID":"b3598e59-0ddc-471f-a11e-26194e22976f","Type":"ContainerDied","Data":"acdcee0618fd090e8ab23d81784cd3f9d412fb26dc919cd296173702a8fcd1d0"}
Jan 26 15:50:42 crc kubenswrapper[4896]: I0126 15:50:42.627171 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xk7mn" event={"ID":"b3598e59-0ddc-471f-a11e-26194e22976f","Type":"ContainerStarted","Data":"69d23dab7013c187f1cc30fa5d353cb0ac34728d3584ccbecd4c64bd0c196bdd"}
Jan 26 15:50:42 crc kubenswrapper[4896]: I0126 15:50:42.644077 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xk7mn" podStartSLOduration=2.7781959 podStartE2EDuration="5.644051597s" podCreationTimestamp="2026-01-26 15:50:37 +0000 UTC" firstStartedPulling="2026-01-26 15:50:39.592672391 +0000 UTC m=+997.374552784" lastFinishedPulling="2026-01-26 15:50:42.458528088 +0000 UTC m=+1000.240408481" observedRunningTime="2026-01-26 15:50:42.642274924 +0000 UTC m=+1000.424155317" watchObservedRunningTime="2026-01-26 15:50:42.644051597 +0000 UTC m=+1000.425932000"
Jan 26 15:50:44 crc kubenswrapper[4896]: I0126 15:50:44.538466 4896 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready
Jan 26 15:50:44 crc kubenswrapper[4896]: I0126 15:50:44.538821 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="5002bb81-4c92-43b5-93a3-e0986702b713" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 26 15:50:48 crc kubenswrapper[4896]: I0126 15:50:48.158493 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xk7mn"
Jan 26 15:50:48 crc kubenswrapper[4896]: I0126 15:50:48.159064 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xk7mn"
Jan 26 15:50:48 crc kubenswrapper[4896]: I0126 15:50:48.212326 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xk7mn"
Jan 26 15:50:48 crc kubenswrapper[4896]: I0126 15:50:48.735827 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xk7mn"
Jan 26 15:50:48 crc kubenswrapper[4896]: I0126 15:50:48.781240 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xk7mn"]
Jan 26 15:50:48 crc kubenswrapper[4896]: I0126 15:50:48.814336 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 15:50:48 crc kubenswrapper[4896]: I0126 15:50:48.814395 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 15:50:50 crc kubenswrapper[4896]: I0126 15:50:50.710485 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xk7mn" podUID="b3598e59-0ddc-471f-a11e-26194e22976f" containerName="registry-server" containerID="cri-o://69d23dab7013c187f1cc30fa5d353cb0ac34728d3584ccbecd4c64bd0c196bdd" gracePeriod=2
Jan 26 15:50:51 crc kubenswrapper[4896]: I0126 15:50:51.689769 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xk7mn"
Jan 26 15:50:51 crc kubenswrapper[4896]: I0126 15:50:51.725182 4896 generic.go:334] "Generic (PLEG): container finished" podID="b3598e59-0ddc-471f-a11e-26194e22976f" containerID="69d23dab7013c187f1cc30fa5d353cb0ac34728d3584ccbecd4c64bd0c196bdd" exitCode=0
Jan 26 15:50:51 crc kubenswrapper[4896]: I0126 15:50:51.725754 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xk7mn"
Jan 26 15:50:51 crc kubenswrapper[4896]: I0126 15:50:51.725738 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xk7mn" event={"ID":"b3598e59-0ddc-471f-a11e-26194e22976f","Type":"ContainerDied","Data":"69d23dab7013c187f1cc30fa5d353cb0ac34728d3584ccbecd4c64bd0c196bdd"}
Jan 26 15:50:51 crc kubenswrapper[4896]: I0126 15:50:51.726274 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xk7mn" event={"ID":"b3598e59-0ddc-471f-a11e-26194e22976f","Type":"ContainerDied","Data":"4a256653e3fa544cff91cd44dfa2e3a4cf4e092715ccd246cb59a37be94e179f"}
Jan 26 15:50:51 crc kubenswrapper[4896]: I0126 15:50:51.726311 4896 scope.go:117] "RemoveContainer" containerID="69d23dab7013c187f1cc30fa5d353cb0ac34728d3584ccbecd4c64bd0c196bdd"
Jan 26 15:50:51 crc kubenswrapper[4896]: I0126 15:50:51.748216 4896 scope.go:117] "RemoveContainer" containerID="acdcee0618fd090e8ab23d81784cd3f9d412fb26dc919cd296173702a8fcd1d0"
Jan 26 15:50:51 crc kubenswrapper[4896]: I0126 15:50:51.767783 4896 scope.go:117] "RemoveContainer" containerID="c4ff38f1a2691bae21451294c4b4f4481b7840819d18eac4559010fc7f0cfc5b"
Jan 26 15:50:51 crc kubenswrapper[4896]: I0126 15:50:51.791479 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3598e59-0ddc-471f-a11e-26194e22976f-utilities\") pod \"b3598e59-0ddc-471f-a11e-26194e22976f\" (UID: \"b3598e59-0ddc-471f-a11e-26194e22976f\") "
Jan 26 15:50:51 crc kubenswrapper[4896]: I0126 15:50:51.791627 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l94cf\" (UniqueName: \"kubernetes.io/projected/b3598e59-0ddc-471f-a11e-26194e22976f-kube-api-access-l94cf\") pod \"b3598e59-0ddc-471f-a11e-26194e22976f\" (UID: \"b3598e59-0ddc-471f-a11e-26194e22976f\") "
Jan 26 15:50:51 crc kubenswrapper[4896]: I0126 15:50:51.791679 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3598e59-0ddc-471f-a11e-26194e22976f-catalog-content\") pod \"b3598e59-0ddc-471f-a11e-26194e22976f\" (UID: \"b3598e59-0ddc-471f-a11e-26194e22976f\") "
Jan 26 15:50:51 crc kubenswrapper[4896]: I0126 15:50:51.793118 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3598e59-0ddc-471f-a11e-26194e22976f-utilities" (OuterVolumeSpecName: "utilities") pod "b3598e59-0ddc-471f-a11e-26194e22976f" (UID: "b3598e59-0ddc-471f-a11e-26194e22976f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 15:50:51 crc kubenswrapper[4896]: I0126 15:50:51.797837 4896 scope.go:117] "RemoveContainer" containerID="69d23dab7013c187f1cc30fa5d353cb0ac34728d3584ccbecd4c64bd0c196bdd"
Jan 26 15:50:51 crc kubenswrapper[4896]: E0126 15:50:51.798431 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69d23dab7013c187f1cc30fa5d353cb0ac34728d3584ccbecd4c64bd0c196bdd\": container with ID starting with 69d23dab7013c187f1cc30fa5d353cb0ac34728d3584ccbecd4c64bd0c196bdd not found: ID does not exist" containerID="69d23dab7013c187f1cc30fa5d353cb0ac34728d3584ccbecd4c64bd0c196bdd"
Jan 26 15:50:51 crc kubenswrapper[4896]: I0126 15:50:51.798493 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69d23dab7013c187f1cc30fa5d353cb0ac34728d3584ccbecd4c64bd0c196bdd"} err="failed to get container status \"69d23dab7013c187f1cc30fa5d353cb0ac34728d3584ccbecd4c64bd0c196bdd\": rpc error: code = NotFound desc = could not find container \"69d23dab7013c187f1cc30fa5d353cb0ac34728d3584ccbecd4c64bd0c196bdd\": container with ID starting with 69d23dab7013c187f1cc30fa5d353cb0ac34728d3584ccbecd4c64bd0c196bdd not found: ID does not exist"
Jan 26 15:50:51 crc kubenswrapper[4896]: I0126 15:50:51.798520 4896 scope.go:117] "RemoveContainer" containerID="acdcee0618fd090e8ab23d81784cd3f9d412fb26dc919cd296173702a8fcd1d0"
Jan 26 15:50:51 crc kubenswrapper[4896]: E0126 15:50:51.799412 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acdcee0618fd090e8ab23d81784cd3f9d412fb26dc919cd296173702a8fcd1d0\": container with ID starting with acdcee0618fd090e8ab23d81784cd3f9d412fb26dc919cd296173702a8fcd1d0 not found: ID does not exist" containerID="acdcee0618fd090e8ab23d81784cd3f9d412fb26dc919cd296173702a8fcd1d0"
Jan 26 15:50:51 crc kubenswrapper[4896]: I0126 15:50:51.799458 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acdcee0618fd090e8ab23d81784cd3f9d412fb26dc919cd296173702a8fcd1d0"} err="failed to get container status \"acdcee0618fd090e8ab23d81784cd3f9d412fb26dc919cd296173702a8fcd1d0\": rpc error: code = NotFound desc = could not find container \"acdcee0618fd090e8ab23d81784cd3f9d412fb26dc919cd296173702a8fcd1d0\": container with ID starting with acdcee0618fd090e8ab23d81784cd3f9d412fb26dc919cd296173702a8fcd1d0 not found: ID does not exist"
Jan 26 15:50:51 crc kubenswrapper[4896]: I0126 15:50:51.799473 4896 scope.go:117] "RemoveContainer" containerID="c4ff38f1a2691bae21451294c4b4f4481b7840819d18eac4559010fc7f0cfc5b"
Jan 26 15:50:51 crc kubenswrapper[4896]: E0126 15:50:51.800026 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4ff38f1a2691bae21451294c4b4f4481b7840819d18eac4559010fc7f0cfc5b\": container with ID starting with c4ff38f1a2691bae21451294c4b4f4481b7840819d18eac4559010fc7f0cfc5b not found: ID does not exist" containerID="c4ff38f1a2691bae21451294c4b4f4481b7840819d18eac4559010fc7f0cfc5b"
Jan 26 15:50:51 crc kubenswrapper[4896]: I0126 15:50:51.800055 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4ff38f1a2691bae21451294c4b4f4481b7840819d18eac4559010fc7f0cfc5b"} err="failed to get container status \"c4ff38f1a2691bae21451294c4b4f4481b7840819d18eac4559010fc7f0cfc5b\": rpc error: code = NotFound desc = could not find container \"c4ff38f1a2691bae21451294c4b4f4481b7840819d18eac4559010fc7f0cfc5b\": container with ID starting with c4ff38f1a2691bae21451294c4b4f4481b7840819d18eac4559010fc7f0cfc5b not found: ID does not exist"
Jan 26 15:50:51 crc kubenswrapper[4896]: I0126 15:50:51.800208 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3598e59-0ddc-471f-a11e-26194e22976f-kube-api-access-l94cf" (OuterVolumeSpecName: "kube-api-access-l94cf") pod "b3598e59-0ddc-471f-a11e-26194e22976f" (UID: "b3598e59-0ddc-471f-a11e-26194e22976f"). InnerVolumeSpecName "kube-api-access-l94cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:50:51 crc kubenswrapper[4896]: I0126 15:50:51.817052 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3598e59-0ddc-471f-a11e-26194e22976f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b3598e59-0ddc-471f-a11e-26194e22976f" (UID: "b3598e59-0ddc-471f-a11e-26194e22976f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 15:50:51 crc kubenswrapper[4896]: I0126 15:50:51.892967 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3598e59-0ddc-471f-a11e-26194e22976f-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 15:50:51 crc kubenswrapper[4896]: I0126 15:50:51.893024 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l94cf\" (UniqueName: \"kubernetes.io/projected/b3598e59-0ddc-471f-a11e-26194e22976f-kube-api-access-l94cf\") on node \"crc\" DevicePath \"\""
Jan 26 15:50:51 crc kubenswrapper[4896]: I0126 15:50:51.893044 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3598e59-0ddc-471f-a11e-26194e22976f-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 15:50:52 crc kubenswrapper[4896]: I0126 15:50:52.063784 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xk7mn"]
Jan 26 15:50:52 crc kubenswrapper[4896]: I0126 15:50:52.069277 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xk7mn"]
Jan 26 15:50:52 crc kubenswrapper[4896]: I0126 15:50:52.787884 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3598e59-0ddc-471f-a11e-26194e22976f" path="/var/lib/kubelet/pods/b3598e59-0ddc-471f-a11e-26194e22976f/volumes"
Jan 26 15:50:54 crc kubenswrapper[4896]: I0126 15:50:54.538066 4896 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready
Jan 26 15:50:54 crc kubenswrapper[4896]: I0126 15:50:54.538166 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="5002bb81-4c92-43b5-93a3-e0986702b713" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 26 15:51:04 crc kubenswrapper[4896]: I0126 15:51:04.539289 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0"
Jan 26 15:51:18 crc kubenswrapper[4896]: I0126 15:51:18.813394 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 15:51:18 crc kubenswrapper[4896]: I0126 15:51:18.814056 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 15:51:22 crc kubenswrapper[4896]: I0126 15:51:22.736668 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-pqg96"]
Jan 26 15:51:22 crc kubenswrapper[4896]: E0126 15:51:22.737665 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3598e59-0ddc-471f-a11e-26194e22976f" containerName="extract-utilities"
Jan 26 15:51:22 crc kubenswrapper[4896]: I0126 15:51:22.737682 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3598e59-0ddc-471f-a11e-26194e22976f" containerName="extract-utilities"
Jan 26 15:51:22 crc kubenswrapper[4896]: E0126 15:51:22.737695 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3598e59-0ddc-471f-a11e-26194e22976f" containerName="extract-content"
Jan 26 15:51:22 crc kubenswrapper[4896]: I0126 15:51:22.737705 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3598e59-0ddc-471f-a11e-26194e22976f" containerName="extract-content"
Jan 26 15:51:22 crc kubenswrapper[4896]: E0126 15:51:22.737729 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3598e59-0ddc-471f-a11e-26194e22976f" containerName="registry-server"
Jan 26 15:51:22 crc kubenswrapper[4896]: I0126 15:51:22.737736 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3598e59-0ddc-471f-a11e-26194e22976f" containerName="registry-server"
Jan 26 15:51:22 crc kubenswrapper[4896]: I0126 15:51:22.737891 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3598e59-0ddc-471f-a11e-26194e22976f" containerName="registry-server"
Jan 26 15:51:22 crc kubenswrapper[4896]: I0126 15:51:22.738522 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-pqg96"
Jan 26 15:51:22 crc kubenswrapper[4896]: I0126 15:51:22.744674 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics"
Jan 26 15:51:22 crc kubenswrapper[4896]: I0126 15:51:22.744814 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config"
Jan 26 15:51:22 crc kubenswrapper[4896]: I0126 15:51:22.744957 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token"
Jan 26 15:51:22 crc kubenswrapper[4896]: I0126 15:51:22.745025 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver"
Jan 26 15:51:22 crc kubenswrapper[4896]: I0126 15:51:22.745057 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-qktjm"
Jan 26 15:51:22 crc kubenswrapper[4896]: I0126 15:51:22.762235 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle"
Jan 26 15:51:22 crc kubenswrapper[4896]: I0126 15:51:22.773739 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-pqg96"]
Jan 26 15:51:22 crc kubenswrapper[4896]: I0126 15:51:22.856249 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-pqg96"]
Jan 26 15:51:22 crc kubenswrapper[4896]: E0126 15:51:22.857083 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-dghss metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-pqg96" podUID="7c7352ab-6ec7-499e-8528-119488b0eaff"
Jan 26 15:51:22 crc kubenswrapper[4896]: I0126 15:51:22.929747 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/7c7352ab-6ec7-499e-8528-119488b0eaff-config-openshift-service-cacrt\") pod \"collector-pqg96\" (UID: \"7c7352ab-6ec7-499e-8528-119488b0eaff\") " pod="openshift-logging/collector-pqg96"
Jan 26 15:51:22 crc kubenswrapper[4896]: I0126 15:51:22.929814 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/7c7352ab-6ec7-499e-8528-119488b0eaff-collector-token\") pod \"collector-pqg96\" (UID: \"7c7352ab-6ec7-499e-8528-119488b0eaff\") " pod="openshift-logging/collector-pqg96"
Jan 26 15:51:22 crc kubenswrapper[4896]: I0126 15:51:22.929871 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/7c7352ab-6ec7-499e-8528-119488b0eaff-datadir\") pod \"collector-pqg96\" (UID: \"7c7352ab-6ec7-499e-8528-119488b0eaff\") " pod="openshift-logging/collector-pqg96"
Jan 26 15:51:22 crc kubenswrapper[4896]: I0126 15:51:22.929942 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/7c7352ab-6ec7-499e-8528-119488b0eaff-metrics\") pod \"collector-pqg96\" (UID: \"7c7352ab-6ec7-499e-8528-119488b0eaff\") " pod="openshift-logging/collector-pqg96"
Jan 26 15:51:22 crc kubenswrapper[4896]: I0126 15:51:22.929973 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/7c7352ab-6ec7-499e-8528-119488b0eaff-entrypoint\") pod \"collector-pqg96\" (UID: \"7c7352ab-6ec7-499e-8528-119488b0eaff\") " pod="openshift-logging/collector-pqg96"
Jan 26 15:51:22 crc kubenswrapper[4896]: I0126 15:51:22.930105 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c7352ab-6ec7-499e-8528-119488b0eaff-trusted-ca\") pod \"collector-pqg96\" (UID: \"7c7352ab-6ec7-499e-8528-119488b0eaff\") " pod="openshift-logging/collector-pqg96"
Jan 26 15:51:22 crc kubenswrapper[4896]: I0126 15:51:22.930159 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c7352ab-6ec7-499e-8528-119488b0eaff-config\") pod \"collector-pqg96\" (UID: \"7c7352ab-6ec7-499e-8528-119488b0eaff\") " pod="openshift-logging/collector-pqg96"
Jan 26 15:51:22 crc kubenswrapper[4896]: I0126 15:51:22.930184 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dghss\" (UniqueName: \"kubernetes.io/projected/7c7352ab-6ec7-499e-8528-119488b0eaff-kube-api-access-dghss\") pod \"collector-pqg96\" (UID: \"7c7352ab-6ec7-499e-8528-119488b0eaff\") " pod="openshift-logging/collector-pqg96"
Jan 26 15:51:22 crc kubenswrapper[4896]: I0126 15:51:22.930418 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7c7352ab-6ec7-499e-8528-119488b0eaff-tmp\") pod \"collector-pqg96\" (UID: \"7c7352ab-6ec7-499e-8528-119488b0eaff\") " pod="openshift-logging/collector-pqg96"
Jan 26 15:51:22 crc kubenswrapper[4896]: I0126 15:51:22.930540 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/7c7352ab-6ec7-499e-8528-119488b0eaff-collector-syslog-receiver\") pod \"collector-pqg96\" (UID: \"7c7352ab-6ec7-499e-8528-119488b0eaff\") " pod="openshift-logging/collector-pqg96"
Jan 26 15:51:22 crc kubenswrapper[4896]: I0126 15:51:22.930589 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/7c7352ab-6ec7-499e-8528-119488b0eaff-sa-token\") pod \"collector-pqg96\" (UID: \"7c7352ab-6ec7-499e-8528-119488b0eaff\") " pod="openshift-logging/collector-pqg96"
Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.007700 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-pqg96"
Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.017217 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-pqg96"
Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.043469 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/7c7352ab-6ec7-499e-8528-119488b0eaff-collector-syslog-receiver\") pod \"collector-pqg96\" (UID: \"7c7352ab-6ec7-499e-8528-119488b0eaff\") " pod="openshift-logging/collector-pqg96"
Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.043546 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/7c7352ab-6ec7-499e-8528-119488b0eaff-sa-token\") pod \"collector-pqg96\" (UID: \"7c7352ab-6ec7-499e-8528-119488b0eaff\") " pod="openshift-logging/collector-pqg96"
Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.043654 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/7c7352ab-6ec7-499e-8528-119488b0eaff-config-openshift-service-cacrt\") pod \"collector-pqg96\" (UID: \"7c7352ab-6ec7-499e-8528-119488b0eaff\") " pod="openshift-logging/collector-pqg96"
Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.043699 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/7c7352ab-6ec7-499e-8528-119488b0eaff-collector-token\") pod \"collector-pqg96\" (UID: \"7c7352ab-6ec7-499e-8528-119488b0eaff\") " pod="openshift-logging/collector-pqg96"
Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.043726 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/7c7352ab-6ec7-499e-8528-119488b0eaff-datadir\") pod \"collector-pqg96\" (UID: \"7c7352ab-6ec7-499e-8528-119488b0eaff\") " pod="openshift-logging/collector-pqg96"
Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.043770 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/7c7352ab-6ec7-499e-8528-119488b0eaff-metrics\") pod \"collector-pqg96\" (UID: \"7c7352ab-6ec7-499e-8528-119488b0eaff\") " pod="openshift-logging/collector-pqg96"
Jan 26 15:51:23 crc kubenswrapper[4896]: E0126 15:51:23.043910 4896 secret.go:188] Couldn't get secret openshift-logging/collector-metrics: secret "collector-metrics" not found
Jan 26 15:51:23 crc kubenswrapper[4896]: E0126 15:51:23.044195 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c7352ab-6ec7-499e-8528-119488b0eaff-metrics podName:7c7352ab-6ec7-499e-8528-119488b0eaff nodeName:}" failed. No retries permitted until 2026-01-26 15:51:23.54394916 +0000 UTC m=+1041.325829553 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics" (UniqueName: "kubernetes.io/secret/7c7352ab-6ec7-499e-8528-119488b0eaff-metrics") pod "collector-pqg96" (UID: "7c7352ab-6ec7-499e-8528-119488b0eaff") : secret "collector-metrics" not found
Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.044716 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/7c7352ab-6ec7-499e-8528-119488b0eaff-datadir\") pod \"collector-pqg96\" (UID: \"7c7352ab-6ec7-499e-8528-119488b0eaff\") " pod="openshift-logging/collector-pqg96"
Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.044279 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/7c7352ab-6ec7-499e-8528-119488b0eaff-entrypoint\") pod \"collector-pqg96\" (UID: \"7c7352ab-6ec7-499e-8528-119488b0eaff\") " pod="openshift-logging/collector-pqg96"
Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.046341 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c7352ab-6ec7-499e-8528-119488b0eaff-trusted-ca\") pod \"collector-pqg96\" (UID: \"7c7352ab-6ec7-499e-8528-119488b0eaff\") " pod="openshift-logging/collector-pqg96"
Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.046394 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c7352ab-6ec7-499e-8528-119488b0eaff-config\") pod \"collector-pqg96\" (UID: \"7c7352ab-6ec7-499e-8528-119488b0eaff\") " pod="openshift-logging/collector-pqg96"
Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.046421 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dghss\" (UniqueName: \"kubernetes.io/projected/7c7352ab-6ec7-499e-8528-119488b0eaff-kube-api-access-dghss\") pod \"collector-pqg96\" (UID:
\"7c7352ab-6ec7-499e-8528-119488b0eaff\") " pod="openshift-logging/collector-pqg96" Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.046615 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7c7352ab-6ec7-499e-8528-119488b0eaff-tmp\") pod \"collector-pqg96\" (UID: \"7c7352ab-6ec7-499e-8528-119488b0eaff\") " pod="openshift-logging/collector-pqg96" Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.045954 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/7c7352ab-6ec7-499e-8528-119488b0eaff-config-openshift-service-cacrt\") pod \"collector-pqg96\" (UID: \"7c7352ab-6ec7-499e-8528-119488b0eaff\") " pod="openshift-logging/collector-pqg96" Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.046944 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/7c7352ab-6ec7-499e-8528-119488b0eaff-entrypoint\") pod \"collector-pqg96\" (UID: \"7c7352ab-6ec7-499e-8528-119488b0eaff\") " pod="openshift-logging/collector-pqg96" Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.048469 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c7352ab-6ec7-499e-8528-119488b0eaff-trusted-ca\") pod \"collector-pqg96\" (UID: \"7c7352ab-6ec7-499e-8528-119488b0eaff\") " pod="openshift-logging/collector-pqg96" Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.050553 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c7352ab-6ec7-499e-8528-119488b0eaff-config\") pod \"collector-pqg96\" (UID: \"7c7352ab-6ec7-499e-8528-119488b0eaff\") " pod="openshift-logging/collector-pqg96" Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.056995 4896 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/7c7352ab-6ec7-499e-8528-119488b0eaff-collector-syslog-receiver\") pod \"collector-pqg96\" (UID: \"7c7352ab-6ec7-499e-8528-119488b0eaff\") " pod="openshift-logging/collector-pqg96" Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.060322 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/7c7352ab-6ec7-499e-8528-119488b0eaff-collector-token\") pod \"collector-pqg96\" (UID: \"7c7352ab-6ec7-499e-8528-119488b0eaff\") " pod="openshift-logging/collector-pqg96" Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.063469 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7c7352ab-6ec7-499e-8528-119488b0eaff-tmp\") pod \"collector-pqg96\" (UID: \"7c7352ab-6ec7-499e-8528-119488b0eaff\") " pod="openshift-logging/collector-pqg96" Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.064510 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dghss\" (UniqueName: \"kubernetes.io/projected/7c7352ab-6ec7-499e-8528-119488b0eaff-kube-api-access-dghss\") pod \"collector-pqg96\" (UID: \"7c7352ab-6ec7-499e-8528-119488b0eaff\") " pod="openshift-logging/collector-pqg96" Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.068592 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/7c7352ab-6ec7-499e-8528-119488b0eaff-sa-token\") pod \"collector-pqg96\" (UID: \"7c7352ab-6ec7-499e-8528-119488b0eaff\") " pod="openshift-logging/collector-pqg96" Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.147872 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c7352ab-6ec7-499e-8528-119488b0eaff-config\") pod \"7c7352ab-6ec7-499e-8528-119488b0eaff\" (UID: 
\"7c7352ab-6ec7-499e-8528-119488b0eaff\") " Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.147936 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/7c7352ab-6ec7-499e-8528-119488b0eaff-entrypoint\") pod \"7c7352ab-6ec7-499e-8528-119488b0eaff\" (UID: \"7c7352ab-6ec7-499e-8528-119488b0eaff\") " Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.148042 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7c7352ab-6ec7-499e-8528-119488b0eaff-tmp\") pod \"7c7352ab-6ec7-499e-8528-119488b0eaff\" (UID: \"7c7352ab-6ec7-499e-8528-119488b0eaff\") " Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.148117 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/7c7352ab-6ec7-499e-8528-119488b0eaff-collector-syslog-receiver\") pod \"7c7352ab-6ec7-499e-8528-119488b0eaff\" (UID: \"7c7352ab-6ec7-499e-8528-119488b0eaff\") " Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.148172 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c7352ab-6ec7-499e-8528-119488b0eaff-trusted-ca\") pod \"7c7352ab-6ec7-499e-8528-119488b0eaff\" (UID: \"7c7352ab-6ec7-499e-8528-119488b0eaff\") " Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.148192 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/7c7352ab-6ec7-499e-8528-119488b0eaff-datadir\") pod \"7c7352ab-6ec7-499e-8528-119488b0eaff\" (UID: \"7c7352ab-6ec7-499e-8528-119488b0eaff\") " Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.148209 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dghss\" (UniqueName: 
\"kubernetes.io/projected/7c7352ab-6ec7-499e-8528-119488b0eaff-kube-api-access-dghss\") pod \"7c7352ab-6ec7-499e-8528-119488b0eaff\" (UID: \"7c7352ab-6ec7-499e-8528-119488b0eaff\") " Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.148242 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/7c7352ab-6ec7-499e-8528-119488b0eaff-config-openshift-service-cacrt\") pod \"7c7352ab-6ec7-499e-8528-119488b0eaff\" (UID: \"7c7352ab-6ec7-499e-8528-119488b0eaff\") " Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.148263 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/7c7352ab-6ec7-499e-8528-119488b0eaff-collector-token\") pod \"7c7352ab-6ec7-499e-8528-119488b0eaff\" (UID: \"7c7352ab-6ec7-499e-8528-119488b0eaff\") " Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.148278 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/7c7352ab-6ec7-499e-8528-119488b0eaff-sa-token\") pod \"7c7352ab-6ec7-499e-8528-119488b0eaff\" (UID: \"7c7352ab-6ec7-499e-8528-119488b0eaff\") " Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.148517 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c7352ab-6ec7-499e-8528-119488b0eaff-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "7c7352ab-6ec7-499e-8528-119488b0eaff" (UID: "7c7352ab-6ec7-499e-8528-119488b0eaff"). InnerVolumeSpecName "entrypoint". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.148571 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c7352ab-6ec7-499e-8528-119488b0eaff-datadir" (OuterVolumeSpecName: "datadir") pod "7c7352ab-6ec7-499e-8528-119488b0eaff" (UID: "7c7352ab-6ec7-499e-8528-119488b0eaff"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.148597 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c7352ab-6ec7-499e-8528-119488b0eaff-config" (OuterVolumeSpecName: "config") pod "7c7352ab-6ec7-499e-8528-119488b0eaff" (UID: "7c7352ab-6ec7-499e-8528-119488b0eaff"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.149737 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c7352ab-6ec7-499e-8528-119488b0eaff-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "7c7352ab-6ec7-499e-8528-119488b0eaff" (UID: "7c7352ab-6ec7-499e-8528-119488b0eaff"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.149993 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c7352ab-6ec7-499e-8528-119488b0eaff-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "7c7352ab-6ec7-499e-8528-119488b0eaff" (UID: "7c7352ab-6ec7-499e-8528-119488b0eaff"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.152495 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c7352ab-6ec7-499e-8528-119488b0eaff-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "7c7352ab-6ec7-499e-8528-119488b0eaff" (UID: "7c7352ab-6ec7-499e-8528-119488b0eaff"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.152880 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c7352ab-6ec7-499e-8528-119488b0eaff-collector-token" (OuterVolumeSpecName: "collector-token") pod "7c7352ab-6ec7-499e-8528-119488b0eaff" (UID: "7c7352ab-6ec7-499e-8528-119488b0eaff"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.153748 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c7352ab-6ec7-499e-8528-119488b0eaff-kube-api-access-dghss" (OuterVolumeSpecName: "kube-api-access-dghss") pod "7c7352ab-6ec7-499e-8528-119488b0eaff" (UID: "7c7352ab-6ec7-499e-8528-119488b0eaff"). InnerVolumeSpecName "kube-api-access-dghss". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.153882 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c7352ab-6ec7-499e-8528-119488b0eaff-tmp" (OuterVolumeSpecName: "tmp") pod "7c7352ab-6ec7-499e-8528-119488b0eaff" (UID: "7c7352ab-6ec7-499e-8528-119488b0eaff"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.154525 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c7352ab-6ec7-499e-8528-119488b0eaff-sa-token" (OuterVolumeSpecName: "sa-token") pod "7c7352ab-6ec7-499e-8528-119488b0eaff" (UID: "7c7352ab-6ec7-499e-8528-119488b0eaff"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.250109 4896 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c7352ab-6ec7-499e-8528-119488b0eaff-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.250565 4896 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/7c7352ab-6ec7-499e-8528-119488b0eaff-entrypoint\") on node \"crc\" DevicePath \"\"" Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.250776 4896 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7c7352ab-6ec7-499e-8528-119488b0eaff-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.250903 4896 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/7c7352ab-6ec7-499e-8528-119488b0eaff-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.251023 4896 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c7352ab-6ec7-499e-8528-119488b0eaff-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.251138 4896 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: 
\"kubernetes.io/host-path/7c7352ab-6ec7-499e-8528-119488b0eaff-datadir\") on node \"crc\" DevicePath \"\"" Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.251297 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dghss\" (UniqueName: \"kubernetes.io/projected/7c7352ab-6ec7-499e-8528-119488b0eaff-kube-api-access-dghss\") on node \"crc\" DevicePath \"\"" Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.251393 4896 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/7c7352ab-6ec7-499e-8528-119488b0eaff-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.251497 4896 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/7c7352ab-6ec7-499e-8528-119488b0eaff-collector-token\") on node \"crc\" DevicePath \"\"" Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.251702 4896 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/7c7352ab-6ec7-499e-8528-119488b0eaff-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.556763 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/7c7352ab-6ec7-499e-8528-119488b0eaff-metrics\") pod \"collector-pqg96\" (UID: \"7c7352ab-6ec7-499e-8528-119488b0eaff\") " pod="openshift-logging/collector-pqg96" Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.563442 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/7c7352ab-6ec7-499e-8528-119488b0eaff-metrics\") pod \"collector-pqg96\" (UID: \"7c7352ab-6ec7-499e-8528-119488b0eaff\") " pod="openshift-logging/collector-pqg96" Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.760441 4896 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/7c7352ab-6ec7-499e-8528-119488b0eaff-metrics\") pod \"7c7352ab-6ec7-499e-8528-119488b0eaff\" (UID: \"7c7352ab-6ec7-499e-8528-119488b0eaff\") " Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.763072 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c7352ab-6ec7-499e-8528-119488b0eaff-metrics" (OuterVolumeSpecName: "metrics") pod "7c7352ab-6ec7-499e-8528-119488b0eaff" (UID: "7c7352ab-6ec7-499e-8528-119488b0eaff"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:51:23 crc kubenswrapper[4896]: I0126 15:51:23.862146 4896 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/7c7352ab-6ec7-499e-8528-119488b0eaff-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.014115 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-pqg96" Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.065086 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-pqg96"] Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.082677 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-pqg96"] Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.092389 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-czl2w"] Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.093453 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-czl2w" Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.097649 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.097846 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.097927 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.097850 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-qktjm" Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.098084 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.101026 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-czl2w"] Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.107303 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.268339 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/068fdddb-b48c-48f6-ab7a-a9e1d473aaa5-collector-syslog-receiver\") pod \"collector-czl2w\" (UID: \"068fdddb-b48c-48f6-ab7a-a9e1d473aaa5\") " pod="openshift-logging/collector-czl2w" Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.268395 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmskf\" (UniqueName: \"kubernetes.io/projected/068fdddb-b48c-48f6-ab7a-a9e1d473aaa5-kube-api-access-fmskf\") pod \"collector-czl2w\" (UID: 
\"068fdddb-b48c-48f6-ab7a-a9e1d473aaa5\") " pod="openshift-logging/collector-czl2w" Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.268431 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/068fdddb-b48c-48f6-ab7a-a9e1d473aaa5-config-openshift-service-cacrt\") pod \"collector-czl2w\" (UID: \"068fdddb-b48c-48f6-ab7a-a9e1d473aaa5\") " pod="openshift-logging/collector-czl2w" Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.268500 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/068fdddb-b48c-48f6-ab7a-a9e1d473aaa5-config\") pod \"collector-czl2w\" (UID: \"068fdddb-b48c-48f6-ab7a-a9e1d473aaa5\") " pod="openshift-logging/collector-czl2w" Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.268534 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/068fdddb-b48c-48f6-ab7a-a9e1d473aaa5-entrypoint\") pod \"collector-czl2w\" (UID: \"068fdddb-b48c-48f6-ab7a-a9e1d473aaa5\") " pod="openshift-logging/collector-czl2w" Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.268620 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/068fdddb-b48c-48f6-ab7a-a9e1d473aaa5-trusted-ca\") pod \"collector-czl2w\" (UID: \"068fdddb-b48c-48f6-ab7a-a9e1d473aaa5\") " pod="openshift-logging/collector-czl2w" Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.268846 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/068fdddb-b48c-48f6-ab7a-a9e1d473aaa5-sa-token\") pod \"collector-czl2w\" (UID: \"068fdddb-b48c-48f6-ab7a-a9e1d473aaa5\") " 
pod="openshift-logging/collector-czl2w" Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.268956 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/068fdddb-b48c-48f6-ab7a-a9e1d473aaa5-datadir\") pod \"collector-czl2w\" (UID: \"068fdddb-b48c-48f6-ab7a-a9e1d473aaa5\") " pod="openshift-logging/collector-czl2w" Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.269021 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/068fdddb-b48c-48f6-ab7a-a9e1d473aaa5-metrics\") pod \"collector-czl2w\" (UID: \"068fdddb-b48c-48f6-ab7a-a9e1d473aaa5\") " pod="openshift-logging/collector-czl2w" Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.269082 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/068fdddb-b48c-48f6-ab7a-a9e1d473aaa5-tmp\") pod \"collector-czl2w\" (UID: \"068fdddb-b48c-48f6-ab7a-a9e1d473aaa5\") " pod="openshift-logging/collector-czl2w" Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.269106 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/068fdddb-b48c-48f6-ab7a-a9e1d473aaa5-collector-token\") pod \"collector-czl2w\" (UID: \"068fdddb-b48c-48f6-ab7a-a9e1d473aaa5\") " pod="openshift-logging/collector-czl2w" Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.370635 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/068fdddb-b48c-48f6-ab7a-a9e1d473aaa5-datadir\") pod \"collector-czl2w\" (UID: \"068fdddb-b48c-48f6-ab7a-a9e1d473aaa5\") " pod="openshift-logging/collector-czl2w" Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.370692 4896 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/068fdddb-b48c-48f6-ab7a-a9e1d473aaa5-metrics\") pod \"collector-czl2w\" (UID: \"068fdddb-b48c-48f6-ab7a-a9e1d473aaa5\") " pod="openshift-logging/collector-czl2w" Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.370753 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/068fdddb-b48c-48f6-ab7a-a9e1d473aaa5-tmp\") pod \"collector-czl2w\" (UID: \"068fdddb-b48c-48f6-ab7a-a9e1d473aaa5\") " pod="openshift-logging/collector-czl2w" Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.370760 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/068fdddb-b48c-48f6-ab7a-a9e1d473aaa5-datadir\") pod \"collector-czl2w\" (UID: \"068fdddb-b48c-48f6-ab7a-a9e1d473aaa5\") " pod="openshift-logging/collector-czl2w" Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.370786 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/068fdddb-b48c-48f6-ab7a-a9e1d473aaa5-collector-token\") pod \"collector-czl2w\" (UID: \"068fdddb-b48c-48f6-ab7a-a9e1d473aaa5\") " pod="openshift-logging/collector-czl2w" Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.370836 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/068fdddb-b48c-48f6-ab7a-a9e1d473aaa5-collector-syslog-receiver\") pod \"collector-czl2w\" (UID: \"068fdddb-b48c-48f6-ab7a-a9e1d473aaa5\") " pod="openshift-logging/collector-czl2w" Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.370859 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmskf\" (UniqueName: \"kubernetes.io/projected/068fdddb-b48c-48f6-ab7a-a9e1d473aaa5-kube-api-access-fmskf\") pod 
\"collector-czl2w\" (UID: \"068fdddb-b48c-48f6-ab7a-a9e1d473aaa5\") " pod="openshift-logging/collector-czl2w" Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.370886 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/068fdddb-b48c-48f6-ab7a-a9e1d473aaa5-config-openshift-service-cacrt\") pod \"collector-czl2w\" (UID: \"068fdddb-b48c-48f6-ab7a-a9e1d473aaa5\") " pod="openshift-logging/collector-czl2w" Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.370913 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/068fdddb-b48c-48f6-ab7a-a9e1d473aaa5-config\") pod \"collector-czl2w\" (UID: \"068fdddb-b48c-48f6-ab7a-a9e1d473aaa5\") " pod="openshift-logging/collector-czl2w" Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.370939 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/068fdddb-b48c-48f6-ab7a-a9e1d473aaa5-entrypoint\") pod \"collector-czl2w\" (UID: \"068fdddb-b48c-48f6-ab7a-a9e1d473aaa5\") " pod="openshift-logging/collector-czl2w" Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.370975 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/068fdddb-b48c-48f6-ab7a-a9e1d473aaa5-trusted-ca\") pod \"collector-czl2w\" (UID: \"068fdddb-b48c-48f6-ab7a-a9e1d473aaa5\") " pod="openshift-logging/collector-czl2w" Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.370996 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/068fdddb-b48c-48f6-ab7a-a9e1d473aaa5-sa-token\") pod \"collector-czl2w\" (UID: \"068fdddb-b48c-48f6-ab7a-a9e1d473aaa5\") " pod="openshift-logging/collector-czl2w" Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 
15:51:24.372285 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/068fdddb-b48c-48f6-ab7a-a9e1d473aaa5-config-openshift-service-cacrt\") pod \"collector-czl2w\" (UID: \"068fdddb-b48c-48f6-ab7a-a9e1d473aaa5\") " pod="openshift-logging/collector-czl2w" Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.372515 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/068fdddb-b48c-48f6-ab7a-a9e1d473aaa5-config\") pod \"collector-czl2w\" (UID: \"068fdddb-b48c-48f6-ab7a-a9e1d473aaa5\") " pod="openshift-logging/collector-czl2w" Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.372549 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/068fdddb-b48c-48f6-ab7a-a9e1d473aaa5-trusted-ca\") pod \"collector-czl2w\" (UID: \"068fdddb-b48c-48f6-ab7a-a9e1d473aaa5\") " pod="openshift-logging/collector-czl2w" Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.372555 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/068fdddb-b48c-48f6-ab7a-a9e1d473aaa5-entrypoint\") pod \"collector-czl2w\" (UID: \"068fdddb-b48c-48f6-ab7a-a9e1d473aaa5\") " pod="openshift-logging/collector-czl2w" Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.376035 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/068fdddb-b48c-48f6-ab7a-a9e1d473aaa5-collector-syslog-receiver\") pod \"collector-czl2w\" (UID: \"068fdddb-b48c-48f6-ab7a-a9e1d473aaa5\") " pod="openshift-logging/collector-czl2w" Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.376396 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: 
\"kubernetes.io/secret/068fdddb-b48c-48f6-ab7a-a9e1d473aaa5-metrics\") pod \"collector-czl2w\" (UID: \"068fdddb-b48c-48f6-ab7a-a9e1d473aaa5\") " pod="openshift-logging/collector-czl2w" Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.376531 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/068fdddb-b48c-48f6-ab7a-a9e1d473aaa5-collector-token\") pod \"collector-czl2w\" (UID: \"068fdddb-b48c-48f6-ab7a-a9e1d473aaa5\") " pod="openshift-logging/collector-czl2w" Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.377769 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/068fdddb-b48c-48f6-ab7a-a9e1d473aaa5-tmp\") pod \"collector-czl2w\" (UID: \"068fdddb-b48c-48f6-ab7a-a9e1d473aaa5\") " pod="openshift-logging/collector-czl2w" Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.389243 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmskf\" (UniqueName: \"kubernetes.io/projected/068fdddb-b48c-48f6-ab7a-a9e1d473aaa5-kube-api-access-fmskf\") pod \"collector-czl2w\" (UID: \"068fdddb-b48c-48f6-ab7a-a9e1d473aaa5\") " pod="openshift-logging/collector-czl2w" Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.389962 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/068fdddb-b48c-48f6-ab7a-a9e1d473aaa5-sa-token\") pod \"collector-czl2w\" (UID: \"068fdddb-b48c-48f6-ab7a-a9e1d473aaa5\") " pod="openshift-logging/collector-czl2w" Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.412370 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-czl2w" Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.767414 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c7352ab-6ec7-499e-8528-119488b0eaff" path="/var/lib/kubelet/pods/7c7352ab-6ec7-499e-8528-119488b0eaff/volumes" Jan 26 15:51:24 crc kubenswrapper[4896]: I0126 15:51:24.875667 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-czl2w"] Jan 26 15:51:25 crc kubenswrapper[4896]: I0126 15:51:25.021274 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-czl2w" event={"ID":"068fdddb-b48c-48f6-ab7a-a9e1d473aaa5","Type":"ContainerStarted","Data":"b046444a70fa06102c916dc16e136a15a0c7fbf1ac3fa2d7341e0a6a8b19d19c"} Jan 26 15:51:35 crc kubenswrapper[4896]: I0126 15:51:35.366315 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-czl2w" event={"ID":"068fdddb-b48c-48f6-ab7a-a9e1d473aaa5","Type":"ContainerStarted","Data":"30ba4962c7a777b1634bd7d3fa425a41a51eb1de666fcf23f6a115ac0d88ce75"} Jan 26 15:51:35 crc kubenswrapper[4896]: I0126 15:51:35.401096 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-czl2w" podStartSLOduration=1.516517414 podStartE2EDuration="11.401070979s" podCreationTimestamp="2026-01-26 15:51:24 +0000 UTC" firstStartedPulling="2026-01-26 15:51:24.882597342 +0000 UTC m=+1042.664477735" lastFinishedPulling="2026-01-26 15:51:34.767150917 +0000 UTC m=+1052.549031300" observedRunningTime="2026-01-26 15:51:35.392042979 +0000 UTC m=+1053.173923402" watchObservedRunningTime="2026-01-26 15:51:35.401070979 +0000 UTC m=+1053.182951372" Jan 26 15:51:48 crc kubenswrapper[4896]: I0126 15:51:48.813996 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:51:48 crc kubenswrapper[4896]: I0126 15:51:48.814703 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:51:48 crc kubenswrapper[4896]: I0126 15:51:48.814765 4896 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" Jan 26 15:51:48 crc kubenswrapper[4896]: I0126 15:51:48.815653 4896 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bc80bae02fc3586e032a25fdf0a87292a0c0b3c2653785eb94241ee6654c386f"} pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 15:51:48 crc kubenswrapper[4896]: I0126 15:51:48.815723 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" containerID="cri-o://bc80bae02fc3586e032a25fdf0a87292a0c0b3c2653785eb94241ee6654c386f" gracePeriod=600 Jan 26 15:51:49 crc kubenswrapper[4896]: I0126 15:51:49.482227 4896 generic.go:334] "Generic (PLEG): container finished" podID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerID="bc80bae02fc3586e032a25fdf0a87292a0c0b3c2653785eb94241ee6654c386f" exitCode=0 Jan 26 15:51:49 crc kubenswrapper[4896]: I0126 15:51:49.483622 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" 
event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerDied","Data":"bc80bae02fc3586e032a25fdf0a87292a0c0b3c2653785eb94241ee6654c386f"} Jan 26 15:51:49 crc kubenswrapper[4896]: I0126 15:51:49.483696 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerStarted","Data":"6d22d796865491856408b3b04b5cee06c82fc1e5c08ee0eac7e9beca91027529"} Jan 26 15:51:49 crc kubenswrapper[4896]: I0126 15:51:49.483716 4896 scope.go:117] "RemoveContainer" containerID="d035dac015ae616fca26b6fccf99abfd2065d00fccd1bbdf0c5140ab65f83775" Jan 26 15:52:08 crc kubenswrapper[4896]: I0126 15:52:08.639705 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x94h2"] Jan 26 15:52:08 crc kubenswrapper[4896]: I0126 15:52:08.642122 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x94h2" Jan 26 15:52:08 crc kubenswrapper[4896]: I0126 15:52:08.643990 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 15:52:08 crc kubenswrapper[4896]: I0126 15:52:08.652259 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x94h2"] Jan 26 15:52:08 crc kubenswrapper[4896]: I0126 15:52:08.757984 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsbth\" (UniqueName: \"kubernetes.io/projected/485d26e3-9bf1-4f92-92be-531b0ce1234e-kube-api-access-bsbth\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x94h2\" (UID: \"485d26e3-9bf1-4f92-92be-531b0ce1234e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x94h2" 
Jan 26 15:52:08 crc kubenswrapper[4896]: I0126 15:52:08.758188 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/485d26e3-9bf1-4f92-92be-531b0ce1234e-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x94h2\" (UID: \"485d26e3-9bf1-4f92-92be-531b0ce1234e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x94h2" Jan 26 15:52:08 crc kubenswrapper[4896]: I0126 15:52:08.758255 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/485d26e3-9bf1-4f92-92be-531b0ce1234e-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x94h2\" (UID: \"485d26e3-9bf1-4f92-92be-531b0ce1234e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x94h2" Jan 26 15:52:08 crc kubenswrapper[4896]: I0126 15:52:08.860842 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsbth\" (UniqueName: \"kubernetes.io/projected/485d26e3-9bf1-4f92-92be-531b0ce1234e-kube-api-access-bsbth\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x94h2\" (UID: \"485d26e3-9bf1-4f92-92be-531b0ce1234e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x94h2" Jan 26 15:52:08 crc kubenswrapper[4896]: I0126 15:52:08.860955 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/485d26e3-9bf1-4f92-92be-531b0ce1234e-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x94h2\" (UID: \"485d26e3-9bf1-4f92-92be-531b0ce1234e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x94h2" Jan 26 15:52:08 crc kubenswrapper[4896]: I0126 15:52:08.860990 4896 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/485d26e3-9bf1-4f92-92be-531b0ce1234e-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x94h2\" (UID: \"485d26e3-9bf1-4f92-92be-531b0ce1234e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x94h2" Jan 26 15:52:08 crc kubenswrapper[4896]: I0126 15:52:08.862027 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/485d26e3-9bf1-4f92-92be-531b0ce1234e-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x94h2\" (UID: \"485d26e3-9bf1-4f92-92be-531b0ce1234e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x94h2" Jan 26 15:52:08 crc kubenswrapper[4896]: I0126 15:52:08.862145 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/485d26e3-9bf1-4f92-92be-531b0ce1234e-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x94h2\" (UID: \"485d26e3-9bf1-4f92-92be-531b0ce1234e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x94h2" Jan 26 15:52:08 crc kubenswrapper[4896]: I0126 15:52:08.880503 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsbth\" (UniqueName: \"kubernetes.io/projected/485d26e3-9bf1-4f92-92be-531b0ce1234e-kube-api-access-bsbth\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x94h2\" (UID: \"485d26e3-9bf1-4f92-92be-531b0ce1234e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x94h2" Jan 26 15:52:08 crc kubenswrapper[4896]: I0126 15:52:08.962820 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x94h2" Jan 26 15:52:09 crc kubenswrapper[4896]: I0126 15:52:09.309132 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x94h2"] Jan 26 15:52:09 crc kubenswrapper[4896]: I0126 15:52:09.642059 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x94h2" event={"ID":"485d26e3-9bf1-4f92-92be-531b0ce1234e","Type":"ContainerStarted","Data":"faf6e5452e6889f59a5aa499b28f6c253b840d5cece65c079480e3ddc791fb7c"} Jan 26 15:52:10 crc kubenswrapper[4896]: I0126 15:52:10.650877 4896 generic.go:334] "Generic (PLEG): container finished" podID="485d26e3-9bf1-4f92-92be-531b0ce1234e" containerID="0c0bcf5cb98dbf867b0b00573381cbccfc6d99d4015babeb5941c938c61ec64f" exitCode=0 Jan 26 15:52:10 crc kubenswrapper[4896]: I0126 15:52:10.650925 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x94h2" event={"ID":"485d26e3-9bf1-4f92-92be-531b0ce1234e","Type":"ContainerDied","Data":"0c0bcf5cb98dbf867b0b00573381cbccfc6d99d4015babeb5941c938c61ec64f"} Jan 26 15:52:10 crc kubenswrapper[4896]: I0126 15:52:10.653015 4896 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 15:52:12 crc kubenswrapper[4896]: I0126 15:52:12.666868 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x94h2" event={"ID":"485d26e3-9bf1-4f92-92be-531b0ce1234e","Type":"ContainerStarted","Data":"f01e8d0405aa4f78d28b77a56989dce6a5938fc7f05a792fa8e2ac09b872b1b4"} Jan 26 15:52:13 crc kubenswrapper[4896]: I0126 15:52:13.678278 4896 generic.go:334] "Generic (PLEG): container finished" podID="485d26e3-9bf1-4f92-92be-531b0ce1234e" 
containerID="f01e8d0405aa4f78d28b77a56989dce6a5938fc7f05a792fa8e2ac09b872b1b4" exitCode=0 Jan 26 15:52:13 crc kubenswrapper[4896]: I0126 15:52:13.678435 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x94h2" event={"ID":"485d26e3-9bf1-4f92-92be-531b0ce1234e","Type":"ContainerDied","Data":"f01e8d0405aa4f78d28b77a56989dce6a5938fc7f05a792fa8e2ac09b872b1b4"} Jan 26 15:52:14 crc kubenswrapper[4896]: I0126 15:52:14.690740 4896 generic.go:334] "Generic (PLEG): container finished" podID="485d26e3-9bf1-4f92-92be-531b0ce1234e" containerID="6a271bf9cb8480a05383184c4c10b639f162980dd1351cac27a6703bcd4aefbb" exitCode=0 Jan 26 15:52:14 crc kubenswrapper[4896]: I0126 15:52:14.690844 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x94h2" event={"ID":"485d26e3-9bf1-4f92-92be-531b0ce1234e","Type":"ContainerDied","Data":"6a271bf9cb8480a05383184c4c10b639f162980dd1351cac27a6703bcd4aefbb"} Jan 26 15:52:15 crc kubenswrapper[4896]: I0126 15:52:15.945105 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x94h2" Jan 26 15:52:16 crc kubenswrapper[4896]: I0126 15:52:16.145352 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsbth\" (UniqueName: \"kubernetes.io/projected/485d26e3-9bf1-4f92-92be-531b0ce1234e-kube-api-access-bsbth\") pod \"485d26e3-9bf1-4f92-92be-531b0ce1234e\" (UID: \"485d26e3-9bf1-4f92-92be-531b0ce1234e\") " Jan 26 15:52:16 crc kubenswrapper[4896]: I0126 15:52:16.145480 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/485d26e3-9bf1-4f92-92be-531b0ce1234e-util\") pod \"485d26e3-9bf1-4f92-92be-531b0ce1234e\" (UID: \"485d26e3-9bf1-4f92-92be-531b0ce1234e\") " Jan 26 15:52:16 crc kubenswrapper[4896]: I0126 15:52:16.145513 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/485d26e3-9bf1-4f92-92be-531b0ce1234e-bundle\") pod \"485d26e3-9bf1-4f92-92be-531b0ce1234e\" (UID: \"485d26e3-9bf1-4f92-92be-531b0ce1234e\") " Jan 26 15:52:16 crc kubenswrapper[4896]: I0126 15:52:16.146189 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/485d26e3-9bf1-4f92-92be-531b0ce1234e-bundle" (OuterVolumeSpecName: "bundle") pod "485d26e3-9bf1-4f92-92be-531b0ce1234e" (UID: "485d26e3-9bf1-4f92-92be-531b0ce1234e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:52:16 crc kubenswrapper[4896]: I0126 15:52:16.151198 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/485d26e3-9bf1-4f92-92be-531b0ce1234e-kube-api-access-bsbth" (OuterVolumeSpecName: "kube-api-access-bsbth") pod "485d26e3-9bf1-4f92-92be-531b0ce1234e" (UID: "485d26e3-9bf1-4f92-92be-531b0ce1234e"). InnerVolumeSpecName "kube-api-access-bsbth". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:52:16 crc kubenswrapper[4896]: I0126 15:52:16.157081 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/485d26e3-9bf1-4f92-92be-531b0ce1234e-util" (OuterVolumeSpecName: "util") pod "485d26e3-9bf1-4f92-92be-531b0ce1234e" (UID: "485d26e3-9bf1-4f92-92be-531b0ce1234e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:52:16 crc kubenswrapper[4896]: I0126 15:52:16.247726 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsbth\" (UniqueName: \"kubernetes.io/projected/485d26e3-9bf1-4f92-92be-531b0ce1234e-kube-api-access-bsbth\") on node \"crc\" DevicePath \"\"" Jan 26 15:52:16 crc kubenswrapper[4896]: I0126 15:52:16.247766 4896 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/485d26e3-9bf1-4f92-92be-531b0ce1234e-util\") on node \"crc\" DevicePath \"\"" Jan 26 15:52:16 crc kubenswrapper[4896]: I0126 15:52:16.247775 4896 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/485d26e3-9bf1-4f92-92be-531b0ce1234e-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:52:16 crc kubenswrapper[4896]: I0126 15:52:16.707060 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x94h2" event={"ID":"485d26e3-9bf1-4f92-92be-531b0ce1234e","Type":"ContainerDied","Data":"faf6e5452e6889f59a5aa499b28f6c253b840d5cece65c079480e3ddc791fb7c"} Jan 26 15:52:16 crc kubenswrapper[4896]: I0126 15:52:16.707121 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="faf6e5452e6889f59a5aa499b28f6c253b840d5cece65c079480e3ddc791fb7c" Jan 26 15:52:16 crc kubenswrapper[4896]: I0126 15:52:16.707185 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x94h2" Jan 26 15:52:33 crc kubenswrapper[4896]: I0126 15:52:33.892287 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-gw2nc"] Jan 26 15:52:33 crc kubenswrapper[4896]: E0126 15:52:33.894184 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="485d26e3-9bf1-4f92-92be-531b0ce1234e" containerName="util" Jan 26 15:52:33 crc kubenswrapper[4896]: I0126 15:52:33.894288 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="485d26e3-9bf1-4f92-92be-531b0ce1234e" containerName="util" Jan 26 15:52:33 crc kubenswrapper[4896]: E0126 15:52:33.894366 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="485d26e3-9bf1-4f92-92be-531b0ce1234e" containerName="extract" Jan 26 15:52:33 crc kubenswrapper[4896]: I0126 15:52:33.894425 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="485d26e3-9bf1-4f92-92be-531b0ce1234e" containerName="extract" Jan 26 15:52:33 crc kubenswrapper[4896]: E0126 15:52:33.894505 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="485d26e3-9bf1-4f92-92be-531b0ce1234e" containerName="pull" Jan 26 15:52:33 crc kubenswrapper[4896]: I0126 15:52:33.895424 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="485d26e3-9bf1-4f92-92be-531b0ce1234e" containerName="pull" Jan 26 15:52:33 crc kubenswrapper[4896]: I0126 15:52:33.895724 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="485d26e3-9bf1-4f92-92be-531b0ce1234e" containerName="extract" Jan 26 15:52:33 crc kubenswrapper[4896]: I0126 15:52:33.896591 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-gw2nc" Jan 26 15:52:33 crc kubenswrapper[4896]: I0126 15:52:33.899371 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 26 15:52:33 crc kubenswrapper[4896]: I0126 15:52:33.899867 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-kgrlm" Jan 26 15:52:33 crc kubenswrapper[4896]: I0126 15:52:33.900241 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 26 15:52:33 crc kubenswrapper[4896]: I0126 15:52:33.909043 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-gw2nc"] Jan 26 15:52:34 crc kubenswrapper[4896]: I0126 15:52:34.049857 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5zg6\" (UniqueName: \"kubernetes.io/projected/f086c2d2-aa04-4a67-a6ee-0156173683f9-kube-api-access-b5zg6\") pod \"nmstate-operator-646758c888-gw2nc\" (UID: \"f086c2d2-aa04-4a67-a6ee-0156173683f9\") " pod="openshift-nmstate/nmstate-operator-646758c888-gw2nc" Jan 26 15:52:34 crc kubenswrapper[4896]: I0126 15:52:34.151293 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5zg6\" (UniqueName: \"kubernetes.io/projected/f086c2d2-aa04-4a67-a6ee-0156173683f9-kube-api-access-b5zg6\") pod \"nmstate-operator-646758c888-gw2nc\" (UID: \"f086c2d2-aa04-4a67-a6ee-0156173683f9\") " pod="openshift-nmstate/nmstate-operator-646758c888-gw2nc" Jan 26 15:52:34 crc kubenswrapper[4896]: I0126 15:52:34.177670 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5zg6\" (UniqueName: \"kubernetes.io/projected/f086c2d2-aa04-4a67-a6ee-0156173683f9-kube-api-access-b5zg6\") pod \"nmstate-operator-646758c888-gw2nc\" (UID: 
\"f086c2d2-aa04-4a67-a6ee-0156173683f9\") " pod="openshift-nmstate/nmstate-operator-646758c888-gw2nc" Jan 26 15:52:34 crc kubenswrapper[4896]: I0126 15:52:34.216416 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-gw2nc" Jan 26 15:52:34 crc kubenswrapper[4896]: I0126 15:52:34.823405 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-gw2nc"] Jan 26 15:52:34 crc kubenswrapper[4896]: I0126 15:52:34.878187 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-gw2nc" event={"ID":"f086c2d2-aa04-4a67-a6ee-0156173683f9","Type":"ContainerStarted","Data":"58828b0a1e86cd4dd6d916a8d1d53fce8cd2d574649baab2e8b9ec577f1dfd8a"} Jan 26 15:52:38 crc kubenswrapper[4896]: I0126 15:52:38.028384 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-gw2nc" event={"ID":"f086c2d2-aa04-4a67-a6ee-0156173683f9","Type":"ContainerStarted","Data":"cc9b1369962486d21e570b1d6db0688dbebbbc3b16bc3342cecade36fc80cb14"} Jan 26 15:52:38 crc kubenswrapper[4896]: I0126 15:52:38.044733 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-gw2nc" podStartSLOduration=2.120510794 podStartE2EDuration="5.044716797s" podCreationTimestamp="2026-01-26 15:52:33 +0000 UTC" firstStartedPulling="2026-01-26 15:52:34.846221083 +0000 UTC m=+1112.628101476" lastFinishedPulling="2026-01-26 15:52:37.770427086 +0000 UTC m=+1115.552307479" observedRunningTime="2026-01-26 15:52:38.0444407 +0000 UTC m=+1115.826321103" watchObservedRunningTime="2026-01-26 15:52:38.044716797 +0000 UTC m=+1115.826597190" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.354310 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-zj6jl"] Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 
15:52:39.355919 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-zj6jl" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.358403 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-fqkn9" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.375928 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-ktsph"] Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.377232 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ktsph" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.379721 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.386906 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-zj6jl"] Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.404185 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-ktsph"] Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.461848 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-gb4lb"] Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.463366 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-gb4lb" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.536074 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlqw8\" (UniqueName: \"kubernetes.io/projected/f40b3348-54e4-43f5-9036-cbb48e93b039-kube-api-access-hlqw8\") pod \"nmstate-metrics-54757c584b-zj6jl\" (UID: \"f40b3348-54e4-43f5-9036-cbb48e93b039\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-zj6jl" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.536213 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnzm2\" (UniqueName: \"kubernetes.io/projected/00f1e33c-4322-42be-b120-18e0bbad3318-kube-api-access-gnzm2\") pod \"nmstate-webhook-8474b5b9d8-ktsph\" (UID: \"00f1e33c-4322-42be-b120-18e0bbad3318\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ktsph" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.536286 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/00f1e33c-4322-42be-b120-18e0bbad3318-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-ktsph\" (UID: \"00f1e33c-4322-42be-b120-18e0bbad3318\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ktsph" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.570523 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-jlhg6"] Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.571698 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-jlhg6" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.575441 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.576056 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.576152 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-cw6vx" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.595988 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-jlhg6"] Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.638612 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnzm2\" (UniqueName: \"kubernetes.io/projected/00f1e33c-4322-42be-b120-18e0bbad3318-kube-api-access-gnzm2\") pod \"nmstate-webhook-8474b5b9d8-ktsph\" (UID: \"00f1e33c-4322-42be-b120-18e0bbad3318\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ktsph" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.638715 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/00f1e33c-4322-42be-b120-18e0bbad3318-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-ktsph\" (UID: \"00f1e33c-4322-42be-b120-18e0bbad3318\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ktsph" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.638766 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c26d8ef0-b0d7-4095-8dd2-94aa365eb295-ovs-socket\") pod \"nmstate-handler-gb4lb\" (UID: \"c26d8ef0-b0d7-4095-8dd2-94aa365eb295\") " pod="openshift-nmstate/nmstate-handler-gb4lb" 
Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.638826 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c26d8ef0-b0d7-4095-8dd2-94aa365eb295-nmstate-lock\") pod \"nmstate-handler-gb4lb\" (UID: \"c26d8ef0-b0d7-4095-8dd2-94aa365eb295\") " pod="openshift-nmstate/nmstate-handler-gb4lb" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.638852 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlqw8\" (UniqueName: \"kubernetes.io/projected/f40b3348-54e4-43f5-9036-cbb48e93b039-kube-api-access-hlqw8\") pod \"nmstate-metrics-54757c584b-zj6jl\" (UID: \"f40b3348-54e4-43f5-9036-cbb48e93b039\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-zj6jl" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.638932 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c26d8ef0-b0d7-4095-8dd2-94aa365eb295-dbus-socket\") pod \"nmstate-handler-gb4lb\" (UID: \"c26d8ef0-b0d7-4095-8dd2-94aa365eb295\") " pod="openshift-nmstate/nmstate-handler-gb4lb" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.638970 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldf82\" (UniqueName: \"kubernetes.io/projected/c26d8ef0-b0d7-4095-8dd2-94aa365eb295-kube-api-access-ldf82\") pod \"nmstate-handler-gb4lb\" (UID: \"c26d8ef0-b0d7-4095-8dd2-94aa365eb295\") " pod="openshift-nmstate/nmstate-handler-gb4lb" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.647504 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/00f1e33c-4322-42be-b120-18e0bbad3318-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-ktsph\" (UID: \"00f1e33c-4322-42be-b120-18e0bbad3318\") " 
pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ktsph" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.666030 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlqw8\" (UniqueName: \"kubernetes.io/projected/f40b3348-54e4-43f5-9036-cbb48e93b039-kube-api-access-hlqw8\") pod \"nmstate-metrics-54757c584b-zj6jl\" (UID: \"f40b3348-54e4-43f5-9036-cbb48e93b039\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-zj6jl" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.673010 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnzm2\" (UniqueName: \"kubernetes.io/projected/00f1e33c-4322-42be-b120-18e0bbad3318-kube-api-access-gnzm2\") pod \"nmstate-webhook-8474b5b9d8-ktsph\" (UID: \"00f1e33c-4322-42be-b120-18e0bbad3318\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ktsph" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.674848 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-zj6jl" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.692944 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ktsph" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.741029 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c26d8ef0-b0d7-4095-8dd2-94aa365eb295-nmstate-lock\") pod \"nmstate-handler-gb4lb\" (UID: \"c26d8ef0-b0d7-4095-8dd2-94aa365eb295\") " pod="openshift-nmstate/nmstate-handler-gb4lb" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.741100 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfq2l\" (UniqueName: \"kubernetes.io/projected/0613cf2d-b75b-49d7-b022-7783032c5977-kube-api-access-nfq2l\") pod \"nmstate-console-plugin-7754f76f8b-jlhg6\" (UID: \"0613cf2d-b75b-49d7-b022-7783032c5977\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-jlhg6" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.741137 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c26d8ef0-b0d7-4095-8dd2-94aa365eb295-dbus-socket\") pod \"nmstate-handler-gb4lb\" (UID: \"c26d8ef0-b0d7-4095-8dd2-94aa365eb295\") " pod="openshift-nmstate/nmstate-handler-gb4lb" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.741166 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldf82\" (UniqueName: \"kubernetes.io/projected/c26d8ef0-b0d7-4095-8dd2-94aa365eb295-kube-api-access-ldf82\") pod \"nmstate-handler-gb4lb\" (UID: \"c26d8ef0-b0d7-4095-8dd2-94aa365eb295\") " pod="openshift-nmstate/nmstate-handler-gb4lb" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.741192 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0613cf2d-b75b-49d7-b022-7783032c5977-plugin-serving-cert\") pod 
\"nmstate-console-plugin-7754f76f8b-jlhg6\" (UID: \"0613cf2d-b75b-49d7-b022-7783032c5977\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-jlhg6" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.741211 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/0613cf2d-b75b-49d7-b022-7783032c5977-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-jlhg6\" (UID: \"0613cf2d-b75b-49d7-b022-7783032c5977\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-jlhg6" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.741266 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c26d8ef0-b0d7-4095-8dd2-94aa365eb295-ovs-socket\") pod \"nmstate-handler-gb4lb\" (UID: \"c26d8ef0-b0d7-4095-8dd2-94aa365eb295\") " pod="openshift-nmstate/nmstate-handler-gb4lb" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.741416 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c26d8ef0-b0d7-4095-8dd2-94aa365eb295-ovs-socket\") pod \"nmstate-handler-gb4lb\" (UID: \"c26d8ef0-b0d7-4095-8dd2-94aa365eb295\") " pod="openshift-nmstate/nmstate-handler-gb4lb" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.741450 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c26d8ef0-b0d7-4095-8dd2-94aa365eb295-nmstate-lock\") pod \"nmstate-handler-gb4lb\" (UID: \"c26d8ef0-b0d7-4095-8dd2-94aa365eb295\") " pod="openshift-nmstate/nmstate-handler-gb4lb" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.741671 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c26d8ef0-b0d7-4095-8dd2-94aa365eb295-dbus-socket\") pod \"nmstate-handler-gb4lb\" (UID: 
\"c26d8ef0-b0d7-4095-8dd2-94aa365eb295\") " pod="openshift-nmstate/nmstate-handler-gb4lb" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.768697 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldf82\" (UniqueName: \"kubernetes.io/projected/c26d8ef0-b0d7-4095-8dd2-94aa365eb295-kube-api-access-ldf82\") pod \"nmstate-handler-gb4lb\" (UID: \"c26d8ef0-b0d7-4095-8dd2-94aa365eb295\") " pod="openshift-nmstate/nmstate-handler-gb4lb" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.788908 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-gb4lb" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.827662 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5cfccffc99-hcln8"] Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.828572 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5cfccffc99-hcln8" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.845377 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5cfccffc99-hcln8"] Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.845616 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0613cf2d-b75b-49d7-b022-7783032c5977-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-jlhg6\" (UID: \"0613cf2d-b75b-49d7-b022-7783032c5977\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-jlhg6" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.845721 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/0613cf2d-b75b-49d7-b022-7783032c5977-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-jlhg6\" (UID: \"0613cf2d-b75b-49d7-b022-7783032c5977\") " 
pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-jlhg6" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.845860 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfq2l\" (UniqueName: \"kubernetes.io/projected/0613cf2d-b75b-49d7-b022-7783032c5977-kube-api-access-nfq2l\") pod \"nmstate-console-plugin-7754f76f8b-jlhg6\" (UID: \"0613cf2d-b75b-49d7-b022-7783032c5977\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-jlhg6" Jan 26 15:52:39 crc kubenswrapper[4896]: E0126 15:52:39.847953 4896 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 26 15:52:39 crc kubenswrapper[4896]: E0126 15:52:39.848052 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0613cf2d-b75b-49d7-b022-7783032c5977-plugin-serving-cert podName:0613cf2d-b75b-49d7-b022-7783032c5977 nodeName:}" failed. No retries permitted until 2026-01-26 15:52:40.348023481 +0000 UTC m=+1118.129903874 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/0613cf2d-b75b-49d7-b022-7783032c5977-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-jlhg6" (UID: "0613cf2d-b75b-49d7-b022-7783032c5977") : secret "plugin-serving-cert" not found Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.856845 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/0613cf2d-b75b-49d7-b022-7783032c5977-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-jlhg6\" (UID: \"0613cf2d-b75b-49d7-b022-7783032c5977\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-jlhg6" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.919928 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfq2l\" (UniqueName: \"kubernetes.io/projected/0613cf2d-b75b-49d7-b022-7783032c5977-kube-api-access-nfq2l\") pod \"nmstate-console-plugin-7754f76f8b-jlhg6\" (UID: \"0613cf2d-b75b-49d7-b022-7783032c5977\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-jlhg6" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.948251 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d87cc614-885a-440d-8310-bd22b599a383-console-config\") pod \"console-5cfccffc99-hcln8\" (UID: \"d87cc614-885a-440d-8310-bd22b599a383\") " pod="openshift-console/console-5cfccffc99-hcln8" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.948315 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d87cc614-885a-440d-8310-bd22b599a383-oauth-serving-cert\") pod \"console-5cfccffc99-hcln8\" (UID: \"d87cc614-885a-440d-8310-bd22b599a383\") " pod="openshift-console/console-5cfccffc99-hcln8" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 
15:52:39.948427 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d87cc614-885a-440d-8310-bd22b599a383-console-oauth-config\") pod \"console-5cfccffc99-hcln8\" (UID: \"d87cc614-885a-440d-8310-bd22b599a383\") " pod="openshift-console/console-5cfccffc99-hcln8" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.948469 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlz2l\" (UniqueName: \"kubernetes.io/projected/d87cc614-885a-440d-8310-bd22b599a383-kube-api-access-nlz2l\") pod \"console-5cfccffc99-hcln8\" (UID: \"d87cc614-885a-440d-8310-bd22b599a383\") " pod="openshift-console/console-5cfccffc99-hcln8" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.948508 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d87cc614-885a-440d-8310-bd22b599a383-trusted-ca-bundle\") pod \"console-5cfccffc99-hcln8\" (UID: \"d87cc614-885a-440d-8310-bd22b599a383\") " pod="openshift-console/console-5cfccffc99-hcln8" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.948547 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d87cc614-885a-440d-8310-bd22b599a383-service-ca\") pod \"console-5cfccffc99-hcln8\" (UID: \"d87cc614-885a-440d-8310-bd22b599a383\") " pod="openshift-console/console-5cfccffc99-hcln8" Jan 26 15:52:39 crc kubenswrapper[4896]: I0126 15:52:39.948641 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d87cc614-885a-440d-8310-bd22b599a383-console-serving-cert\") pod \"console-5cfccffc99-hcln8\" (UID: \"d87cc614-885a-440d-8310-bd22b599a383\") " 
pod="openshift-console/console-5cfccffc99-hcln8" Jan 26 15:52:40 crc kubenswrapper[4896]: I0126 15:52:40.050245 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-gb4lb" event={"ID":"c26d8ef0-b0d7-4095-8dd2-94aa365eb295","Type":"ContainerStarted","Data":"3b92c8d9535c24043318b79847e3b67eb51a9e47d2c59fb52dbfe0ae17b6f7eb"} Jan 26 15:52:40 crc kubenswrapper[4896]: I0126 15:52:40.057604 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d87cc614-885a-440d-8310-bd22b599a383-console-config\") pod \"console-5cfccffc99-hcln8\" (UID: \"d87cc614-885a-440d-8310-bd22b599a383\") " pod="openshift-console/console-5cfccffc99-hcln8" Jan 26 15:52:40 crc kubenswrapper[4896]: I0126 15:52:40.057668 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d87cc614-885a-440d-8310-bd22b599a383-oauth-serving-cert\") pod \"console-5cfccffc99-hcln8\" (UID: \"d87cc614-885a-440d-8310-bd22b599a383\") " pod="openshift-console/console-5cfccffc99-hcln8" Jan 26 15:52:40 crc kubenswrapper[4896]: I0126 15:52:40.057781 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d87cc614-885a-440d-8310-bd22b599a383-console-oauth-config\") pod \"console-5cfccffc99-hcln8\" (UID: \"d87cc614-885a-440d-8310-bd22b599a383\") " pod="openshift-console/console-5cfccffc99-hcln8" Jan 26 15:52:40 crc kubenswrapper[4896]: I0126 15:52:40.057815 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlz2l\" (UniqueName: \"kubernetes.io/projected/d87cc614-885a-440d-8310-bd22b599a383-kube-api-access-nlz2l\") pod \"console-5cfccffc99-hcln8\" (UID: \"d87cc614-885a-440d-8310-bd22b599a383\") " pod="openshift-console/console-5cfccffc99-hcln8" Jan 26 15:52:40 crc kubenswrapper[4896]: I0126 
15:52:40.057841 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d87cc614-885a-440d-8310-bd22b599a383-trusted-ca-bundle\") pod \"console-5cfccffc99-hcln8\" (UID: \"d87cc614-885a-440d-8310-bd22b599a383\") " pod="openshift-console/console-5cfccffc99-hcln8" Jan 26 15:52:40 crc kubenswrapper[4896]: I0126 15:52:40.057870 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d87cc614-885a-440d-8310-bd22b599a383-service-ca\") pod \"console-5cfccffc99-hcln8\" (UID: \"d87cc614-885a-440d-8310-bd22b599a383\") " pod="openshift-console/console-5cfccffc99-hcln8" Jan 26 15:52:40 crc kubenswrapper[4896]: I0126 15:52:40.057915 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d87cc614-885a-440d-8310-bd22b599a383-console-serving-cert\") pod \"console-5cfccffc99-hcln8\" (UID: \"d87cc614-885a-440d-8310-bd22b599a383\") " pod="openshift-console/console-5cfccffc99-hcln8" Jan 26 15:52:40 crc kubenswrapper[4896]: I0126 15:52:40.060032 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d87cc614-885a-440d-8310-bd22b599a383-trusted-ca-bundle\") pod \"console-5cfccffc99-hcln8\" (UID: \"d87cc614-885a-440d-8310-bd22b599a383\") " pod="openshift-console/console-5cfccffc99-hcln8" Jan 26 15:52:40 crc kubenswrapper[4896]: I0126 15:52:40.060358 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d87cc614-885a-440d-8310-bd22b599a383-console-config\") pod \"console-5cfccffc99-hcln8\" (UID: \"d87cc614-885a-440d-8310-bd22b599a383\") " pod="openshift-console/console-5cfccffc99-hcln8" Jan 26 15:52:40 crc kubenswrapper[4896]: I0126 15:52:40.064867 4896 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d87cc614-885a-440d-8310-bd22b599a383-oauth-serving-cert\") pod \"console-5cfccffc99-hcln8\" (UID: \"d87cc614-885a-440d-8310-bd22b599a383\") " pod="openshift-console/console-5cfccffc99-hcln8" Jan 26 15:52:40 crc kubenswrapper[4896]: I0126 15:52:40.076147 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d87cc614-885a-440d-8310-bd22b599a383-console-oauth-config\") pod \"console-5cfccffc99-hcln8\" (UID: \"d87cc614-885a-440d-8310-bd22b599a383\") " pod="openshift-console/console-5cfccffc99-hcln8" Jan 26 15:52:40 crc kubenswrapper[4896]: I0126 15:52:40.076160 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d87cc614-885a-440d-8310-bd22b599a383-service-ca\") pod \"console-5cfccffc99-hcln8\" (UID: \"d87cc614-885a-440d-8310-bd22b599a383\") " pod="openshift-console/console-5cfccffc99-hcln8" Jan 26 15:52:40 crc kubenswrapper[4896]: I0126 15:52:40.079396 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d87cc614-885a-440d-8310-bd22b599a383-console-serving-cert\") pod \"console-5cfccffc99-hcln8\" (UID: \"d87cc614-885a-440d-8310-bd22b599a383\") " pod="openshift-console/console-5cfccffc99-hcln8" Jan 26 15:52:40 crc kubenswrapper[4896]: I0126 15:52:40.087258 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlz2l\" (UniqueName: \"kubernetes.io/projected/d87cc614-885a-440d-8310-bd22b599a383-kube-api-access-nlz2l\") pod \"console-5cfccffc99-hcln8\" (UID: \"d87cc614-885a-440d-8310-bd22b599a383\") " pod="openshift-console/console-5cfccffc99-hcln8" Jan 26 15:52:40 crc kubenswrapper[4896]: I0126 15:52:40.205036 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5cfccffc99-hcln8" Jan 26 15:52:40 crc kubenswrapper[4896]: I0126 15:52:40.362412 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0613cf2d-b75b-49d7-b022-7783032c5977-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-jlhg6\" (UID: \"0613cf2d-b75b-49d7-b022-7783032c5977\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-jlhg6" Jan 26 15:52:40 crc kubenswrapper[4896]: I0126 15:52:40.367955 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0613cf2d-b75b-49d7-b022-7783032c5977-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-jlhg6\" (UID: \"0613cf2d-b75b-49d7-b022-7783032c5977\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-jlhg6" Jan 26 15:52:40 crc kubenswrapper[4896]: I0126 15:52:40.435102 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-zj6jl"] Jan 26 15:52:40 crc kubenswrapper[4896]: I0126 15:52:40.494522 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-jlhg6" Jan 26 15:52:40 crc kubenswrapper[4896]: I0126 15:52:40.494720 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-ktsph"] Jan 26 15:52:40 crc kubenswrapper[4896]: W0126 15:52:40.856610 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd87cc614_885a_440d_8310_bd22b599a383.slice/crio-a33936c88a2a578362a964e1064490121b7ee517ebfed1c72251932336b2fea1 WatchSource:0}: Error finding container a33936c88a2a578362a964e1064490121b7ee517ebfed1c72251932336b2fea1: Status 404 returned error can't find the container with id a33936c88a2a578362a964e1064490121b7ee517ebfed1c72251932336b2fea1 Jan 26 15:52:40 crc kubenswrapper[4896]: I0126 15:52:40.861228 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5cfccffc99-hcln8"] Jan 26 15:52:41 crc kubenswrapper[4896]: I0126 15:52:41.059857 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5cfccffc99-hcln8" event={"ID":"d87cc614-885a-440d-8310-bd22b599a383","Type":"ContainerStarted","Data":"a33936c88a2a578362a964e1064490121b7ee517ebfed1c72251932336b2fea1"} Jan 26 15:52:41 crc kubenswrapper[4896]: I0126 15:52:41.061415 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ktsph" event={"ID":"00f1e33c-4322-42be-b120-18e0bbad3318","Type":"ContainerStarted","Data":"0223eb388134712c1d33069fdd8f6ef5142b7b6abdae41510c81bd4361cb994d"} Jan 26 15:52:41 crc kubenswrapper[4896]: I0126 15:52:41.062351 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-zj6jl" event={"ID":"f40b3348-54e4-43f5-9036-cbb48e93b039","Type":"ContainerStarted","Data":"01257a59576909968f983829da5a940bc6725c875118f7519e90a17e9edf2e18"} Jan 26 15:52:41 crc kubenswrapper[4896]: I0126 
15:52:41.095350 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-jlhg6"] Jan 26 15:52:41 crc kubenswrapper[4896]: W0126 15:52:41.100714 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0613cf2d_b75b_49d7_b022_7783032c5977.slice/crio-c283ef7744b77af75463e377d2a706f76662d8bf3503f3cf0d0d8dc134dd9614 WatchSource:0}: Error finding container c283ef7744b77af75463e377d2a706f76662d8bf3503f3cf0d0d8dc134dd9614: Status 404 returned error can't find the container with id c283ef7744b77af75463e377d2a706f76662d8bf3503f3cf0d0d8dc134dd9614 Jan 26 15:52:42 crc kubenswrapper[4896]: I0126 15:52:42.095695 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5cfccffc99-hcln8" event={"ID":"d87cc614-885a-440d-8310-bd22b599a383","Type":"ContainerStarted","Data":"5ea2589e4686a46d835d293ed5cfd303adf11a3abb4e1a254518f7daa1744cb3"} Jan 26 15:52:42 crc kubenswrapper[4896]: I0126 15:52:42.097028 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-jlhg6" event={"ID":"0613cf2d-b75b-49d7-b022-7783032c5977","Type":"ContainerStarted","Data":"c283ef7744b77af75463e377d2a706f76662d8bf3503f3cf0d0d8dc134dd9614"} Jan 26 15:52:42 crc kubenswrapper[4896]: I0126 15:52:42.788051 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5cfccffc99-hcln8" podStartSLOduration=3.788029244 podStartE2EDuration="3.788029244s" podCreationTimestamp="2026-01-26 15:52:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:52:42.121546532 +0000 UTC m=+1119.903426935" watchObservedRunningTime="2026-01-26 15:52:42.788029244 +0000 UTC m=+1120.569909637" Jan 26 15:52:45 crc kubenswrapper[4896]: I0126 15:52:45.120803 4896 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-gb4lb" event={"ID":"c26d8ef0-b0d7-4095-8dd2-94aa365eb295","Type":"ContainerStarted","Data":"da2a57b7cf6f2bfa835244306a52cd971f6cbf47e6d067b955a55784b23b7aae"} Jan 26 15:52:45 crc kubenswrapper[4896]: I0126 15:52:45.121301 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-gb4lb" Jan 26 15:52:45 crc kubenswrapper[4896]: I0126 15:52:45.122362 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ktsph" event={"ID":"00f1e33c-4322-42be-b120-18e0bbad3318","Type":"ContainerStarted","Data":"c1fcfecc5a7d8d73e7317ac3beaa9384aef95d157a8ad3f4a072870be5eb4737"} Jan 26 15:52:45 crc kubenswrapper[4896]: I0126 15:52:45.122444 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ktsph" Jan 26 15:52:45 crc kubenswrapper[4896]: I0126 15:52:45.124287 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-zj6jl" event={"ID":"f40b3348-54e4-43f5-9036-cbb48e93b039","Type":"ContainerStarted","Data":"37e83d87cccef4a31bd388a208a9ae043329d24fad8991eb69fe4f03377a37d0"} Jan 26 15:52:45 crc kubenswrapper[4896]: I0126 15:52:45.141337 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-gb4lb" podStartSLOduration=2.037234246 podStartE2EDuration="6.141321823s" podCreationTimestamp="2026-01-26 15:52:39 +0000 UTC" firstStartedPulling="2026-01-26 15:52:39.939661002 +0000 UTC m=+1117.721541395" lastFinishedPulling="2026-01-26 15:52:44.043748579 +0000 UTC m=+1121.825628972" observedRunningTime="2026-01-26 15:52:45.138552636 +0000 UTC m=+1122.920433029" watchObservedRunningTime="2026-01-26 15:52:45.141321823 +0000 UTC m=+1122.923202216" Jan 26 15:52:45 crc kubenswrapper[4896]: I0126 15:52:45.170218 4896 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ktsph" podStartSLOduration=2.635475596 podStartE2EDuration="6.170196914s" podCreationTimestamp="2026-01-26 15:52:39 +0000 UTC" firstStartedPulling="2026-01-26 15:52:40.509188235 +0000 UTC m=+1118.291068628" lastFinishedPulling="2026-01-26 15:52:44.043909553 +0000 UTC m=+1121.825789946" observedRunningTime="2026-01-26 15:52:45.156459895 +0000 UTC m=+1122.938340288" watchObservedRunningTime="2026-01-26 15:52:45.170196914 +0000 UTC m=+1122.952077327" Jan 26 15:52:46 crc kubenswrapper[4896]: I0126 15:52:46.140668 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-jlhg6" event={"ID":"0613cf2d-b75b-49d7-b022-7783032c5977","Type":"ContainerStarted","Data":"7fccad89f465f907710393fb3848d9a0931bf3644311c3f2b0463bacc19ac5d2"} Jan 26 15:52:46 crc kubenswrapper[4896]: I0126 15:52:46.172321 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-jlhg6" podStartSLOduration=2.990109099 podStartE2EDuration="7.172285944s" podCreationTimestamp="2026-01-26 15:52:39 +0000 UTC" firstStartedPulling="2026-01-26 15:52:41.102815165 +0000 UTC m=+1118.884695558" lastFinishedPulling="2026-01-26 15:52:45.28499201 +0000 UTC m=+1123.066872403" observedRunningTime="2026-01-26 15:52:46.158232797 +0000 UTC m=+1123.940113190" watchObservedRunningTime="2026-01-26 15:52:46.172285944 +0000 UTC m=+1123.954166337" Jan 26 15:52:47 crc kubenswrapper[4896]: I0126 15:52:47.149789 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-zj6jl" event={"ID":"f40b3348-54e4-43f5-9036-cbb48e93b039","Type":"ContainerStarted","Data":"3b17ded04e361aed8bd2f001d452c25d77e515a9e3abef8dacd0caa5134eba02"} Jan 26 15:52:47 crc kubenswrapper[4896]: I0126 15:52:47.170471 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-nmstate/nmstate-metrics-54757c584b-zj6jl" podStartSLOduration=1.7125577 podStartE2EDuration="8.170452149s" podCreationTimestamp="2026-01-26 15:52:39 +0000 UTC" firstStartedPulling="2026-01-26 15:52:40.447908799 +0000 UTC m=+1118.229789192" lastFinishedPulling="2026-01-26 15:52:46.905803238 +0000 UTC m=+1124.687683641" observedRunningTime="2026-01-26 15:52:47.169871385 +0000 UTC m=+1124.951751788" watchObservedRunningTime="2026-01-26 15:52:47.170452149 +0000 UTC m=+1124.952332542" Jan 26 15:52:49 crc kubenswrapper[4896]: I0126 15:52:49.879507 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-gb4lb" Jan 26 15:52:50 crc kubenswrapper[4896]: I0126 15:52:50.206223 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5cfccffc99-hcln8" Jan 26 15:52:50 crc kubenswrapper[4896]: I0126 15:52:50.206277 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5cfccffc99-hcln8" Jan 26 15:52:50 crc kubenswrapper[4896]: I0126 15:52:50.212460 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5cfccffc99-hcln8" Jan 26 15:52:51 crc kubenswrapper[4896]: I0126 15:52:51.181754 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5cfccffc99-hcln8" Jan 26 15:52:51 crc kubenswrapper[4896]: I0126 15:52:51.241368 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-544466ff54-8zfqf"] Jan 26 15:52:59 crc kubenswrapper[4896]: I0126 15:52:59.698784 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ktsph" Jan 26 15:53:16 crc kubenswrapper[4896]: I0126 15:53:16.288706 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-544466ff54-8zfqf" 
podUID="f64ac1ba-c007-4df3-8952-e65b53e18d91" containerName="console" containerID="cri-o://045647bc0f5ad955b304c99ae90bfe7fa785c29aedd528f33472fe3559c0de06" gracePeriod=15 Jan 26 15:53:16 crc kubenswrapper[4896]: I0126 15:53:16.761993 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-544466ff54-8zfqf_f64ac1ba-c007-4df3-8952-e65b53e18d91/console/0.log" Jan 26 15:53:16 crc kubenswrapper[4896]: I0126 15:53:16.762746 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-544466ff54-8zfqf" Jan 26 15:53:16 crc kubenswrapper[4896]: I0126 15:53:16.851952 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f64ac1ba-c007-4df3-8952-e65b53e18d91-console-config\") pod \"f64ac1ba-c007-4df3-8952-e65b53e18d91\" (UID: \"f64ac1ba-c007-4df3-8952-e65b53e18d91\") " Jan 26 15:53:16 crc kubenswrapper[4896]: I0126 15:53:16.852024 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l9z7\" (UniqueName: \"kubernetes.io/projected/f64ac1ba-c007-4df3-8952-e65b53e18d91-kube-api-access-5l9z7\") pod \"f64ac1ba-c007-4df3-8952-e65b53e18d91\" (UID: \"f64ac1ba-c007-4df3-8952-e65b53e18d91\") " Jan 26 15:53:16 crc kubenswrapper[4896]: I0126 15:53:16.852050 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f64ac1ba-c007-4df3-8952-e65b53e18d91-console-oauth-config\") pod \"f64ac1ba-c007-4df3-8952-e65b53e18d91\" (UID: \"f64ac1ba-c007-4df3-8952-e65b53e18d91\") " Jan 26 15:53:16 crc kubenswrapper[4896]: I0126 15:53:16.852084 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f64ac1ba-c007-4df3-8952-e65b53e18d91-trusted-ca-bundle\") pod \"f64ac1ba-c007-4df3-8952-e65b53e18d91\" (UID: 
\"f64ac1ba-c007-4df3-8952-e65b53e18d91\") " Jan 26 15:53:16 crc kubenswrapper[4896]: I0126 15:53:16.852105 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f64ac1ba-c007-4df3-8952-e65b53e18d91-oauth-serving-cert\") pod \"f64ac1ba-c007-4df3-8952-e65b53e18d91\" (UID: \"f64ac1ba-c007-4df3-8952-e65b53e18d91\") " Jan 26 15:53:16 crc kubenswrapper[4896]: I0126 15:53:16.852184 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f64ac1ba-c007-4df3-8952-e65b53e18d91-console-serving-cert\") pod \"f64ac1ba-c007-4df3-8952-e65b53e18d91\" (UID: \"f64ac1ba-c007-4df3-8952-e65b53e18d91\") " Jan 26 15:53:16 crc kubenswrapper[4896]: I0126 15:53:16.852205 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f64ac1ba-c007-4df3-8952-e65b53e18d91-service-ca\") pod \"f64ac1ba-c007-4df3-8952-e65b53e18d91\" (UID: \"f64ac1ba-c007-4df3-8952-e65b53e18d91\") " Jan 26 15:53:16 crc kubenswrapper[4896]: I0126 15:53:16.854335 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f64ac1ba-c007-4df3-8952-e65b53e18d91-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "f64ac1ba-c007-4df3-8952-e65b53e18d91" (UID: "f64ac1ba-c007-4df3-8952-e65b53e18d91"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:53:16 crc kubenswrapper[4896]: I0126 15:53:16.854557 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f64ac1ba-c007-4df3-8952-e65b53e18d91-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f64ac1ba-c007-4df3-8952-e65b53e18d91" (UID: "f64ac1ba-c007-4df3-8952-e65b53e18d91"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:53:16 crc kubenswrapper[4896]: I0126 15:53:16.855109 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f64ac1ba-c007-4df3-8952-e65b53e18d91-service-ca" (OuterVolumeSpecName: "service-ca") pod "f64ac1ba-c007-4df3-8952-e65b53e18d91" (UID: "f64ac1ba-c007-4df3-8952-e65b53e18d91"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:53:16 crc kubenswrapper[4896]: I0126 15:53:16.855153 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f64ac1ba-c007-4df3-8952-e65b53e18d91-console-config" (OuterVolumeSpecName: "console-config") pod "f64ac1ba-c007-4df3-8952-e65b53e18d91" (UID: "f64ac1ba-c007-4df3-8952-e65b53e18d91"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:53:16 crc kubenswrapper[4896]: I0126 15:53:16.859416 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f64ac1ba-c007-4df3-8952-e65b53e18d91-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "f64ac1ba-c007-4df3-8952-e65b53e18d91" (UID: "f64ac1ba-c007-4df3-8952-e65b53e18d91"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:53:16 crc kubenswrapper[4896]: I0126 15:53:16.859569 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f64ac1ba-c007-4df3-8952-e65b53e18d91-kube-api-access-5l9z7" (OuterVolumeSpecName: "kube-api-access-5l9z7") pod "f64ac1ba-c007-4df3-8952-e65b53e18d91" (UID: "f64ac1ba-c007-4df3-8952-e65b53e18d91"). InnerVolumeSpecName "kube-api-access-5l9z7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:53:16 crc kubenswrapper[4896]: I0126 15:53:16.864991 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f64ac1ba-c007-4df3-8952-e65b53e18d91-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "f64ac1ba-c007-4df3-8952-e65b53e18d91" (UID: "f64ac1ba-c007-4df3-8952-e65b53e18d91"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:53:16 crc kubenswrapper[4896]: I0126 15:53:16.954132 4896 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f64ac1ba-c007-4df3-8952-e65b53e18d91-console-config\") on node \"crc\" DevicePath \"\""
Jan 26 15:53:16 crc kubenswrapper[4896]: I0126 15:53:16.954176 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5l9z7\" (UniqueName: \"kubernetes.io/projected/f64ac1ba-c007-4df3-8952-e65b53e18d91-kube-api-access-5l9z7\") on node \"crc\" DevicePath \"\""
Jan 26 15:53:16 crc kubenswrapper[4896]: I0126 15:53:16.954195 4896 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f64ac1ba-c007-4df3-8952-e65b53e18d91-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 26 15:53:16 crc kubenswrapper[4896]: I0126 15:53:16.954207 4896 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f64ac1ba-c007-4df3-8952-e65b53e18d91-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 15:53:16 crc kubenswrapper[4896]: I0126 15:53:16.954219 4896 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f64ac1ba-c007-4df3-8952-e65b53e18d91-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 15:53:16 crc kubenswrapper[4896]: I0126 15:53:16.954231 4896 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f64ac1ba-c007-4df3-8952-e65b53e18d91-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 15:53:16 crc kubenswrapper[4896]: I0126 15:53:16.954242 4896 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f64ac1ba-c007-4df3-8952-e65b53e18d91-service-ca\") on node \"crc\" DevicePath \"\""
Jan 26 15:53:17 crc kubenswrapper[4896]: I0126 15:53:17.391040 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-544466ff54-8zfqf_f64ac1ba-c007-4df3-8952-e65b53e18d91/console/0.log"
Jan 26 15:53:17 crc kubenswrapper[4896]: I0126 15:53:17.391093 4896 generic.go:334] "Generic (PLEG): container finished" podID="f64ac1ba-c007-4df3-8952-e65b53e18d91" containerID="045647bc0f5ad955b304c99ae90bfe7fa785c29aedd528f33472fe3559c0de06" exitCode=2
Jan 26 15:53:17 crc kubenswrapper[4896]: I0126 15:53:17.391130 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-544466ff54-8zfqf" event={"ID":"f64ac1ba-c007-4df3-8952-e65b53e18d91","Type":"ContainerDied","Data":"045647bc0f5ad955b304c99ae90bfe7fa785c29aedd528f33472fe3559c0de06"}
Jan 26 15:53:17 crc kubenswrapper[4896]: I0126 15:53:17.391161 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-544466ff54-8zfqf" event={"ID":"f64ac1ba-c007-4df3-8952-e65b53e18d91","Type":"ContainerDied","Data":"d78eda112e6b0509dc203acb6f54bb432fba462e4d582514ca4a1012ca5abba2"}
Jan 26 15:53:17 crc kubenswrapper[4896]: I0126 15:53:17.391181 4896 scope.go:117] "RemoveContainer" containerID="045647bc0f5ad955b304c99ae90bfe7fa785c29aedd528f33472fe3559c0de06"
Jan 26 15:53:17 crc kubenswrapper[4896]: I0126 15:53:17.391356 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-544466ff54-8zfqf"
Jan 26 15:53:17 crc kubenswrapper[4896]: I0126 15:53:17.427772 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-544466ff54-8zfqf"]
Jan 26 15:53:17 crc kubenswrapper[4896]: I0126 15:53:17.431646 4896 scope.go:117] "RemoveContainer" containerID="045647bc0f5ad955b304c99ae90bfe7fa785c29aedd528f33472fe3559c0de06"
Jan 26 15:53:17 crc kubenswrapper[4896]: E0126 15:53:17.432145 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"045647bc0f5ad955b304c99ae90bfe7fa785c29aedd528f33472fe3559c0de06\": container with ID starting with 045647bc0f5ad955b304c99ae90bfe7fa785c29aedd528f33472fe3559c0de06 not found: ID does not exist" containerID="045647bc0f5ad955b304c99ae90bfe7fa785c29aedd528f33472fe3559c0de06"
Jan 26 15:53:17 crc kubenswrapper[4896]: I0126 15:53:17.432180 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"045647bc0f5ad955b304c99ae90bfe7fa785c29aedd528f33472fe3559c0de06"} err="failed to get container status \"045647bc0f5ad955b304c99ae90bfe7fa785c29aedd528f33472fe3559c0de06\": rpc error: code = NotFound desc = could not find container \"045647bc0f5ad955b304c99ae90bfe7fa785c29aedd528f33472fe3559c0de06\": container with ID starting with 045647bc0f5ad955b304c99ae90bfe7fa785c29aedd528f33472fe3559c0de06 not found: ID does not exist"
Jan 26 15:53:17 crc kubenswrapper[4896]: I0126 15:53:17.434505 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-544466ff54-8zfqf"]
Jan 26 15:53:17 crc kubenswrapper[4896]: I0126 15:53:17.746965 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckd749"]
Jan 26 15:53:17 crc kubenswrapper[4896]: E0126 15:53:17.747605 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f64ac1ba-c007-4df3-8952-e65b53e18d91" containerName="console"
Jan 26 15:53:17 crc kubenswrapper[4896]: I0126 15:53:17.747625 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="f64ac1ba-c007-4df3-8952-e65b53e18d91" containerName="console"
Jan 26 15:53:17 crc kubenswrapper[4896]: I0126 15:53:17.747813 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="f64ac1ba-c007-4df3-8952-e65b53e18d91" containerName="console"
Jan 26 15:53:17 crc kubenswrapper[4896]: I0126 15:53:17.749073 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckd749"
Jan 26 15:53:17 crc kubenswrapper[4896]: I0126 15:53:17.750900 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 26 15:53:17 crc kubenswrapper[4896]: I0126 15:53:17.758568 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckd749"]
Jan 26 15:53:17 crc kubenswrapper[4896]: I0126 15:53:17.871907 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnclt\" (UniqueName: \"kubernetes.io/projected/67c7dab8-b04f-415d-b859-138fa4c24117-kube-api-access-bnclt\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckd749\" (UID: \"67c7dab8-b04f-415d-b859-138fa4c24117\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckd749"
Jan 26 15:53:17 crc kubenswrapper[4896]: I0126 15:53:17.871975 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/67c7dab8-b04f-415d-b859-138fa4c24117-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckd749\" (UID: \"67c7dab8-b04f-415d-b859-138fa4c24117\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckd749"
Jan 26 15:53:17 crc kubenswrapper[4896]: I0126 15:53:17.872050 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/67c7dab8-b04f-415d-b859-138fa4c24117-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckd749\" (UID: \"67c7dab8-b04f-415d-b859-138fa4c24117\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckd749"
Jan 26 15:53:17 crc kubenswrapper[4896]: I0126 15:53:17.974137 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnclt\" (UniqueName: \"kubernetes.io/projected/67c7dab8-b04f-415d-b859-138fa4c24117-kube-api-access-bnclt\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckd749\" (UID: \"67c7dab8-b04f-415d-b859-138fa4c24117\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckd749"
Jan 26 15:53:17 crc kubenswrapper[4896]: I0126 15:53:17.974217 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/67c7dab8-b04f-415d-b859-138fa4c24117-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckd749\" (UID: \"67c7dab8-b04f-415d-b859-138fa4c24117\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckd749"
Jan 26 15:53:17 crc kubenswrapper[4896]: I0126 15:53:17.974293 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/67c7dab8-b04f-415d-b859-138fa4c24117-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckd749\" (UID: \"67c7dab8-b04f-415d-b859-138fa4c24117\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckd749"
Jan 26 15:53:17 crc kubenswrapper[4896]: I0126 15:53:17.974913 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/67c7dab8-b04f-415d-b859-138fa4c24117-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckd749\" (UID: \"67c7dab8-b04f-415d-b859-138fa4c24117\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckd749"
Jan 26 15:53:17 crc kubenswrapper[4896]: I0126 15:53:17.975198 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/67c7dab8-b04f-415d-b859-138fa4c24117-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckd749\" (UID: \"67c7dab8-b04f-415d-b859-138fa4c24117\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckd749"
Jan 26 15:53:18 crc kubenswrapper[4896]: I0126 15:53:18.003342 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnclt\" (UniqueName: \"kubernetes.io/projected/67c7dab8-b04f-415d-b859-138fa4c24117-kube-api-access-bnclt\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckd749\" (UID: \"67c7dab8-b04f-415d-b859-138fa4c24117\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckd749"
Jan 26 15:53:18 crc kubenswrapper[4896]: I0126 15:53:18.065198 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckd749"
Jan 26 15:53:18 crc kubenswrapper[4896]: I0126 15:53:18.478557 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckd749"]
Jan 26 15:53:18 crc kubenswrapper[4896]: I0126 15:53:18.772756 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f64ac1ba-c007-4df3-8952-e65b53e18d91" path="/var/lib/kubelet/pods/f64ac1ba-c007-4df3-8952-e65b53e18d91/volumes"
Jan 26 15:53:19 crc kubenswrapper[4896]: I0126 15:53:19.407406 4896 generic.go:334] "Generic (PLEG): container finished" podID="67c7dab8-b04f-415d-b859-138fa4c24117" containerID="473157b0a7947b6bb531d543c9c68e8c0ba264937b26c52d7c975dd1ee23c104" exitCode=0
Jan 26 15:53:19 crc kubenswrapper[4896]: I0126 15:53:19.407473 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckd749" event={"ID":"67c7dab8-b04f-415d-b859-138fa4c24117","Type":"ContainerDied","Data":"473157b0a7947b6bb531d543c9c68e8c0ba264937b26c52d7c975dd1ee23c104"}
Jan 26 15:53:19 crc kubenswrapper[4896]: I0126 15:53:19.407752 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckd749" event={"ID":"67c7dab8-b04f-415d-b859-138fa4c24117","Type":"ContainerStarted","Data":"11e09eec7a1186128e2534cfb678b61979a3abb6939640de1b3e10e7d332035c"}
Jan 26 15:53:21 crc kubenswrapper[4896]: I0126 15:53:21.425106 4896 generic.go:334] "Generic (PLEG): container finished" podID="67c7dab8-b04f-415d-b859-138fa4c24117" containerID="11bdd1ad73dc2d86659c880da0177f0334e813191ba6c38ad80b39556d941922" exitCode=0
Jan 26 15:53:21 crc kubenswrapper[4896]: I0126 15:53:21.425166 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckd749" event={"ID":"67c7dab8-b04f-415d-b859-138fa4c24117","Type":"ContainerDied","Data":"11bdd1ad73dc2d86659c880da0177f0334e813191ba6c38ad80b39556d941922"}
Jan 26 15:53:22 crc kubenswrapper[4896]: I0126 15:53:22.435959 4896 generic.go:334] "Generic (PLEG): container finished" podID="67c7dab8-b04f-415d-b859-138fa4c24117" containerID="0ee71db9353f0c5e080407a6763c286534af9e1eb2db683f8e88e2e68630fdc4" exitCode=0
Jan 26 15:53:22 crc kubenswrapper[4896]: I0126 15:53:22.436058 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckd749" event={"ID":"67c7dab8-b04f-415d-b859-138fa4c24117","Type":"ContainerDied","Data":"0ee71db9353f0c5e080407a6763c286534af9e1eb2db683f8e88e2e68630fdc4"}
Jan 26 15:53:23 crc kubenswrapper[4896]: I0126 15:53:23.821555 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckd749"
Jan 26 15:53:23 crc kubenswrapper[4896]: I0126 15:53:23.883172 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/67c7dab8-b04f-415d-b859-138fa4c24117-bundle\") pod \"67c7dab8-b04f-415d-b859-138fa4c24117\" (UID: \"67c7dab8-b04f-415d-b859-138fa4c24117\") "
Jan 26 15:53:23 crc kubenswrapper[4896]: I0126 15:53:23.883343 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/67c7dab8-b04f-415d-b859-138fa4c24117-util\") pod \"67c7dab8-b04f-415d-b859-138fa4c24117\" (UID: \"67c7dab8-b04f-415d-b859-138fa4c24117\") "
Jan 26 15:53:23 crc kubenswrapper[4896]: I0126 15:53:23.883432 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnclt\" (UniqueName: \"kubernetes.io/projected/67c7dab8-b04f-415d-b859-138fa4c24117-kube-api-access-bnclt\") pod \"67c7dab8-b04f-415d-b859-138fa4c24117\" (UID: \"67c7dab8-b04f-415d-b859-138fa4c24117\") "
Jan 26 15:53:23 crc kubenswrapper[4896]: I0126 15:53:23.884394 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67c7dab8-b04f-415d-b859-138fa4c24117-bundle" (OuterVolumeSpecName: "bundle") pod "67c7dab8-b04f-415d-b859-138fa4c24117" (UID: "67c7dab8-b04f-415d-b859-138fa4c24117"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 15:53:23 crc kubenswrapper[4896]: I0126 15:53:23.889454 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67c7dab8-b04f-415d-b859-138fa4c24117-kube-api-access-bnclt" (OuterVolumeSpecName: "kube-api-access-bnclt") pod "67c7dab8-b04f-415d-b859-138fa4c24117" (UID: "67c7dab8-b04f-415d-b859-138fa4c24117"). InnerVolumeSpecName "kube-api-access-bnclt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:53:23 crc kubenswrapper[4896]: I0126 15:53:23.899622 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67c7dab8-b04f-415d-b859-138fa4c24117-util" (OuterVolumeSpecName: "util") pod "67c7dab8-b04f-415d-b859-138fa4c24117" (UID: "67c7dab8-b04f-415d-b859-138fa4c24117"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 15:53:23 crc kubenswrapper[4896]: I0126 15:53:23.985786 4896 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/67c7dab8-b04f-415d-b859-138fa4c24117-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 15:53:23 crc kubenswrapper[4896]: I0126 15:53:23.985852 4896 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/67c7dab8-b04f-415d-b859-138fa4c24117-util\") on node \"crc\" DevicePath \"\""
Jan 26 15:53:23 crc kubenswrapper[4896]: I0126 15:53:23.985867 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnclt\" (UniqueName: \"kubernetes.io/projected/67c7dab8-b04f-415d-b859-138fa4c24117-kube-api-access-bnclt\") on node \"crc\" DevicePath \"\""
Jan 26 15:53:24 crc kubenswrapper[4896]: I0126 15:53:24.455136 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckd749" event={"ID":"67c7dab8-b04f-415d-b859-138fa4c24117","Type":"ContainerDied","Data":"11e09eec7a1186128e2534cfb678b61979a3abb6939640de1b3e10e7d332035c"}
Jan 26 15:53:24 crc kubenswrapper[4896]: I0126 15:53:24.455176 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckd749"
Jan 26 15:53:24 crc kubenswrapper[4896]: I0126 15:53:24.455232 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11e09eec7a1186128e2534cfb678b61979a3abb6939640de1b3e10e7d332035c"
Jan 26 15:53:32 crc kubenswrapper[4896]: I0126 15:53:32.426373 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5c8c8d84d-97894"]
Jan 26 15:53:32 crc kubenswrapper[4896]: E0126 15:53:32.427325 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67c7dab8-b04f-415d-b859-138fa4c24117" containerName="util"
Jan 26 15:53:32 crc kubenswrapper[4896]: I0126 15:53:32.427342 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="67c7dab8-b04f-415d-b859-138fa4c24117" containerName="util"
Jan 26 15:53:32 crc kubenswrapper[4896]: E0126 15:53:32.427363 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67c7dab8-b04f-415d-b859-138fa4c24117" containerName="extract"
Jan 26 15:53:32 crc kubenswrapper[4896]: I0126 15:53:32.427371 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="67c7dab8-b04f-415d-b859-138fa4c24117" containerName="extract"
Jan 26 15:53:32 crc kubenswrapper[4896]: E0126 15:53:32.427399 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67c7dab8-b04f-415d-b859-138fa4c24117" containerName="pull"
Jan 26 15:53:32 crc kubenswrapper[4896]: I0126 15:53:32.427409 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="67c7dab8-b04f-415d-b859-138fa4c24117" containerName="pull"
Jan 26 15:53:32 crc kubenswrapper[4896]: I0126 15:53:32.427576 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="67c7dab8-b04f-415d-b859-138fa4c24117" containerName="extract"
Jan 26 15:53:32 crc kubenswrapper[4896]: I0126 15:53:32.428248 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5c8c8d84d-97894"
Jan 26 15:53:32 crc kubenswrapper[4896]: I0126 15:53:32.430553 4896 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Jan 26 15:53:32 crc kubenswrapper[4896]: I0126 15:53:32.430842 4896 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Jan 26 15:53:32 crc kubenswrapper[4896]: I0126 15:53:32.437717 4896 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-hvw4c"
Jan 26 15:53:32 crc kubenswrapper[4896]: I0126 15:53:32.439731 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Jan 26 15:53:32 crc kubenswrapper[4896]: I0126 15:53:32.439943 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Jan 26 15:53:32 crc kubenswrapper[4896]: I0126 15:53:32.457301 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5c8c8d84d-97894"]
Jan 26 15:53:32 crc kubenswrapper[4896]: I0126 15:53:32.534307 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8a59a62f-3748-43b7-baa0-cd121242caea-webhook-cert\") pod \"metallb-operator-controller-manager-5c8c8d84d-97894\" (UID: \"8a59a62f-3748-43b7-baa0-cd121242caea\") " pod="metallb-system/metallb-operator-controller-manager-5c8c8d84d-97894"
Jan 26 15:53:32 crc kubenswrapper[4896]: I0126 15:53:32.534479 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfn2v\" (UniqueName: \"kubernetes.io/projected/8a59a62f-3748-43b7-baa0-cd121242caea-kube-api-access-sfn2v\") pod \"metallb-operator-controller-manager-5c8c8d84d-97894\" (UID: \"8a59a62f-3748-43b7-baa0-cd121242caea\") " pod="metallb-system/metallb-operator-controller-manager-5c8c8d84d-97894"
Jan 26 15:53:32 crc kubenswrapper[4896]: I0126 15:53:32.534534 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8a59a62f-3748-43b7-baa0-cd121242caea-apiservice-cert\") pod \"metallb-operator-controller-manager-5c8c8d84d-97894\" (UID: \"8a59a62f-3748-43b7-baa0-cd121242caea\") " pod="metallb-system/metallb-operator-controller-manager-5c8c8d84d-97894"
Jan 26 15:53:32 crc kubenswrapper[4896]: I0126 15:53:32.636309 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8a59a62f-3748-43b7-baa0-cd121242caea-webhook-cert\") pod \"metallb-operator-controller-manager-5c8c8d84d-97894\" (UID: \"8a59a62f-3748-43b7-baa0-cd121242caea\") " pod="metallb-system/metallb-operator-controller-manager-5c8c8d84d-97894"
Jan 26 15:53:32 crc kubenswrapper[4896]: I0126 15:53:32.636398 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfn2v\" (UniqueName: \"kubernetes.io/projected/8a59a62f-3748-43b7-baa0-cd121242caea-kube-api-access-sfn2v\") pod \"metallb-operator-controller-manager-5c8c8d84d-97894\" (UID: \"8a59a62f-3748-43b7-baa0-cd121242caea\") " pod="metallb-system/metallb-operator-controller-manager-5c8c8d84d-97894"
Jan 26 15:53:32 crc kubenswrapper[4896]: I0126 15:53:32.636418 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8a59a62f-3748-43b7-baa0-cd121242caea-apiservice-cert\") pod \"metallb-operator-controller-manager-5c8c8d84d-97894\" (UID: \"8a59a62f-3748-43b7-baa0-cd121242caea\") " pod="metallb-system/metallb-operator-controller-manager-5c8c8d84d-97894"
Jan 26 15:53:32 crc kubenswrapper[4896]: I0126 15:53:32.642424 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8a59a62f-3748-43b7-baa0-cd121242caea-webhook-cert\") pod \"metallb-operator-controller-manager-5c8c8d84d-97894\" (UID: \"8a59a62f-3748-43b7-baa0-cd121242caea\") " pod="metallb-system/metallb-operator-controller-manager-5c8c8d84d-97894"
Jan 26 15:53:32 crc kubenswrapper[4896]: I0126 15:53:32.655139 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8a59a62f-3748-43b7-baa0-cd121242caea-apiservice-cert\") pod \"metallb-operator-controller-manager-5c8c8d84d-97894\" (UID: \"8a59a62f-3748-43b7-baa0-cd121242caea\") " pod="metallb-system/metallb-operator-controller-manager-5c8c8d84d-97894"
Jan 26 15:53:32 crc kubenswrapper[4896]: I0126 15:53:32.655322 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfn2v\" (UniqueName: \"kubernetes.io/projected/8a59a62f-3748-43b7-baa0-cd121242caea-kube-api-access-sfn2v\") pod \"metallb-operator-controller-manager-5c8c8d84d-97894\" (UID: \"8a59a62f-3748-43b7-baa0-cd121242caea\") " pod="metallb-system/metallb-operator-controller-manager-5c8c8d84d-97894"
Jan 26 15:53:32 crc kubenswrapper[4896]: I0126 15:53:32.752228 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5c8c8d84d-97894"
Jan 26 15:53:32 crc kubenswrapper[4896]: I0126 15:53:32.805630 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7f478b8cc-mpsb2"]
Jan 26 15:53:32 crc kubenswrapper[4896]: I0126 15:53:32.807314 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7f478b8cc-mpsb2"
Jan 26 15:53:32 crc kubenswrapper[4896]: I0126 15:53:32.811373 4896 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-jr94s"
Jan 26 15:53:32 crc kubenswrapper[4896]: I0126 15:53:32.811383 4896 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 26 15:53:32 crc kubenswrapper[4896]: I0126 15:53:32.811434 4896 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Jan 26 15:53:32 crc kubenswrapper[4896]: I0126 15:53:32.838964 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9054b98a-1821-4a98-881a-37475dea15e9-webhook-cert\") pod \"metallb-operator-webhook-server-7f478b8cc-mpsb2\" (UID: \"9054b98a-1821-4a98-881a-37475dea15e9\") " pod="metallb-system/metallb-operator-webhook-server-7f478b8cc-mpsb2"
Jan 26 15:53:32 crc kubenswrapper[4896]: I0126 15:53:32.839276 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9054b98a-1821-4a98-881a-37475dea15e9-apiservice-cert\") pod \"metallb-operator-webhook-server-7f478b8cc-mpsb2\" (UID: \"9054b98a-1821-4a98-881a-37475dea15e9\") " pod="metallb-system/metallb-operator-webhook-server-7f478b8cc-mpsb2"
Jan 26 15:53:32 crc kubenswrapper[4896]: I0126 15:53:32.840718 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd4v5\" (UniqueName: \"kubernetes.io/projected/9054b98a-1821-4a98-881a-37475dea15e9-kube-api-access-nd4v5\") pod \"metallb-operator-webhook-server-7f478b8cc-mpsb2\" (UID: \"9054b98a-1821-4a98-881a-37475dea15e9\") " pod="metallb-system/metallb-operator-webhook-server-7f478b8cc-mpsb2"
Jan 26 15:53:32 crc kubenswrapper[4896]: I0126 15:53:32.840879 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7f478b8cc-mpsb2"]
Jan 26 15:53:32 crc kubenswrapper[4896]: I0126 15:53:32.944528 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9054b98a-1821-4a98-881a-37475dea15e9-apiservice-cert\") pod \"metallb-operator-webhook-server-7f478b8cc-mpsb2\" (UID: \"9054b98a-1821-4a98-881a-37475dea15e9\") " pod="metallb-system/metallb-operator-webhook-server-7f478b8cc-mpsb2"
Jan 26 15:53:32 crc kubenswrapper[4896]: I0126 15:53:32.944591 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9054b98a-1821-4a98-881a-37475dea15e9-webhook-cert\") pod \"metallb-operator-webhook-server-7f478b8cc-mpsb2\" (UID: \"9054b98a-1821-4a98-881a-37475dea15e9\") " pod="metallb-system/metallb-operator-webhook-server-7f478b8cc-mpsb2"
Jan 26 15:53:32 crc kubenswrapper[4896]: I0126 15:53:32.944692 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd4v5\" (UniqueName: \"kubernetes.io/projected/9054b98a-1821-4a98-881a-37475dea15e9-kube-api-access-nd4v5\") pod \"metallb-operator-webhook-server-7f478b8cc-mpsb2\" (UID: \"9054b98a-1821-4a98-881a-37475dea15e9\") " pod="metallb-system/metallb-operator-webhook-server-7f478b8cc-mpsb2"
Jan 26 15:53:32 crc kubenswrapper[4896]: I0126 15:53:32.956280 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9054b98a-1821-4a98-881a-37475dea15e9-webhook-cert\") pod \"metallb-operator-webhook-server-7f478b8cc-mpsb2\" (UID: \"9054b98a-1821-4a98-881a-37475dea15e9\") " pod="metallb-system/metallb-operator-webhook-server-7f478b8cc-mpsb2"
Jan 26 15:53:32 crc kubenswrapper[4896]: I0126 15:53:32.956629 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9054b98a-1821-4a98-881a-37475dea15e9-apiservice-cert\") pod \"metallb-operator-webhook-server-7f478b8cc-mpsb2\" (UID: \"9054b98a-1821-4a98-881a-37475dea15e9\") " pod="metallb-system/metallb-operator-webhook-server-7f478b8cc-mpsb2"
Jan 26 15:53:32 crc kubenswrapper[4896]: I0126 15:53:32.966154 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd4v5\" (UniqueName: \"kubernetes.io/projected/9054b98a-1821-4a98-881a-37475dea15e9-kube-api-access-nd4v5\") pod \"metallb-operator-webhook-server-7f478b8cc-mpsb2\" (UID: \"9054b98a-1821-4a98-881a-37475dea15e9\") " pod="metallb-system/metallb-operator-webhook-server-7f478b8cc-mpsb2"
Jan 26 15:53:33 crc kubenswrapper[4896]: I0126 15:53:33.145023 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7f478b8cc-mpsb2"
Jan 26 15:53:33 crc kubenswrapper[4896]: I0126 15:53:33.335826 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5c8c8d84d-97894"]
Jan 26 15:53:33 crc kubenswrapper[4896]: W0126 15:53:33.353244 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a59a62f_3748_43b7_baa0_cd121242caea.slice/crio-2f5eaf6e36b17a8dbbd1fe5f2e18c21fd285f096abed13ebc255f0a5f64b27ec WatchSource:0}: Error finding container 2f5eaf6e36b17a8dbbd1fe5f2e18c21fd285f096abed13ebc255f0a5f64b27ec: Status 404 returned error can't find the container with id 2f5eaf6e36b17a8dbbd1fe5f2e18c21fd285f096abed13ebc255f0a5f64b27ec
Jan 26 15:53:33 crc kubenswrapper[4896]: I0126 15:53:33.526406 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5c8c8d84d-97894" event={"ID":"8a59a62f-3748-43b7-baa0-cd121242caea","Type":"ContainerStarted","Data":"2f5eaf6e36b17a8dbbd1fe5f2e18c21fd285f096abed13ebc255f0a5f64b27ec"}
Jan 26 15:53:33 crc kubenswrapper[4896]: W0126 15:53:33.624839 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9054b98a_1821_4a98_881a_37475dea15e9.slice/crio-e1bb69655f084089fe0cdffcd8fad9c39e81b6e8d23ced969a16b184f2d77191 WatchSource:0}: Error finding container e1bb69655f084089fe0cdffcd8fad9c39e81b6e8d23ced969a16b184f2d77191: Status 404 returned error can't find the container with id e1bb69655f084089fe0cdffcd8fad9c39e81b6e8d23ced969a16b184f2d77191
Jan 26 15:53:33 crc kubenswrapper[4896]: I0126 15:53:33.625514 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7f478b8cc-mpsb2"]
Jan 26 15:53:34 crc kubenswrapper[4896]: I0126 15:53:34.544612 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7f478b8cc-mpsb2" event={"ID":"9054b98a-1821-4a98-881a-37475dea15e9","Type":"ContainerStarted","Data":"e1bb69655f084089fe0cdffcd8fad9c39e81b6e8d23ced969a16b184f2d77191"}
Jan 26 15:53:43 crc kubenswrapper[4896]: I0126 15:53:43.708314 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5c8c8d84d-97894" event={"ID":"8a59a62f-3748-43b7-baa0-cd121242caea","Type":"ContainerStarted","Data":"7b9c7979e890a17d08bce07a2910cfcf4448abc35922a8a7f24cf57380323923"}
Jan 26 15:53:43 crc kubenswrapper[4896]: I0126 15:53:43.709248 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5c8c8d84d-97894"
Jan 26 15:53:43 crc kubenswrapper[4896]: I0126 15:53:43.718398 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7f478b8cc-mpsb2" event={"ID":"9054b98a-1821-4a98-881a-37475dea15e9","Type":"ContainerStarted","Data":"83dd758e074df6e4ebffc46939ee2c0cede0dff5c10c3a6bdf6a8e01eacfd76c"}
Jan 26 15:53:43 crc kubenswrapper[4896]: I0126 15:53:43.719296 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7f478b8cc-mpsb2"
Jan 26 15:53:43 crc kubenswrapper[4896]: I0126 15:53:43.736325 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5c8c8d84d-97894" podStartSLOduration=3.365906133 podStartE2EDuration="11.736306338s" podCreationTimestamp="2026-01-26 15:53:32 +0000 UTC" firstStartedPulling="2026-01-26 15:53:33.356202321 +0000 UTC m=+1171.138082714" lastFinishedPulling="2026-01-26 15:53:41.726602526 +0000 UTC m=+1179.508482919" observedRunningTime="2026-01-26 15:53:43.732006925 +0000 UTC m=+1181.513887318" watchObservedRunningTime="2026-01-26 15:53:43.736306338 +0000 UTC m=+1181.518186731"
Jan 26 15:53:43 crc kubenswrapper[4896]: I0126 15:53:43.768308 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7f478b8cc-mpsb2" podStartSLOduration=3.531620556 podStartE2EDuration="11.768285962s" podCreationTimestamp="2026-01-26 15:53:32 +0000 UTC" firstStartedPulling="2026-01-26 15:53:33.628865414 +0000 UTC m=+1171.410745807" lastFinishedPulling="2026-01-26 15:53:41.86553082 +0000 UTC m=+1179.647411213" observedRunningTime="2026-01-26 15:53:43.762446302 +0000 UTC m=+1181.544326695" watchObservedRunningTime="2026-01-26 15:53:43.768285962 +0000 UTC m=+1181.550166355"
Jan 26 15:53:53 crc kubenswrapper[4896]: I0126 15:53:53.150539 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7f478b8cc-mpsb2"
Jan 26 15:54:12 crc kubenswrapper[4896]: I0126 15:54:12.755069 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5c8c8d84d-97894"
Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.631323 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-klnvj"]
Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.634840 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-klnvj"
Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.637542 4896 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-6shs6"
Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.638317 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-86mhf"]
Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.638840 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.639066 4896 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.639448 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-86mhf"
Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.641369 4896 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.652834 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-86mhf"]
Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.731121 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-tkm4l"]
Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.732777 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-tkm4l"
Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.735367 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.735635 4896 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.736194 4896 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-ztzz2"
Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.736386 4896 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.741238 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2wtt\" (UniqueName: \"kubernetes.io/projected/07b177bf-083a-4714-bf1d-c07315a750d7-kube-api-access-t2wtt\") pod \"frr-k8s-klnvj\" (UID: \"07b177bf-083a-4714-bf1d-c07315a750d7\") " pod="metallb-system/frr-k8s-klnvj"
Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.741290 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d18b5dee-5e82-4bf6-baf3-b3bc539da480-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-86mhf\" (UID: \"d18b5dee-5e82-4bf6-baf3-b3bc539da480\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-86mhf"
Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.741333 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07b177bf-083a-4714-bf1d-c07315a750d7-metrics-certs\") pod \"frr-k8s-klnvj\" (UID: \"07b177bf-083a-4714-bf1d-c07315a750d7\") " pod="metallb-system/frr-k8s-klnvj"
Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.741365 4896
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/07b177bf-083a-4714-bf1d-c07315a750d7-reloader\") pod \"frr-k8s-klnvj\" (UID: \"07b177bf-083a-4714-bf1d-c07315a750d7\") " pod="metallb-system/frr-k8s-klnvj" Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.741394 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzj5n\" (UniqueName: \"kubernetes.io/projected/d18b5dee-5e82-4bf6-baf3-b3bc539da480-kube-api-access-dzj5n\") pod \"frr-k8s-webhook-server-7df86c4f6c-86mhf\" (UID: \"d18b5dee-5e82-4bf6-baf3-b3bc539da480\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-86mhf" Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.741533 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/07b177bf-083a-4714-bf1d-c07315a750d7-frr-startup\") pod \"frr-k8s-klnvj\" (UID: \"07b177bf-083a-4714-bf1d-c07315a750d7\") " pod="metallb-system/frr-k8s-klnvj" Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.741557 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/07b177bf-083a-4714-bf1d-c07315a750d7-frr-conf\") pod \"frr-k8s-klnvj\" (UID: \"07b177bf-083a-4714-bf1d-c07315a750d7\") " pod="metallb-system/frr-k8s-klnvj" Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.741631 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/07b177bf-083a-4714-bf1d-c07315a750d7-metrics\") pod \"frr-k8s-klnvj\" (UID: \"07b177bf-083a-4714-bf1d-c07315a750d7\") " pod="metallb-system/frr-k8s-klnvj" Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.741660 4896 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/07b177bf-083a-4714-bf1d-c07315a750d7-frr-sockets\") pod \"frr-k8s-klnvj\" (UID: \"07b177bf-083a-4714-bf1d-c07315a750d7\") " pod="metallb-system/frr-k8s-klnvj" Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.748240 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-nqbzj"] Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.749882 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-nqbzj" Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.752416 4896 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.772318 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-nqbzj"] Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.843277 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2502287e-cdc6-4a66-8f39-278e9560c7bf-cert\") pod \"controller-6968d8fdc4-nqbzj\" (UID: \"2502287e-cdc6-4a66-8f39-278e9560c7bf\") " pod="metallb-system/controller-6968d8fdc4-nqbzj" Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.843375 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2wtt\" (UniqueName: \"kubernetes.io/projected/07b177bf-083a-4714-bf1d-c07315a750d7-kube-api-access-t2wtt\") pod \"frr-k8s-klnvj\" (UID: \"07b177bf-083a-4714-bf1d-c07315a750d7\") " pod="metallb-system/frr-k8s-klnvj" Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.843420 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz42s\" (UniqueName: 
\"kubernetes.io/projected/2502287e-cdc6-4a66-8f39-278e9560c7bf-kube-api-access-kz42s\") pod \"controller-6968d8fdc4-nqbzj\" (UID: \"2502287e-cdc6-4a66-8f39-278e9560c7bf\") " pod="metallb-system/controller-6968d8fdc4-nqbzj" Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.843463 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d18b5dee-5e82-4bf6-baf3-b3bc539da480-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-86mhf\" (UID: \"d18b5dee-5e82-4bf6-baf3-b3bc539da480\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-86mhf" Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.843523 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07b177bf-083a-4714-bf1d-c07315a750d7-metrics-certs\") pod \"frr-k8s-klnvj\" (UID: \"07b177bf-083a-4714-bf1d-c07315a750d7\") " pod="metallb-system/frr-k8s-klnvj" Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.843566 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/07b177bf-083a-4714-bf1d-c07315a750d7-reloader\") pod \"frr-k8s-klnvj\" (UID: \"07b177bf-083a-4714-bf1d-c07315a750d7\") " pod="metallb-system/frr-k8s-klnvj" Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.843612 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5k8t\" (UniqueName: \"kubernetes.io/projected/6c994fcb-b747-4440-a355-e89ada0aad52-kube-api-access-d5k8t\") pod \"speaker-tkm4l\" (UID: \"6c994fcb-b747-4440-a355-e89ada0aad52\") " pod="metallb-system/speaker-tkm4l" Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.843671 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzj5n\" (UniqueName: \"kubernetes.io/projected/d18b5dee-5e82-4bf6-baf3-b3bc539da480-kube-api-access-dzj5n\") pod 
\"frr-k8s-webhook-server-7df86c4f6c-86mhf\" (UID: \"d18b5dee-5e82-4bf6-baf3-b3bc539da480\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-86mhf" Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.843722 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/07b177bf-083a-4714-bf1d-c07315a750d7-frr-startup\") pod \"frr-k8s-klnvj\" (UID: \"07b177bf-083a-4714-bf1d-c07315a750d7\") " pod="metallb-system/frr-k8s-klnvj" Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.843749 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/07b177bf-083a-4714-bf1d-c07315a750d7-frr-conf\") pod \"frr-k8s-klnvj\" (UID: \"07b177bf-083a-4714-bf1d-c07315a750d7\") " pod="metallb-system/frr-k8s-klnvj" Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.843771 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2502287e-cdc6-4a66-8f39-278e9560c7bf-metrics-certs\") pod \"controller-6968d8fdc4-nqbzj\" (UID: \"2502287e-cdc6-4a66-8f39-278e9560c7bf\") " pod="metallb-system/controller-6968d8fdc4-nqbzj" Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.843823 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6c994fcb-b747-4440-a355-e89ada0aad52-metrics-certs\") pod \"speaker-tkm4l\" (UID: \"6c994fcb-b747-4440-a355-e89ada0aad52\") " pod="metallb-system/speaker-tkm4l" Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.843900 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/07b177bf-083a-4714-bf1d-c07315a750d7-metrics\") pod \"frr-k8s-klnvj\" (UID: \"07b177bf-083a-4714-bf1d-c07315a750d7\") " pod="metallb-system/frr-k8s-klnvj" 
Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.843929 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/07b177bf-083a-4714-bf1d-c07315a750d7-frr-sockets\") pod \"frr-k8s-klnvj\" (UID: \"07b177bf-083a-4714-bf1d-c07315a750d7\") " pod="metallb-system/frr-k8s-klnvj" Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.844008 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6c994fcb-b747-4440-a355-e89ada0aad52-metallb-excludel2\") pod \"speaker-tkm4l\" (UID: \"6c994fcb-b747-4440-a355-e89ada0aad52\") " pod="metallb-system/speaker-tkm4l" Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.844056 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6c994fcb-b747-4440-a355-e89ada0aad52-memberlist\") pod \"speaker-tkm4l\" (UID: \"6c994fcb-b747-4440-a355-e89ada0aad52\") " pod="metallb-system/speaker-tkm4l" Jan 26 15:54:13 crc kubenswrapper[4896]: E0126 15:54:13.844616 4896 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 26 15:54:13 crc kubenswrapper[4896]: E0126 15:54:13.844667 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d18b5dee-5e82-4bf6-baf3-b3bc539da480-cert podName:d18b5dee-5e82-4bf6-baf3-b3bc539da480 nodeName:}" failed. No retries permitted until 2026-01-26 15:54:14.344650989 +0000 UTC m=+1212.126531382 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d18b5dee-5e82-4bf6-baf3-b3bc539da480-cert") pod "frr-k8s-webhook-server-7df86c4f6c-86mhf" (UID: "d18b5dee-5e82-4bf6-baf3-b3bc539da480") : secret "frr-k8s-webhook-server-cert" not found Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.844899 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/07b177bf-083a-4714-bf1d-c07315a750d7-frr-conf\") pod \"frr-k8s-klnvj\" (UID: \"07b177bf-083a-4714-bf1d-c07315a750d7\") " pod="metallb-system/frr-k8s-klnvj" Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.844987 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/07b177bf-083a-4714-bf1d-c07315a750d7-frr-startup\") pod \"frr-k8s-klnvj\" (UID: \"07b177bf-083a-4714-bf1d-c07315a750d7\") " pod="metallb-system/frr-k8s-klnvj" Jan 26 15:54:13 crc kubenswrapper[4896]: E0126 15:54:13.845134 4896 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 26 15:54:13 crc kubenswrapper[4896]: E0126 15:54:13.845272 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07b177bf-083a-4714-bf1d-c07315a750d7-metrics-certs podName:07b177bf-083a-4714-bf1d-c07315a750d7 nodeName:}" failed. No retries permitted until 2026-01-26 15:54:14.345249964 +0000 UTC m=+1212.127130427 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/07b177bf-083a-4714-bf1d-c07315a750d7-metrics-certs") pod "frr-k8s-klnvj" (UID: "07b177bf-083a-4714-bf1d-c07315a750d7") : secret "frr-k8s-certs-secret" not found Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.845559 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/07b177bf-083a-4714-bf1d-c07315a750d7-metrics\") pod \"frr-k8s-klnvj\" (UID: \"07b177bf-083a-4714-bf1d-c07315a750d7\") " pod="metallb-system/frr-k8s-klnvj" Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.846145 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/07b177bf-083a-4714-bf1d-c07315a750d7-reloader\") pod \"frr-k8s-klnvj\" (UID: \"07b177bf-083a-4714-bf1d-c07315a750d7\") " pod="metallb-system/frr-k8s-klnvj" Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.846159 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/07b177bf-083a-4714-bf1d-c07315a750d7-frr-sockets\") pod \"frr-k8s-klnvj\" (UID: \"07b177bf-083a-4714-bf1d-c07315a750d7\") " pod="metallb-system/frr-k8s-klnvj" Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.868394 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2wtt\" (UniqueName: \"kubernetes.io/projected/07b177bf-083a-4714-bf1d-c07315a750d7-kube-api-access-t2wtt\") pod \"frr-k8s-klnvj\" (UID: \"07b177bf-083a-4714-bf1d-c07315a750d7\") " pod="metallb-system/frr-k8s-klnvj" Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.872455 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzj5n\" (UniqueName: \"kubernetes.io/projected/d18b5dee-5e82-4bf6-baf3-b3bc539da480-kube-api-access-dzj5n\") pod \"frr-k8s-webhook-server-7df86c4f6c-86mhf\" (UID: 
\"d18b5dee-5e82-4bf6-baf3-b3bc539da480\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-86mhf" Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.946540 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2502287e-cdc6-4a66-8f39-278e9560c7bf-cert\") pod \"controller-6968d8fdc4-nqbzj\" (UID: \"2502287e-cdc6-4a66-8f39-278e9560c7bf\") " pod="metallb-system/controller-6968d8fdc4-nqbzj" Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.946689 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz42s\" (UniqueName: \"kubernetes.io/projected/2502287e-cdc6-4a66-8f39-278e9560c7bf-kube-api-access-kz42s\") pod \"controller-6968d8fdc4-nqbzj\" (UID: \"2502287e-cdc6-4a66-8f39-278e9560c7bf\") " pod="metallb-system/controller-6968d8fdc4-nqbzj" Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.946771 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5k8t\" (UniqueName: \"kubernetes.io/projected/6c994fcb-b747-4440-a355-e89ada0aad52-kube-api-access-d5k8t\") pod \"speaker-tkm4l\" (UID: \"6c994fcb-b747-4440-a355-e89ada0aad52\") " pod="metallb-system/speaker-tkm4l" Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.946815 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2502287e-cdc6-4a66-8f39-278e9560c7bf-metrics-certs\") pod \"controller-6968d8fdc4-nqbzj\" (UID: \"2502287e-cdc6-4a66-8f39-278e9560c7bf\") " pod="metallb-system/controller-6968d8fdc4-nqbzj" Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.946876 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6c994fcb-b747-4440-a355-e89ada0aad52-metrics-certs\") pod \"speaker-tkm4l\" (UID: \"6c994fcb-b747-4440-a355-e89ada0aad52\") " 
pod="metallb-system/speaker-tkm4l" Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.946956 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6c994fcb-b747-4440-a355-e89ada0aad52-metallb-excludel2\") pod \"speaker-tkm4l\" (UID: \"6c994fcb-b747-4440-a355-e89ada0aad52\") " pod="metallb-system/speaker-tkm4l" Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.947008 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6c994fcb-b747-4440-a355-e89ada0aad52-memberlist\") pod \"speaker-tkm4l\" (UID: \"6c994fcb-b747-4440-a355-e89ada0aad52\") " pod="metallb-system/speaker-tkm4l" Jan 26 15:54:13 crc kubenswrapper[4896]: E0126 15:54:13.947186 4896 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 26 15:54:13 crc kubenswrapper[4896]: E0126 15:54:13.947264 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c994fcb-b747-4440-a355-e89ada0aad52-memberlist podName:6c994fcb-b747-4440-a355-e89ada0aad52 nodeName:}" failed. No retries permitted until 2026-01-26 15:54:14.447246684 +0000 UTC m=+1212.229127077 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/6c994fcb-b747-4440-a355-e89ada0aad52-memberlist") pod "speaker-tkm4l" (UID: "6c994fcb-b747-4440-a355-e89ada0aad52") : secret "metallb-memberlist" not found Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.948296 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6c994fcb-b747-4440-a355-e89ada0aad52-metallb-excludel2\") pod \"speaker-tkm4l\" (UID: \"6c994fcb-b747-4440-a355-e89ada0aad52\") " pod="metallb-system/speaker-tkm4l" Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.951940 4896 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.952121 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6c994fcb-b747-4440-a355-e89ada0aad52-metrics-certs\") pod \"speaker-tkm4l\" (UID: \"6c994fcb-b747-4440-a355-e89ada0aad52\") " pod="metallb-system/speaker-tkm4l" Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.962018 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2502287e-cdc6-4a66-8f39-278e9560c7bf-metrics-certs\") pod \"controller-6968d8fdc4-nqbzj\" (UID: \"2502287e-cdc6-4a66-8f39-278e9560c7bf\") " pod="metallb-system/controller-6968d8fdc4-nqbzj" Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.962327 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2502287e-cdc6-4a66-8f39-278e9560c7bf-cert\") pod \"controller-6968d8fdc4-nqbzj\" (UID: \"2502287e-cdc6-4a66-8f39-278e9560c7bf\") " pod="metallb-system/controller-6968d8fdc4-nqbzj" Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.965619 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-d5k8t\" (UniqueName: \"kubernetes.io/projected/6c994fcb-b747-4440-a355-e89ada0aad52-kube-api-access-d5k8t\") pod \"speaker-tkm4l\" (UID: \"6c994fcb-b747-4440-a355-e89ada0aad52\") " pod="metallb-system/speaker-tkm4l" Jan 26 15:54:13 crc kubenswrapper[4896]: I0126 15:54:13.970071 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz42s\" (UniqueName: \"kubernetes.io/projected/2502287e-cdc6-4a66-8f39-278e9560c7bf-kube-api-access-kz42s\") pod \"controller-6968d8fdc4-nqbzj\" (UID: \"2502287e-cdc6-4a66-8f39-278e9560c7bf\") " pod="metallb-system/controller-6968d8fdc4-nqbzj" Jan 26 15:54:14 crc kubenswrapper[4896]: I0126 15:54:14.065766 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-nqbzj" Jan 26 15:54:14 crc kubenswrapper[4896]: I0126 15:54:14.353024 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d18b5dee-5e82-4bf6-baf3-b3bc539da480-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-86mhf\" (UID: \"d18b5dee-5e82-4bf6-baf3-b3bc539da480\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-86mhf" Jan 26 15:54:14 crc kubenswrapper[4896]: I0126 15:54:14.353391 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07b177bf-083a-4714-bf1d-c07315a750d7-metrics-certs\") pod \"frr-k8s-klnvj\" (UID: \"07b177bf-083a-4714-bf1d-c07315a750d7\") " pod="metallb-system/frr-k8s-klnvj" Jan 26 15:54:14 crc kubenswrapper[4896]: I0126 15:54:14.360277 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07b177bf-083a-4714-bf1d-c07315a750d7-metrics-certs\") pod \"frr-k8s-klnvj\" (UID: \"07b177bf-083a-4714-bf1d-c07315a750d7\") " pod="metallb-system/frr-k8s-klnvj" Jan 26 15:54:14 crc kubenswrapper[4896]: I0126 15:54:14.360344 4896 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d18b5dee-5e82-4bf6-baf3-b3bc539da480-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-86mhf\" (UID: \"d18b5dee-5e82-4bf6-baf3-b3bc539da480\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-86mhf" Jan 26 15:54:14 crc kubenswrapper[4896]: I0126 15:54:14.455072 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6c994fcb-b747-4440-a355-e89ada0aad52-memberlist\") pod \"speaker-tkm4l\" (UID: \"6c994fcb-b747-4440-a355-e89ada0aad52\") " pod="metallb-system/speaker-tkm4l" Jan 26 15:54:14 crc kubenswrapper[4896]: E0126 15:54:14.455610 4896 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 26 15:54:14 crc kubenswrapper[4896]: E0126 15:54:14.455684 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c994fcb-b747-4440-a355-e89ada0aad52-memberlist podName:6c994fcb-b747-4440-a355-e89ada0aad52 nodeName:}" failed. No retries permitted until 2026-01-26 15:54:15.455664314 +0000 UTC m=+1213.237544707 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/6c994fcb-b747-4440-a355-e89ada0aad52-memberlist") pod "speaker-tkm4l" (UID: "6c994fcb-b747-4440-a355-e89ada0aad52") : secret "metallb-memberlist" not found Jan 26 15:54:14 crc kubenswrapper[4896]: I0126 15:54:14.563297 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-klnvj" Jan 26 15:54:14 crc kubenswrapper[4896]: I0126 15:54:14.564712 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-nqbzj"] Jan 26 15:54:14 crc kubenswrapper[4896]: I0126 15:54:14.572124 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-86mhf" Jan 26 15:54:15 crc kubenswrapper[4896]: I0126 15:54:15.109876 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-86mhf"] Jan 26 15:54:15 crc kubenswrapper[4896]: I0126 15:54:15.424956 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-klnvj" event={"ID":"07b177bf-083a-4714-bf1d-c07315a750d7","Type":"ContainerStarted","Data":"85ee94d6580a4495ff417f70eccdb1381bb24ea3cbca25f19934e61e422f7e11"} Jan 26 15:54:15 crc kubenswrapper[4896]: I0126 15:54:15.426288 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-86mhf" event={"ID":"d18b5dee-5e82-4bf6-baf3-b3bc539da480","Type":"ContainerStarted","Data":"caa8a656b1af595a15cef9a4942adc405d9dc0a52300a8c8d8ad5f693f008b7c"} Jan 26 15:54:15 crc kubenswrapper[4896]: I0126 15:54:15.428114 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-nqbzj" event={"ID":"2502287e-cdc6-4a66-8f39-278e9560c7bf","Type":"ContainerStarted","Data":"e90a2c1aa02333a429b417a6be5617efa622c657c0b64423745fc38e93fa8cb8"} Jan 26 15:54:15 crc kubenswrapper[4896]: I0126 15:54:15.428142 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-nqbzj" event={"ID":"2502287e-cdc6-4a66-8f39-278e9560c7bf","Type":"ContainerStarted","Data":"9d49e785a04c1976d0383eac19c108ced423abb988ad7c856740609256948150"} Jan 26 15:54:15 crc kubenswrapper[4896]: I0126 15:54:15.428153 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-nqbzj" event={"ID":"2502287e-cdc6-4a66-8f39-278e9560c7bf","Type":"ContainerStarted","Data":"251e992a76bb80ed369f700c5502e6c20204bb9ce4babe324ba40c7f2ca050c9"} Jan 26 15:54:15 crc kubenswrapper[4896]: I0126 15:54:15.428408 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/controller-6968d8fdc4-nqbzj" Jan 26 15:54:15 crc kubenswrapper[4896]: I0126 15:54:15.449863 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-nqbzj" podStartSLOduration=2.449846205 podStartE2EDuration="2.449846205s" podCreationTimestamp="2026-01-26 15:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:54:15.446754161 +0000 UTC m=+1213.228634554" watchObservedRunningTime="2026-01-26 15:54:15.449846205 +0000 UTC m=+1213.231726598" Jan 26 15:54:15 crc kubenswrapper[4896]: I0126 15:54:15.474516 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6c994fcb-b747-4440-a355-e89ada0aad52-memberlist\") pod \"speaker-tkm4l\" (UID: \"6c994fcb-b747-4440-a355-e89ada0aad52\") " pod="metallb-system/speaker-tkm4l" Jan 26 15:54:15 crc kubenswrapper[4896]: I0126 15:54:15.495362 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6c994fcb-b747-4440-a355-e89ada0aad52-memberlist\") pod \"speaker-tkm4l\" (UID: \"6c994fcb-b747-4440-a355-e89ada0aad52\") " pod="metallb-system/speaker-tkm4l" Jan 26 15:54:15 crc kubenswrapper[4896]: I0126 15:54:15.549497 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-tkm4l" Jan 26 15:54:15 crc kubenswrapper[4896]: W0126 15:54:15.579762 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c994fcb_b747_4440_a355_e89ada0aad52.slice/crio-e4f263336d1a87503d7f9cd59ceb000cd2c090e2582e3180e1d9eceea009f801 WatchSource:0}: Error finding container e4f263336d1a87503d7f9cd59ceb000cd2c090e2582e3180e1d9eceea009f801: Status 404 returned error can't find the container with id e4f263336d1a87503d7f9cd59ceb000cd2c090e2582e3180e1d9eceea009f801 Jan 26 15:54:16 crc kubenswrapper[4896]: I0126 15:54:16.496383 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-tkm4l" event={"ID":"6c994fcb-b747-4440-a355-e89ada0aad52","Type":"ContainerStarted","Data":"eef0955100a9a30bcc531efcb6bba7bdc5d89284a99995fa5a9422069c3f93ef"} Jan 26 15:54:16 crc kubenswrapper[4896]: I0126 15:54:16.496785 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-tkm4l" event={"ID":"6c994fcb-b747-4440-a355-e89ada0aad52","Type":"ContainerStarted","Data":"b38c45585c58301e22efca704101c3d44f5e3f3f2d0be505b3e2b3b10fa19ba8"} Jan 26 15:54:16 crc kubenswrapper[4896]: I0126 15:54:16.496804 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-tkm4l" event={"ID":"6c994fcb-b747-4440-a355-e89ada0aad52","Type":"ContainerStarted","Data":"e4f263336d1a87503d7f9cd59ceb000cd2c090e2582e3180e1d9eceea009f801"} Jan 26 15:54:16 crc kubenswrapper[4896]: I0126 15:54:16.497719 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-tkm4l" Jan 26 15:54:16 crc kubenswrapper[4896]: I0126 15:54:16.530901 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-tkm4l" podStartSLOduration=3.530854392 podStartE2EDuration="3.530854392s" podCreationTimestamp="2026-01-26 15:54:13 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:54:16.520435373 +0000 UTC m=+1214.302315786" watchObservedRunningTime="2026-01-26 15:54:16.530854392 +0000 UTC m=+1214.312734795" Jan 26 15:54:18 crc kubenswrapper[4896]: I0126 15:54:18.814109 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:54:18 crc kubenswrapper[4896]: I0126 15:54:18.814883 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:54:24 crc kubenswrapper[4896]: I0126 15:54:24.070034 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-nqbzj" Jan 26 15:54:25 crc kubenswrapper[4896]: I0126 15:54:25.564754 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-tkm4l" Jan 26 15:54:27 crc kubenswrapper[4896]: I0126 15:54:27.754169 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-86mhf" event={"ID":"d18b5dee-5e82-4bf6-baf3-b3bc539da480","Type":"ContainerStarted","Data":"b72dff7162cab9bc1f3f9e18c5ce208635399334812d0cbc05d3d6e55b7f9e86"} Jan 26 15:54:27 crc kubenswrapper[4896]: I0126 15:54:27.755811 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-86mhf" Jan 26 15:54:27 crc kubenswrapper[4896]: I0126 15:54:27.756381 4896 generic.go:334] "Generic (PLEG): 
container finished" podID="07b177bf-083a-4714-bf1d-c07315a750d7" containerID="d5b964203d92b45e6c3eb37847ebfe42a170e080ab6e6e2eb4fee526ed853bf2" exitCode=0 Jan 26 15:54:27 crc kubenswrapper[4896]: I0126 15:54:27.756413 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-klnvj" event={"ID":"07b177bf-083a-4714-bf1d-c07315a750d7","Type":"ContainerDied","Data":"d5b964203d92b45e6c3eb37847ebfe42a170e080ab6e6e2eb4fee526ed853bf2"} Jan 26 15:54:27 crc kubenswrapper[4896]: I0126 15:54:27.811217 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-86mhf" podStartSLOduration=2.470193862 podStartE2EDuration="14.811193851s" podCreationTimestamp="2026-01-26 15:54:13 +0000 UTC" firstStartedPulling="2026-01-26 15:54:15.128729954 +0000 UTC m=+1212.910610347" lastFinishedPulling="2026-01-26 15:54:27.469729943 +0000 UTC m=+1225.251610336" observedRunningTime="2026-01-26 15:54:27.782067274 +0000 UTC m=+1225.563947667" watchObservedRunningTime="2026-01-26 15:54:27.811193851 +0000 UTC m=+1225.593074234" Jan 26 15:54:28 crc kubenswrapper[4896]: I0126 15:54:28.766089 4896 generic.go:334] "Generic (PLEG): container finished" podID="07b177bf-083a-4714-bf1d-c07315a750d7" containerID="9dfe0cd3435850eb1cf23db8c5823f0f8647fec8b92ef8ac691061e78bdb0fb9" exitCode=0 Jan 26 15:54:28 crc kubenswrapper[4896]: I0126 15:54:28.768676 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-klnvj" event={"ID":"07b177bf-083a-4714-bf1d-c07315a750d7","Type":"ContainerDied","Data":"9dfe0cd3435850eb1cf23db8c5823f0f8647fec8b92ef8ac691061e78bdb0fb9"} Jan 26 15:54:28 crc kubenswrapper[4896]: I0126 15:54:28.925700 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-bwfft"] Jan 26 15:54:28 crc kubenswrapper[4896]: I0126 15:54:28.927455 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-bwfft" Jan 26 15:54:28 crc kubenswrapper[4896]: I0126 15:54:28.930267 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-vs2zn" Jan 26 15:54:28 crc kubenswrapper[4896]: I0126 15:54:28.930678 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 26 15:54:28 crc kubenswrapper[4896]: I0126 15:54:28.931146 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 26 15:54:28 crc kubenswrapper[4896]: I0126 15:54:28.948307 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-bwfft"] Jan 26 15:54:29 crc kubenswrapper[4896]: I0126 15:54:29.037439 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpb7f\" (UniqueName: \"kubernetes.io/projected/80a55838-bd4b-418e-b674-b0941c4f6012-kube-api-access-mpb7f\") pod \"openstack-operator-index-bwfft\" (UID: \"80a55838-bd4b-418e-b674-b0941c4f6012\") " pod="openstack-operators/openstack-operator-index-bwfft" Jan 26 15:54:29 crc kubenswrapper[4896]: I0126 15:54:29.138979 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpb7f\" (UniqueName: \"kubernetes.io/projected/80a55838-bd4b-418e-b674-b0941c4f6012-kube-api-access-mpb7f\") pod \"openstack-operator-index-bwfft\" (UID: \"80a55838-bd4b-418e-b674-b0941c4f6012\") " pod="openstack-operators/openstack-operator-index-bwfft" Jan 26 15:54:29 crc kubenswrapper[4896]: I0126 15:54:29.171671 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpb7f\" (UniqueName: \"kubernetes.io/projected/80a55838-bd4b-418e-b674-b0941c4f6012-kube-api-access-mpb7f\") pod \"openstack-operator-index-bwfft\" (UID: 
\"80a55838-bd4b-418e-b674-b0941c4f6012\") " pod="openstack-operators/openstack-operator-index-bwfft" Jan 26 15:54:29 crc kubenswrapper[4896]: I0126 15:54:29.258047 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-bwfft" Jan 26 15:54:29 crc kubenswrapper[4896]: I0126 15:54:29.708817 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-bwfft"] Jan 26 15:54:29 crc kubenswrapper[4896]: W0126 15:54:29.720606 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80a55838_bd4b_418e_b674_b0941c4f6012.slice/crio-9f02124605e092349f4c6bd90bffa0ffeebd4d3d70f75a3ef5fdccde3b774c52 WatchSource:0}: Error finding container 9f02124605e092349f4c6bd90bffa0ffeebd4d3d70f75a3ef5fdccde3b774c52: Status 404 returned error can't find the container with id 9f02124605e092349f4c6bd90bffa0ffeebd4d3d70f75a3ef5fdccde3b774c52 Jan 26 15:54:29 crc kubenswrapper[4896]: I0126 15:54:29.776208 4896 generic.go:334] "Generic (PLEG): container finished" podID="07b177bf-083a-4714-bf1d-c07315a750d7" containerID="d2fcd289e7c7270a8cb665d0fbda7c051a7959bd90150875adbbaf8ac0b518f8" exitCode=0 Jan 26 15:54:29 crc kubenswrapper[4896]: I0126 15:54:29.776277 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-klnvj" event={"ID":"07b177bf-083a-4714-bf1d-c07315a750d7","Type":"ContainerDied","Data":"d2fcd289e7c7270a8cb665d0fbda7c051a7959bd90150875adbbaf8ac0b518f8"} Jan 26 15:54:29 crc kubenswrapper[4896]: I0126 15:54:29.777683 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bwfft" event={"ID":"80a55838-bd4b-418e-b674-b0941c4f6012","Type":"ContainerStarted","Data":"9f02124605e092349f4c6bd90bffa0ffeebd4d3d70f75a3ef5fdccde3b774c52"} Jan 26 15:54:30 crc kubenswrapper[4896]: I0126 15:54:30.792636 4896 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="metallb-system/frr-k8s-klnvj" event={"ID":"07b177bf-083a-4714-bf1d-c07315a750d7","Type":"ContainerStarted","Data":"ba23e3cb9a99ea95ddf39e15b6fc64256968a0bbc61cb82e4db0dcb9b991e897"} Jan 26 15:54:30 crc kubenswrapper[4896]: I0126 15:54:30.793398 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-klnvj" event={"ID":"07b177bf-083a-4714-bf1d-c07315a750d7","Type":"ContainerStarted","Data":"d2f741b357b7c140b775e36e35cb514b31ac6e1a67a7e42b55fe1441f7c19681"} Jan 26 15:54:32 crc kubenswrapper[4896]: I0126 15:54:32.295105 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-bwfft"] Jan 26 15:54:32 crc kubenswrapper[4896]: I0126 15:54:32.810860 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bwfft" event={"ID":"80a55838-bd4b-418e-b674-b0941c4f6012","Type":"ContainerStarted","Data":"3bfd6aafad827ca88cd25ce1fa5f9c587f4bcbfe84b17f7f90a217293986d136"} Jan 26 15:54:32 crc kubenswrapper[4896]: I0126 15:54:32.817366 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-klnvj" event={"ID":"07b177bf-083a-4714-bf1d-c07315a750d7","Type":"ContainerStarted","Data":"ef3ed6e6d6245bf8f4b8978abb6c5e64f7ec42897ae75d1fd95f6a6fad2fcf1a"} Jan 26 15:54:32 crc kubenswrapper[4896]: I0126 15:54:32.817406 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-klnvj" event={"ID":"07b177bf-083a-4714-bf1d-c07315a750d7","Type":"ContainerStarted","Data":"457a160042ad08426f12f655cc583bc218c76c89f3bec69fe996947c8cf0a450"} Jan 26 15:54:32 crc kubenswrapper[4896]: I0126 15:54:32.817416 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-klnvj" event={"ID":"07b177bf-083a-4714-bf1d-c07315a750d7","Type":"ContainerStarted","Data":"67ec98a668ae57dc014eb3ff0d0e30889332a36829e5a3721dcc1ddf8b35ac27"} Jan 26 15:54:32 crc kubenswrapper[4896]: I0126 15:54:32.817425 4896 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-klnvj" event={"ID":"07b177bf-083a-4714-bf1d-c07315a750d7","Type":"ContainerStarted","Data":"e340c1ed3041336fa7477795030b4708b0c1aae930e519230f40dd409594779c"} Jan 26 15:54:32 crc kubenswrapper[4896]: I0126 15:54:32.818273 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-klnvj" Jan 26 15:54:32 crc kubenswrapper[4896]: I0126 15:54:32.829346 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-bwfft" podStartSLOduration=2.6030610899999997 podStartE2EDuration="4.829328901s" podCreationTimestamp="2026-01-26 15:54:28 +0000 UTC" firstStartedPulling="2026-01-26 15:54:29.722741763 +0000 UTC m=+1227.504622156" lastFinishedPulling="2026-01-26 15:54:31.949009574 +0000 UTC m=+1229.730889967" observedRunningTime="2026-01-26 15:54:32.826687408 +0000 UTC m=+1230.608567801" watchObservedRunningTime="2026-01-26 15:54:32.829328901 +0000 UTC m=+1230.611209294" Jan 26 15:54:32 crc kubenswrapper[4896]: I0126 15:54:32.854878 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-klnvj" podStartSLOduration=7.264108709 podStartE2EDuration="19.854857851s" podCreationTimestamp="2026-01-26 15:54:13 +0000 UTC" firstStartedPulling="2026-01-26 15:54:14.850387866 +0000 UTC m=+1212.632268249" lastFinishedPulling="2026-01-26 15:54:27.441136998 +0000 UTC m=+1225.223017391" observedRunningTime="2026-01-26 15:54:32.851307216 +0000 UTC m=+1230.633187609" watchObservedRunningTime="2026-01-26 15:54:32.854857851 +0000 UTC m=+1230.636738244" Jan 26 15:54:32 crc kubenswrapper[4896]: I0126 15:54:32.903188 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-sf56h"] Jan 26 15:54:32 crc kubenswrapper[4896]: I0126 15:54:32.904609 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-sf56h" Jan 26 15:54:32 crc kubenswrapper[4896]: I0126 15:54:32.911208 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-sf56h"] Jan 26 15:54:32 crc kubenswrapper[4896]: I0126 15:54:32.994657 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmwl9\" (UniqueName: \"kubernetes.io/projected/2480d4e9-511f-4e9a-9a73-2e10c1fa3da7-kube-api-access-pmwl9\") pod \"openstack-operator-index-sf56h\" (UID: \"2480d4e9-511f-4e9a-9a73-2e10c1fa3da7\") " pod="openstack-operators/openstack-operator-index-sf56h" Jan 26 15:54:33 crc kubenswrapper[4896]: I0126 15:54:33.095738 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmwl9\" (UniqueName: \"kubernetes.io/projected/2480d4e9-511f-4e9a-9a73-2e10c1fa3da7-kube-api-access-pmwl9\") pod \"openstack-operator-index-sf56h\" (UID: \"2480d4e9-511f-4e9a-9a73-2e10c1fa3da7\") " pod="openstack-operators/openstack-operator-index-sf56h" Jan 26 15:54:33 crc kubenswrapper[4896]: I0126 15:54:33.118615 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmwl9\" (UniqueName: \"kubernetes.io/projected/2480d4e9-511f-4e9a-9a73-2e10c1fa3da7-kube-api-access-pmwl9\") pod \"openstack-operator-index-sf56h\" (UID: \"2480d4e9-511f-4e9a-9a73-2e10c1fa3da7\") " pod="openstack-operators/openstack-operator-index-sf56h" Jan 26 15:54:33 crc kubenswrapper[4896]: I0126 15:54:33.225230 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-sf56h" Jan 26 15:54:33 crc kubenswrapper[4896]: I0126 15:54:33.780743 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-sf56h"] Jan 26 15:54:33 crc kubenswrapper[4896]: W0126 15:54:33.801049 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2480d4e9_511f_4e9a_9a73_2e10c1fa3da7.slice/crio-a89c5eecc356dc92a98b6084115280b0669f8839df3a75beea89acd1b32a2826 WatchSource:0}: Error finding container a89c5eecc356dc92a98b6084115280b0669f8839df3a75beea89acd1b32a2826: Status 404 returned error can't find the container with id a89c5eecc356dc92a98b6084115280b0669f8839df3a75beea89acd1b32a2826 Jan 26 15:54:33 crc kubenswrapper[4896]: I0126 15:54:33.827289 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sf56h" event={"ID":"2480d4e9-511f-4e9a-9a73-2e10c1fa3da7","Type":"ContainerStarted","Data":"a89c5eecc356dc92a98b6084115280b0669f8839df3a75beea89acd1b32a2826"} Jan 26 15:54:33 crc kubenswrapper[4896]: I0126 15:54:33.827550 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-bwfft" podUID="80a55838-bd4b-418e-b674-b0941c4f6012" containerName="registry-server" containerID="cri-o://3bfd6aafad827ca88cd25ce1fa5f9c587f4bcbfe84b17f7f90a217293986d136" gracePeriod=2 Jan 26 15:54:34 crc kubenswrapper[4896]: I0126 15:54:34.329418 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-bwfft" Jan 26 15:54:34 crc kubenswrapper[4896]: I0126 15:54:34.524038 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpb7f\" (UniqueName: \"kubernetes.io/projected/80a55838-bd4b-418e-b674-b0941c4f6012-kube-api-access-mpb7f\") pod \"80a55838-bd4b-418e-b674-b0941c4f6012\" (UID: \"80a55838-bd4b-418e-b674-b0941c4f6012\") " Jan 26 15:54:34 crc kubenswrapper[4896]: I0126 15:54:34.530024 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80a55838-bd4b-418e-b674-b0941c4f6012-kube-api-access-mpb7f" (OuterVolumeSpecName: "kube-api-access-mpb7f") pod "80a55838-bd4b-418e-b674-b0941c4f6012" (UID: "80a55838-bd4b-418e-b674-b0941c4f6012"). InnerVolumeSpecName "kube-api-access-mpb7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:54:34 crc kubenswrapper[4896]: I0126 15:54:34.564229 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-klnvj" Jan 26 15:54:34 crc kubenswrapper[4896]: I0126 15:54:34.607969 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-klnvj" Jan 26 15:54:34 crc kubenswrapper[4896]: I0126 15:54:34.628050 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpb7f\" (UniqueName: \"kubernetes.io/projected/80a55838-bd4b-418e-b674-b0941c4f6012-kube-api-access-mpb7f\") on node \"crc\" DevicePath \"\"" Jan 26 15:54:34 crc kubenswrapper[4896]: I0126 15:54:34.836901 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sf56h" event={"ID":"2480d4e9-511f-4e9a-9a73-2e10c1fa3da7","Type":"ContainerStarted","Data":"3dcdc75e60a5f2557ea55f88185a9b5e79af1a6c3d5684e4d11196a69f897b4c"} Jan 26 15:54:34 crc kubenswrapper[4896]: I0126 15:54:34.838396 4896 generic.go:334] "Generic (PLEG): container finished" 
podID="80a55838-bd4b-418e-b674-b0941c4f6012" containerID="3bfd6aafad827ca88cd25ce1fa5f9c587f4bcbfe84b17f7f90a217293986d136" exitCode=0 Jan 26 15:54:34 crc kubenswrapper[4896]: I0126 15:54:34.838462 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-bwfft" Jan 26 15:54:34 crc kubenswrapper[4896]: I0126 15:54:34.838503 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bwfft" event={"ID":"80a55838-bd4b-418e-b674-b0941c4f6012","Type":"ContainerDied","Data":"3bfd6aafad827ca88cd25ce1fa5f9c587f4bcbfe84b17f7f90a217293986d136"} Jan 26 15:54:34 crc kubenswrapper[4896]: I0126 15:54:34.838522 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bwfft" event={"ID":"80a55838-bd4b-418e-b674-b0941c4f6012","Type":"ContainerDied","Data":"9f02124605e092349f4c6bd90bffa0ffeebd4d3d70f75a3ef5fdccde3b774c52"} Jan 26 15:54:34 crc kubenswrapper[4896]: I0126 15:54:34.838537 4896 scope.go:117] "RemoveContainer" containerID="3bfd6aafad827ca88cd25ce1fa5f9c587f4bcbfe84b17f7f90a217293986d136" Jan 26 15:54:34 crc kubenswrapper[4896]: I0126 15:54:34.865148 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-sf56h" podStartSLOduration=2.797581471 podStartE2EDuration="2.865132566s" podCreationTimestamp="2026-01-26 15:54:32 +0000 UTC" firstStartedPulling="2026-01-26 15:54:33.805692365 +0000 UTC m=+1231.587572758" lastFinishedPulling="2026-01-26 15:54:33.87324346 +0000 UTC m=+1231.655123853" observedRunningTime="2026-01-26 15:54:34.856734875 +0000 UTC m=+1232.638615268" watchObservedRunningTime="2026-01-26 15:54:34.865132566 +0000 UTC m=+1232.647012959" Jan 26 15:54:34 crc kubenswrapper[4896]: I0126 15:54:34.914652 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-bwfft"] Jan 26 15:54:34 crc 
kubenswrapper[4896]: I0126 15:54:34.922733 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-bwfft"] Jan 26 15:54:35 crc kubenswrapper[4896]: I0126 15:54:35.461572 4896 scope.go:117] "RemoveContainer" containerID="3bfd6aafad827ca88cd25ce1fa5f9c587f4bcbfe84b17f7f90a217293986d136" Jan 26 15:54:35 crc kubenswrapper[4896]: E0126 15:54:35.462491 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bfd6aafad827ca88cd25ce1fa5f9c587f4bcbfe84b17f7f90a217293986d136\": container with ID starting with 3bfd6aafad827ca88cd25ce1fa5f9c587f4bcbfe84b17f7f90a217293986d136 not found: ID does not exist" containerID="3bfd6aafad827ca88cd25ce1fa5f9c587f4bcbfe84b17f7f90a217293986d136" Jan 26 15:54:35 crc kubenswrapper[4896]: I0126 15:54:35.462572 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bfd6aafad827ca88cd25ce1fa5f9c587f4bcbfe84b17f7f90a217293986d136"} err="failed to get container status \"3bfd6aafad827ca88cd25ce1fa5f9c587f4bcbfe84b17f7f90a217293986d136\": rpc error: code = NotFound desc = could not find container \"3bfd6aafad827ca88cd25ce1fa5f9c587f4bcbfe84b17f7f90a217293986d136\": container with ID starting with 3bfd6aafad827ca88cd25ce1fa5f9c587f4bcbfe84b17f7f90a217293986d136 not found: ID does not exist" Jan 26 15:54:36 crc kubenswrapper[4896]: I0126 15:54:36.768140 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80a55838-bd4b-418e-b674-b0941c4f6012" path="/var/lib/kubelet/pods/80a55838-bd4b-418e-b674-b0941c4f6012/volumes" Jan 26 15:54:43 crc kubenswrapper[4896]: I0126 15:54:43.225503 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-sf56h" Jan 26 15:54:43 crc kubenswrapper[4896]: I0126 15:54:43.226204 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack-operators/openstack-operator-index-sf56h" Jan 26 15:54:43 crc kubenswrapper[4896]: I0126 15:54:43.261695 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-sf56h" Jan 26 15:54:43 crc kubenswrapper[4896]: I0126 15:54:43.975949 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-sf56h" Jan 26 15:54:44 crc kubenswrapper[4896]: I0126 15:54:44.567350 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-klnvj" Jan 26 15:54:44 crc kubenswrapper[4896]: I0126 15:54:44.583016 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-86mhf" Jan 26 15:54:45 crc kubenswrapper[4896]: I0126 15:54:45.538022 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16fszxf8"] Jan 26 15:54:45 crc kubenswrapper[4896]: E0126 15:54:45.538730 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80a55838-bd4b-418e-b674-b0941c4f6012" containerName="registry-server" Jan 26 15:54:45 crc kubenswrapper[4896]: I0126 15:54:45.538751 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="80a55838-bd4b-418e-b674-b0941c4f6012" containerName="registry-server" Jan 26 15:54:45 crc kubenswrapper[4896]: I0126 15:54:45.538892 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="80a55838-bd4b-418e-b674-b0941c4f6012" containerName="registry-server" Jan 26 15:54:45 crc kubenswrapper[4896]: I0126 15:54:45.540100 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16fszxf8" Jan 26 15:54:45 crc kubenswrapper[4896]: I0126 15:54:45.543298 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-zr7hh" Jan 26 15:54:45 crc kubenswrapper[4896]: I0126 15:54:45.547223 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16fszxf8"] Jan 26 15:54:45 crc kubenswrapper[4896]: I0126 15:54:45.671824 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad24da91-ff62-4407-91cb-a321d268661e-bundle\") pod \"c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16fszxf8\" (UID: \"ad24da91-ff62-4407-91cb-a321d268661e\") " pod="openstack-operators/c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16fszxf8" Jan 26 15:54:45 crc kubenswrapper[4896]: I0126 15:54:45.671921 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad24da91-ff62-4407-91cb-a321d268661e-util\") pod \"c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16fszxf8\" (UID: \"ad24da91-ff62-4407-91cb-a321d268661e\") " pod="openstack-operators/c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16fszxf8" Jan 26 15:54:45 crc kubenswrapper[4896]: I0126 15:54:45.671949 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tskqq\" (UniqueName: \"kubernetes.io/projected/ad24da91-ff62-4407-91cb-a321d268661e-kube-api-access-tskqq\") pod \"c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16fszxf8\" (UID: \"ad24da91-ff62-4407-91cb-a321d268661e\") " pod="openstack-operators/c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16fszxf8" Jan 26 15:54:45 crc kubenswrapper[4896]: I0126 
15:54:45.773364 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad24da91-ff62-4407-91cb-a321d268661e-util\") pod \"c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16fszxf8\" (UID: \"ad24da91-ff62-4407-91cb-a321d268661e\") " pod="openstack-operators/c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16fszxf8" Jan 26 15:54:45 crc kubenswrapper[4896]: I0126 15:54:45.773461 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tskqq\" (UniqueName: \"kubernetes.io/projected/ad24da91-ff62-4407-91cb-a321d268661e-kube-api-access-tskqq\") pod \"c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16fszxf8\" (UID: \"ad24da91-ff62-4407-91cb-a321d268661e\") " pod="openstack-operators/c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16fszxf8" Jan 26 15:54:45 crc kubenswrapper[4896]: I0126 15:54:45.773716 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad24da91-ff62-4407-91cb-a321d268661e-bundle\") pod \"c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16fszxf8\" (UID: \"ad24da91-ff62-4407-91cb-a321d268661e\") " pod="openstack-operators/c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16fszxf8" Jan 26 15:54:45 crc kubenswrapper[4896]: I0126 15:54:45.773995 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad24da91-ff62-4407-91cb-a321d268661e-util\") pod \"c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16fszxf8\" (UID: \"ad24da91-ff62-4407-91cb-a321d268661e\") " pod="openstack-operators/c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16fszxf8" Jan 26 15:54:45 crc kubenswrapper[4896]: I0126 15:54:45.774370 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/ad24da91-ff62-4407-91cb-a321d268661e-bundle\") pod \"c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16fszxf8\" (UID: \"ad24da91-ff62-4407-91cb-a321d268661e\") " pod="openstack-operators/c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16fszxf8" Jan 26 15:54:45 crc kubenswrapper[4896]: I0126 15:54:45.794945 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tskqq\" (UniqueName: \"kubernetes.io/projected/ad24da91-ff62-4407-91cb-a321d268661e-kube-api-access-tskqq\") pod \"c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16fszxf8\" (UID: \"ad24da91-ff62-4407-91cb-a321d268661e\") " pod="openstack-operators/c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16fszxf8" Jan 26 15:54:45 crc kubenswrapper[4896]: I0126 15:54:45.872445 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16fszxf8" Jan 26 15:54:46 crc kubenswrapper[4896]: I0126 15:54:46.424359 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16fszxf8"] Jan 26 15:54:46 crc kubenswrapper[4896]: I0126 15:54:46.956134 4896 generic.go:334] "Generic (PLEG): container finished" podID="ad24da91-ff62-4407-91cb-a321d268661e" containerID="2f261033a851959b0b6dfbee0fe220f6a28654596faedd040b772a22df3a951b" exitCode=0 Jan 26 15:54:46 crc kubenswrapper[4896]: I0126 15:54:46.956263 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16fszxf8" event={"ID":"ad24da91-ff62-4407-91cb-a321d268661e","Type":"ContainerDied","Data":"2f261033a851959b0b6dfbee0fe220f6a28654596faedd040b772a22df3a951b"} Jan 26 15:54:46 crc kubenswrapper[4896]: I0126 15:54:46.956671 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16fszxf8" event={"ID":"ad24da91-ff62-4407-91cb-a321d268661e","Type":"ContainerStarted","Data":"26afe942c237229fe95ab62c745cdc821b166533b9bd2563ea450aa09a6e25f6"} Jan 26 15:54:47 crc kubenswrapper[4896]: I0126 15:54:47.966926 4896 generic.go:334] "Generic (PLEG): container finished" podID="ad24da91-ff62-4407-91cb-a321d268661e" containerID="f98d73969aa50ad2b9cc452174f21f2dc8af18da09f4ba586f1513ae7409fc75" exitCode=0 Jan 26 15:54:47 crc kubenswrapper[4896]: I0126 15:54:47.967037 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16fszxf8" event={"ID":"ad24da91-ff62-4407-91cb-a321d268661e","Type":"ContainerDied","Data":"f98d73969aa50ad2b9cc452174f21f2dc8af18da09f4ba586f1513ae7409fc75"} Jan 26 15:54:48 crc kubenswrapper[4896]: I0126 15:54:48.819936 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:54:48 crc kubenswrapper[4896]: I0126 15:54:48.820202 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:54:48 crc kubenswrapper[4896]: I0126 15:54:48.978058 4896 generic.go:334] "Generic (PLEG): container finished" podID="ad24da91-ff62-4407-91cb-a321d268661e" containerID="a2ba6f14d4b5ac79c78924671691931c3283f9792172df1ff900b21f7b400baa" exitCode=0 Jan 26 15:54:48 crc kubenswrapper[4896]: I0126 15:54:48.979311 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16fszxf8" event={"ID":"ad24da91-ff62-4407-91cb-a321d268661e","Type":"ContainerDied","Data":"a2ba6f14d4b5ac79c78924671691931c3283f9792172df1ff900b21f7b400baa"} Jan 26 15:54:50 crc kubenswrapper[4896]: I0126 15:54:50.470599 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16fszxf8" Jan 26 15:54:50 crc kubenswrapper[4896]: I0126 15:54:50.560501 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad24da91-ff62-4407-91cb-a321d268661e-bundle\") pod \"ad24da91-ff62-4407-91cb-a321d268661e\" (UID: \"ad24da91-ff62-4407-91cb-a321d268661e\") " Jan 26 15:54:50 crc kubenswrapper[4896]: I0126 15:54:50.560722 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad24da91-ff62-4407-91cb-a321d268661e-util\") pod \"ad24da91-ff62-4407-91cb-a321d268661e\" (UID: \"ad24da91-ff62-4407-91cb-a321d268661e\") " Jan 26 15:54:50 crc kubenswrapper[4896]: I0126 15:54:50.560764 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tskqq\" (UniqueName: \"kubernetes.io/projected/ad24da91-ff62-4407-91cb-a321d268661e-kube-api-access-tskqq\") pod \"ad24da91-ff62-4407-91cb-a321d268661e\" (UID: \"ad24da91-ff62-4407-91cb-a321d268661e\") " Jan 26 15:54:50 crc kubenswrapper[4896]: I0126 15:54:50.561952 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad24da91-ff62-4407-91cb-a321d268661e-bundle" (OuterVolumeSpecName: "bundle") pod "ad24da91-ff62-4407-91cb-a321d268661e" (UID: "ad24da91-ff62-4407-91cb-a321d268661e"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:54:50 crc kubenswrapper[4896]: I0126 15:54:50.569080 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad24da91-ff62-4407-91cb-a321d268661e-kube-api-access-tskqq" (OuterVolumeSpecName: "kube-api-access-tskqq") pod "ad24da91-ff62-4407-91cb-a321d268661e" (UID: "ad24da91-ff62-4407-91cb-a321d268661e"). InnerVolumeSpecName "kube-api-access-tskqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:54:50 crc kubenswrapper[4896]: I0126 15:54:50.576108 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad24da91-ff62-4407-91cb-a321d268661e-util" (OuterVolumeSpecName: "util") pod "ad24da91-ff62-4407-91cb-a321d268661e" (UID: "ad24da91-ff62-4407-91cb-a321d268661e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:54:50 crc kubenswrapper[4896]: I0126 15:54:50.663171 4896 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad24da91-ff62-4407-91cb-a321d268661e-util\") on node \"crc\" DevicePath \"\"" Jan 26 15:54:50 crc kubenswrapper[4896]: I0126 15:54:50.663229 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tskqq\" (UniqueName: \"kubernetes.io/projected/ad24da91-ff62-4407-91cb-a321d268661e-kube-api-access-tskqq\") on node \"crc\" DevicePath \"\"" Jan 26 15:54:50 crc kubenswrapper[4896]: I0126 15:54:50.663250 4896 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad24da91-ff62-4407-91cb-a321d268661e-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:54:50 crc kubenswrapper[4896]: I0126 15:54:50.994669 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16fszxf8" 
event={"ID":"ad24da91-ff62-4407-91cb-a321d268661e","Type":"ContainerDied","Data":"26afe942c237229fe95ab62c745cdc821b166533b9bd2563ea450aa09a6e25f6"} Jan 26 15:54:50 crc kubenswrapper[4896]: I0126 15:54:50.994718 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26afe942c237229fe95ab62c745cdc821b166533b9bd2563ea450aa09a6e25f6" Jan 26 15:54:50 crc kubenswrapper[4896]: I0126 15:54:50.994764 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16fszxf8" Jan 26 15:54:57 crc kubenswrapper[4896]: I0126 15:54:57.649350 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-8f6df5568-rvvb8"] Jan 26 15:54:57 crc kubenswrapper[4896]: E0126 15:54:57.650174 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad24da91-ff62-4407-91cb-a321d268661e" containerName="util" Jan 26 15:54:57 crc kubenswrapper[4896]: I0126 15:54:57.650186 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad24da91-ff62-4407-91cb-a321d268661e" containerName="util" Jan 26 15:54:57 crc kubenswrapper[4896]: E0126 15:54:57.650205 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad24da91-ff62-4407-91cb-a321d268661e" containerName="pull" Jan 26 15:54:57 crc kubenswrapper[4896]: I0126 15:54:57.650210 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad24da91-ff62-4407-91cb-a321d268661e" containerName="pull" Jan 26 15:54:57 crc kubenswrapper[4896]: E0126 15:54:57.650245 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad24da91-ff62-4407-91cb-a321d268661e" containerName="extract" Jan 26 15:54:57 crc kubenswrapper[4896]: I0126 15:54:57.650251 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad24da91-ff62-4407-91cb-a321d268661e" containerName="extract" Jan 26 15:54:57 crc kubenswrapper[4896]: I0126 15:54:57.650410 4896 
memory_manager.go:354] "RemoveStaleState removing state" podUID="ad24da91-ff62-4407-91cb-a321d268661e" containerName="extract" Jan 26 15:54:57 crc kubenswrapper[4896]: I0126 15:54:57.651063 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-8f6df5568-rvvb8" Jan 26 15:54:57 crc kubenswrapper[4896]: I0126 15:54:57.653554 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-qs2fl" Jan 26 15:54:57 crc kubenswrapper[4896]: I0126 15:54:57.671364 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-8f6df5568-rvvb8"] Jan 26 15:54:57 crc kubenswrapper[4896]: I0126 15:54:57.684174 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npb5q\" (UniqueName: \"kubernetes.io/projected/b54a446e-c064-4867-91fa-55f96ea9d87e-kube-api-access-npb5q\") pod \"openstack-operator-controller-init-8f6df5568-rvvb8\" (UID: \"b54a446e-c064-4867-91fa-55f96ea9d87e\") " pod="openstack-operators/openstack-operator-controller-init-8f6df5568-rvvb8" Jan 26 15:54:57 crc kubenswrapper[4896]: I0126 15:54:57.786347 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npb5q\" (UniqueName: \"kubernetes.io/projected/b54a446e-c064-4867-91fa-55f96ea9d87e-kube-api-access-npb5q\") pod \"openstack-operator-controller-init-8f6df5568-rvvb8\" (UID: \"b54a446e-c064-4867-91fa-55f96ea9d87e\") " pod="openstack-operators/openstack-operator-controller-init-8f6df5568-rvvb8" Jan 26 15:54:57 crc kubenswrapper[4896]: I0126 15:54:57.817424 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npb5q\" (UniqueName: \"kubernetes.io/projected/b54a446e-c064-4867-91fa-55f96ea9d87e-kube-api-access-npb5q\") pod \"openstack-operator-controller-init-8f6df5568-rvvb8\" 
(UID: \"b54a446e-c064-4867-91fa-55f96ea9d87e\") " pod="openstack-operators/openstack-operator-controller-init-8f6df5568-rvvb8" Jan 26 15:54:57 crc kubenswrapper[4896]: I0126 15:54:57.982088 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-8f6df5568-rvvb8" Jan 26 15:54:58 crc kubenswrapper[4896]: I0126 15:54:58.512320 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-8f6df5568-rvvb8"] Jan 26 15:54:59 crc kubenswrapper[4896]: I0126 15:54:59.062419 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-8f6df5568-rvvb8" event={"ID":"b54a446e-c064-4867-91fa-55f96ea9d87e","Type":"ContainerStarted","Data":"38f508f0d65cbe78854f1f91b5b8d8d84a88067ebaad23a4d0eead1dc61dd9ec"} Jan 26 15:55:07 crc kubenswrapper[4896]: I0126 15:55:07.526933 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-8f6df5568-rvvb8" event={"ID":"b54a446e-c064-4867-91fa-55f96ea9d87e","Type":"ContainerStarted","Data":"bd50b1361ff6e2e62649d87537458d691b6600815324d1333bb39b04dbc59a9e"} Jan 26 15:55:07 crc kubenswrapper[4896]: I0126 15:55:07.529066 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-8f6df5568-rvvb8" Jan 26 15:55:17 crc kubenswrapper[4896]: I0126 15:55:17.984798 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-8f6df5568-rvvb8" Jan 26 15:55:18 crc kubenswrapper[4896]: I0126 15:55:18.021889 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-8f6df5568-rvvb8" podStartSLOduration=12.87337824 podStartE2EDuration="21.021868127s" podCreationTimestamp="2026-01-26 15:54:57 +0000 UTC" 
firstStartedPulling="2026-01-26 15:54:58.525371182 +0000 UTC m=+1256.307251575" lastFinishedPulling="2026-01-26 15:55:06.673861069 +0000 UTC m=+1264.455741462" observedRunningTime="2026-01-26 15:55:07.571962941 +0000 UTC m=+1265.353843414" watchObservedRunningTime="2026-01-26 15:55:18.021868127 +0000 UTC m=+1275.803748520" Jan 26 15:55:18 crc kubenswrapper[4896]: I0126 15:55:18.813908 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:55:18 crc kubenswrapper[4896]: I0126 15:55:18.814219 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:55:18 crc kubenswrapper[4896]: I0126 15:55:18.814323 4896 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" Jan 26 15:55:18 crc kubenswrapper[4896]: I0126 15:55:18.815098 4896 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6d22d796865491856408b3b04b5cee06c82fc1e5c08ee0eac7e9beca91027529"} pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 15:55:18 crc kubenswrapper[4896]: I0126 15:55:18.815216 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" 
containerName="machine-config-daemon" containerID="cri-o://6d22d796865491856408b3b04b5cee06c82fc1e5c08ee0eac7e9beca91027529" gracePeriod=600 Jan 26 15:55:19 crc kubenswrapper[4896]: I0126 15:55:19.636420 4896 generic.go:334] "Generic (PLEG): container finished" podID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerID="6d22d796865491856408b3b04b5cee06c82fc1e5c08ee0eac7e9beca91027529" exitCode=0 Jan 26 15:55:19 crc kubenswrapper[4896]: I0126 15:55:19.636491 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerDied","Data":"6d22d796865491856408b3b04b5cee06c82fc1e5c08ee0eac7e9beca91027529"} Jan 26 15:55:19 crc kubenswrapper[4896]: I0126 15:55:19.636837 4896 scope.go:117] "RemoveContainer" containerID="bc80bae02fc3586e032a25fdf0a87292a0c0b3c2653785eb94241ee6654c386f" Jan 26 15:55:20 crc kubenswrapper[4896]: I0126 15:55:20.646858 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerStarted","Data":"31b4b66030421161e61a26b7176eb82897b0c0be510c967b21910fd56f2d129b"} Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.105621 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7t46g"] Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.107298 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7t46g" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.109336 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-2dgq4" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.122332 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2856\" (UniqueName: \"kubernetes.io/projected/8c799412-6936-4161-8d4e-244bc94c0d69-kube-api-access-h2856\") pod \"cinder-operator-controller-manager-7478f7dbf9-7t46g\" (UID: \"8c799412-6936-4161-8d4e-244bc94c0d69\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7t46g" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.142596 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-rp5b4"] Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.144075 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-rp5b4" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.149233 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-gml22" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.149958 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7t46g"] Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.157981 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-rp5b4"] Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.176477 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-6wv5s"] Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.178428 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6wv5s" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.181065 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-6zp9j" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.211705 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-6wv5s"] Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.223878 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2856\" (UniqueName: \"kubernetes.io/projected/8c799412-6936-4161-8d4e-244bc94c0d69-kube-api-access-h2856\") pod \"cinder-operator-controller-manager-7478f7dbf9-7t46g\" (UID: \"8c799412-6936-4161-8d4e-244bc94c0d69\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7t46g" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.223938 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-228l4\" (UniqueName: \"kubernetes.io/projected/16c521f5-6f5f-43e3-a670-9f6ab6312d9c-kube-api-access-228l4\") pod \"designate-operator-controller-manager-b45d7bf98-6wv5s\" (UID: \"16c521f5-6f5f-43e3-a670-9f6ab6312d9c\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6wv5s" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.223981 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm6jc\" (UniqueName: \"kubernetes.io/projected/c44d6ef8-c52f-4a31-8a33-1ee01d7e969a-kube-api-access-wm6jc\") pod \"barbican-operator-controller-manager-7f86f8796f-rp5b4\" (UID: \"c44d6ef8-c52f-4a31-8a33-1ee01d7e969a\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-rp5b4" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.225852 
4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-j92tx"] Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.227084 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-j92tx" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.229957 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-84bdq" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.234613 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-z7j4w"] Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.235926 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-z7j4w" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.238324 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-494wd" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.255224 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-jx95g"] Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.256181 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-49kq4"] Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.257168 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-49kq4" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.257760 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-jx95g" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.259109 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.260294 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-9njnd" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.260430 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-dzrwm" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.266393 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2856\" (UniqueName: \"kubernetes.io/projected/8c799412-6936-4161-8d4e-244bc94c0d69-kube-api-access-h2856\") pod \"cinder-operator-controller-manager-7478f7dbf9-7t46g\" (UID: \"8c799412-6936-4161-8d4e-244bc94c0d69\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7t46g" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.342998 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-228l4\" (UniqueName: \"kubernetes.io/projected/16c521f5-6f5f-43e3-a670-9f6ab6312d9c-kube-api-access-228l4\") pod \"designate-operator-controller-manager-b45d7bf98-6wv5s\" (UID: \"16c521f5-6f5f-43e3-a670-9f6ab6312d9c\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6wv5s" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.343257 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm6jc\" (UniqueName: \"kubernetes.io/projected/c44d6ef8-c52f-4a31-8a33-1ee01d7e969a-kube-api-access-wm6jc\") pod \"barbican-operator-controller-manager-7f86f8796f-rp5b4\" (UID: \"c44d6ef8-c52f-4a31-8a33-1ee01d7e969a\") " 
pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-rp5b4" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.367217 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-228l4\" (UniqueName: \"kubernetes.io/projected/16c521f5-6f5f-43e3-a670-9f6ab6312d9c-kube-api-access-228l4\") pod \"designate-operator-controller-manager-b45d7bf98-6wv5s\" (UID: \"16c521f5-6f5f-43e3-a670-9f6ab6312d9c\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6wv5s" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.378553 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm6jc\" (UniqueName: \"kubernetes.io/projected/c44d6ef8-c52f-4a31-8a33-1ee01d7e969a-kube-api-access-wm6jc\") pod \"barbican-operator-controller-manager-7f86f8796f-rp5b4\" (UID: \"c44d6ef8-c52f-4a31-8a33-1ee01d7e969a\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-rp5b4" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.379520 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-z7j4w"] Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.405407 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-j92tx"] Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.426562 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-jx95g"] Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.426683 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7t46g" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.444853 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-948px"] Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.447408 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-948px" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.455948 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-x8rn5" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.456179 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-49kq4"] Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.456523 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5vgp\" (UniqueName: \"kubernetes.io/projected/b0480b36-40e2-426c-a1a8-e02e79fe7a17-kube-api-access-p5vgp\") pod \"horizon-operator-controller-manager-77d5c5b54f-jx95g\" (UID: \"b0480b36-40e2-426c-a1a8-e02e79fe7a17\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-jx95g" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.456600 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpgd9\" (UniqueName: \"kubernetes.io/projected/03cf04a4-606b-44b9-9aee-86e4b0a8a1eb-kube-api-access-dpgd9\") pod \"infra-operator-controller-manager-694cf4f878-49kq4\" (UID: \"03cf04a4-606b-44b9-9aee-86e4b0a8a1eb\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-49kq4" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.456660 4896 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcrmk\" (UniqueName: \"kubernetes.io/projected/f6fe08af-0b15-4be3-8473-6a983d21ebe3-kube-api-access-kcrmk\") pod \"glance-operator-controller-manager-78fdd796fd-j92tx\" (UID: \"f6fe08af-0b15-4be3-8473-6a983d21ebe3\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-j92tx" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.456762 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/03cf04a4-606b-44b9-9aee-86e4b0a8a1eb-cert\") pod \"infra-operator-controller-manager-694cf4f878-49kq4\" (UID: \"03cf04a4-606b-44b9-9aee-86e4b0a8a1eb\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-49kq4" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.456829 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt2fz\" (UniqueName: \"kubernetes.io/projected/b3272d78-4dde-4997-9316-24a84c00f4c8-kube-api-access-wt2fz\") pod \"heat-operator-controller-manager-594c8c9d5d-z7j4w\" (UID: \"b3272d78-4dde-4997-9316-24a84c00f4c8\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-z7j4w" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.457612 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6wv5s" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.465182 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-rp5b4" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.477848 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-lz2hg"] Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.494151 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-948px"] Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.494282 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lz2hg" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.499371 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-j94xh" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.513771 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-lz2hg"] Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.520338 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-cwcgv"] Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.521807 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-cwcgv" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.524757 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-ztwrc" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.557218 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-fz5qh"] Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.558384 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-fz5qh" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.559230 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/03cf04a4-606b-44b9-9aee-86e4b0a8a1eb-cert\") pod \"infra-operator-controller-manager-694cf4f878-49kq4\" (UID: \"03cf04a4-606b-44b9-9aee-86e4b0a8a1eb\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-49kq4" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.559293 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9r7p\" (UniqueName: \"kubernetes.io/projected/a6a0fae6-65fb-46f8-9b0a-2cbae0665e6d-kube-api-access-c9r7p\") pod \"manila-operator-controller-manager-78c6999f6f-cwcgv\" (UID: \"a6a0fae6-65fb-46f8-9b0a-2cbae0665e6d\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-cwcgv" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.559334 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt2fz\" (UniqueName: \"kubernetes.io/projected/b3272d78-4dde-4997-9316-24a84c00f4c8-kube-api-access-wt2fz\") pod \"heat-operator-controller-manager-594c8c9d5d-z7j4w\" (UID: \"b3272d78-4dde-4997-9316-24a84c00f4c8\") " 
pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-z7j4w" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.559373 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5vgp\" (UniqueName: \"kubernetes.io/projected/b0480b36-40e2-426c-a1a8-e02e79fe7a17-kube-api-access-p5vgp\") pod \"horizon-operator-controller-manager-77d5c5b54f-jx95g\" (UID: \"b0480b36-40e2-426c-a1a8-e02e79fe7a17\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-jx95g" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.559393 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpgd9\" (UniqueName: \"kubernetes.io/projected/03cf04a4-606b-44b9-9aee-86e4b0a8a1eb-kube-api-access-dpgd9\") pod \"infra-operator-controller-manager-694cf4f878-49kq4\" (UID: \"03cf04a4-606b-44b9-9aee-86e4b0a8a1eb\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-49kq4" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.559412 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p286\" (UniqueName: \"kubernetes.io/projected/fc8a478d-ccdf-4d2b-b27f-58fde92fd7d4-kube-api-access-4p286\") pod \"keystone-operator-controller-manager-b8b6d4659-lz2hg\" (UID: \"fc8a478d-ccdf-4d2b-b27f-58fde92fd7d4\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lz2hg" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.559432 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv86z\" (UniqueName: \"kubernetes.io/projected/1c532b54-34b3-4b51-bbd3-1e3bd39d5958-kube-api-access-rv86z\") pod \"ironic-operator-controller-manager-598f7747c9-948px\" (UID: \"1c532b54-34b3-4b51-bbd3-1e3bd39d5958\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-948px" Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 
15:55:39.559457 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcrmk\" (UniqueName: \"kubernetes.io/projected/f6fe08af-0b15-4be3-8473-6a983d21ebe3-kube-api-access-kcrmk\") pod \"glance-operator-controller-manager-78fdd796fd-j92tx\" (UID: \"f6fe08af-0b15-4be3-8473-6a983d21ebe3\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-j92tx"
Jan 26 15:55:39 crc kubenswrapper[4896]: E0126 15:55:39.560282 4896 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 26 15:55:39 crc kubenswrapper[4896]: E0126 15:55:39.560340 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03cf04a4-606b-44b9-9aee-86e4b0a8a1eb-cert podName:03cf04a4-606b-44b9-9aee-86e4b0a8a1eb nodeName:}" failed. No retries permitted until 2026-01-26 15:55:40.060319318 +0000 UTC m=+1297.842199711 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/03cf04a4-606b-44b9-9aee-86e4b0a8a1eb-cert") pod "infra-operator-controller-manager-694cf4f878-49kq4" (UID: "03cf04a4-606b-44b9-9aee-86e4b0a8a1eb") : secret "infra-operator-webhook-server-cert" not found
Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.562769 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-cwcgv"]
Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.568878 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-xm7lt"
Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.575506 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-fz5qh"]
Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.598463 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5vgp\" (UniqueName: \"kubernetes.io/projected/b0480b36-40e2-426c-a1a8-e02e79fe7a17-kube-api-access-p5vgp\") pod \"horizon-operator-controller-manager-77d5c5b54f-jx95g\" (UID: \"b0480b36-40e2-426c-a1a8-e02e79fe7a17\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-jx95g"
Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.599117 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt2fz\" (UniqueName: \"kubernetes.io/projected/b3272d78-4dde-4997-9316-24a84c00f4c8-kube-api-access-wt2fz\") pod \"heat-operator-controller-manager-594c8c9d5d-z7j4w\" (UID: \"b3272d78-4dde-4997-9316-24a84c00f4c8\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-z7j4w"
Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.620358 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpgd9\" (UniqueName: \"kubernetes.io/projected/03cf04a4-606b-44b9-9aee-86e4b0a8a1eb-kube-api-access-dpgd9\") pod \"infra-operator-controller-manager-694cf4f878-49kq4\" (UID: \"03cf04a4-606b-44b9-9aee-86e4b0a8a1eb\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-49kq4"
Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.623572 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcrmk\" (UniqueName: \"kubernetes.io/projected/f6fe08af-0b15-4be3-8473-6a983d21ebe3-kube-api-access-kcrmk\") pod \"glance-operator-controller-manager-78fdd796fd-j92tx\" (UID: \"f6fe08af-0b15-4be3-8473-6a983d21ebe3\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-j92tx"
Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.626647 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-lvm6z"]
Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.632207 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-lvm6z"
Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.632766 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-lvm6z"]
Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.638396 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-2497v"
Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.640712 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-kvnzb"]
Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.642669 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-kvnzb"
Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.647879 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-bmp2z"
Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.786410 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-j92tx"
Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.813409 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hdkv\" (UniqueName: \"kubernetes.io/projected/8e0a37ed-b8af-49ae-9c6c-ed7097f46f3b-kube-api-access-4hdkv\") pod \"nova-operator-controller-manager-7bdb645866-kvnzb\" (UID: \"8e0a37ed-b8af-49ae-9c6c-ed7097f46f3b\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-kvnzb"
Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.813745 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g97p\" (UniqueName: \"kubernetes.io/projected/8ac5298a-429c-47d6-9436-34bd2bd1fdec-kube-api-access-7g97p\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-fz5qh\" (UID: \"8ac5298a-429c-47d6-9436-34bd2bd1fdec\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-fz5qh"
Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.814253 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9r7p\" (UniqueName: \"kubernetes.io/projected/a6a0fae6-65fb-46f8-9b0a-2cbae0665e6d-kube-api-access-c9r7p\") pod \"manila-operator-controller-manager-78c6999f6f-cwcgv\" (UID: \"a6a0fae6-65fb-46f8-9b0a-2cbae0665e6d\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-cwcgv"
Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.814640 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p286\" (UniqueName: \"kubernetes.io/projected/fc8a478d-ccdf-4d2b-b27f-58fde92fd7d4-kube-api-access-4p286\") pod \"keystone-operator-controller-manager-b8b6d4659-lz2hg\" (UID: \"fc8a478d-ccdf-4d2b-b27f-58fde92fd7d4\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lz2hg"
Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.843370 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv86z\" (UniqueName: \"kubernetes.io/projected/1c532b54-34b3-4b51-bbd3-1e3bd39d5958-kube-api-access-rv86z\") pod \"ironic-operator-controller-manager-598f7747c9-948px\" (UID: \"1c532b54-34b3-4b51-bbd3-1e3bd39d5958\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-948px"
Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.843627 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw2t4\" (UniqueName: \"kubernetes.io/projected/3eac11e1-3f7e-467c-b7f7-038d29e23848-kube-api-access-fw2t4\") pod \"neutron-operator-controller-manager-78d58447c5-lvm6z\" (UID: \"3eac11e1-3f7e-467c-b7f7-038d29e23848\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-lvm6z"
Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.831509 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-s2bwr"]
Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.917071 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-jx95g"
Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.918873 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-z7j4w"
Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.924151 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-s2bwr"
Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.927775 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-7dxdv"
Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.953677 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw2t4\" (UniqueName: \"kubernetes.io/projected/3eac11e1-3f7e-467c-b7f7-038d29e23848-kube-api-access-fw2t4\") pod \"neutron-operator-controller-manager-78d58447c5-lvm6z\" (UID: \"3eac11e1-3f7e-467c-b7f7-038d29e23848\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-lvm6z"
Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.954058 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hdkv\" (UniqueName: \"kubernetes.io/projected/8e0a37ed-b8af-49ae-9c6c-ed7097f46f3b-kube-api-access-4hdkv\") pod \"nova-operator-controller-manager-7bdb645866-kvnzb\" (UID: \"8e0a37ed-b8af-49ae-9c6c-ed7097f46f3b\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-kvnzb"
Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.954200 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g97p\" (UniqueName: \"kubernetes.io/projected/8ac5298a-429c-47d6-9436-34bd2bd1fdec-kube-api-access-7g97p\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-fz5qh\" (UID: \"8ac5298a-429c-47d6-9436-34bd2bd1fdec\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-fz5qh"
Jan 26 15:55:39 crc kubenswrapper[4896]: I0126 15:55:39.954118 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-s2bwr"]
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.248795 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-kvnzb"]
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.267012 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/03cf04a4-606b-44b9-9aee-86e4b0a8a1eb-cert\") pod \"infra-operator-controller-manager-694cf4f878-49kq4\" (UID: \"03cf04a4-606b-44b9-9aee-86e4b0a8a1eb\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-49kq4"
Jan 26 15:55:40 crc kubenswrapper[4896]: E0126 15:55:40.268357 4896 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 26 15:55:40 crc kubenswrapper[4896]: E0126 15:55:40.268443 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03cf04a4-606b-44b9-9aee-86e4b0a8a1eb-cert podName:03cf04a4-606b-44b9-9aee-86e4b0a8a1eb nodeName:}" failed. No retries permitted until 2026-01-26 15:55:41.268425177 +0000 UTC m=+1299.050305570 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/03cf04a4-606b-44b9-9aee-86e4b0a8a1eb-cert") pod "infra-operator-controller-manager-694cf4f878-49kq4" (UID: "03cf04a4-606b-44b9-9aee-86e4b0a8a1eb") : secret "infra-operator-webhook-server-cert" not found
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.277306 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-mjzqx"]
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.279139 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mjzqx"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.290473 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd"]
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.292125 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.296011 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-ltj7r"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.299471 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-vdtbn"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.299662 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.327943 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-9vwsl"]
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.333337 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-9vwsl"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.336419 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-77pc6"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.354092 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-mjzqx"]
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.361712 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd"]
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.369626 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-4sl4s"]
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.370916 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4sl4s"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.376138 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2mr5\" (UniqueName: \"kubernetes.io/projected/1bf7b7e2-7b44-4a9d-aa3d-31ed21b66dc3-kube-api-access-r2mr5\") pod \"swift-operator-controller-manager-547cbdb99f-4sl4s\" (UID: \"1bf7b7e2-7b44-4a9d-aa3d-31ed21b66dc3\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4sl4s"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.376252 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjffd\" (UniqueName: \"kubernetes.io/projected/61be8fa4-3ad2-4745-88ab-850db16c5707-kube-api-access-tjffd\") pod \"octavia-operator-controller-manager-5f4cd88d46-s2bwr\" (UID: \"61be8fa4-3ad2-4745-88ab-850db16c5707\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-s2bwr"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.378469 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-jj5zx"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.380246 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv86z\" (UniqueName: \"kubernetes.io/projected/1c532b54-34b3-4b51-bbd3-1e3bd39d5958-kube-api-access-rv86z\") pod \"ironic-operator-controller-manager-598f7747c9-948px\" (UID: \"1c532b54-34b3-4b51-bbd3-1e3bd39d5958\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-948px"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.380404 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9r7p\" (UniqueName: \"kubernetes.io/projected/a6a0fae6-65fb-46f8-9b0a-2cbae0665e6d-kube-api-access-c9r7p\") pod \"manila-operator-controller-manager-78c6999f6f-cwcgv\" (UID: \"a6a0fae6-65fb-46f8-9b0a-2cbae0665e6d\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-cwcgv"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.384012 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5fd4748d4d-2sttl"]
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.385523 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5fd4748d4d-2sttl"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.386061 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p286\" (UniqueName: \"kubernetes.io/projected/fc8a478d-ccdf-4d2b-b27f-58fde92fd7d4-kube-api-access-4p286\") pod \"keystone-operator-controller-manager-b8b6d4659-lz2hg\" (UID: \"fc8a478d-ccdf-4d2b-b27f-58fde92fd7d4\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lz2hg"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.387033 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hdkv\" (UniqueName: \"kubernetes.io/projected/8e0a37ed-b8af-49ae-9c6c-ed7097f46f3b-kube-api-access-4hdkv\") pod \"nova-operator-controller-manager-7bdb645866-kvnzb\" (UID: \"8e0a37ed-b8af-49ae-9c6c-ed7097f46f3b\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-kvnzb"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.389330 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-hnhht"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.389682 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g97p\" (UniqueName: \"kubernetes.io/projected/8ac5298a-429c-47d6-9436-34bd2bd1fdec-kube-api-access-7g97p\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-fz5qh\" (UID: \"8ac5298a-429c-47d6-9436-34bd2bd1fdec\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-fz5qh"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.398370 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw2t4\" (UniqueName: \"kubernetes.io/projected/3eac11e1-3f7e-467c-b7f7-038d29e23848-kube-api-access-fw2t4\") pod \"neutron-operator-controller-manager-78d58447c5-lvm6z\" (UID: \"3eac11e1-3f7e-467c-b7f7-038d29e23848\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-lvm6z"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.423136 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-9vwsl"]
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.452920 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-4sl4s"]
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.458774 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5fd4748d4d-2sttl"]
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.464321 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-p82px"]
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.465854 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-p82px"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.468821 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-qftpn"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.477806 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjffd\" (UniqueName: \"kubernetes.io/projected/61be8fa4-3ad2-4745-88ab-850db16c5707-kube-api-access-tjffd\") pod \"octavia-operator-controller-manager-5f4cd88d46-s2bwr\" (UID: \"61be8fa4-3ad2-4745-88ab-850db16c5707\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-s2bwr"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.477987 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsdkm\" (UniqueName: \"kubernetes.io/projected/6434b0ee-4d33-4422-a662-3315b2f5499c-kube-api-access-nsdkm\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd\" (UID: \"6434b0ee-4d33-4422-a662-3315b2f5499c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.478034 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hs6x\" (UniqueName: \"kubernetes.io/projected/7a813859-31b7-4729-865e-46c6ff663209-kube-api-access-4hs6x\") pod \"placement-operator-controller-manager-79d5ccc684-9vwsl\" (UID: \"7a813859-31b7-4729-865e-46c6ff663209\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-9vwsl"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.478119 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2mr5\" (UniqueName: \"kubernetes.io/projected/1bf7b7e2-7b44-4a9d-aa3d-31ed21b66dc3-kube-api-access-r2mr5\") pod \"swift-operator-controller-manager-547cbdb99f-4sl4s\" (UID: \"1bf7b7e2-7b44-4a9d-aa3d-31ed21b66dc3\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4sl4s"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.478163 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn9fm\" (UniqueName: \"kubernetes.io/projected/29afb8bf-1d53-45a3-b67c-a1ebc26aa4ab-kube-api-access-gn9fm\") pod \"ovn-operator-controller-manager-6f75f45d54-mjzqx\" (UID: \"29afb8bf-1d53-45a3-b67c-a1ebc26aa4ab\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mjzqx"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.478252 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6434b0ee-4d33-4422-a662-3315b2f5499c-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd\" (UID: \"6434b0ee-4d33-4422-a662-3315b2f5499c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.489704 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-p82px"]
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.523818 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-4v8sm"]
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.525605 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-4v8sm"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.531398 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-4v8sm"]
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.569116 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-bg8x2"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.580926 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsdkm\" (UniqueName: \"kubernetes.io/projected/6434b0ee-4d33-4422-a662-3315b2f5499c-kube-api-access-nsdkm\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd\" (UID: \"6434b0ee-4d33-4422-a662-3315b2f5499c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.580986 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hs6x\" (UniqueName: \"kubernetes.io/projected/7a813859-31b7-4729-865e-46c6ff663209-kube-api-access-4hs6x\") pod \"placement-operator-controller-manager-79d5ccc684-9vwsl\" (UID: \"7a813859-31b7-4729-865e-46c6ff663209\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-9vwsl"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.581024 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92jsz\" (UniqueName: \"kubernetes.io/projected/f2c6d7a1-690c-4364-a2ea-25e955a38782-kube-api-access-92jsz\") pod \"test-operator-controller-manager-69797bbcbd-p82px\" (UID: \"f2c6d7a1-690c-4364-a2ea-25e955a38782\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-p82px"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.581084 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn9fm\" (UniqueName: \"kubernetes.io/projected/29afb8bf-1d53-45a3-b67c-a1ebc26aa4ab-kube-api-access-gn9fm\") pod \"ovn-operator-controller-manager-6f75f45d54-mjzqx\" (UID: \"29afb8bf-1d53-45a3-b67c-a1ebc26aa4ab\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mjzqx"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.581117 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzs4j\" (UniqueName: \"kubernetes.io/projected/2496a24c-43ae-4ce4-8996-60c6e7282bfa-kube-api-access-jzs4j\") pod \"telemetry-operator-controller-manager-5fd4748d4d-2sttl\" (UID: \"2496a24c-43ae-4ce4-8996-60c6e7282bfa\") " pod="openstack-operators/telemetry-operator-controller-manager-5fd4748d4d-2sttl"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.581153 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6434b0ee-4d33-4422-a662-3315b2f5499c-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd\" (UID: \"6434b0ee-4d33-4422-a662-3315b2f5499c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd"
Jan 26 15:55:40 crc kubenswrapper[4896]: E0126 15:55:40.581348 4896 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 26 15:55:40 crc kubenswrapper[4896]: E0126 15:55:40.581406 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6434b0ee-4d33-4422-a662-3315b2f5499c-cert podName:6434b0ee-4d33-4422-a662-3315b2f5499c nodeName:}" failed. No retries permitted until 2026-01-26 15:55:41.081387714 +0000 UTC m=+1298.863268107 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6434b0ee-4d33-4422-a662-3315b2f5499c-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd" (UID: "6434b0ee-4d33-4422-a662-3315b2f5499c") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.594737 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7d6b58b596-csnd6"]
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.595899 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-csnd6"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.601391 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.601656 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.601660 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-w2h54"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.602027 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjffd\" (UniqueName: \"kubernetes.io/projected/61be8fa4-3ad2-4745-88ab-850db16c5707-kube-api-access-tjffd\") pod \"octavia-operator-controller-manager-5f4cd88d46-s2bwr\" (UID: \"61be8fa4-3ad2-4745-88ab-850db16c5707\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-s2bwr"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.620747 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsdkm\" (UniqueName: \"kubernetes.io/projected/6434b0ee-4d33-4422-a662-3315b2f5499c-kube-api-access-nsdkm\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd\" (UID: \"6434b0ee-4d33-4422-a662-3315b2f5499c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.621638 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2mr5\" (UniqueName: \"kubernetes.io/projected/1bf7b7e2-7b44-4a9d-aa3d-31ed21b66dc3-kube-api-access-r2mr5\") pod \"swift-operator-controller-manager-547cbdb99f-4sl4s\" (UID: \"1bf7b7e2-7b44-4a9d-aa3d-31ed21b66dc3\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4sl4s"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.623220 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn9fm\" (UniqueName: \"kubernetes.io/projected/29afb8bf-1d53-45a3-b67c-a1ebc26aa4ab-kube-api-access-gn9fm\") pod \"ovn-operator-controller-manager-6f75f45d54-mjzqx\" (UID: \"29afb8bf-1d53-45a3-b67c-a1ebc26aa4ab\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mjzqx"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.625054 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hs6x\" (UniqueName: \"kubernetes.io/projected/7a813859-31b7-4729-865e-46c6ff663209-kube-api-access-4hs6x\") pod \"placement-operator-controller-manager-79d5ccc684-9vwsl\" (UID: \"7a813859-31b7-4729-865e-46c6ff663209\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-9vwsl"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.976601 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj4x4\" (UniqueName: \"kubernetes.io/projected/f493b2ea-1515-42db-ac1c-ea1a7121e070-kube-api-access-hj4x4\") pod \"openstack-operator-controller-manager-7d6b58b596-csnd6\" (UID: \"f493b2ea-1515-42db-ac1c-ea1a7121e070\") " pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-csnd6"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.976713 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f493b2ea-1515-42db-ac1c-ea1a7121e070-webhook-certs\") pod \"openstack-operator-controller-manager-7d6b58b596-csnd6\" (UID: \"f493b2ea-1515-42db-ac1c-ea1a7121e070\") " pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-csnd6"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.977116 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92jsz\" (UniqueName: \"kubernetes.io/projected/f2c6d7a1-690c-4364-a2ea-25e955a38782-kube-api-access-92jsz\") pod \"test-operator-controller-manager-69797bbcbd-p82px\" (UID: \"f2c6d7a1-690c-4364-a2ea-25e955a38782\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-p82px"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.977271 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzs4j\" (UniqueName: \"kubernetes.io/projected/2496a24c-43ae-4ce4-8996-60c6e7282bfa-kube-api-access-jzs4j\") pod \"telemetry-operator-controller-manager-5fd4748d4d-2sttl\" (UID: \"2496a24c-43ae-4ce4-8996-60c6e7282bfa\") " pod="openstack-operators/telemetry-operator-controller-manager-5fd4748d4d-2sttl"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.977389 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9css\" (UniqueName: \"kubernetes.io/projected/b8f08a13-e22d-4147-91c2-07c51dbfb83d-kube-api-access-d9css\") pod \"watcher-operator-controller-manager-564965969-4v8sm\" (UID: \"b8f08a13-e22d-4147-91c2-07c51dbfb83d\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-4v8sm"
Jan 26 15:55:40 crc kubenswrapper[4896]: I0126 15:55:40.977471 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f493b2ea-1515-42db-ac1c-ea1a7121e070-metrics-certs\") pod \"openstack-operator-controller-manager-7d6b58b596-csnd6\" (UID: \"f493b2ea-1515-42db-ac1c-ea1a7121e070\") " pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-csnd6"
Jan 26 15:55:41 crc kubenswrapper[4896]: I0126 15:55:41.016186 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7d6b58b596-csnd6"]
Jan 26 15:55:41 crc kubenswrapper[4896]: I0126 15:55:41.022928 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-948px"
Jan 26 15:55:41 crc kubenswrapper[4896]: I0126 15:55:41.029429 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92jsz\" (UniqueName: \"kubernetes.io/projected/f2c6d7a1-690c-4364-a2ea-25e955a38782-kube-api-access-92jsz\") pod \"test-operator-controller-manager-69797bbcbd-p82px\" (UID: \"f2c6d7a1-690c-4364-a2ea-25e955a38782\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-p82px"
Jan 26 15:55:41 crc kubenswrapper[4896]: I0126 15:55:41.060991 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7pgcx"]
Jan 26 15:55:41 crc kubenswrapper[4896]: I0126 15:55:41.062902 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7pgcx"
Jan 26 15:55:41 crc kubenswrapper[4896]: I0126 15:55:41.081320 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-wnkj7"
Jan 26 15:55:41 crc kubenswrapper[4896]: I0126 15:55:41.084975 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6434b0ee-4d33-4422-a662-3315b2f5499c-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd\" (UID: \"6434b0ee-4d33-4422-a662-3315b2f5499c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd"
Jan 26 15:55:41 crc kubenswrapper[4896]: I0126 15:55:41.085061 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9css\" (UniqueName: \"kubernetes.io/projected/b8f08a13-e22d-4147-91c2-07c51dbfb83d-kube-api-access-d9css\") pod \"watcher-operator-controller-manager-564965969-4v8sm\" (UID: \"b8f08a13-e22d-4147-91c2-07c51dbfb83d\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-4v8sm"
Jan 26 15:55:41 crc kubenswrapper[4896]: I0126 15:55:41.085156 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f493b2ea-1515-42db-ac1c-ea1a7121e070-metrics-certs\") pod \"openstack-operator-controller-manager-7d6b58b596-csnd6\" (UID: \"f493b2ea-1515-42db-ac1c-ea1a7121e070\") " pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-csnd6"
Jan 26 15:55:41 crc kubenswrapper[4896]: I0126 15:55:41.085268 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-689s5\" (UniqueName: \"kubernetes.io/projected/bc769396-13b5-4066-b7fc-93a3f87a50ff-kube-api-access-689s5\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7pgcx\" (UID: \"bc769396-13b5-4066-b7fc-93a3f87a50ff\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7pgcx"
Jan 26 15:55:41 crc kubenswrapper[4896]: I0126 15:55:41.085409 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj4x4\" (UniqueName: \"kubernetes.io/projected/f493b2ea-1515-42db-ac1c-ea1a7121e070-kube-api-access-hj4x4\") pod \"openstack-operator-controller-manager-7d6b58b596-csnd6\" (UID: \"f493b2ea-1515-42db-ac1c-ea1a7121e070\") " pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-csnd6"
Jan 26 15:55:41 crc kubenswrapper[4896]: I0126 15:55:41.085460 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f493b2ea-1515-42db-ac1c-ea1a7121e070-webhook-certs\") pod \"openstack-operator-controller-manager-7d6b58b596-csnd6\" (UID: \"f493b2ea-1515-42db-ac1c-ea1a7121e070\") " pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-csnd6"
Jan 26 15:55:41 crc kubenswrapper[4896]: E0126 15:55:41.086446 4896 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 26 15:55:41 crc kubenswrapper[4896]: E0126 15:55:41.086514 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6434b0ee-4d33-4422-a662-3315b2f5499c-cert podName:6434b0ee-4d33-4422-a662-3315b2f5499c nodeName:}" failed. No retries permitted until 2026-01-26 15:55:42.086493985 +0000 UTC m=+1299.868374378 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6434b0ee-4d33-4422-a662-3315b2f5499c-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd" (UID: "6434b0ee-4d33-4422-a662-3315b2f5499c") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 26 15:55:41 crc kubenswrapper[4896]: E0126 15:55:41.087283 4896 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 26 15:55:41 crc kubenswrapper[4896]: E0126 15:55:41.087328 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f493b2ea-1515-42db-ac1c-ea1a7121e070-metrics-certs podName:f493b2ea-1515-42db-ac1c-ea1a7121e070 nodeName:}" failed. No retries permitted until 2026-01-26 15:55:41.587316855 +0000 UTC m=+1299.369197248 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f493b2ea-1515-42db-ac1c-ea1a7121e070-metrics-certs") pod "openstack-operator-controller-manager-7d6b58b596-csnd6" (UID: "f493b2ea-1515-42db-ac1c-ea1a7121e070") : secret "metrics-server-cert" not found
Jan 26 15:55:41 crc kubenswrapper[4896]: E0126 15:55:41.090841 4896 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 26 15:55:41 crc kubenswrapper[4896]: E0126 15:55:41.090939 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f493b2ea-1515-42db-ac1c-ea1a7121e070-webhook-certs podName:f493b2ea-1515-42db-ac1c-ea1a7121e070 nodeName:}" failed. No retries permitted until 2026-01-26 15:55:41.590917863 +0000 UTC m=+1299.372798316 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f493b2ea-1515-42db-ac1c-ea1a7121e070-webhook-certs") pod "openstack-operator-controller-manager-7d6b58b596-csnd6" (UID: "f493b2ea-1515-42db-ac1c-ea1a7121e070") : secret "webhook-server-cert" not found Jan 26 15:55:41 crc kubenswrapper[4896]: I0126 15:55:41.120171 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lz2hg" Jan 26 15:55:41 crc kubenswrapper[4896]: I0126 15:55:41.163837 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-cwcgv" Jan 26 15:55:41 crc kubenswrapper[4896]: I0126 15:55:41.186020 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-fz5qh" Jan 26 15:55:41 crc kubenswrapper[4896]: I0126 15:55:41.193742 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-689s5\" (UniqueName: \"kubernetes.io/projected/bc769396-13b5-4066-b7fc-93a3f87a50ff-kube-api-access-689s5\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7pgcx\" (UID: \"bc769396-13b5-4066-b7fc-93a3f87a50ff\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7pgcx" Jan 26 15:55:41 crc kubenswrapper[4896]: I0126 15:55:41.207536 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzs4j\" (UniqueName: \"kubernetes.io/projected/2496a24c-43ae-4ce4-8996-60c6e7282bfa-kube-api-access-jzs4j\") pod \"telemetry-operator-controller-manager-5fd4748d4d-2sttl\" (UID: \"2496a24c-43ae-4ce4-8996-60c6e7282bfa\") " pod="openstack-operators/telemetry-operator-controller-manager-5fd4748d4d-2sttl" Jan 26 15:55:41 crc kubenswrapper[4896]: I0126 15:55:41.232533 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-hj4x4\" (UniqueName: \"kubernetes.io/projected/f493b2ea-1515-42db-ac1c-ea1a7121e070-kube-api-access-hj4x4\") pod \"openstack-operator-controller-manager-7d6b58b596-csnd6\" (UID: \"f493b2ea-1515-42db-ac1c-ea1a7121e070\") " pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-csnd6" Jan 26 15:55:41 crc kubenswrapper[4896]: I0126 15:55:41.279441 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-689s5\" (UniqueName: \"kubernetes.io/projected/bc769396-13b5-4066-b7fc-93a3f87a50ff-kube-api-access-689s5\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7pgcx\" (UID: \"bc769396-13b5-4066-b7fc-93a3f87a50ff\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7pgcx" Jan 26 15:55:41 crc kubenswrapper[4896]: I0126 15:55:41.315698 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/03cf04a4-606b-44b9-9aee-86e4b0a8a1eb-cert\") pod \"infra-operator-controller-manager-694cf4f878-49kq4\" (UID: \"03cf04a4-606b-44b9-9aee-86e4b0a8a1eb\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-49kq4" Jan 26 15:55:41 crc kubenswrapper[4896]: E0126 15:55:41.316063 4896 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 15:55:41 crc kubenswrapper[4896]: E0126 15:55:41.316154 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03cf04a4-606b-44b9-9aee-86e4b0a8a1eb-cert podName:03cf04a4-606b-44b9-9aee-86e4b0a8a1eb nodeName:}" failed. No retries permitted until 2026-01-26 15:55:43.316122782 +0000 UTC m=+1301.098003185 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/03cf04a4-606b-44b9-9aee-86e4b0a8a1eb-cert") pod "infra-operator-controller-manager-694cf4f878-49kq4" (UID: "03cf04a4-606b-44b9-9aee-86e4b0a8a1eb") : secret "infra-operator-webhook-server-cert" not found Jan 26 15:55:41 crc kubenswrapper[4896]: I0126 15:55:41.339088 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7pgcx"] Jan 26 15:55:41 crc kubenswrapper[4896]: I0126 15:55:41.486174 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9css\" (UniqueName: \"kubernetes.io/projected/b8f08a13-e22d-4147-91c2-07c51dbfb83d-kube-api-access-d9css\") pod \"watcher-operator-controller-manager-564965969-4v8sm\" (UID: \"b8f08a13-e22d-4147-91c2-07c51dbfb83d\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-4v8sm" Jan 26 15:55:41 crc kubenswrapper[4896]: I0126 15:55:41.616701 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f493b2ea-1515-42db-ac1c-ea1a7121e070-metrics-certs\") pod \"openstack-operator-controller-manager-7d6b58b596-csnd6\" (UID: \"f493b2ea-1515-42db-ac1c-ea1a7121e070\") " pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-csnd6" Jan 26 15:55:41 crc kubenswrapper[4896]: I0126 15:55:41.616844 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f493b2ea-1515-42db-ac1c-ea1a7121e070-webhook-certs\") pod \"openstack-operator-controller-manager-7d6b58b596-csnd6\" (UID: \"f493b2ea-1515-42db-ac1c-ea1a7121e070\") " pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-csnd6" Jan 26 15:55:41 crc kubenswrapper[4896]: E0126 15:55:41.617089 4896 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" 
not found Jan 26 15:55:41 crc kubenswrapper[4896]: E0126 15:55:41.617138 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f493b2ea-1515-42db-ac1c-ea1a7121e070-webhook-certs podName:f493b2ea-1515-42db-ac1c-ea1a7121e070 nodeName:}" failed. No retries permitted until 2026-01-26 15:55:42.617124148 +0000 UTC m=+1300.399004541 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f493b2ea-1515-42db-ac1c-ea1a7121e070-webhook-certs") pod "openstack-operator-controller-manager-7d6b58b596-csnd6" (UID: "f493b2ea-1515-42db-ac1c-ea1a7121e070") : secret "webhook-server-cert" not found Jan 26 15:55:41 crc kubenswrapper[4896]: E0126 15:55:41.617640 4896 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 15:55:41 crc kubenswrapper[4896]: E0126 15:55:41.617702 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f493b2ea-1515-42db-ac1c-ea1a7121e070-metrics-certs podName:f493b2ea-1515-42db-ac1c-ea1a7121e070 nodeName:}" failed. No retries permitted until 2026-01-26 15:55:42.617692192 +0000 UTC m=+1300.399572585 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f493b2ea-1515-42db-ac1c-ea1a7121e070-metrics-certs") pod "openstack-operator-controller-manager-7d6b58b596-csnd6" (UID: "f493b2ea-1515-42db-ac1c-ea1a7121e070") : secret "metrics-server-cert" not found Jan 26 15:55:41 crc kubenswrapper[4896]: I0126 15:55:41.879818 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-lvm6z" Jan 26 15:55:41 crc kubenswrapper[4896]: I0126 15:55:41.902180 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-kvnzb" Jan 26 15:55:42 crc kubenswrapper[4896]: I0126 15:55:42.101082 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-s2bwr" Jan 26 15:55:42 crc kubenswrapper[4896]: I0126 15:55:42.134551 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mjzqx" Jan 26 15:55:42 crc kubenswrapper[4896]: I0126 15:55:42.154789 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-9vwsl" Jan 26 15:55:42 crc kubenswrapper[4896]: I0126 15:55:42.165364 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4sl4s" Jan 26 15:55:42 crc kubenswrapper[4896]: I0126 15:55:42.201863 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6434b0ee-4d33-4422-a662-3315b2f5499c-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd\" (UID: \"6434b0ee-4d33-4422-a662-3315b2f5499c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd" Jan 26 15:55:42 crc kubenswrapper[4896]: E0126 15:55:42.202074 4896 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 15:55:42 crc kubenswrapper[4896]: E0126 15:55:42.202135 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6434b0ee-4d33-4422-a662-3315b2f5499c-cert podName:6434b0ee-4d33-4422-a662-3315b2f5499c nodeName:}" failed. 
No retries permitted until 2026-01-26 15:55:44.202116405 +0000 UTC m=+1301.983996798 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6434b0ee-4d33-4422-a662-3315b2f5499c-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd" (UID: "6434b0ee-4d33-4422-a662-3315b2f5499c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 15:55:42 crc kubenswrapper[4896]: I0126 15:55:42.207372 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5fd4748d4d-2sttl" Jan 26 15:55:42 crc kubenswrapper[4896]: I0126 15:55:42.424087 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-p82px" Jan 26 15:55:42 crc kubenswrapper[4896]: I0126 15:55:42.517274 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-6wv5s"] Jan 26 15:55:42 crc kubenswrapper[4896]: I0126 15:55:42.753324 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f493b2ea-1515-42db-ac1c-ea1a7121e070-metrics-certs\") pod \"openstack-operator-controller-manager-7d6b58b596-csnd6\" (UID: \"f493b2ea-1515-42db-ac1c-ea1a7121e070\") " pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-csnd6" Jan 26 15:55:42 crc kubenswrapper[4896]: I0126 15:55:42.753450 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f493b2ea-1515-42db-ac1c-ea1a7121e070-webhook-certs\") pod \"openstack-operator-controller-manager-7d6b58b596-csnd6\" (UID: \"f493b2ea-1515-42db-ac1c-ea1a7121e070\") " pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-csnd6" Jan 26 15:55:42 crc kubenswrapper[4896]: E0126 
15:55:42.753630 4896 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 15:55:42 crc kubenswrapper[4896]: E0126 15:55:42.753691 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f493b2ea-1515-42db-ac1c-ea1a7121e070-metrics-certs podName:f493b2ea-1515-42db-ac1c-ea1a7121e070 nodeName:}" failed. No retries permitted until 2026-01-26 15:55:44.75367424 +0000 UTC m=+1302.535554623 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f493b2ea-1515-42db-ac1c-ea1a7121e070-metrics-certs") pod "openstack-operator-controller-manager-7d6b58b596-csnd6" (UID: "f493b2ea-1515-42db-ac1c-ea1a7121e070") : secret "metrics-server-cert" not found Jan 26 15:55:42 crc kubenswrapper[4896]: E0126 15:55:42.753750 4896 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 15:55:42 crc kubenswrapper[4896]: E0126 15:55:42.753801 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f493b2ea-1515-42db-ac1c-ea1a7121e070-webhook-certs podName:f493b2ea-1515-42db-ac1c-ea1a7121e070 nodeName:}" failed. No retries permitted until 2026-01-26 15:55:44.753785892 +0000 UTC m=+1302.535666285 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f493b2ea-1515-42db-ac1c-ea1a7121e070-webhook-certs") pod "openstack-operator-controller-manager-7d6b58b596-csnd6" (UID: "f493b2ea-1515-42db-ac1c-ea1a7121e070") : secret "webhook-server-cert" not found Jan 26 15:55:42 crc kubenswrapper[4896]: I0126 15:55:42.968224 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-4v8sm" Jan 26 15:55:43 crc kubenswrapper[4896]: I0126 15:55:43.343816 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7pgcx" Jan 26 15:55:43 crc kubenswrapper[4896]: I0126 15:55:43.366859 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/03cf04a4-606b-44b9-9aee-86e4b0a8a1eb-cert\") pod \"infra-operator-controller-manager-694cf4f878-49kq4\" (UID: \"03cf04a4-606b-44b9-9aee-86e4b0a8a1eb\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-49kq4" Jan 26 15:55:43 crc kubenswrapper[4896]: E0126 15:55:43.367037 4896 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 15:55:43 crc kubenswrapper[4896]: E0126 15:55:43.367105 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03cf04a4-606b-44b9-9aee-86e4b0a8a1eb-cert podName:03cf04a4-606b-44b9-9aee-86e4b0a8a1eb nodeName:}" failed. No retries permitted until 2026-01-26 15:55:47.367087591 +0000 UTC m=+1305.148967984 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/03cf04a4-606b-44b9-9aee-86e4b0a8a1eb-cert") pod "infra-operator-controller-manager-694cf4f878-49kq4" (UID: "03cf04a4-606b-44b9-9aee-86e4b0a8a1eb") : secret "infra-operator-webhook-server-cert" not found Jan 26 15:55:43 crc kubenswrapper[4896]: I0126 15:55:43.753281 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6wv5s" event={"ID":"16c521f5-6f5f-43e3-a670-9f6ab6312d9c","Type":"ContainerStarted","Data":"e9a742bdf6e7d96e400de3746bb6e29416792599e92cb8c26d8b5df60e564ab7"} Jan 26 15:55:43 crc kubenswrapper[4896]: I0126 15:55:43.903274 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7t46g"] Jan 26 15:55:44 crc kubenswrapper[4896]: I0126 15:55:44.289858 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6434b0ee-4d33-4422-a662-3315b2f5499c-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd\" (UID: \"6434b0ee-4d33-4422-a662-3315b2f5499c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd" Jan 26 15:55:44 crc kubenswrapper[4896]: E0126 15:55:44.290145 4896 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 15:55:44 crc kubenswrapper[4896]: E0126 15:55:44.290191 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6434b0ee-4d33-4422-a662-3315b2f5499c-cert podName:6434b0ee-4d33-4422-a662-3315b2f5499c nodeName:}" failed. No retries permitted until 2026-01-26 15:55:48.290176379 +0000 UTC m=+1306.072056772 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6434b0ee-4d33-4422-a662-3315b2f5499c-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd" (UID: "6434b0ee-4d33-4422-a662-3315b2f5499c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 15:55:44 crc kubenswrapper[4896]: I0126 15:55:44.383178 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-jx95g"] Jan 26 15:55:44 crc kubenswrapper[4896]: I0126 15:55:44.817379 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f493b2ea-1515-42db-ac1c-ea1a7121e070-metrics-certs\") pod \"openstack-operator-controller-manager-7d6b58b596-csnd6\" (UID: \"f493b2ea-1515-42db-ac1c-ea1a7121e070\") " pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-csnd6" Jan 26 15:55:44 crc kubenswrapper[4896]: I0126 15:55:44.817490 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f493b2ea-1515-42db-ac1c-ea1a7121e070-webhook-certs\") pod \"openstack-operator-controller-manager-7d6b58b596-csnd6\" (UID: \"f493b2ea-1515-42db-ac1c-ea1a7121e070\") " pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-csnd6" Jan 26 15:55:44 crc kubenswrapper[4896]: E0126 15:55:44.818074 4896 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 15:55:44 crc kubenswrapper[4896]: E0126 15:55:44.818133 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f493b2ea-1515-42db-ac1c-ea1a7121e070-webhook-certs podName:f493b2ea-1515-42db-ac1c-ea1a7121e070 nodeName:}" failed. No retries permitted until 2026-01-26 15:55:48.818116696 +0000 UTC m=+1306.599997089 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f493b2ea-1515-42db-ac1c-ea1a7121e070-webhook-certs") pod "openstack-operator-controller-manager-7d6b58b596-csnd6" (UID: "f493b2ea-1515-42db-ac1c-ea1a7121e070") : secret "webhook-server-cert" not found Jan 26 15:55:44 crc kubenswrapper[4896]: E0126 15:55:44.818407 4896 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 15:55:44 crc kubenswrapper[4896]: E0126 15:55:44.818489 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f493b2ea-1515-42db-ac1c-ea1a7121e070-metrics-certs podName:f493b2ea-1515-42db-ac1c-ea1a7121e070 nodeName:}" failed. No retries permitted until 2026-01-26 15:55:48.818464214 +0000 UTC m=+1306.600344677 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f493b2ea-1515-42db-ac1c-ea1a7121e070-metrics-certs") pod "openstack-operator-controller-manager-7d6b58b596-csnd6" (UID: "f493b2ea-1515-42db-ac1c-ea1a7121e070") : secret "metrics-server-cert" not found Jan 26 15:55:45 crc kubenswrapper[4896]: I0126 15:55:45.049449 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7t46g" event={"ID":"8c799412-6936-4161-8d4e-244bc94c0d69","Type":"ContainerStarted","Data":"1c4ef24912c8579ccb24fdb3cb766a7a4db687f9535b332eb0f9ad959215ab21"} Jan 26 15:55:45 crc kubenswrapper[4896]: I0126 15:55:45.357128 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-rp5b4"] Jan 26 15:55:45 crc kubenswrapper[4896]: I0126 15:55:45.403988 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-z7j4w"] Jan 26 15:55:45 crc kubenswrapper[4896]: I0126 15:55:45.673902 4896 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-j92tx"] Jan 26 15:55:45 crc kubenswrapper[4896]: I0126 15:55:45.681171 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-fz5qh"] Jan 26 15:55:45 crc kubenswrapper[4896]: W0126 15:55:45.682729 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6fe08af_0b15_4be3_8473_6a983d21ebe3.slice/crio-b924f49dc6329fc55203de2d03b0641b36d4148ea83f8d16767c60ce9bbaf6ff WatchSource:0}: Error finding container b924f49dc6329fc55203de2d03b0641b36d4148ea83f8d16767c60ce9bbaf6ff: Status 404 returned error can't find the container with id b924f49dc6329fc55203de2d03b0641b36d4148ea83f8d16767c60ce9bbaf6ff Jan 26 15:55:45 crc kubenswrapper[4896]: W0126 15:55:45.683957 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ac5298a_429c_47d6_9436_34bd2bd1fdec.slice/crio-532ac26a75cd12381e47909cac7c644f4e44ae3db59a716fe0946e396ef6043d WatchSource:0}: Error finding container 532ac26a75cd12381e47909cac7c644f4e44ae3db59a716fe0946e396ef6043d: Status 404 returned error can't find the container with id 532ac26a75cd12381e47909cac7c644f4e44ae3db59a716fe0946e396ef6043d Jan 26 15:55:45 crc kubenswrapper[4896]: I0126 15:55:45.780477 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-fz5qh" event={"ID":"8ac5298a-429c-47d6-9436-34bd2bd1fdec","Type":"ContainerStarted","Data":"532ac26a75cd12381e47909cac7c644f4e44ae3db59a716fe0946e396ef6043d"} Jan 26 15:55:45 crc kubenswrapper[4896]: I0126 15:55:45.782347 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-rp5b4" 
event={"ID":"c44d6ef8-c52f-4a31-8a33-1ee01d7e969a","Type":"ContainerStarted","Data":"f164d33fef66bdef9d6133179ffaf13b9843f62132483f1b1c1b856bb1980928"} Jan 26 15:55:45 crc kubenswrapper[4896]: I0126 15:55:45.784505 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-z7j4w" event={"ID":"b3272d78-4dde-4997-9316-24a84c00f4c8","Type":"ContainerStarted","Data":"54aafd5b354d4bdc8c6a6efc0d128fe610384ee93beaff5fb47f064f13aa6a57"} Jan 26 15:55:45 crc kubenswrapper[4896]: I0126 15:55:45.787343 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-j92tx" event={"ID":"f6fe08af-0b15-4be3-8473-6a983d21ebe3","Type":"ContainerStarted","Data":"b924f49dc6329fc55203de2d03b0641b36d4148ea83f8d16767c60ce9bbaf6ff"} Jan 26 15:55:45 crc kubenswrapper[4896]: I0126 15:55:45.791953 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-jx95g" event={"ID":"b0480b36-40e2-426c-a1a8-e02e79fe7a17","Type":"ContainerStarted","Data":"18692d4cd03f05e31f0010017c2ce506ed25ca53dbad8715690321f1ba7dd150"} Jan 26 15:55:45 crc kubenswrapper[4896]: I0126 15:55:45.852026 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-948px"] Jan 26 15:55:45 crc kubenswrapper[4896]: W0126 15:55:45.864143 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c532b54_34b3_4b51_bbd3_1e3bd39d5958.slice/crio-e14de8d53873d690d81f06ae41dcecaa8d8079beeada158153e92252713160a2 WatchSource:0}: Error finding container e14de8d53873d690d81f06ae41dcecaa8d8079beeada158153e92252713160a2: Status 404 returned error can't find the container with id e14de8d53873d690d81f06ae41dcecaa8d8079beeada158153e92252713160a2 Jan 26 15:55:45 crc kubenswrapper[4896]: W0126 15:55:45.873233 4896 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6a0fae6_65fb_46f8_9b0a_2cbae0665e6d.slice/crio-8ee82512e9bab7f1f00ba2da6cdcddd6a13c3a6df389c98dfea4cb77e991f17c WatchSource:0}: Error finding container 8ee82512e9bab7f1f00ba2da6cdcddd6a13c3a6df389c98dfea4cb77e991f17c: Status 404 returned error can't find the container with id 8ee82512e9bab7f1f00ba2da6cdcddd6a13c3a6df389c98dfea4cb77e991f17c Jan 26 15:55:45 crc kubenswrapper[4896]: I0126 15:55:45.875198 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-cwcgv"] Jan 26 15:55:45 crc kubenswrapper[4896]: W0126 15:55:45.876310 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc8a478d_ccdf_4d2b_b27f_58fde92fd7d4.slice/crio-cc7bfa5a06c6fe1b59ba7c90e6a937c8846aa7c18a77aeacbf543f1ebb6e6378 WatchSource:0}: Error finding container cc7bfa5a06c6fe1b59ba7c90e6a937c8846aa7c18a77aeacbf543f1ebb6e6378: Status 404 returned error can't find the container with id cc7bfa5a06c6fe1b59ba7c90e6a937c8846aa7c18a77aeacbf543f1ebb6e6378 Jan 26 15:55:45 crc kubenswrapper[4896]: I0126 15:55:45.886482 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-lz2hg"] Jan 26 15:55:46 crc kubenswrapper[4896]: I0126 15:55:46.316693 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-9vwsl"] Jan 26 15:55:46 crc kubenswrapper[4896]: I0126 15:55:46.331761 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-kvnzb"] Jan 26 15:55:46 crc kubenswrapper[4896]: I0126 15:55:46.352507 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-mjzqx"] Jan 
26 15:55:46 crc kubenswrapper[4896]: I0126 15:55:46.371413 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-p82px"]
Jan 26 15:55:46 crc kubenswrapper[4896]: I0126 15:55:46.398391 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-4sl4s"]
Jan 26 15:55:46 crc kubenswrapper[4896]: I0126 15:55:46.406444 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-lvm6z"]
Jan 26 15:55:46 crc kubenswrapper[4896]: I0126 15:55:46.415145 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-s2bwr"]
Jan 26 15:55:46 crc kubenswrapper[4896]: I0126 15:55:46.422167 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5fd4748d4d-2sttl"]
Jan 26 15:55:46 crc kubenswrapper[4896]: I0126 15:55:46.430632 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7pgcx"]
Jan 26 15:55:46 crc kubenswrapper[4896]: I0126 15:55:46.438627 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-4v8sm"]
Jan 26 15:55:46 crc kubenswrapper[4896]: W0126 15:55:46.488315 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61be8fa4_3ad2_4745_88ab_850db16c5707.slice/crio-1f63ecb14dafbb9d6ee843907e5fb348e77cbd01fffceeaa86d6438b93c9335b WatchSource:0}: Error finding container 1f63ecb14dafbb9d6ee843907e5fb348e77cbd01fffceeaa86d6438b93c9335b: Status 404 returned error can't find the container with id 1f63ecb14dafbb9d6ee843907e5fb348e77cbd01fffceeaa86d6438b93c9335b
Jan 26 15:55:46 crc kubenswrapper[4896]: E0126 15:55:46.515441 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-689s5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-7pgcx_openstack-operators(bc769396-13b5-4066-b7fc-93a3f87a50ff): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 26 15:55:46 crc kubenswrapper[4896]: E0126 15:55:46.515770 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d9css,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-4v8sm_openstack-operators(b8f08a13-e22d-4147-91c2-07c51dbfb83d): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 26 15:55:46 crc kubenswrapper[4896]: E0126 15:55:46.516749 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7pgcx" podUID="bc769396-13b5-4066-b7fc-93a3f87a50ff"
Jan 26 15:55:46 crc kubenswrapper[4896]: E0126 15:55:46.517397 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-4v8sm" podUID="b8f08a13-e22d-4147-91c2-07c51dbfb83d"
Jan 26 15:55:46 crc kubenswrapper[4896]: I0126 15:55:46.804942 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-cwcgv" event={"ID":"a6a0fae6-65fb-46f8-9b0a-2cbae0665e6d","Type":"ContainerStarted","Data":"8ee82512e9bab7f1f00ba2da6cdcddd6a13c3a6df389c98dfea4cb77e991f17c"}
Jan 26 15:55:46 crc kubenswrapper[4896]: I0126 15:55:46.812923 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7pgcx" event={"ID":"bc769396-13b5-4066-b7fc-93a3f87a50ff","Type":"ContainerStarted","Data":"e24d6ba9023c32d35240b05492b10b2c26622b968abc531af19d64b6bffd9f98"}
Jan 26 15:55:46 crc kubenswrapper[4896]: I0126 15:55:46.814998 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-kvnzb" event={"ID":"8e0a37ed-b8af-49ae-9c6c-ed7097f46f3b","Type":"ContainerStarted","Data":"e72df9c739ea863360d8d45496e9e31772459412253ee239ef801dd4371b3507"}
Jan 26 15:55:46 crc kubenswrapper[4896]: E0126 15:55:46.820903 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7pgcx" podUID="bc769396-13b5-4066-b7fc-93a3f87a50ff"
Jan 26 15:55:46 crc kubenswrapper[4896]: I0126 15:55:46.876307 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-4v8sm" event={"ID":"b8f08a13-e22d-4147-91c2-07c51dbfb83d","Type":"ContainerStarted","Data":"f5c2f40456105235f0439e113b5b5b339bb091a8b84127edc9fd0024a2294ef4"}
Jan 26 15:55:46 crc kubenswrapper[4896]: E0126 15:55:46.921890 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-4v8sm" podUID="b8f08a13-e22d-4147-91c2-07c51dbfb83d"
Jan 26 15:55:46 crc kubenswrapper[4896]: I0126 15:55:46.947504 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-948px" event={"ID":"1c532b54-34b3-4b51-bbd3-1e3bd39d5958","Type":"ContainerStarted","Data":"e14de8d53873d690d81f06ae41dcecaa8d8079beeada158153e92252713160a2"}
Jan 26 15:55:46 crc kubenswrapper[4896]: I0126 15:55:46.968374 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5fd4748d4d-2sttl" event={"ID":"2496a24c-43ae-4ce4-8996-60c6e7282bfa","Type":"ContainerStarted","Data":"dc02e1888155065d40d35e16a0fd3ff403ce6268744ac8f0ea823a30f3f65122"}
Jan 26 15:55:46 crc kubenswrapper[4896]: I0126 15:55:46.979527 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-lvm6z" event={"ID":"3eac11e1-3f7e-467c-b7f7-038d29e23848","Type":"ContainerStarted","Data":"ab5beb90da1842a909c9701d4f2680e84b1f9db4b62a5bfdc54e8b7b7434d00f"}
Jan 26 15:55:46 crc kubenswrapper[4896]: I0126 15:55:46.995832 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-s2bwr" event={"ID":"61be8fa4-3ad2-4745-88ab-850db16c5707","Type":"ContainerStarted","Data":"1f63ecb14dafbb9d6ee843907e5fb348e77cbd01fffceeaa86d6438b93c9335b"}
Jan 26 15:55:46 crc kubenswrapper[4896]: I0126 15:55:46.998061 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-9vwsl" event={"ID":"7a813859-31b7-4729-865e-46c6ff663209","Type":"ContainerStarted","Data":"b03bc68d92e8cf3fe1fcf42e4eaa93868a7c96ceccd3ea783121e92b96b6399b"}
Jan 26 15:55:47 crc kubenswrapper[4896]: I0126 15:55:47.008783 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mjzqx" event={"ID":"29afb8bf-1d53-45a3-b67c-a1ebc26aa4ab","Type":"ContainerStarted","Data":"537439ae0e39e9da8efe588ea5cd90e13bd24283612f6de5330c3fea5a86d9a2"}
Jan 26 15:55:47 crc kubenswrapper[4896]: I0126 15:55:47.013623 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lz2hg" event={"ID":"fc8a478d-ccdf-4d2b-b27f-58fde92fd7d4","Type":"ContainerStarted","Data":"cc7bfa5a06c6fe1b59ba7c90e6a937c8846aa7c18a77aeacbf543f1ebb6e6378"}
Jan 26 15:55:47 crc kubenswrapper[4896]: I0126 15:55:47.015024 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-p82px" event={"ID":"f2c6d7a1-690c-4364-a2ea-25e955a38782","Type":"ContainerStarted","Data":"daf0e15aa3fa9c13563c13dadc4201e9b87527eb3b6da2d41b4dfab0fb2d685b"}
Jan 26 15:55:47 crc kubenswrapper[4896]: I0126 15:55:47.019466 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4sl4s" event={"ID":"1bf7b7e2-7b44-4a9d-aa3d-31ed21b66dc3","Type":"ContainerStarted","Data":"8748e24e278bb6c4f9c73c20404d61b6f1e4362a7fed825941fdbe61f3bbcae6"}
Jan 26 15:55:47 crc kubenswrapper[4896]: I0126 15:55:47.411011 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/03cf04a4-606b-44b9-9aee-86e4b0a8a1eb-cert\") pod \"infra-operator-controller-manager-694cf4f878-49kq4\" (UID: \"03cf04a4-606b-44b9-9aee-86e4b0a8a1eb\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-49kq4"
Jan 26 15:55:47 crc kubenswrapper[4896]: E0126 15:55:47.411292 4896 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 26 15:55:47 crc kubenswrapper[4896]: E0126 15:55:47.411372 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03cf04a4-606b-44b9-9aee-86e4b0a8a1eb-cert podName:03cf04a4-606b-44b9-9aee-86e4b0a8a1eb nodeName:}" failed. No retries permitted until 2026-01-26 15:55:55.41134345 +0000 UTC m=+1313.193223853 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/03cf04a4-606b-44b9-9aee-86e4b0a8a1eb-cert") pod "infra-operator-controller-manager-694cf4f878-49kq4" (UID: "03cf04a4-606b-44b9-9aee-86e4b0a8a1eb") : secret "infra-operator-webhook-server-cert" not found
Jan 26 15:55:48 crc kubenswrapper[4896]: E0126 15:55:48.058510 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7pgcx" podUID="bc769396-13b5-4066-b7fc-93a3f87a50ff"
Jan 26 15:55:48 crc kubenswrapper[4896]: E0126 15:55:48.058515 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-4v8sm" podUID="b8f08a13-e22d-4147-91c2-07c51dbfb83d"
Jan 26 15:55:48 crc kubenswrapper[4896]: I0126 15:55:48.291206 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6434b0ee-4d33-4422-a662-3315b2f5499c-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd\" (UID: \"6434b0ee-4d33-4422-a662-3315b2f5499c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd"
Jan 26 15:55:48 crc kubenswrapper[4896]: E0126 15:55:48.291540 4896 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 26 15:55:48 crc kubenswrapper[4896]: E0126 15:55:48.291703 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6434b0ee-4d33-4422-a662-3315b2f5499c-cert podName:6434b0ee-4d33-4422-a662-3315b2f5499c nodeName:}" failed. No retries permitted until 2026-01-26 15:55:56.291645746 +0000 UTC m=+1314.073526139 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6434b0ee-4d33-4422-a662-3315b2f5499c-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd" (UID: "6434b0ee-4d33-4422-a662-3315b2f5499c") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 26 15:55:48 crc kubenswrapper[4896]: I0126 15:55:48.858706 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f493b2ea-1515-42db-ac1c-ea1a7121e070-metrics-certs\") pod \"openstack-operator-controller-manager-7d6b58b596-csnd6\" (UID: \"f493b2ea-1515-42db-ac1c-ea1a7121e070\") " pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-csnd6"
Jan 26 15:55:48 crc kubenswrapper[4896]: I0126 15:55:48.858815 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f493b2ea-1515-42db-ac1c-ea1a7121e070-webhook-certs\") pod \"openstack-operator-controller-manager-7d6b58b596-csnd6\" (UID: \"f493b2ea-1515-42db-ac1c-ea1a7121e070\") " pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-csnd6"
Jan 26 15:55:48 crc kubenswrapper[4896]: E0126 15:55:48.859046 4896 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 26 15:55:48 crc kubenswrapper[4896]: E0126 15:55:48.859113 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f493b2ea-1515-42db-ac1c-ea1a7121e070-webhook-certs podName:f493b2ea-1515-42db-ac1c-ea1a7121e070 nodeName:}" failed. No retries permitted until 2026-01-26 15:55:56.859095936 +0000 UTC m=+1314.640976329 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f493b2ea-1515-42db-ac1c-ea1a7121e070-webhook-certs") pod "openstack-operator-controller-manager-7d6b58b596-csnd6" (UID: "f493b2ea-1515-42db-ac1c-ea1a7121e070") : secret "webhook-server-cert" not found
Jan 26 15:55:48 crc kubenswrapper[4896]: E0126 15:55:48.859682 4896 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 26 15:55:48 crc kubenswrapper[4896]: E0126 15:55:48.859726 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f493b2ea-1515-42db-ac1c-ea1a7121e070-metrics-certs podName:f493b2ea-1515-42db-ac1c-ea1a7121e070 nodeName:}" failed. No retries permitted until 2026-01-26 15:55:56.859718312 +0000 UTC m=+1314.641598705 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f493b2ea-1515-42db-ac1c-ea1a7121e070-metrics-certs") pod "openstack-operator-controller-manager-7d6b58b596-csnd6" (UID: "f493b2ea-1515-42db-ac1c-ea1a7121e070") : secret "metrics-server-cert" not found
Jan 26 15:55:55 crc kubenswrapper[4896]: I0126 15:55:55.450228 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/03cf04a4-606b-44b9-9aee-86e4b0a8a1eb-cert\") pod \"infra-operator-controller-manager-694cf4f878-49kq4\" (UID: \"03cf04a4-606b-44b9-9aee-86e4b0a8a1eb\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-49kq4"
Jan 26 15:55:55 crc kubenswrapper[4896]: I0126 15:55:55.459596 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/03cf04a4-606b-44b9-9aee-86e4b0a8a1eb-cert\") pod \"infra-operator-controller-manager-694cf4f878-49kq4\" (UID: \"03cf04a4-606b-44b9-9aee-86e4b0a8a1eb\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-49kq4"
Jan 26 15:55:55 crc kubenswrapper[4896]: I0126 15:55:55.527257 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-49kq4"
Jan 26 15:55:56 crc kubenswrapper[4896]: I0126 15:55:56.305185 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6434b0ee-4d33-4422-a662-3315b2f5499c-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd\" (UID: \"6434b0ee-4d33-4422-a662-3315b2f5499c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd"
Jan 26 15:55:56 crc kubenswrapper[4896]: I0126 15:55:56.310617 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6434b0ee-4d33-4422-a662-3315b2f5499c-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd\" (UID: \"6434b0ee-4d33-4422-a662-3315b2f5499c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd"
Jan 26 15:55:56 crc kubenswrapper[4896]: I0126 15:55:56.331671 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd"
Jan 26 15:55:56 crc kubenswrapper[4896]: I0126 15:55:56.920561 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f493b2ea-1515-42db-ac1c-ea1a7121e070-webhook-certs\") pod \"openstack-operator-controller-manager-7d6b58b596-csnd6\" (UID: \"f493b2ea-1515-42db-ac1c-ea1a7121e070\") " pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-csnd6"
Jan 26 15:55:56 crc kubenswrapper[4896]: E0126 15:55:56.920843 4896 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 26 15:55:56 crc kubenswrapper[4896]: I0126 15:55:56.921079 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f493b2ea-1515-42db-ac1c-ea1a7121e070-metrics-certs\") pod \"openstack-operator-controller-manager-7d6b58b596-csnd6\" (UID: \"f493b2ea-1515-42db-ac1c-ea1a7121e070\") " pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-csnd6"
Jan 26 15:55:56 crc kubenswrapper[4896]: E0126 15:55:56.921171 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f493b2ea-1515-42db-ac1c-ea1a7121e070-webhook-certs podName:f493b2ea-1515-42db-ac1c-ea1a7121e070 nodeName:}" failed. No retries permitted until 2026-01-26 15:56:12.921143353 +0000 UTC m=+1330.703023746 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f493b2ea-1515-42db-ac1c-ea1a7121e070-webhook-certs") pod "openstack-operator-controller-manager-7d6b58b596-csnd6" (UID: "f493b2ea-1515-42db-ac1c-ea1a7121e070") : secret "webhook-server-cert" not found
Jan 26 15:55:56 crc kubenswrapper[4896]: I0126 15:55:56.926197 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f493b2ea-1515-42db-ac1c-ea1a7121e070-metrics-certs\") pod \"openstack-operator-controller-manager-7d6b58b596-csnd6\" (UID: \"f493b2ea-1515-42db-ac1c-ea1a7121e070\") " pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-csnd6"
Jan 26 15:56:00 crc kubenswrapper[4896]: E0126 15:56:00.593288 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822"
Jan 26 15:56:00 crc kubenswrapper[4896]: E0126 15:56:00.594088 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p5vgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-77d5c5b54f-jx95g_openstack-operators(b0480b36-40e2-426c-a1a8-e02e79fe7a17): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 15:56:00 crc kubenswrapper[4896]: E0126 15:56:00.595346 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-jx95g" podUID="b0480b36-40e2-426c-a1a8-e02e79fe7a17"
Jan 26 15:56:01 crc kubenswrapper[4896]: E0126 15:56:01.519973 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-jx95g" podUID="b0480b36-40e2-426c-a1a8-e02e79fe7a17"
Jan 26 15:56:02 crc kubenswrapper[4896]: E0126 15:56:02.258060 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84"
Jan 26 15:56:02 crc kubenswrapper[4896]: E0126 15:56:02.258659 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7g97p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6b9fb5fdcb-fz5qh_openstack-operators(8ac5298a-429c-47d6-9436-34bd2bd1fdec): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 15:56:02 crc kubenswrapper[4896]: E0126 15:56:02.259868 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-fz5qh" podUID="8ac5298a-429c-47d6-9436-34bd2bd1fdec"
Jan 26 15:56:02 crc kubenswrapper[4896]: E0126 15:56:02.519866 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-fz5qh" podUID="8ac5298a-429c-47d6-9436-34bd2bd1fdec"
Jan 26 15:56:07 crc kubenswrapper[4896]: E0126 15:56:07.574203 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349"
Jan 26 15:56:07 crc kubenswrapper[4896]: E0126 15:56:07.575165 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4p286,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-lz2hg_openstack-operators(fc8a478d-ccdf-4d2b-b27f-58fde92fd7d4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 15:56:07 crc kubenswrapper[4896]: E0126 15:56:07.576407 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lz2hg" podUID="fc8a478d-ccdf-4d2b-b27f-58fde92fd7d4"
Jan 26 15:56:08 crc kubenswrapper[4896]: E0126 15:56:08.122538 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lz2hg" podUID="fc8a478d-ccdf-4d2b-b27f-58fde92fd7d4"
Jan 26 15:56:08 crc kubenswrapper[4896]: E0126 15:56:08.720950 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:c94116e32fb9af850accd9d7ae46765559eef3fbe2ba75472c1c1ac91b2c33fd"
Jan 26 15:56:08 crc kubenswrapper[4896]: E0126 15:56:08.721910 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:c94116e32fb9af850accd9d7ae46765559eef3fbe2ba75472c1c1ac91b2c33fd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wm6jc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7f86f8796f-rp5b4_openstack-operators(c44d6ef8-c52f-4a31-8a33-1ee01d7e969a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 15:56:08 crc kubenswrapper[4896]: E0126 15:56:08.723088 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-rp5b4" podUID="c44d6ef8-c52f-4a31-8a33-1ee01d7e969a"
Jan 26 15:56:09 crc kubenswrapper[4896]: E0126 15:56:09.133763 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:c94116e32fb9af850accd9d7ae46765559eef3fbe2ba75472c1c1ac91b2c33fd\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-rp5b4" podUID="c44d6ef8-c52f-4a31-8a33-1ee01d7e969a"
Jan 26 15:56:09 crc kubenswrapper[4896]: E0126 15:56:09.528201 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337"
Jan 26 15:56:09 crc kubenswrapper[4896]: E0126 15:56:09.528475 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kcrmk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-78fdd796fd-j92tx_openstack-operators(f6fe08af-0b15-4be3-8473-6a983d21ebe3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 15:56:09 crc kubenswrapper[4896]: E0126 15:56:09.529839 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\""
pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-j92tx" podUID="f6fe08af-0b15-4be3-8473-6a983d21ebe3" Jan 26 15:56:10 crc kubenswrapper[4896]: E0126 15:56:10.144249 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337\\\"\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-j92tx" podUID="f6fe08af-0b15-4be3-8473-6a983d21ebe3" Jan 26 15:56:10 crc kubenswrapper[4896]: E0126 15:56:10.727710 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492" Jan 26 15:56:10 crc kubenswrapper[4896]: E0126 15:56:10.727897 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wt2fz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-594c8c9d5d-z7j4w_openstack-operators(b3272d78-4dde-4997-9316-24a84c00f4c8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:56:10 crc kubenswrapper[4896]: E0126 15:56:10.729330 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-z7j4w" podUID="b3272d78-4dde-4997-9316-24a84c00f4c8" Jan 26 15:56:11 crc kubenswrapper[4896]: E0126 15:56:11.155044 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492\\\"\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-z7j4w" podUID="b3272d78-4dde-4997-9316-24a84c00f4c8" Jan 26 15:56:11 crc kubenswrapper[4896]: E0126 15:56:11.620055 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e" Jan 26 15:56:11 crc kubenswrapper[4896]: E0126 15:56:11.620426 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rv86z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-598f7747c9-948px_openstack-operators(1c532b54-34b3-4b51-bbd3-1e3bd39d5958): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:56:11 crc kubenswrapper[4896]: E0126 15:56:11.621629 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-948px" podUID="1c532b54-34b3-4b51-bbd3-1e3bd39d5958" Jan 26 15:56:12 crc kubenswrapper[4896]: E0126 15:56:12.169688 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-948px" podUID="1c532b54-34b3-4b51-bbd3-1e3bd39d5958" Jan 26 15:56:12 crc kubenswrapper[4896]: I0126 15:56:12.939558 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f493b2ea-1515-42db-ac1c-ea1a7121e070-webhook-certs\") pod \"openstack-operator-controller-manager-7d6b58b596-csnd6\" (UID: \"f493b2ea-1515-42db-ac1c-ea1a7121e070\") " pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-csnd6" Jan 26 15:56:12 crc kubenswrapper[4896]: I0126 15:56:12.952519 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f493b2ea-1515-42db-ac1c-ea1a7121e070-webhook-certs\") pod \"openstack-operator-controller-manager-7d6b58b596-csnd6\" (UID: \"f493b2ea-1515-42db-ac1c-ea1a7121e070\") " pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-csnd6" Jan 26 15:56:13 crc kubenswrapper[4896]: I0126 15:56:13.130111 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-w2h54" Jan 26 15:56:13 crc kubenswrapper[4896]: I0126 15:56:13.137813 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-csnd6" Jan 26 15:56:22 crc kubenswrapper[4896]: E0126 15:56:22.115944 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8" Jan 26 15:56:22 crc kubenswrapper[4896]: E0126 15:56:22.117447 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c9r7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-78c6999f6f-cwcgv_openstack-operators(a6a0fae6-65fb-46f8-9b0a-2cbae0665e6d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:56:22 crc kubenswrapper[4896]: E0126 15:56:22.118768 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-cwcgv" podUID="a6a0fae6-65fb-46f8-9b0a-2cbae0665e6d" Jan 26 15:56:22 crc kubenswrapper[4896]: E0126 15:56:22.253639 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-cwcgv" podUID="a6a0fae6-65fb-46f8-9b0a-2cbae0665e6d" Jan 26 15:56:23 crc kubenswrapper[4896]: E0126 15:56:23.619609 4896 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d" Jan 26 15:56:23 crc kubenswrapper[4896]: E0126 15:56:23.620531 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-92jsz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-p82px_openstack-operators(f2c6d7a1-690c-4364-a2ea-25e955a38782): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:56:23 crc kubenswrapper[4896]: E0126 15:56:23.622725 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-p82px" podUID="f2c6d7a1-690c-4364-a2ea-25e955a38782" Jan 26 15:56:24 crc kubenswrapper[4896]: E0126 15:56:24.155641 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e" Jan 26 15:56:24 crc kubenswrapper[4896]: E0126 15:56:24.155874 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fw2t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-78d58447c5-lvm6z_openstack-operators(3eac11e1-3f7e-467c-b7f7-038d29e23848): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:56:24 crc kubenswrapper[4896]: E0126 15:56:24.157100 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-lvm6z" podUID="3eac11e1-3f7e-467c-b7f7-038d29e23848" Jan 26 15:56:24 crc kubenswrapper[4896]: E0126 15:56:24.270947 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-p82px" podUID="f2c6d7a1-690c-4364-a2ea-25e955a38782" Jan 26 15:56:24 crc kubenswrapper[4896]: E0126 15:56:24.273153 4896 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-lvm6z" podUID="3eac11e1-3f7e-467c-b7f7-038d29e23848" Jan 26 15:56:24 crc kubenswrapper[4896]: E0126 15:56:24.866283 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327" Jan 26 15:56:24 crc kubenswrapper[4896]: E0126 15:56:24.866434 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gn9fm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-6f75f45d54-mjzqx_openstack-operators(29afb8bf-1d53-45a3-b67c-a1ebc26aa4ab): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:56:24 crc kubenswrapper[4896]: E0126 15:56:24.867849 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mjzqx" podUID="29afb8bf-1d53-45a3-b67c-a1ebc26aa4ab" Jan 26 15:56:25 crc kubenswrapper[4896]: E0126 15:56:25.283724 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mjzqx" podUID="29afb8bf-1d53-45a3-b67c-a1ebc26aa4ab" Jan 26 15:56:25 crc kubenswrapper[4896]: E0126 15:56:25.858787 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd" Jan 26 15:56:25 crc kubenswrapper[4896]: E0126 15:56:25.859022 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tjffd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5f4cd88d46-s2bwr_openstack-operators(61be8fa4-3ad2-4745-88ab-850db16c5707): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:56:25 crc kubenswrapper[4896]: E0126 15:56:25.860338 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-s2bwr" podUID="61be8fa4-3ad2-4745-88ab-850db16c5707" Jan 26 15:56:26 crc kubenswrapper[4896]: E0126 15:56:26.287655 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-s2bwr" podUID="61be8fa4-3ad2-4745-88ab-850db16c5707" Jan 26 15:56:26 crc kubenswrapper[4896]: E0126 15:56:26.453975 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922" Jan 26 15:56:26 crc kubenswrapper[4896]: E0126 15:56:26.454181 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r2mr5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-4sl4s_openstack-operators(1bf7b7e2-7b44-4a9d-aa3d-31ed21b66dc3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:56:26 crc kubenswrapper[4896]: E0126 15:56:26.455504 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4sl4s" podUID="1bf7b7e2-7b44-4a9d-aa3d-31ed21b66dc3" Jan 26 15:56:27 crc kubenswrapper[4896]: E0126 15:56:27.023326 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d" Jan 26 15:56:27 crc kubenswrapper[4896]: E0126 15:56:27.023600 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4hs6x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-79d5ccc684-9vwsl_openstack-operators(7a813859-31b7-4729-865e-46c6ff663209): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:56:27 crc kubenswrapper[4896]: E0126 15:56:27.024891 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-9vwsl" podUID="7a813859-31b7-4729-865e-46c6ff663209" Jan 26 15:56:27 crc kubenswrapper[4896]: E0126 15:56:27.296294 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-9vwsl" podUID="7a813859-31b7-4729-865e-46c6ff663209" Jan 26 15:56:27 crc kubenswrapper[4896]: E0126 15:56:27.296291 4896 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4sl4s" podUID="1bf7b7e2-7b44-4a9d-aa3d-31ed21b66dc3" Jan 26 15:56:32 crc kubenswrapper[4896]: E0126 15:56:32.927669 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658" Jan 26 15:56:32 crc kubenswrapper[4896]: E0126 15:56:32.928407 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4hdkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-7bdb645866-kvnzb_openstack-operators(8e0a37ed-b8af-49ae-9c6c-ed7097f46f3b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:56:32 crc kubenswrapper[4896]: E0126 15:56:32.929736 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-kvnzb" podUID="8e0a37ed-b8af-49ae-9c6c-ed7097f46f3b" Jan 26 15:56:33 crc kubenswrapper[4896]: E0126 15:56:33.344126 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-kvnzb" podUID="8e0a37ed-b8af-49ae-9c6c-ed7097f46f3b" Jan 26 15:56:33 crc kubenswrapper[4896]: E0126 15:56:33.465770 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b" Jan 26 15:56:33 crc kubenswrapper[4896]: E0126 15:56:33.466010 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d9css,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-4v8sm_openstack-operators(b8f08a13-e22d-4147-91c2-07c51dbfb83d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:56:33 crc kubenswrapper[4896]: E0126 15:56:33.467414 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-4v8sm" podUID="b8f08a13-e22d-4147-91c2-07c51dbfb83d" Jan 26 15:56:33 crc kubenswrapper[4896]: E0126 15:56:33.558979 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="38.102.83.47:5001/openstack-k8s-operators/telemetry-operator:a5bcf05e2d71c610156d017fdf197f7c58570d79" Jan 26 15:56:33 crc kubenswrapper[4896]: E0126 15:56:33.559113 4896 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.47:5001/openstack-k8s-operators/telemetry-operator:a5bcf05e2d71c610156d017fdf197f7c58570d79" Jan 26 15:56:33 crc kubenswrapper[4896]: E0126 15:56:33.559481 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.47:5001/openstack-k8s-operators/telemetry-operator:a5bcf05e2d71c610156d017fdf197f7c58570d79,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jzs4j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5fd4748d4d-2sttl_openstack-operators(2496a24c-43ae-4ce4-8996-60c6e7282bfa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:56:33 crc kubenswrapper[4896]: E0126 15:56:33.560764 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-5fd4748d4d-2sttl" podUID="2496a24c-43ae-4ce4-8996-60c6e7282bfa" Jan 26 15:56:34 crc kubenswrapper[4896]: E0126 15:56:34.079240 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 26 15:56:34 crc kubenswrapper[4896]: E0126 15:56:34.079439 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-689s5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-7pgcx_openstack-operators(bc769396-13b5-4066-b7fc-93a3f87a50ff): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:56:34 crc kubenswrapper[4896]: E0126 15:56:34.081772 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7pgcx" podUID="bc769396-13b5-4066-b7fc-93a3f87a50ff" Jan 26 15:56:34 crc kubenswrapper[4896]: E0126 15:56:34.386877 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.47:5001/openstack-k8s-operators/telemetry-operator:a5bcf05e2d71c610156d017fdf197f7c58570d79\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5fd4748d4d-2sttl" podUID="2496a24c-43ae-4ce4-8996-60c6e7282bfa" Jan 26 15:56:34 crc kubenswrapper[4896]: I0126 15:56:34.749298 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-49kq4"] Jan 26 15:56:34 crc kubenswrapper[4896]: W0126 15:56:34.762428 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf493b2ea_1515_42db_ac1c_ea1a7121e070.slice/crio-bd8fe84aa8724da5bb813c56b21b569b30c6a57ef552e7be208715e13734aab1 WatchSource:0}: Error finding container bd8fe84aa8724da5bb813c56b21b569b30c6a57ef552e7be208715e13734aab1: Status 404 returned error can't find the container with id bd8fe84aa8724da5bb813c56b21b569b30c6a57ef552e7be208715e13734aab1 Jan 26 15:56:34 crc kubenswrapper[4896]: I0126 15:56:34.786778 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7d6b58b596-csnd6"] Jan 26 15:56:34 crc kubenswrapper[4896]: I0126 15:56:34.793539 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd"] Jan 26 15:56:35 crc kubenswrapper[4896]: I0126 15:56:35.384947 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-z7j4w" event={"ID":"b3272d78-4dde-4997-9316-24a84c00f4c8","Type":"ContainerStarted","Data":"dacfd3b17398799ba9edec041b97ea0be8a9c5ab0ba472748ad9e611611d8d88"} Jan 26 15:56:35 crc kubenswrapper[4896]: I0126 15:56:35.385268 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-z7j4w" Jan 26 15:56:35 crc kubenswrapper[4896]: I0126 15:56:35.388524 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-j92tx" event={"ID":"f6fe08af-0b15-4be3-8473-6a983d21ebe3","Type":"ContainerStarted","Data":"1b1d342276e5aa96cf6fd1af2f7b1cf53b80b2ef361428f5e91f7389ab22cfc3"} Jan 26 15:56:35 crc kubenswrapper[4896]: I0126 15:56:35.388752 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-j92tx" Jan 26 15:56:35 crc kubenswrapper[4896]: I0126 15:56:35.389999 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-948px" event={"ID":"1c532b54-34b3-4b51-bbd3-1e3bd39d5958","Type":"ContainerStarted","Data":"a7b5be41a1bb25ffba3482c2e43a2fbc8cd5f93dea0e449d4f0ed628cc41ad82"} Jan 26 15:56:35 crc kubenswrapper[4896]: I0126 15:56:35.390983 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-948px" Jan 26 15:56:35 crc kubenswrapper[4896]: I0126 15:56:35.395366 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-jx95g" 
event={"ID":"b0480b36-40e2-426c-a1a8-e02e79fe7a17","Type":"ContainerStarted","Data":"de1e5ab00beed06142d5df17917b05443f7dc5c87ac411df23cb7ba060cb661f"} Jan 26 15:56:35 crc kubenswrapper[4896]: I0126 15:56:35.395752 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-jx95g" Jan 26 15:56:35 crc kubenswrapper[4896]: I0126 15:56:35.400810 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-csnd6" event={"ID":"f493b2ea-1515-42db-ac1c-ea1a7121e070","Type":"ContainerStarted","Data":"bd8fe84aa8724da5bb813c56b21b569b30c6a57ef552e7be208715e13734aab1"} Jan 26 15:56:35 crc kubenswrapper[4896]: I0126 15:56:35.403005 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-fz5qh" event={"ID":"8ac5298a-429c-47d6-9436-34bd2bd1fdec","Type":"ContainerStarted","Data":"aaf4524b192c2c7f97c950a83e6da0105eb584fb0dd4b09efd3367288b059a3e"} Jan 26 15:56:35 crc kubenswrapper[4896]: I0126 15:56:35.404027 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-fz5qh" Jan 26 15:56:35 crc kubenswrapper[4896]: I0126 15:56:35.406662 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7t46g" event={"ID":"8c799412-6936-4161-8d4e-244bc94c0d69","Type":"ContainerStarted","Data":"bb3f921ecaa845f65a69ffef86f443f5582c49d483d548cf73282ee0d3ac44e5"} Jan 26 15:56:35 crc kubenswrapper[4896]: I0126 15:56:35.406857 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7t46g" Jan 26 15:56:35 crc kubenswrapper[4896]: I0126 15:56:35.415011 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lz2hg" event={"ID":"fc8a478d-ccdf-4d2b-b27f-58fde92fd7d4","Type":"ContainerStarted","Data":"4800a8dee2868500e5b05f89f5f5159650a8d469986769677ad432e142380527"} Jan 26 15:56:35 crc kubenswrapper[4896]: I0126 15:56:35.415238 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lz2hg" Jan 26 15:56:35 crc kubenswrapper[4896]: I0126 15:56:35.417529 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd" event={"ID":"6434b0ee-4d33-4422-a662-3315b2f5499c","Type":"ContainerStarted","Data":"c500c375875e263358e4055ab0b1f7355569d67d5cbdcfc1092ce436acf18820"} Jan 26 15:56:35 crc kubenswrapper[4896]: I0126 15:56:35.421038 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6wv5s" event={"ID":"16c521f5-6f5f-43e3-a670-9f6ab6312d9c","Type":"ContainerStarted","Data":"4d8f3c8f56321d87a9f879f68195efb8365037e4355738a9a76f5b1e76f20912"} Jan 26 15:56:35 crc kubenswrapper[4896]: I0126 15:56:35.421754 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6wv5s" Jan 26 15:56:35 crc kubenswrapper[4896]: I0126 15:56:35.423145 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-49kq4" event={"ID":"03cf04a4-606b-44b9-9aee-86e4b0a8a1eb","Type":"ContainerStarted","Data":"11f06753dd67215455bee9d242b0e0fa19bb9a8882a02b1b0438cb7186963c81"} Jan 26 15:56:35 crc kubenswrapper[4896]: I0126 15:56:35.428130 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-rp5b4" 
event={"ID":"c44d6ef8-c52f-4a31-8a33-1ee01d7e969a","Type":"ContainerStarted","Data":"b42e14c220cbcf028eca47b182a6af9adee6161d71e69217862ca39600afa7b6"} Jan 26 15:56:35 crc kubenswrapper[4896]: I0126 15:56:35.429175 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-rp5b4" Jan 26 15:56:35 crc kubenswrapper[4896]: I0126 15:56:35.465649 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-z7j4w" podStartSLOduration=7.735786796 podStartE2EDuration="56.465624857s" podCreationTimestamp="2026-01-26 15:55:39 +0000 UTC" firstStartedPulling="2026-01-26 15:55:45.424010434 +0000 UTC m=+1303.205890827" lastFinishedPulling="2026-01-26 15:56:34.153848495 +0000 UTC m=+1351.935728888" observedRunningTime="2026-01-26 15:56:35.460954923 +0000 UTC m=+1353.242835316" watchObservedRunningTime="2026-01-26 15:56:35.465624857 +0000 UTC m=+1353.247505250" Jan 26 15:56:35 crc kubenswrapper[4896]: I0126 15:56:35.736965 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-fz5qh" podStartSLOduration=8.315254078 podStartE2EDuration="56.7369461s" podCreationTimestamp="2026-01-26 15:55:39 +0000 UTC" firstStartedPulling="2026-01-26 15:55:45.68646246 +0000 UTC m=+1303.468342853" lastFinishedPulling="2026-01-26 15:56:34.108154482 +0000 UTC m=+1351.890034875" observedRunningTime="2026-01-26 15:56:35.731255761 +0000 UTC m=+1353.513136164" watchObservedRunningTime="2026-01-26 15:56:35.7369461 +0000 UTC m=+1353.518826493" Jan 26 15:56:35 crc kubenswrapper[4896]: I0126 15:56:35.812370 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-948px" podStartSLOduration=8.550929643 podStartE2EDuration="56.812343138s" podCreationTimestamp="2026-01-26 
15:55:39 +0000 UTC" firstStartedPulling="2026-01-26 15:55:45.867770129 +0000 UTC m=+1303.649650522" lastFinishedPulling="2026-01-26 15:56:34.129183624 +0000 UTC m=+1351.911064017" observedRunningTime="2026-01-26 15:56:35.803378619 +0000 UTC m=+1353.585259012" watchObservedRunningTime="2026-01-26 15:56:35.812343138 +0000 UTC m=+1353.594223531" Jan 26 15:56:35 crc kubenswrapper[4896]: I0126 15:56:35.959828 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-j92tx" podStartSLOduration=8.511648506 podStartE2EDuration="56.959811372s" podCreationTimestamp="2026-01-26 15:55:39 +0000 UTC" firstStartedPulling="2026-01-26 15:55:45.686168863 +0000 UTC m=+1303.468049256" lastFinishedPulling="2026-01-26 15:56:34.134331739 +0000 UTC m=+1351.916212122" observedRunningTime="2026-01-26 15:56:35.917829738 +0000 UTC m=+1353.699710151" watchObservedRunningTime="2026-01-26 15:56:35.959811372 +0000 UTC m=+1353.741691765" Jan 26 15:56:35 crc kubenswrapper[4896]: I0126 15:56:35.961367 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lz2hg" podStartSLOduration=8.733117432 podStartE2EDuration="56.961357069s" podCreationTimestamp="2026-01-26 15:55:39 +0000 UTC" firstStartedPulling="2026-01-26 15:55:45.878620473 +0000 UTC m=+1303.660500866" lastFinishedPulling="2026-01-26 15:56:34.10686011 +0000 UTC m=+1351.888740503" observedRunningTime="2026-01-26 15:56:35.959412202 +0000 UTC m=+1353.741292595" watchObservedRunningTime="2026-01-26 15:56:35.961357069 +0000 UTC m=+1353.743237462" Jan 26 15:56:36 crc kubenswrapper[4896]: I0126 15:56:36.027264 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6wv5s" podStartSLOduration=15.405257542 podStartE2EDuration="57.027249735s" podCreationTimestamp="2026-01-26 15:55:39 +0000 
UTC" firstStartedPulling="2026-01-26 15:55:42.542898492 +0000 UTC m=+1300.324778885" lastFinishedPulling="2026-01-26 15:56:24.164890685 +0000 UTC m=+1341.946771078" observedRunningTime="2026-01-26 15:56:35.991757871 +0000 UTC m=+1353.773638264" watchObservedRunningTime="2026-01-26 15:56:36.027249735 +0000 UTC m=+1353.809130128" Jan 26 15:56:36 crc kubenswrapper[4896]: I0126 15:56:36.100655 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-rp5b4" podStartSLOduration=8.417195933 podStartE2EDuration="57.100629094s" podCreationTimestamp="2026-01-26 15:55:39 +0000 UTC" firstStartedPulling="2026-01-26 15:55:45.424757791 +0000 UTC m=+1303.206638184" lastFinishedPulling="2026-01-26 15:56:34.108190952 +0000 UTC m=+1351.890071345" observedRunningTime="2026-01-26 15:56:36.098534323 +0000 UTC m=+1353.880414716" watchObservedRunningTime="2026-01-26 15:56:36.100629094 +0000 UTC m=+1353.882509487" Jan 26 15:56:36 crc kubenswrapper[4896]: I0126 15:56:36.161161 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-jx95g" podStartSLOduration=8.206855997 podStartE2EDuration="57.161098178s" podCreationTimestamp="2026-01-26 15:55:39 +0000 UTC" firstStartedPulling="2026-01-26 15:55:45.154810252 +0000 UTC m=+1302.936690645" lastFinishedPulling="2026-01-26 15:56:34.109052433 +0000 UTC m=+1351.890932826" observedRunningTime="2026-01-26 15:56:36.034182634 +0000 UTC m=+1353.816063027" watchObservedRunningTime="2026-01-26 15:56:36.161098178 +0000 UTC m=+1353.942978571" Jan 26 15:56:36 crc kubenswrapper[4896]: I0126 15:56:36.191051 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7t46g" podStartSLOduration=16.537433287 podStartE2EDuration="57.191031307s" podCreationTimestamp="2026-01-26 15:55:39 +0000 UTC" 
firstStartedPulling="2026-01-26 15:55:44.207441872 +0000 UTC m=+1301.989322265" lastFinishedPulling="2026-01-26 15:56:24.861039892 +0000 UTC m=+1342.642920285" observedRunningTime="2026-01-26 15:56:36.137985814 +0000 UTC m=+1353.919866207" watchObservedRunningTime="2026-01-26 15:56:36.191031307 +0000 UTC m=+1353.972911700" Jan 26 15:56:36 crc kubenswrapper[4896]: I0126 15:56:36.636866 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-csnd6" event={"ID":"f493b2ea-1515-42db-ac1c-ea1a7121e070","Type":"ContainerStarted","Data":"111634478bf1516e33a6ae93ff14a4c5bf7f6cdffd17c3a3d2b9361aa72e738c"} Jan 26 15:56:36 crc kubenswrapper[4896]: I0126 15:56:36.643538 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-csnd6" Jan 26 15:56:36 crc kubenswrapper[4896]: I0126 15:56:36.735830 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-csnd6" podStartSLOduration=56.735807085 podStartE2EDuration="56.735807085s" podCreationTimestamp="2026-01-26 15:55:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:56:36.726239251 +0000 UTC m=+1354.508119664" watchObservedRunningTime="2026-01-26 15:56:36.735807085 +0000 UTC m=+1354.517687478" Jan 26 15:56:37 crc kubenswrapper[4896]: I0126 15:56:37.667930 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-lvm6z" event={"ID":"3eac11e1-3f7e-467c-b7f7-038d29e23848","Type":"ContainerStarted","Data":"5f4b22237a47ecb49da645b69fd0e372c5e23bd0876406a125ae21521409e3a4"} Jan 26 15:56:37 crc kubenswrapper[4896]: I0126 15:56:37.668680 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-lvm6z" Jan 26 15:56:37 crc kubenswrapper[4896]: I0126 15:56:37.689319 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-p82px" event={"ID":"f2c6d7a1-690c-4364-a2ea-25e955a38782","Type":"ContainerStarted","Data":"63909c4d3d73c7f2ede557fc27491a2dfc1c3dfacce922419ae1f06858aa8e9e"} Jan 26 15:56:37 crc kubenswrapper[4896]: I0126 15:56:37.689619 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-p82px" Jan 26 15:56:37 crc kubenswrapper[4896]: I0126 15:56:37.707538 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-lvm6z" podStartSLOduration=8.504429138999999 podStartE2EDuration="58.707516788s" podCreationTimestamp="2026-01-26 15:55:39 +0000 UTC" firstStartedPulling="2026-01-26 15:55:46.482705177 +0000 UTC m=+1304.264585570" lastFinishedPulling="2026-01-26 15:56:36.685792826 +0000 UTC m=+1354.467673219" observedRunningTime="2026-01-26 15:56:37.701697186 +0000 UTC m=+1355.483577599" watchObservedRunningTime="2026-01-26 15:56:37.707516788 +0000 UTC m=+1355.489397181" Jan 26 15:56:37 crc kubenswrapper[4896]: I0126 15:56:37.825131 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-p82px" podStartSLOduration=8.636084298 podStartE2EDuration="58.825104444s" podCreationTimestamp="2026-01-26 15:55:39 +0000 UTC" firstStartedPulling="2026-01-26 15:55:46.504941819 +0000 UTC m=+1304.286822212" lastFinishedPulling="2026-01-26 15:56:36.693961965 +0000 UTC m=+1354.475842358" observedRunningTime="2026-01-26 15:56:37.81300297 +0000 UTC m=+1355.594883363" watchObservedRunningTime="2026-01-26 15:56:37.825104444 +0000 UTC m=+1355.606984827" Jan 26 15:56:38 crc kubenswrapper[4896]: 
I0126 15:56:38.744807 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-cwcgv" event={"ID":"a6a0fae6-65fb-46f8-9b0a-2cbae0665e6d","Type":"ContainerStarted","Data":"fa3bc10e1988ac75a7a59f040338a30ad791a6f071a826002a248307d997b577"} Jan 26 15:56:38 crc kubenswrapper[4896]: I0126 15:56:38.747332 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-cwcgv" Jan 26 15:56:39 crc kubenswrapper[4896]: I0126 15:56:39.435392 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7t46g" Jan 26 15:56:39 crc kubenswrapper[4896]: I0126 15:56:39.463510 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6wv5s" Jan 26 15:56:39 crc kubenswrapper[4896]: I0126 15:56:39.469799 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-rp5b4" Jan 26 15:56:39 crc kubenswrapper[4896]: I0126 15:56:39.478298 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-cwcgv" podStartSLOduration=8.867408736 podStartE2EDuration="1m0.478267307s" podCreationTimestamp="2026-01-26 15:55:39 +0000 UTC" firstStartedPulling="2026-01-26 15:55:45.877740762 +0000 UTC m=+1303.659621155" lastFinishedPulling="2026-01-26 15:56:37.488599333 +0000 UTC m=+1355.270479726" observedRunningTime="2026-01-26 15:56:38.782803666 +0000 UTC m=+1356.564684069" watchObservedRunningTime="2026-01-26 15:56:39.478267307 +0000 UTC m=+1357.260147700" Jan 26 15:56:39 crc kubenswrapper[4896]: I0126 15:56:39.810911 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-j92tx" Jan 26 15:56:39 crc kubenswrapper[4896]: I0126 15:56:39.922631 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-jx95g" Jan 26 15:56:39 crc kubenswrapper[4896]: I0126 15:56:39.928600 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-z7j4w" Jan 26 15:56:41 crc kubenswrapper[4896]: I0126 15:56:41.030420 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-948px" Jan 26 15:56:41 crc kubenswrapper[4896]: I0126 15:56:41.287802 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lz2hg" Jan 26 15:56:41 crc kubenswrapper[4896]: I0126 15:56:41.288811 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-fz5qh" Jan 26 15:56:42 crc kubenswrapper[4896]: I0126 15:56:42.430175 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-p82px" Jan 26 15:56:43 crc kubenswrapper[4896]: I0126 15:56:43.148556 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-csnd6" Jan 26 15:56:44 crc kubenswrapper[4896]: I0126 15:56:44.418364 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4sl4s" event={"ID":"1bf7b7e2-7b44-4a9d-aa3d-31ed21b66dc3","Type":"ContainerStarted","Data":"d1e62bf9e0f7d3d5cafd06266bc1eb3d1517e7b7d392cca2193e670a6089731e"} Jan 26 15:56:44 crc kubenswrapper[4896]: I0126 15:56:44.419878 4896 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4sl4s" Jan 26 15:56:44 crc kubenswrapper[4896]: I0126 15:56:44.440703 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-49kq4" event={"ID":"03cf04a4-606b-44b9-9aee-86e4b0a8a1eb","Type":"ContainerStarted","Data":"5bc3841ed3803ee092aa9c08877c399ba91bb990b1b695d56d4fc8c54493fcff"} Jan 26 15:56:44 crc kubenswrapper[4896]: I0126 15:56:44.440814 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-49kq4" Jan 26 15:56:44 crc kubenswrapper[4896]: I0126 15:56:44.453570 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4sl4s" podStartSLOduration=8.164108334 podStartE2EDuration="1m5.453544578s" podCreationTimestamp="2026-01-26 15:55:39 +0000 UTC" firstStartedPulling="2026-01-26 15:55:46.502077289 +0000 UTC m=+1304.283957682" lastFinishedPulling="2026-01-26 15:56:43.791513533 +0000 UTC m=+1361.573393926" observedRunningTime="2026-01-26 15:56:44.444880887 +0000 UTC m=+1362.226761280" watchObservedRunningTime="2026-01-26 15:56:44.453544578 +0000 UTC m=+1362.235424971" Jan 26 15:56:44 crc kubenswrapper[4896]: I0126 15:56:44.468027 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mjzqx" event={"ID":"29afb8bf-1d53-45a3-b67c-a1ebc26aa4ab","Type":"ContainerStarted","Data":"02879ab7463d4c4e7de784ca55105138c7ac1121d4d67280e21e350269ad3124"} Jan 26 15:56:44 crc kubenswrapper[4896]: I0126 15:56:44.468323 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mjzqx" Jan 26 15:56:44 crc kubenswrapper[4896]: I0126 15:56:44.477063 4896 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd" event={"ID":"6434b0ee-4d33-4422-a662-3315b2f5499c","Type":"ContainerStarted","Data":"19980e70101d0ab9dbc27d61b71f140d01421676f372b3341300a59254b680d0"} Jan 26 15:56:44 crc kubenswrapper[4896]: I0126 15:56:44.477435 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd" Jan 26 15:56:44 crc kubenswrapper[4896]: I0126 15:56:44.494541 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-49kq4" podStartSLOduration=56.493022695 podStartE2EDuration="1m5.494519947s" podCreationTimestamp="2026-01-26 15:55:39 +0000 UTC" firstStartedPulling="2026-01-26 15:56:34.789137269 +0000 UTC m=+1352.571017662" lastFinishedPulling="2026-01-26 15:56:43.790634521 +0000 UTC m=+1361.572514914" observedRunningTime="2026-01-26 15:56:44.47536799 +0000 UTC m=+1362.257248383" watchObservedRunningTime="2026-01-26 15:56:44.494519947 +0000 UTC m=+1362.276400340" Jan 26 15:56:44 crc kubenswrapper[4896]: I0126 15:56:44.549176 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mjzqx" podStartSLOduration=8.128288142 podStartE2EDuration="1m5.549153859s" podCreationTimestamp="2026-01-26 15:55:39 +0000 UTC" firstStartedPulling="2026-01-26 15:55:46.397735326 +0000 UTC m=+1304.179615719" lastFinishedPulling="2026-01-26 15:56:43.818601043 +0000 UTC m=+1361.600481436" observedRunningTime="2026-01-26 15:56:44.540880627 +0000 UTC m=+1362.322761020" watchObservedRunningTime="2026-01-26 15:56:44.549153859 +0000 UTC m=+1362.331034252" Jan 26 15:56:44 crc kubenswrapper[4896]: I0126 15:56:44.607191 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd" podStartSLOduration=56.655759951 podStartE2EDuration="1m5.607167223s" podCreationTimestamp="2026-01-26 15:55:39 +0000 UTC" firstStartedPulling="2026-01-26 15:56:34.835692623 +0000 UTC m=+1352.617573016" lastFinishedPulling="2026-01-26 15:56:43.787099895 +0000 UTC m=+1361.568980288" observedRunningTime="2026-01-26 15:56:44.598922911 +0000 UTC m=+1362.380803314" watchObservedRunningTime="2026-01-26 15:56:44.607167223 +0000 UTC m=+1362.389047616" Jan 26 15:56:44 crc kubenswrapper[4896]: E0126 15:56:44.760807 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-4v8sm" podUID="b8f08a13-e22d-4147-91c2-07c51dbfb83d" Jan 26 15:56:45 crc kubenswrapper[4896]: I0126 15:56:45.490942 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-s2bwr" event={"ID":"61be8fa4-3ad2-4745-88ab-850db16c5707","Type":"ContainerStarted","Data":"935e806b0d1615e24522078301b91d1977b737d88e10d6089143548c5b960276"} Jan 26 15:56:45 crc kubenswrapper[4896]: I0126 15:56:45.491553 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-s2bwr" Jan 26 15:56:45 crc kubenswrapper[4896]: I0126 15:56:45.492976 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-9vwsl" event={"ID":"7a813859-31b7-4729-865e-46c6ff663209","Type":"ContainerStarted","Data":"a08abfca1e048585b788ee12f2560a027cd7be2ef8a39d9177ec40f80d860455"} Jan 26 15:56:45 crc kubenswrapper[4896]: I0126 15:56:45.536312 4896 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-s2bwr" podStartSLOduration=9.248090574 podStartE2EDuration="1m6.536287738s" podCreationTimestamp="2026-01-26 15:55:39 +0000 UTC" firstStartedPulling="2026-01-26 15:55:46.501223348 +0000 UTC m=+1304.283103741" lastFinishedPulling="2026-01-26 15:56:43.789420512 +0000 UTC m=+1361.571300905" observedRunningTime="2026-01-26 15:56:45.523011404 +0000 UTC m=+1363.304891797" watchObservedRunningTime="2026-01-26 15:56:45.536287738 +0000 UTC m=+1363.318168131" Jan 26 15:56:45 crc kubenswrapper[4896]: I0126 15:56:45.551363 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-9vwsl" podStartSLOduration=9.1429059 podStartE2EDuration="1m6.551341935s" podCreationTimestamp="2026-01-26 15:55:39 +0000 UTC" firstStartedPulling="2026-01-26 15:55:46.380217039 +0000 UTC m=+1304.162097432" lastFinishedPulling="2026-01-26 15:56:43.788653074 +0000 UTC m=+1361.570533467" observedRunningTime="2026-01-26 15:56:45.543475463 +0000 UTC m=+1363.325355856" watchObservedRunningTime="2026-01-26 15:56:45.551341935 +0000 UTC m=+1363.333222328" Jan 26 15:56:47 crc kubenswrapper[4896]: I0126 15:56:47.512355 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5fd4748d4d-2sttl" event={"ID":"2496a24c-43ae-4ce4-8996-60c6e7282bfa","Type":"ContainerStarted","Data":"ddd4e68d3f61942cdb100a3606518ece797493317bc103b7afddaee236179707"} Jan 26 15:56:47 crc kubenswrapper[4896]: I0126 15:56:47.512957 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5fd4748d4d-2sttl" Jan 26 15:56:47 crc kubenswrapper[4896]: I0126 15:56:47.514197 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/nova-operator-controller-manager-7bdb645866-kvnzb" event={"ID":"8e0a37ed-b8af-49ae-9c6c-ed7097f46f3b","Type":"ContainerStarted","Data":"e747ef259672c696c03d07757ac863f265d1858dea318930960f5d45872c81c7"} Jan 26 15:56:47 crc kubenswrapper[4896]: I0126 15:56:47.514458 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-kvnzb" Jan 26 15:56:47 crc kubenswrapper[4896]: I0126 15:56:47.534219 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5fd4748d4d-2sttl" podStartSLOduration=8.517785274 podStartE2EDuration="1m8.534202623s" podCreationTimestamp="2026-01-26 15:55:39 +0000 UTC" firstStartedPulling="2026-01-26 15:55:46.501222638 +0000 UTC m=+1304.283103031" lastFinishedPulling="2026-01-26 15:56:46.517639987 +0000 UTC m=+1364.299520380" observedRunningTime="2026-01-26 15:56:47.530304338 +0000 UTC m=+1365.312184761" watchObservedRunningTime="2026-01-26 15:56:47.534202623 +0000 UTC m=+1365.316083016" Jan 26 15:56:47 crc kubenswrapper[4896]: I0126 15:56:47.555512 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-kvnzb" podStartSLOduration=8.151432726 podStartE2EDuration="1m8.555485722s" podCreationTimestamp="2026-01-26 15:55:39 +0000 UTC" firstStartedPulling="2026-01-26 15:55:46.38272482 +0000 UTC m=+1304.164605213" lastFinishedPulling="2026-01-26 15:56:46.786777816 +0000 UTC m=+1364.568658209" observedRunningTime="2026-01-26 15:56:47.548678446 +0000 UTC m=+1365.330558839" watchObservedRunningTime="2026-01-26 15:56:47.555485722 +0000 UTC m=+1365.337366115" Jan 26 15:56:47 crc kubenswrapper[4896]: E0126 15:56:47.762749 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7pgcx" podUID="bc769396-13b5-4066-b7fc-93a3f87a50ff" Jan 26 15:56:51 crc kubenswrapper[4896]: I0126 15:56:51.294884 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-cwcgv" Jan 26 15:56:51 crc kubenswrapper[4896]: I0126 15:56:51.883512 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-lvm6z" Jan 26 15:56:52 crc kubenswrapper[4896]: I0126 15:56:52.106477 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-s2bwr" Jan 26 15:56:52 crc kubenswrapper[4896]: I0126 15:56:52.138060 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mjzqx" Jan 26 15:56:52 crc kubenswrapper[4896]: I0126 15:56:52.156331 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-9vwsl" Jan 26 15:56:52 crc kubenswrapper[4896]: I0126 15:56:52.158390 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-9vwsl" Jan 26 15:56:52 crc kubenswrapper[4896]: I0126 15:56:52.168842 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4sl4s" Jan 26 15:56:52 crc kubenswrapper[4896]: I0126 15:56:52.396330 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5fd4748d4d-2sttl" Jan 26 
15:56:55 crc kubenswrapper[4896]: I0126 15:56:55.536380 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-49kq4" Jan 26 15:56:56 crc kubenswrapper[4896]: I0126 15:56:56.339033 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd" Jan 26 15:56:57 crc kubenswrapper[4896]: I0126 15:56:57.744322 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-4v8sm" event={"ID":"b8f08a13-e22d-4147-91c2-07c51dbfb83d","Type":"ContainerStarted","Data":"9ea0acb6629981ce1dd85e1c5104067ac167bd81bcdd52e47d4ddff1c26b5601"} Jan 26 15:56:57 crc kubenswrapper[4896]: I0126 15:56:57.745312 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-4v8sm" Jan 26 15:56:57 crc kubenswrapper[4896]: I0126 15:56:57.775220 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-4v8sm" podStartSLOduration=7.04165907 podStartE2EDuration="1m17.775197936s" podCreationTimestamp="2026-01-26 15:55:40 +0000 UTC" firstStartedPulling="2026-01-26 15:55:46.51565853 +0000 UTC m=+1304.297538923" lastFinishedPulling="2026-01-26 15:56:57.249197396 +0000 UTC m=+1375.031077789" observedRunningTime="2026-01-26 15:56:57.767085668 +0000 UTC m=+1375.548966061" watchObservedRunningTime="2026-01-26 15:56:57.775197936 +0000 UTC m=+1375.557078329" Jan 26 15:57:01 crc kubenswrapper[4896]: I0126 15:57:01.907348 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-kvnzb" Jan 26 15:57:02 crc kubenswrapper[4896]: I0126 15:57:02.971962 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-4v8sm" Jan 26 15:57:04 crc kubenswrapper[4896]: I0126 15:57:04.821353 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7pgcx" event={"ID":"bc769396-13b5-4066-b7fc-93a3f87a50ff","Type":"ContainerStarted","Data":"159f7fc7a1f1c3ad5e00f288de4260d048d4b849aa2351b9ca11cad0dae92873"} Jan 26 15:57:04 crc kubenswrapper[4896]: I0126 15:57:04.841509 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7pgcx" podStartSLOduration=7.754284819 podStartE2EDuration="1m24.841490223s" podCreationTimestamp="2026-01-26 15:55:40 +0000 UTC" firstStartedPulling="2026-01-26 15:55:46.515298971 +0000 UTC m=+1304.297179364" lastFinishedPulling="2026-01-26 15:57:03.602504375 +0000 UTC m=+1381.384384768" observedRunningTime="2026-01-26 15:57:04.840645542 +0000 UTC m=+1382.622525945" watchObservedRunningTime="2026-01-26 15:57:04.841490223 +0000 UTC m=+1382.623370616" Jan 26 15:57:13 crc kubenswrapper[4896]: I0126 15:57:13.006153 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s5kql"] Jan 26 15:57:13 crc kubenswrapper[4896]: I0126 15:57:13.009265 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s5kql" Jan 26 15:57:13 crc kubenswrapper[4896]: I0126 15:57:13.022381 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s5kql"] Jan 26 15:57:13 crc kubenswrapper[4896]: I0126 15:57:13.207684 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5-utilities\") pod \"redhat-operators-s5kql\" (UID: \"7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5\") " pod="openshift-marketplace/redhat-operators-s5kql" Jan 26 15:57:13 crc kubenswrapper[4896]: I0126 15:57:13.207823 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xjhg\" (UniqueName: \"kubernetes.io/projected/7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5-kube-api-access-9xjhg\") pod \"redhat-operators-s5kql\" (UID: \"7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5\") " pod="openshift-marketplace/redhat-operators-s5kql" Jan 26 15:57:13 crc kubenswrapper[4896]: I0126 15:57:13.207972 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5-catalog-content\") pod \"redhat-operators-s5kql\" (UID: \"7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5\") " pod="openshift-marketplace/redhat-operators-s5kql" Jan 26 15:57:13 crc kubenswrapper[4896]: I0126 15:57:13.310022 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xjhg\" (UniqueName: \"kubernetes.io/projected/7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5-kube-api-access-9xjhg\") pod \"redhat-operators-s5kql\" (UID: \"7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5\") " pod="openshift-marketplace/redhat-operators-s5kql" Jan 26 15:57:13 crc kubenswrapper[4896]: I0126 15:57:13.310114 4896 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5-catalog-content\") pod \"redhat-operators-s5kql\" (UID: \"7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5\") " pod="openshift-marketplace/redhat-operators-s5kql" Jan 26 15:57:13 crc kubenswrapper[4896]: I0126 15:57:13.310251 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5-utilities\") pod \"redhat-operators-s5kql\" (UID: \"7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5\") " pod="openshift-marketplace/redhat-operators-s5kql" Jan 26 15:57:13 crc kubenswrapper[4896]: I0126 15:57:13.310913 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5-utilities\") pod \"redhat-operators-s5kql\" (UID: \"7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5\") " pod="openshift-marketplace/redhat-operators-s5kql" Jan 26 15:57:13 crc kubenswrapper[4896]: I0126 15:57:13.310991 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5-catalog-content\") pod \"redhat-operators-s5kql\" (UID: \"7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5\") " pod="openshift-marketplace/redhat-operators-s5kql" Jan 26 15:57:13 crc kubenswrapper[4896]: I0126 15:57:13.345940 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xjhg\" (UniqueName: \"kubernetes.io/projected/7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5-kube-api-access-9xjhg\") pod \"redhat-operators-s5kql\" (UID: \"7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5\") " pod="openshift-marketplace/redhat-operators-s5kql" Jan 26 15:57:13 crc kubenswrapper[4896]: I0126 15:57:13.634065 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s5kql" Jan 26 15:57:14 crc kubenswrapper[4896]: I0126 15:57:14.257542 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s5kql"] Jan 26 15:57:14 crc kubenswrapper[4896]: I0126 15:57:14.337644 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5kql" event={"ID":"7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5","Type":"ContainerStarted","Data":"f928caace627a2c582d3b11f00c8cc273575159db8df3883c26d665963af6afe"} Jan 26 15:57:15 crc kubenswrapper[4896]: I0126 15:57:15.347456 4896 generic.go:334] "Generic (PLEG): container finished" podID="7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5" containerID="977ce17de12fe539502bd776d098502b2331f7abf786a87b9b738b6e6b19cabb" exitCode=0 Jan 26 15:57:15 crc kubenswrapper[4896]: I0126 15:57:15.347508 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5kql" event={"ID":"7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5","Type":"ContainerDied","Data":"977ce17de12fe539502bd776d098502b2331f7abf786a87b9b738b6e6b19cabb"} Jan 26 15:57:15 crc kubenswrapper[4896]: I0126 15:57:15.349661 4896 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 15:57:19 crc kubenswrapper[4896]: I0126 15:57:19.385173 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5kql" event={"ID":"7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5","Type":"ContainerStarted","Data":"77dc8bc0bdcf136dcd23e45fe8de9d93b8d40b5d265cbe4d2da4aa85d122457b"} Jan 26 15:57:23 crc kubenswrapper[4896]: I0126 15:57:23.148356 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9jvtx"] Jan 26 15:57:23 crc kubenswrapper[4896]: I0126 15:57:23.150183 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-9jvtx"
Jan 26 15:57:23 crc kubenswrapper[4896]: I0126 15:57:23.151955 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-2q47c"
Jan 26 15:57:23 crc kubenswrapper[4896]: I0126 15:57:23.152879 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Jan 26 15:57:23 crc kubenswrapper[4896]: I0126 15:57:23.153435 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Jan 26 15:57:23 crc kubenswrapper[4896]: I0126 15:57:23.153797 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Jan 26 15:57:23 crc kubenswrapper[4896]: I0126 15:57:23.161839 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9jvtx"]
Jan 26 15:57:23 crc kubenswrapper[4896]: I0126 15:57:23.210793 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ltb5t"]
Jan 26 15:57:23 crc kubenswrapper[4896]: I0126 15:57:23.213996 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-ltb5t"
Jan 26 15:57:23 crc kubenswrapper[4896]: I0126 15:57:23.218275 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Jan 26 15:57:23 crc kubenswrapper[4896]: I0126 15:57:23.227450 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ltb5t"]
Jan 26 15:57:23 crc kubenswrapper[4896]: I0126 15:57:23.332433 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7wzv\" (UniqueName: \"kubernetes.io/projected/6137dcc5-9fcb-4c50-9de2-8e8e32a77c63-kube-api-access-c7wzv\") pod \"dnsmasq-dns-675f4bcbfc-9jvtx\" (UID: \"6137dcc5-9fcb-4c50-9de2-8e8e32a77c63\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9jvtx"
Jan 26 15:57:23 crc kubenswrapper[4896]: I0126 15:57:23.332500 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d4rc\" (UniqueName: \"kubernetes.io/projected/e54f72d2-f23b-4fdd-ba0f-6a7a806c3985-kube-api-access-7d4rc\") pod \"dnsmasq-dns-78dd6ddcc-ltb5t\" (UID: \"e54f72d2-f23b-4fdd-ba0f-6a7a806c3985\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ltb5t"
Jan 26 15:57:23 crc kubenswrapper[4896]: I0126 15:57:23.332523 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6137dcc5-9fcb-4c50-9de2-8e8e32a77c63-config\") pod \"dnsmasq-dns-675f4bcbfc-9jvtx\" (UID: \"6137dcc5-9fcb-4c50-9de2-8e8e32a77c63\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9jvtx"
Jan 26 15:57:23 crc kubenswrapper[4896]: I0126 15:57:23.332978 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e54f72d2-f23b-4fdd-ba0f-6a7a806c3985-config\") pod \"dnsmasq-dns-78dd6ddcc-ltb5t\" (UID: \"e54f72d2-f23b-4fdd-ba0f-6a7a806c3985\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ltb5t"
Jan 26 15:57:23 crc kubenswrapper[4896]: I0126 15:57:23.333060 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e54f72d2-f23b-4fdd-ba0f-6a7a806c3985-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-ltb5t\" (UID: \"e54f72d2-f23b-4fdd-ba0f-6a7a806c3985\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ltb5t"
Jan 26 15:57:23 crc kubenswrapper[4896]: I0126 15:57:23.422720 4896 generic.go:334] "Generic (PLEG): container finished" podID="7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5" containerID="77dc8bc0bdcf136dcd23e45fe8de9d93b8d40b5d265cbe4d2da4aa85d122457b" exitCode=0
Jan 26 15:57:23 crc kubenswrapper[4896]: I0126 15:57:23.422765 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5kql" event={"ID":"7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5","Type":"ContainerDied","Data":"77dc8bc0bdcf136dcd23e45fe8de9d93b8d40b5d265cbe4d2da4aa85d122457b"}
Jan 26 15:57:23 crc kubenswrapper[4896]: I0126 15:57:23.434426 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d4rc\" (UniqueName: \"kubernetes.io/projected/e54f72d2-f23b-4fdd-ba0f-6a7a806c3985-kube-api-access-7d4rc\") pod \"dnsmasq-dns-78dd6ddcc-ltb5t\" (UID: \"e54f72d2-f23b-4fdd-ba0f-6a7a806c3985\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ltb5t"
Jan 26 15:57:23 crc kubenswrapper[4896]: I0126 15:57:23.434476 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6137dcc5-9fcb-4c50-9de2-8e8e32a77c63-config\") pod \"dnsmasq-dns-675f4bcbfc-9jvtx\" (UID: \"6137dcc5-9fcb-4c50-9de2-8e8e32a77c63\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9jvtx"
Jan 26 15:57:23 crc kubenswrapper[4896]: I0126 15:57:23.434554 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e54f72d2-f23b-4fdd-ba0f-6a7a806c3985-config\") pod \"dnsmasq-dns-78dd6ddcc-ltb5t\" (UID: \"e54f72d2-f23b-4fdd-ba0f-6a7a806c3985\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ltb5t"
Jan 26 15:57:23 crc kubenswrapper[4896]: I0126 15:57:23.434594 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e54f72d2-f23b-4fdd-ba0f-6a7a806c3985-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-ltb5t\" (UID: \"e54f72d2-f23b-4fdd-ba0f-6a7a806c3985\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ltb5t"
Jan 26 15:57:23 crc kubenswrapper[4896]: I0126 15:57:23.434673 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7wzv\" (UniqueName: \"kubernetes.io/projected/6137dcc5-9fcb-4c50-9de2-8e8e32a77c63-kube-api-access-c7wzv\") pod \"dnsmasq-dns-675f4bcbfc-9jvtx\" (UID: \"6137dcc5-9fcb-4c50-9de2-8e8e32a77c63\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9jvtx"
Jan 26 15:57:23 crc kubenswrapper[4896]: I0126 15:57:23.435794 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6137dcc5-9fcb-4c50-9de2-8e8e32a77c63-config\") pod \"dnsmasq-dns-675f4bcbfc-9jvtx\" (UID: \"6137dcc5-9fcb-4c50-9de2-8e8e32a77c63\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9jvtx"
Jan 26 15:57:23 crc kubenswrapper[4896]: I0126 15:57:23.460703 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7wzv\" (UniqueName: \"kubernetes.io/projected/6137dcc5-9fcb-4c50-9de2-8e8e32a77c63-kube-api-access-c7wzv\") pod \"dnsmasq-dns-675f4bcbfc-9jvtx\" (UID: \"6137dcc5-9fcb-4c50-9de2-8e8e32a77c63\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9jvtx"
Jan 26 15:57:23 crc kubenswrapper[4896]: I0126 15:57:23.475092 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-9jvtx"
Jan 26 15:57:23 crc kubenswrapper[4896]: I0126 15:57:23.507154 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e54f72d2-f23b-4fdd-ba0f-6a7a806c3985-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-ltb5t\" (UID: \"e54f72d2-f23b-4fdd-ba0f-6a7a806c3985\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ltb5t"
Jan 26 15:57:23 crc kubenswrapper[4896]: I0126 15:57:23.507153 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e54f72d2-f23b-4fdd-ba0f-6a7a806c3985-config\") pod \"dnsmasq-dns-78dd6ddcc-ltb5t\" (UID: \"e54f72d2-f23b-4fdd-ba0f-6a7a806c3985\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ltb5t"
Jan 26 15:57:23 crc kubenswrapper[4896]: I0126 15:57:23.510145 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d4rc\" (UniqueName: \"kubernetes.io/projected/e54f72d2-f23b-4fdd-ba0f-6a7a806c3985-kube-api-access-7d4rc\") pod \"dnsmasq-dns-78dd6ddcc-ltb5t\" (UID: \"e54f72d2-f23b-4fdd-ba0f-6a7a806c3985\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ltb5t"
Jan 26 15:57:23 crc kubenswrapper[4896]: I0126 15:57:23.809036 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-ltb5t"
Jan 26 15:57:24 crc kubenswrapper[4896]: I0126 15:57:24.044011 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9jvtx"]
Jan 26 15:57:24 crc kubenswrapper[4896]: W0126 15:57:24.062768 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6137dcc5_9fcb_4c50_9de2_8e8e32a77c63.slice/crio-7af0c36ad976b00d03da7dcc080d17402a1e8a776d6e5e4b168b9b4b36e47999 WatchSource:0}: Error finding container 7af0c36ad976b00d03da7dcc080d17402a1e8a776d6e5e4b168b9b4b36e47999: Status 404 returned error can't find the container with id 7af0c36ad976b00d03da7dcc080d17402a1e8a776d6e5e4b168b9b4b36e47999
Jan 26 15:57:24 crc kubenswrapper[4896]: I0126 15:57:24.415783 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ltb5t"]
Jan 26 15:57:24 crc kubenswrapper[4896]: W0126 15:57:24.418445 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode54f72d2_f23b_4fdd_ba0f_6a7a806c3985.slice/crio-3ddf394a8b3e0c157dc23f708f99ab911c616f4e92992a0c4b41fd70b776c93c WatchSource:0}: Error finding container 3ddf394a8b3e0c157dc23f708f99ab911c616f4e92992a0c4b41fd70b776c93c: Status 404 returned error can't find the container with id 3ddf394a8b3e0c157dc23f708f99ab911c616f4e92992a0c4b41fd70b776c93c
Jan 26 15:57:24 crc kubenswrapper[4896]: I0126 15:57:24.438130 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-ltb5t" event={"ID":"e54f72d2-f23b-4fdd-ba0f-6a7a806c3985","Type":"ContainerStarted","Data":"3ddf394a8b3e0c157dc23f708f99ab911c616f4e92992a0c4b41fd70b776c93c"}
Jan 26 15:57:24 crc kubenswrapper[4896]: I0126 15:57:24.440958 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-9jvtx" event={"ID":"6137dcc5-9fcb-4c50-9de2-8e8e32a77c63","Type":"ContainerStarted","Data":"7af0c36ad976b00d03da7dcc080d17402a1e8a776d6e5e4b168b9b4b36e47999"}
Jan 26 15:57:25 crc kubenswrapper[4896]: I0126 15:57:25.456485 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5kql" event={"ID":"7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5","Type":"ContainerStarted","Data":"03f73d5054b07ea833a0cfcb49ff90f173d9847653265a024decf0fe7d90d217"}
Jan 26 15:57:26 crc kubenswrapper[4896]: I0126 15:57:26.078143 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s5kql" podStartSLOduration=4.930004095 podStartE2EDuration="14.078121391s" podCreationTimestamp="2026-01-26 15:57:12 +0000 UTC" firstStartedPulling="2026-01-26 15:57:15.34936056 +0000 UTC m=+1393.131240953" lastFinishedPulling="2026-01-26 15:57:24.497477856 +0000 UTC m=+1402.279358249" observedRunningTime="2026-01-26 15:57:25.477641825 +0000 UTC m=+1403.259522228" watchObservedRunningTime="2026-01-26 15:57:26.078121391 +0000 UTC m=+1403.860001784"
Jan 26 15:57:26 crc kubenswrapper[4896]: I0126 15:57:26.082670 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9jvtx"]
Jan 26 15:57:26 crc kubenswrapper[4896]: I0126 15:57:26.118400 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-lj4n6"]
Jan 26 15:57:26 crc kubenswrapper[4896]: I0126 15:57:26.120200 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-lj4n6"
Jan 26 15:57:26 crc kubenswrapper[4896]: I0126 15:57:26.147183 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-lj4n6"]
Jan 26 15:57:26 crc kubenswrapper[4896]: I0126 15:57:26.295810 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlhvg\" (UniqueName: \"kubernetes.io/projected/76be1e7b-2d32-4268-befa-a064858bb503-kube-api-access-tlhvg\") pod \"dnsmasq-dns-666b6646f7-lj4n6\" (UID: \"76be1e7b-2d32-4268-befa-a064858bb503\") " pod="openstack/dnsmasq-dns-666b6646f7-lj4n6"
Jan 26 15:57:26 crc kubenswrapper[4896]: I0126 15:57:26.295877 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76be1e7b-2d32-4268-befa-a064858bb503-config\") pod \"dnsmasq-dns-666b6646f7-lj4n6\" (UID: \"76be1e7b-2d32-4268-befa-a064858bb503\") " pod="openstack/dnsmasq-dns-666b6646f7-lj4n6"
Jan 26 15:57:26 crc kubenswrapper[4896]: I0126 15:57:26.295940 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76be1e7b-2d32-4268-befa-a064858bb503-dns-svc\") pod \"dnsmasq-dns-666b6646f7-lj4n6\" (UID: \"76be1e7b-2d32-4268-befa-a064858bb503\") " pod="openstack/dnsmasq-dns-666b6646f7-lj4n6"
Jan 26 15:57:26 crc kubenswrapper[4896]: I0126 15:57:26.407814 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76be1e7b-2d32-4268-befa-a064858bb503-dns-svc\") pod \"dnsmasq-dns-666b6646f7-lj4n6\" (UID: \"76be1e7b-2d32-4268-befa-a064858bb503\") " pod="openstack/dnsmasq-dns-666b6646f7-lj4n6"
Jan 26 15:57:26 crc kubenswrapper[4896]: I0126 15:57:26.408009 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlhvg\" (UniqueName: \"kubernetes.io/projected/76be1e7b-2d32-4268-befa-a064858bb503-kube-api-access-tlhvg\") pod \"dnsmasq-dns-666b6646f7-lj4n6\" (UID: \"76be1e7b-2d32-4268-befa-a064858bb503\") " pod="openstack/dnsmasq-dns-666b6646f7-lj4n6"
Jan 26 15:57:26 crc kubenswrapper[4896]: I0126 15:57:26.408042 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76be1e7b-2d32-4268-befa-a064858bb503-config\") pod \"dnsmasq-dns-666b6646f7-lj4n6\" (UID: \"76be1e7b-2d32-4268-befa-a064858bb503\") " pod="openstack/dnsmasq-dns-666b6646f7-lj4n6"
Jan 26 15:57:26 crc kubenswrapper[4896]: I0126 15:57:26.409149 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76be1e7b-2d32-4268-befa-a064858bb503-config\") pod \"dnsmasq-dns-666b6646f7-lj4n6\" (UID: \"76be1e7b-2d32-4268-befa-a064858bb503\") " pod="openstack/dnsmasq-dns-666b6646f7-lj4n6"
Jan 26 15:57:26 crc kubenswrapper[4896]: I0126 15:57:26.409937 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76be1e7b-2d32-4268-befa-a064858bb503-dns-svc\") pod \"dnsmasq-dns-666b6646f7-lj4n6\" (UID: \"76be1e7b-2d32-4268-befa-a064858bb503\") " pod="openstack/dnsmasq-dns-666b6646f7-lj4n6"
Jan 26 15:57:26 crc kubenswrapper[4896]: I0126 15:57:26.440801 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlhvg\" (UniqueName: \"kubernetes.io/projected/76be1e7b-2d32-4268-befa-a064858bb503-kube-api-access-tlhvg\") pod \"dnsmasq-dns-666b6646f7-lj4n6\" (UID: \"76be1e7b-2d32-4268-befa-a064858bb503\") " pod="openstack/dnsmasq-dns-666b6646f7-lj4n6"
Jan 26 15:57:26 crc kubenswrapper[4896]: I0126 15:57:26.447903 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-lj4n6"
Jan 26 15:57:26 crc kubenswrapper[4896]: I0126 15:57:26.544016 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ltb5t"]
Jan 26 15:57:26 crc kubenswrapper[4896]: I0126 15:57:26.586014 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-zmgnm"]
Jan 26 15:57:26 crc kubenswrapper[4896]: I0126 15:57:26.593855 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-zmgnm"
Jan 26 15:57:26 crc kubenswrapper[4896]: I0126 15:57:26.604968 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-zmgnm"]
Jan 26 15:57:26 crc kubenswrapper[4896]: I0126 15:57:26.729041 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9m94\" (UniqueName: \"kubernetes.io/projected/153f6a72-6423-4be2-b387-709f64bee0ee-kube-api-access-r9m94\") pod \"dnsmasq-dns-57d769cc4f-zmgnm\" (UID: \"153f6a72-6423-4be2-b387-709f64bee0ee\") " pod="openstack/dnsmasq-dns-57d769cc4f-zmgnm"
Jan 26 15:57:26 crc kubenswrapper[4896]: I0126 15:57:26.729134 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/153f6a72-6423-4be2-b387-709f64bee0ee-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-zmgnm\" (UID: \"153f6a72-6423-4be2-b387-709f64bee0ee\") " pod="openstack/dnsmasq-dns-57d769cc4f-zmgnm"
Jan 26 15:57:26 crc kubenswrapper[4896]: I0126 15:57:26.729278 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/153f6a72-6423-4be2-b387-709f64bee0ee-config\") pod \"dnsmasq-dns-57d769cc4f-zmgnm\" (UID: \"153f6a72-6423-4be2-b387-709f64bee0ee\") " pod="openstack/dnsmasq-dns-57d769cc4f-zmgnm"
Jan 26 15:57:26 crc kubenswrapper[4896]: I0126 15:57:26.830804 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9m94\" (UniqueName: \"kubernetes.io/projected/153f6a72-6423-4be2-b387-709f64bee0ee-kube-api-access-r9m94\") pod \"dnsmasq-dns-57d769cc4f-zmgnm\" (UID: \"153f6a72-6423-4be2-b387-709f64bee0ee\") " pod="openstack/dnsmasq-dns-57d769cc4f-zmgnm"
Jan 26 15:57:26 crc kubenswrapper[4896]: I0126 15:57:26.831090 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/153f6a72-6423-4be2-b387-709f64bee0ee-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-zmgnm\" (UID: \"153f6a72-6423-4be2-b387-709f64bee0ee\") " pod="openstack/dnsmasq-dns-57d769cc4f-zmgnm"
Jan 26 15:57:26 crc kubenswrapper[4896]: I0126 15:57:26.831138 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/153f6a72-6423-4be2-b387-709f64bee0ee-config\") pod \"dnsmasq-dns-57d769cc4f-zmgnm\" (UID: \"153f6a72-6423-4be2-b387-709f64bee0ee\") " pod="openstack/dnsmasq-dns-57d769cc4f-zmgnm"
Jan 26 15:57:26 crc kubenswrapper[4896]: I0126 15:57:26.833184 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/153f6a72-6423-4be2-b387-709f64bee0ee-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-zmgnm\" (UID: \"153f6a72-6423-4be2-b387-709f64bee0ee\") " pod="openstack/dnsmasq-dns-57d769cc4f-zmgnm"
Jan 26 15:57:26 crc kubenswrapper[4896]: I0126 15:57:26.840293 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/153f6a72-6423-4be2-b387-709f64bee0ee-config\") pod \"dnsmasq-dns-57d769cc4f-zmgnm\" (UID: \"153f6a72-6423-4be2-b387-709f64bee0ee\") " pod="openstack/dnsmasq-dns-57d769cc4f-zmgnm"
Jan 26 15:57:26 crc kubenswrapper[4896]: I0126 15:57:26.875450 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9m94\" (UniqueName: \"kubernetes.io/projected/153f6a72-6423-4be2-b387-709f64bee0ee-kube-api-access-r9m94\") pod \"dnsmasq-dns-57d769cc4f-zmgnm\" (UID: \"153f6a72-6423-4be2-b387-709f64bee0ee\") " pod="openstack/dnsmasq-dns-57d769cc4f-zmgnm"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.050708 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-zmgnm"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.269772 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.271704 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.275933 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.276181 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.276992 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.277413 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.282089 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-xs5tv"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.288045 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.293975 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.306512 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.328396 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"]
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.334061 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.340675 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"]
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.344913 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.361001 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"]
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.362274 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"]
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.385365 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/45b5821a-5c82-485e-ade4-f6de2aea62d7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.385451 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghd4l\" (UniqueName: \"kubernetes.io/projected/45b5821a-5c82-485e-ade4-f6de2aea62d7-kube-api-access-ghd4l\") pod \"rabbitmq-server-0\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.385508 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c39c5a7e-a72b-4177-b0c9-2e1f9fea36c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c39c5a7e-a72b-4177-b0c9-2e1f9fea36c1\") pod \"rabbitmq-server-0\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.385526 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/45b5821a-5c82-485e-ade4-f6de2aea62d7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.385543 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/45b5821a-5c82-485e-ade4-f6de2aea62d7-config-data\") pod \"rabbitmq-server-0\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.385623 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/45b5821a-5c82-485e-ade4-f6de2aea62d7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.385704 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/45b5821a-5c82-485e-ade4-f6de2aea62d7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.385760 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/45b5821a-5c82-485e-ade4-f6de2aea62d7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.385867 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/45b5821a-5c82-485e-ade4-f6de2aea62d7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.385911 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/45b5821a-5c82-485e-ade4-f6de2aea62d7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.385942 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/45b5821a-5c82-485e-ade4-f6de2aea62d7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.467207 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-lj4n6"]
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.497166 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvcb6\" (UniqueName: \"kubernetes.io/projected/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-kube-api-access-lvcb6\") pod \"rabbitmq-server-2\" (UID: \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") " pod="openstack/rabbitmq-server-2"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.497230 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/45b5821a-5c82-485e-ade4-f6de2aea62d7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.497439 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/45b5821a-5c82-485e-ade4-f6de2aea62d7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.497508 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/45b5821a-5c82-485e-ade4-f6de2aea62d7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.497645 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/45b5821a-5c82-485e-ade4-f6de2aea62d7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.497773 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/22577788-39b3-431e-9a18-7a15b8f66045-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") " pod="openstack/rabbitmq-server-1"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.497944 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghd4l\" (UniqueName: \"kubernetes.io/projected/45b5821a-5c82-485e-ade4-f6de2aea62d7-kube-api-access-ghd4l\") pod \"rabbitmq-server-0\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.498084 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-pod-info\") pod \"rabbitmq-server-2\" (UID: \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") " pod="openstack/rabbitmq-server-2"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.498176 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") " pod="openstack/rabbitmq-server-2"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.498318 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c39c5a7e-a72b-4177-b0c9-2e1f9fea36c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c39c5a7e-a72b-4177-b0c9-2e1f9fea36c1\") pod \"rabbitmq-server-0\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.498394 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/45b5821a-5c82-485e-ade4-f6de2aea62d7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.498434 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/45b5821a-5c82-485e-ade4-f6de2aea62d7-config-data\") pod \"rabbitmq-server-0\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.498535 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5d26b942-826b-4618-a675-4a54d25047ef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d26b942-826b-4618-a675-4a54d25047ef\") pod \"rabbitmq-server-2\" (UID: \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") " pod="openstack/rabbitmq-server-2"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.498645 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/22577788-39b3-431e-9a18-7a15b8f66045-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") " pod="openstack/rabbitmq-server-1"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.498727 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/22577788-39b3-431e-9a18-7a15b8f66045-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") " pod="openstack/rabbitmq-server-1"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.498819 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") " pod="openstack/rabbitmq-server-2"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.498925 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") " pod="openstack/rabbitmq-server-2"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.499016 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/45b5821a-5c82-485e-ade4-f6de2aea62d7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.499071 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/22577788-39b3-431e-9a18-7a15b8f66045-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") " pod="openstack/rabbitmq-server-1"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.499139 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4d9d38b9-a651-47fa-a427-0c890b9beaa3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d9d38b9-a651-47fa-a427-0c890b9beaa3\") pod \"rabbitmq-server-1\" (UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") " pod="openstack/rabbitmq-server-1"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.499198 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/22577788-39b3-431e-9a18-7a15b8f66045-server-conf\") pod \"rabbitmq-server-1\" (UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") " pod="openstack/rabbitmq-server-1"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.499219 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/45b5821a-5c82-485e-ade4-f6de2aea62d7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.499241 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8xhd\" (UniqueName: \"kubernetes.io/projected/22577788-39b3-431e-9a18-7a15b8f66045-kube-api-access-k8xhd\") pod \"rabbitmq-server-1\" (UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") " pod="openstack/rabbitmq-server-1"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.499279 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/45b5821a-5c82-485e-ade4-f6de2aea62d7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.499344 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-config-data\") pod \"rabbitmq-server-2\" (UID: \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") " pod="openstack/rabbitmq-server-2"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.499373 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/45b5821a-5c82-485e-ade4-f6de2aea62d7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.499391 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/22577788-39b3-431e-9a18-7a15b8f66045-config-data\") pod \"rabbitmq-server-1\" (UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") " pod="openstack/rabbitmq-server-1"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.503024 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/22577788-39b3-431e-9a18-7a15b8f66045-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") " pod="openstack/rabbitmq-server-1"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.503147 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") " pod="openstack/rabbitmq-server-2"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.503202 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-server-conf\") pod \"rabbitmq-server-2\" (UID: \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") " pod="openstack/rabbitmq-server-2"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.503251 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/22577788-39b3-431e-9a18-7a15b8f66045-pod-info\") pod \"rabbitmq-server-1\" (UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") " pod="openstack/rabbitmq-server-1"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.503278 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/45b5821a-5c82-485e-ade4-f6de2aea62d7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.503284 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started
for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/22577788-39b3-431e-9a18-7a15b8f66045-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") " pod="openstack/rabbitmq-server-1" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.503399 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") " pod="openstack/rabbitmq-server-2" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.503423 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") " pod="openstack/rabbitmq-server-2" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.499678 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/45b5821a-5c82-485e-ade4-f6de2aea62d7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") " pod="openstack/rabbitmq-server-0" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.503822 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/45b5821a-5c82-485e-ade4-f6de2aea62d7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") " pod="openstack/rabbitmq-server-0" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.504763 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/45b5821a-5c82-485e-ade4-f6de2aea62d7-config-data\") pod 
\"rabbitmq-server-0\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") " pod="openstack/rabbitmq-server-0" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.515124 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/45b5821a-5c82-485e-ade4-f6de2aea62d7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") " pod="openstack/rabbitmq-server-0" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.516378 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/45b5821a-5c82-485e-ade4-f6de2aea62d7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") " pod="openstack/rabbitmq-server-0" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.521113 4896 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.521152 4896 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c39c5a7e-a72b-4177-b0c9-2e1f9fea36c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c39c5a7e-a72b-4177-b0c9-2e1f9fea36c1\") pod \"rabbitmq-server-0\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/cdadbc0a71065fa541cba4d5492c6b8b726454b664d989b39a275b7996e6333b/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.523909 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghd4l\" (UniqueName: \"kubernetes.io/projected/45b5821a-5c82-485e-ade4-f6de2aea62d7-kube-api-access-ghd4l\") pod \"rabbitmq-server-0\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") " pod="openstack/rabbitmq-server-0" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.527939 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/45b5821a-5c82-485e-ade4-f6de2aea62d7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") " pod="openstack/rabbitmq-server-0" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.533433 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/45b5821a-5c82-485e-ade4-f6de2aea62d7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") " pod="openstack/rabbitmq-server-0" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.606430 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/22577788-39b3-431e-9a18-7a15b8f66045-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") 
" pod="openstack/rabbitmq-server-1" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.606515 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-pod-info\") pod \"rabbitmq-server-2\" (UID: \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") " pod="openstack/rabbitmq-server-2" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.606543 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") " pod="openstack/rabbitmq-server-2" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.606624 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5d26b942-826b-4618-a675-4a54d25047ef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d26b942-826b-4618-a675-4a54d25047ef\") pod \"rabbitmq-server-2\" (UID: \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") " pod="openstack/rabbitmq-server-2" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.606649 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/22577788-39b3-431e-9a18-7a15b8f66045-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") " pod="openstack/rabbitmq-server-1" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.606665 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/22577788-39b3-431e-9a18-7a15b8f66045-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") " pod="openstack/rabbitmq-server-1" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.606689 4896 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") " pod="openstack/rabbitmq-server-2" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.606715 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") " pod="openstack/rabbitmq-server-2" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.606744 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/22577788-39b3-431e-9a18-7a15b8f66045-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") " pod="openstack/rabbitmq-server-1" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.606771 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4d9d38b9-a651-47fa-a427-0c890b9beaa3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d9d38b9-a651-47fa-a427-0c890b9beaa3\") pod \"rabbitmq-server-1\" (UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") " pod="openstack/rabbitmq-server-1" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.606792 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/22577788-39b3-431e-9a18-7a15b8f66045-server-conf\") pod \"rabbitmq-server-1\" (UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") " pod="openstack/rabbitmq-server-1" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.606810 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8xhd\" 
(UniqueName: \"kubernetes.io/projected/22577788-39b3-431e-9a18-7a15b8f66045-kube-api-access-k8xhd\") pod \"rabbitmq-server-1\" (UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") " pod="openstack/rabbitmq-server-1" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.606844 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-config-data\") pod \"rabbitmq-server-2\" (UID: \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") " pod="openstack/rabbitmq-server-2" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.606882 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/22577788-39b3-431e-9a18-7a15b8f66045-config-data\") pod \"rabbitmq-server-1\" (UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") " pod="openstack/rabbitmq-server-1" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.606907 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/22577788-39b3-431e-9a18-7a15b8f66045-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") " pod="openstack/rabbitmq-server-1" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.606952 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") " pod="openstack/rabbitmq-server-2" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.606976 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-server-conf\") pod \"rabbitmq-server-2\" (UID: 
\"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") " pod="openstack/rabbitmq-server-2" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.607003 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/22577788-39b3-431e-9a18-7a15b8f66045-pod-info\") pod \"rabbitmq-server-1\" (UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") " pod="openstack/rabbitmq-server-1" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.607027 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/22577788-39b3-431e-9a18-7a15b8f66045-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") " pod="openstack/rabbitmq-server-1" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.607050 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") " pod="openstack/rabbitmq-server-2" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.607077 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") " pod="openstack/rabbitmq-server-2" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.607115 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvcb6\" (UniqueName: \"kubernetes.io/projected/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-kube-api-access-lvcb6\") pod \"rabbitmq-server-2\" (UID: \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") " pod="openstack/rabbitmq-server-2" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.609536 4896 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") " pod="openstack/rabbitmq-server-2" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.609919 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") " pod="openstack/rabbitmq-server-2" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.610389 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/22577788-39b3-431e-9a18-7a15b8f66045-server-conf\") pod \"rabbitmq-server-1\" (UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") " pod="openstack/rabbitmq-server-1" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.611219 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-config-data\") pod \"rabbitmq-server-2\" (UID: \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") " pod="openstack/rabbitmq-server-2" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.611810 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") " pod="openstack/rabbitmq-server-2" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.612064 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/22577788-39b3-431e-9a18-7a15b8f66045-config-data\") pod \"rabbitmq-server-1\" 
(UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") " pod="openstack/rabbitmq-server-1" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.612466 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/22577788-39b3-431e-9a18-7a15b8f66045-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") " pod="openstack/rabbitmq-server-1" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.613488 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/22577788-39b3-431e-9a18-7a15b8f66045-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") " pod="openstack/rabbitmq-server-1" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.614211 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/22577788-39b3-431e-9a18-7a15b8f66045-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") " pod="openstack/rabbitmq-server-1" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.617981 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-server-conf\") pod \"rabbitmq-server-2\" (UID: \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") " pod="openstack/rabbitmq-server-2" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.618847 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-pod-info\") pod \"rabbitmq-server-2\" (UID: \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") " pod="openstack/rabbitmq-server-2" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.618982 4896 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/dnsmasq-dns-666b6646f7-lj4n6" event={"ID":"76be1e7b-2d32-4268-befa-a064858bb503","Type":"ContainerStarted","Data":"b03c7c832caaf2c765115b5a73f902f65f95b51e8bb98e75e46796ff5cce40d7"} Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.619259 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/22577788-39b3-431e-9a18-7a15b8f66045-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") " pod="openstack/rabbitmq-server-1" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.619874 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") " pod="openstack/rabbitmq-server-2" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.622858 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/22577788-39b3-431e-9a18-7a15b8f66045-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") " pod="openstack/rabbitmq-server-1" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.623318 4896 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.623358 4896 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4d9d38b9-a651-47fa-a427-0c890b9beaa3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d9d38b9-a651-47fa-a427-0c890b9beaa3\") pod \"rabbitmq-server-1\" (UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/94c85d97425b250affc0ea1c678ad89fb07fe2d358f8324bcb930f17f72e2721/globalmount\"" pod="openstack/rabbitmq-server-1" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.624646 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c39c5a7e-a72b-4177-b0c9-2e1f9fea36c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c39c5a7e-a72b-4177-b0c9-2e1f9fea36c1\") pod \"rabbitmq-server-0\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") " pod="openstack/rabbitmq-server-0" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.625240 4896 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.625271 4896 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5d26b942-826b-4618-a675-4a54d25047ef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d26b942-826b-4618-a675-4a54d25047ef\") pod \"rabbitmq-server-2\" (UID: \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2c49109239722dabce33dccb586276ae914b08fb55f7760e929dd269f2f12d4c/globalmount\"" pod="openstack/rabbitmq-server-2" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.626098 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") " pod="openstack/rabbitmq-server-2" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.638171 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/22577788-39b3-431e-9a18-7a15b8f66045-pod-info\") pod \"rabbitmq-server-1\" (UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") " pod="openstack/rabbitmq-server-1" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.639481 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") " pod="openstack/rabbitmq-server-2" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.641347 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/22577788-39b3-431e-9a18-7a15b8f66045-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") " 
pod="openstack/rabbitmq-server-1" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.641788 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8xhd\" (UniqueName: \"kubernetes.io/projected/22577788-39b3-431e-9a18-7a15b8f66045-kube-api-access-k8xhd\") pod \"rabbitmq-server-1\" (UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") " pod="openstack/rabbitmq-server-1" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.644731 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvcb6\" (UniqueName: \"kubernetes.io/projected/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-kube-api-access-lvcb6\") pod \"rabbitmq-server-2\" (UID: \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") " pod="openstack/rabbitmq-server-2" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.714704 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.716436 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.727396 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.731433 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5d26b942-826b-4618-a675-4a54d25047ef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d26b942-826b-4618-a675-4a54d25047ef\") pod \"rabbitmq-server-2\" (UID: \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") " pod="openstack/rabbitmq-server-2" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.741554 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.741757 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.741899 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.742042 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.742146 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.742342 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-66zhq" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.794196 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.798331 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-4d9d38b9-a651-47fa-a427-0c890b9beaa3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d9d38b9-a651-47fa-a427-0c890b9beaa3\") pod \"rabbitmq-server-1\" (UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") " pod="openstack/rabbitmq-server-1" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.806896 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-zmgnm"] Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.817479 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a13f72f8-afaf-4e0f-b76b-342e5391579c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13f72f8-afaf-4e0f-b76b-342e5391579c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.817532 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a13f72f8-afaf-4e0f-b76b-342e5391579c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13f72f8-afaf-4e0f-b76b-342e5391579c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.817556 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a13f72f8-afaf-4e0f-b76b-342e5391579c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13f72f8-afaf-4e0f-b76b-342e5391579c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.817604 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6chz8\" (UniqueName: \"kubernetes.io/projected/a13f72f8-afaf-4e0f-b76b-342e5391579c-kube-api-access-6chz8\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"a13f72f8-afaf-4e0f-b76b-342e5391579c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.817630 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a13f72f8-afaf-4e0f-b76b-342e5391579c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13f72f8-afaf-4e0f-b76b-342e5391579c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.817678 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a13f72f8-afaf-4e0f-b76b-342e5391579c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13f72f8-afaf-4e0f-b76b-342e5391579c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.817713 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a13f72f8-afaf-4e0f-b76b-342e5391579c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13f72f8-afaf-4e0f-b76b-342e5391579c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.817746 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a13f72f8-afaf-4e0f-b76b-342e5391579c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13f72f8-afaf-4e0f-b76b-342e5391579c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.817781 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a13f72f8-afaf-4e0f-b76b-342e5391579c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"a13f72f8-afaf-4e0f-b76b-342e5391579c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.817840 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a13f72f8-afaf-4e0f-b76b-342e5391579c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13f72f8-afaf-4e0f-b76b-342e5391579c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.818044 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b528b92f-3514-47a3-bc55-900ec41388e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b528b92f-3514-47a3-bc55-900ec41388e0\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13f72f8-afaf-4e0f-b76b-342e5391579c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.907221 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.920681 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a13f72f8-afaf-4e0f-b76b-342e5391579c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13f72f8-afaf-4e0f-b76b-342e5391579c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.920962 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a13f72f8-afaf-4e0f-b76b-342e5391579c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13f72f8-afaf-4e0f-b76b-342e5391579c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.921191 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a13f72f8-afaf-4e0f-b76b-342e5391579c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13f72f8-afaf-4e0f-b76b-342e5391579c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.921284 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a13f72f8-afaf-4e0f-b76b-342e5391579c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13f72f8-afaf-4e0f-b76b-342e5391579c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.921395 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b528b92f-3514-47a3-bc55-900ec41388e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b528b92f-3514-47a3-bc55-900ec41388e0\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13f72f8-afaf-4e0f-b76b-342e5391579c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:57:27 crc 
kubenswrapper[4896]: I0126 15:57:27.921624 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a13f72f8-afaf-4e0f-b76b-342e5391579c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13f72f8-afaf-4e0f-b76b-342e5391579c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.921728 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a13f72f8-afaf-4e0f-b76b-342e5391579c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13f72f8-afaf-4e0f-b76b-342e5391579c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.921769 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a13f72f8-afaf-4e0f-b76b-342e5391579c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13f72f8-afaf-4e0f-b76b-342e5391579c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.921816 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6chz8\" (UniqueName: \"kubernetes.io/projected/a13f72f8-afaf-4e0f-b76b-342e5391579c-kube-api-access-6chz8\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13f72f8-afaf-4e0f-b76b-342e5391579c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.921841 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a13f72f8-afaf-4e0f-b76b-342e5391579c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13f72f8-afaf-4e0f-b76b-342e5391579c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.921926 4896 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a13f72f8-afaf-4e0f-b76b-342e5391579c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13f72f8-afaf-4e0f-b76b-342e5391579c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.925451 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a13f72f8-afaf-4e0f-b76b-342e5391579c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13f72f8-afaf-4e0f-b76b-342e5391579c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.925763 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a13f72f8-afaf-4e0f-b76b-342e5391579c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13f72f8-afaf-4e0f-b76b-342e5391579c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.926933 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a13f72f8-afaf-4e0f-b76b-342e5391579c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13f72f8-afaf-4e0f-b76b-342e5391579c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.929493 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a13f72f8-afaf-4e0f-b76b-342e5391579c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13f72f8-afaf-4e0f-b76b-342e5391579c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.930312 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/a13f72f8-afaf-4e0f-b76b-342e5391579c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13f72f8-afaf-4e0f-b76b-342e5391579c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.934649 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a13f72f8-afaf-4e0f-b76b-342e5391579c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13f72f8-afaf-4e0f-b76b-342e5391579c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.935225 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a13f72f8-afaf-4e0f-b76b-342e5391579c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13f72f8-afaf-4e0f-b76b-342e5391579c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.937258 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a13f72f8-afaf-4e0f-b76b-342e5391579c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13f72f8-afaf-4e0f-b76b-342e5391579c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.938681 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a13f72f8-afaf-4e0f-b76b-342e5391579c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13f72f8-afaf-4e0f-b76b-342e5391579c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.952038 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6chz8\" (UniqueName: \"kubernetes.io/projected/a13f72f8-afaf-4e0f-b76b-342e5391579c-kube-api-access-6chz8\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"a13f72f8-afaf-4e0f-b76b-342e5391579c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.966235 4896 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.966304 4896 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b528b92f-3514-47a3-bc55-900ec41388e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b528b92f-3514-47a3-bc55-900ec41388e0\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13f72f8-afaf-4e0f-b76b-342e5391579c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6ad73c85ed4d62ca0cdc37f989da140da4f75d3f6db1d6e7dac21fa29c2e2b14/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.969306 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 26 15:57:27 crc kubenswrapper[4896]: I0126 15:57:27.982309 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 26 15:57:28 crc kubenswrapper[4896]: I0126 15:57:28.031191 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b528b92f-3514-47a3-bc55-900ec41388e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b528b92f-3514-47a3-bc55-900ec41388e0\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13f72f8-afaf-4e0f-b76b-342e5391579c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:57:28 crc kubenswrapper[4896]: I0126 15:57:28.104241 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:57:28 crc kubenswrapper[4896]: I0126 15:57:28.638387 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-zmgnm" event={"ID":"153f6a72-6423-4be2-b387-709f64bee0ee","Type":"ContainerStarted","Data":"3c584fc9e93e93dfa34bae1340d58799432cedfb3fb5fb50f959c92182639632"} Jan 26 15:57:28 crc kubenswrapper[4896]: I0126 15:57:28.960078 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 26 15:57:28 crc kubenswrapper[4896]: I0126 15:57:28.963834 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 26 15:57:28 crc kubenswrapper[4896]: I0126 15:57:28.970490 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-kx5wd" Jan 26 15:57:28 crc kubenswrapper[4896]: I0126 15:57:28.972165 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 26 15:57:28 crc kubenswrapper[4896]: I0126 15:57:28.972348 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 26 15:57:28 crc kubenswrapper[4896]: I0126 15:57:28.972980 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 26 15:57:28 crc kubenswrapper[4896]: I0126 15:57:28.974828 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 26 15:57:28 crc kubenswrapper[4896]: I0126 15:57:28.981497 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 26 15:57:29 crc kubenswrapper[4896]: I0126 15:57:29.038043 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 15:57:29 crc kubenswrapper[4896]: I0126 15:57:29.072642 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78b988fb-f698-4b52-8771-2599b5441229-operator-scripts\") pod \"openstack-galera-0\" (UID: \"78b988fb-f698-4b52-8771-2599b5441229\") " pod="openstack/openstack-galera-0" Jan 26 15:57:29 crc kubenswrapper[4896]: I0126 15:57:29.072716 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/78b988fb-f698-4b52-8771-2599b5441229-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"78b988fb-f698-4b52-8771-2599b5441229\") " pod="openstack/openstack-galera-0" Jan 26 15:57:29 crc kubenswrapper[4896]: I0126 15:57:29.072767 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/78b988fb-f698-4b52-8771-2599b5441229-kolla-config\") pod \"openstack-galera-0\" (UID: \"78b988fb-f698-4b52-8771-2599b5441229\") " pod="openstack/openstack-galera-0" Jan 26 15:57:29 crc kubenswrapper[4896]: I0126 15:57:29.072808 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b3806bf5-d6a0-46d1-bbb5-8aaacb9e678d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b3806bf5-d6a0-46d1-bbb5-8aaacb9e678d\") pod \"openstack-galera-0\" (UID: \"78b988fb-f698-4b52-8771-2599b5441229\") " pod="openstack/openstack-galera-0" Jan 26 15:57:29 crc kubenswrapper[4896]: I0126 15:57:29.072872 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/78b988fb-f698-4b52-8771-2599b5441229-config-data-default\") pod \"openstack-galera-0\" (UID: \"78b988fb-f698-4b52-8771-2599b5441229\") " pod="openstack/openstack-galera-0" Jan 26 15:57:29 crc kubenswrapper[4896]: I0126 15:57:29.072905 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78b988fb-f698-4b52-8771-2599b5441229-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"78b988fb-f698-4b52-8771-2599b5441229\") " pod="openstack/openstack-galera-0" Jan 26 15:57:29 crc kubenswrapper[4896]: I0126 15:57:29.072956 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcpcr\" (UniqueName: \"kubernetes.io/projected/78b988fb-f698-4b52-8771-2599b5441229-kube-api-access-hcpcr\") pod \"openstack-galera-0\" (UID: \"78b988fb-f698-4b52-8771-2599b5441229\") " pod="openstack/openstack-galera-0" Jan 26 15:57:29 crc kubenswrapper[4896]: I0126 15:57:29.073005 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/78b988fb-f698-4b52-8771-2599b5441229-config-data-generated\") pod \"openstack-galera-0\" (UID: \"78b988fb-f698-4b52-8771-2599b5441229\") " pod="openstack/openstack-galera-0" Jan 26 15:57:29 crc kubenswrapper[4896]: I0126 15:57:29.073321 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 26 15:57:29 crc kubenswrapper[4896]: I0126 15:57:29.084741 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 15:57:29 crc kubenswrapper[4896]: I0126 15:57:29.094208 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 26 15:57:29 crc kubenswrapper[4896]: W0126 15:57:29.094644 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22577788_39b3_431e_9a18_7a15b8f66045.slice/crio-eff34ca4bca5a2e8820146029990ef0c3f00add20843d555cab93bba335cc87f WatchSource:0}: Error finding container eff34ca4bca5a2e8820146029990ef0c3f00add20843d555cab93bba335cc87f: Status 404 returned error can't find the container with id 
eff34ca4bca5a2e8820146029990ef0c3f00add20843d555cab93bba335cc87f Jan 26 15:57:29 crc kubenswrapper[4896]: I0126 15:57:29.175689 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/78b988fb-f698-4b52-8771-2599b5441229-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"78b988fb-f698-4b52-8771-2599b5441229\") " pod="openstack/openstack-galera-0" Jan 26 15:57:29 crc kubenswrapper[4896]: I0126 15:57:29.175765 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/78b988fb-f698-4b52-8771-2599b5441229-kolla-config\") pod \"openstack-galera-0\" (UID: \"78b988fb-f698-4b52-8771-2599b5441229\") " pod="openstack/openstack-galera-0" Jan 26 15:57:29 crc kubenswrapper[4896]: I0126 15:57:29.175806 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b3806bf5-d6a0-46d1-bbb5-8aaacb9e678d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b3806bf5-d6a0-46d1-bbb5-8aaacb9e678d\") pod \"openstack-galera-0\" (UID: \"78b988fb-f698-4b52-8771-2599b5441229\") " pod="openstack/openstack-galera-0" Jan 26 15:57:29 crc kubenswrapper[4896]: I0126 15:57:29.175856 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/78b988fb-f698-4b52-8771-2599b5441229-config-data-default\") pod \"openstack-galera-0\" (UID: \"78b988fb-f698-4b52-8771-2599b5441229\") " pod="openstack/openstack-galera-0" Jan 26 15:57:29 crc kubenswrapper[4896]: I0126 15:57:29.175882 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78b988fb-f698-4b52-8771-2599b5441229-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"78b988fb-f698-4b52-8771-2599b5441229\") " pod="openstack/openstack-galera-0" Jan 26 15:57:29 crc 
kubenswrapper[4896]: I0126 15:57:29.175920 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcpcr\" (UniqueName: \"kubernetes.io/projected/78b988fb-f698-4b52-8771-2599b5441229-kube-api-access-hcpcr\") pod \"openstack-galera-0\" (UID: \"78b988fb-f698-4b52-8771-2599b5441229\") " pod="openstack/openstack-galera-0" Jan 26 15:57:29 crc kubenswrapper[4896]: I0126 15:57:29.175966 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/78b988fb-f698-4b52-8771-2599b5441229-config-data-generated\") pod \"openstack-galera-0\" (UID: \"78b988fb-f698-4b52-8771-2599b5441229\") " pod="openstack/openstack-galera-0" Jan 26 15:57:29 crc kubenswrapper[4896]: I0126 15:57:29.176001 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78b988fb-f698-4b52-8771-2599b5441229-operator-scripts\") pod \"openstack-galera-0\" (UID: \"78b988fb-f698-4b52-8771-2599b5441229\") " pod="openstack/openstack-galera-0" Jan 26 15:57:29 crc kubenswrapper[4896]: I0126 15:57:29.177199 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/78b988fb-f698-4b52-8771-2599b5441229-kolla-config\") pod \"openstack-galera-0\" (UID: \"78b988fb-f698-4b52-8771-2599b5441229\") " pod="openstack/openstack-galera-0" Jan 26 15:57:29 crc kubenswrapper[4896]: I0126 15:57:29.177464 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/78b988fb-f698-4b52-8771-2599b5441229-config-data-default\") pod \"openstack-galera-0\" (UID: \"78b988fb-f698-4b52-8771-2599b5441229\") " pod="openstack/openstack-galera-0" Jan 26 15:57:29 crc kubenswrapper[4896]: I0126 15:57:29.179080 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/78b988fb-f698-4b52-8771-2599b5441229-config-data-generated\") pod \"openstack-galera-0\" (UID: \"78b988fb-f698-4b52-8771-2599b5441229\") " pod="openstack/openstack-galera-0" Jan 26 15:57:29 crc kubenswrapper[4896]: I0126 15:57:29.184945 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78b988fb-f698-4b52-8771-2599b5441229-operator-scripts\") pod \"openstack-galera-0\" (UID: \"78b988fb-f698-4b52-8771-2599b5441229\") " pod="openstack/openstack-galera-0" Jan 26 15:57:29 crc kubenswrapper[4896]: I0126 15:57:29.185053 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78b988fb-f698-4b52-8771-2599b5441229-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"78b988fb-f698-4b52-8771-2599b5441229\") " pod="openstack/openstack-galera-0" Jan 26 15:57:29 crc kubenswrapper[4896]: I0126 15:57:29.185868 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/78b988fb-f698-4b52-8771-2599b5441229-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"78b988fb-f698-4b52-8771-2599b5441229\") " pod="openstack/openstack-galera-0" Jan 26 15:57:29 crc kubenswrapper[4896]: I0126 15:57:29.192595 4896 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 15:57:29 crc kubenswrapper[4896]: I0126 15:57:29.192651 4896 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b3806bf5-d6a0-46d1-bbb5-8aaacb9e678d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b3806bf5-d6a0-46d1-bbb5-8aaacb9e678d\") pod \"openstack-galera-0\" (UID: \"78b988fb-f698-4b52-8771-2599b5441229\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f8477dae18dfe4eb984014f0a74a3f876c765a169f068efad460619b3db3eeb2/globalmount\"" pod="openstack/openstack-galera-0" Jan 26 15:57:29 crc kubenswrapper[4896]: I0126 15:57:29.202965 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcpcr\" (UniqueName: \"kubernetes.io/projected/78b988fb-f698-4b52-8771-2599b5441229-kube-api-access-hcpcr\") pod \"openstack-galera-0\" (UID: \"78b988fb-f698-4b52-8771-2599b5441229\") " pod="openstack/openstack-galera-0" Jan 26 15:57:29 crc kubenswrapper[4896]: I0126 15:57:29.345681 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b3806bf5-d6a0-46d1-bbb5-8aaacb9e678d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b3806bf5-d6a0-46d1-bbb5-8aaacb9e678d\") pod \"openstack-galera-0\" (UID: \"78b988fb-f698-4b52-8771-2599b5441229\") " pod="openstack/openstack-galera-0" Jan 26 15:57:29 crc kubenswrapper[4896]: I0126 15:57:29.602230 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 26 15:57:29 crc kubenswrapper[4896]: I0126 15:57:29.668181 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"22577788-39b3-431e-9a18-7a15b8f66045","Type":"ContainerStarted","Data":"eff34ca4bca5a2e8820146029990ef0c3f00add20843d555cab93bba335cc87f"} Jan 26 15:57:29 crc kubenswrapper[4896]: I0126 15:57:29.669602 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"45b5821a-5c82-485e-ade4-f6de2aea62d7","Type":"ContainerStarted","Data":"7cf0271e8cecc0204ac73bf78d8fc3806c86d2871e6c813a3be726bb58cfa955"} Jan 26 15:57:29 crc kubenswrapper[4896]: I0126 15:57:29.682782 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a13f72f8-afaf-4e0f-b76b-342e5391579c","Type":"ContainerStarted","Data":"825acbd6b7339e0980cab6e0ec051ef5abc137cf0ba62fadf0496601291cf316"} Jan 26 15:57:29 crc kubenswrapper[4896]: I0126 15:57:29.694436 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"dc8f497b-3dfe-4cfc-aac0-34145dd221ed","Type":"ContainerStarted","Data":"d5cf626f61879ba9da34e714a6d2567663f05b5e6f47a9b2d9de92f8b0d6de41"} Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.329708 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.367862 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.371474 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.387212 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.387485 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-22tps" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.387750 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.387892 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.418433 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.450353 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7a3e4fe3-b61e-4200-acf9-9ba170d68402-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"7a3e4fe3-b61e-4200-acf9-9ba170d68402\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.450556 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a3e4fe3-b61e-4200-acf9-9ba170d68402-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"7a3e4fe3-b61e-4200-acf9-9ba170d68402\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.450626 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/7a3e4fe3-b61e-4200-acf9-9ba170d68402-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"7a3e4fe3-b61e-4200-acf9-9ba170d68402\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.450650 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fb7t\" (UniqueName: \"kubernetes.io/projected/7a3e4fe3-b61e-4200-acf9-9ba170d68402-kube-api-access-4fb7t\") pod \"openstack-cell1-galera-0\" (UID: \"7a3e4fe3-b61e-4200-acf9-9ba170d68402\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.450725 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7a3e4fe3-b61e-4200-acf9-9ba170d68402-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"7a3e4fe3-b61e-4200-acf9-9ba170d68402\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.450851 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a3e4fe3-b61e-4200-acf9-9ba170d68402-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"7a3e4fe3-b61e-4200-acf9-9ba170d68402\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.450892 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-15cd1429-603b-45c0-b923-d8a43f46ffb5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15cd1429-603b-45c0-b923-d8a43f46ffb5\") pod \"openstack-cell1-galera-0\" (UID: \"7a3e4fe3-b61e-4200-acf9-9ba170d68402\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.451125 4896 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7a3e4fe3-b61e-4200-acf9-9ba170d68402-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"7a3e4fe3-b61e-4200-acf9-9ba170d68402\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.489539 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.490907 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.498710 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-bcjtm" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.498931 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.499088 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.553910 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7a3e4fe3-b61e-4200-acf9-9ba170d68402-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"7a3e4fe3-b61e-4200-acf9-9ba170d68402\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.554002 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g2xb\" (UniqueName: \"kubernetes.io/projected/83b2e80c-4c60-4a3e-a9f3-0ce2af747e4f-kube-api-access-7g2xb\") pod \"memcached-0\" (UID: \"83b2e80c-4c60-4a3e-a9f3-0ce2af747e4f\") " pod="openstack/memcached-0" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.554071 4896 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a3e4fe3-b61e-4200-acf9-9ba170d68402-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"7a3e4fe3-b61e-4200-acf9-9ba170d68402\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.554190 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a3e4fe3-b61e-4200-acf9-9ba170d68402-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"7a3e4fe3-b61e-4200-acf9-9ba170d68402\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.554291 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fb7t\" (UniqueName: \"kubernetes.io/projected/7a3e4fe3-b61e-4200-acf9-9ba170d68402-kube-api-access-4fb7t\") pod \"openstack-cell1-galera-0\" (UID: \"7a3e4fe3-b61e-4200-acf9-9ba170d68402\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.558179 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/83b2e80c-4c60-4a3e-a9f3-0ce2af747e4f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"83b2e80c-4c60-4a3e-a9f3-0ce2af747e4f\") " pod="openstack/memcached-0" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.558249 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7a3e4fe3-b61e-4200-acf9-9ba170d68402-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"7a3e4fe3-b61e-4200-acf9-9ba170d68402\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.558318 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83b2e80c-4c60-4a3e-a9f3-0ce2af747e4f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"83b2e80c-4c60-4a3e-a9f3-0ce2af747e4f\") " pod="openstack/memcached-0" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.558345 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a3e4fe3-b61e-4200-acf9-9ba170d68402-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"7a3e4fe3-b61e-4200-acf9-9ba170d68402\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.558371 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/83b2e80c-4c60-4a3e-a9f3-0ce2af747e4f-config-data\") pod \"memcached-0\" (UID: \"83b2e80c-4c60-4a3e-a9f3-0ce2af747e4f\") " pod="openstack/memcached-0" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.558400 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-15cd1429-603b-45c0-b923-d8a43f46ffb5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15cd1429-603b-45c0-b923-d8a43f46ffb5\") pod \"openstack-cell1-galera-0\" (UID: \"7a3e4fe3-b61e-4200-acf9-9ba170d68402\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.558517 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7a3e4fe3-b61e-4200-acf9-9ba170d68402-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"7a3e4fe3-b61e-4200-acf9-9ba170d68402\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.558803 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/83b2e80c-4c60-4a3e-a9f3-0ce2af747e4f-kolla-config\") pod \"memcached-0\" (UID: \"83b2e80c-4c60-4a3e-a9f3-0ce2af747e4f\") " pod="openstack/memcached-0" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.559264 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7a3e4fe3-b61e-4200-acf9-9ba170d68402-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"7a3e4fe3-b61e-4200-acf9-9ba170d68402\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.562269 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.584226 4896 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.584283 4896 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-15cd1429-603b-45c0-b923-d8a43f46ffb5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15cd1429-603b-45c0-b923-d8a43f46ffb5\") pod \"openstack-cell1-galera-0\" (UID: \"7a3e4fe3-b61e-4200-acf9-9ba170d68402\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0a29f8d586869a53c1db0d6b72e6398371a8424a755b3a7c09475e1ba49c580a/globalmount\"" pod="openstack/openstack-cell1-galera-0" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.585281 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7a3e4fe3-b61e-4200-acf9-9ba170d68402-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"7a3e4fe3-b61e-4200-acf9-9ba170d68402\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.587032 4896 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7a3e4fe3-b61e-4200-acf9-9ba170d68402-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"7a3e4fe3-b61e-4200-acf9-9ba170d68402\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.589629 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a3e4fe3-b61e-4200-acf9-9ba170d68402-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"7a3e4fe3-b61e-4200-acf9-9ba170d68402\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.617397 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a3e4fe3-b61e-4200-acf9-9ba170d68402-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"7a3e4fe3-b61e-4200-acf9-9ba170d68402\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.620878 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a3e4fe3-b61e-4200-acf9-9ba170d68402-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"7a3e4fe3-b61e-4200-acf9-9ba170d68402\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.627483 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fb7t\" (UniqueName: \"kubernetes.io/projected/7a3e4fe3-b61e-4200-acf9-9ba170d68402-kube-api-access-4fb7t\") pod \"openstack-cell1-galera-0\" (UID: \"7a3e4fe3-b61e-4200-acf9-9ba170d68402\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.660596 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/83b2e80c-4c60-4a3e-a9f3-0ce2af747e4f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"83b2e80c-4c60-4a3e-a9f3-0ce2af747e4f\") " pod="openstack/memcached-0" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.660647 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/83b2e80c-4c60-4a3e-a9f3-0ce2af747e4f-config-data\") pod \"memcached-0\" (UID: \"83b2e80c-4c60-4a3e-a9f3-0ce2af747e4f\") " pod="openstack/memcached-0" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.660755 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/83b2e80c-4c60-4a3e-a9f3-0ce2af747e4f-kolla-config\") pod \"memcached-0\" (UID: \"83b2e80c-4c60-4a3e-a9f3-0ce2af747e4f\") " pod="openstack/memcached-0" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.660801 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g2xb\" (UniqueName: \"kubernetes.io/projected/83b2e80c-4c60-4a3e-a9f3-0ce2af747e4f-kube-api-access-7g2xb\") pod \"memcached-0\" (UID: \"83b2e80c-4c60-4a3e-a9f3-0ce2af747e4f\") " pod="openstack/memcached-0" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.660855 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/83b2e80c-4c60-4a3e-a9f3-0ce2af747e4f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"83b2e80c-4c60-4a3e-a9f3-0ce2af747e4f\") " pod="openstack/memcached-0" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.668054 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/83b2e80c-4c60-4a3e-a9f3-0ce2af747e4f-kolla-config\") pod \"memcached-0\" (UID: \"83b2e80c-4c60-4a3e-a9f3-0ce2af747e4f\") " pod="openstack/memcached-0" Jan 26 15:57:30 crc kubenswrapper[4896]: 
I0126 15:57:30.668826 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/83b2e80c-4c60-4a3e-a9f3-0ce2af747e4f-config-data\") pod \"memcached-0\" (UID: \"83b2e80c-4c60-4a3e-a9f3-0ce2af747e4f\") " pod="openstack/memcached-0" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.677068 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/83b2e80c-4c60-4a3e-a9f3-0ce2af747e4f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"83b2e80c-4c60-4a3e-a9f3-0ce2af747e4f\") " pod="openstack/memcached-0" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.682969 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83b2e80c-4c60-4a3e-a9f3-0ce2af747e4f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"83b2e80c-4c60-4a3e-a9f3-0ce2af747e4f\") " pod="openstack/memcached-0" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.701659 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g2xb\" (UniqueName: \"kubernetes.io/projected/83b2e80c-4c60-4a3e-a9f3-0ce2af747e4f-kube-api-access-7g2xb\") pod \"memcached-0\" (UID: \"83b2e80c-4c60-4a3e-a9f3-0ce2af747e4f\") " pod="openstack/memcached-0" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.703350 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-15cd1429-603b-45c0-b923-d8a43f46ffb5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15cd1429-603b-45c0-b923-d8a43f46ffb5\") pod \"openstack-cell1-galera-0\" (UID: \"7a3e4fe3-b61e-4200-acf9-9ba170d68402\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.728530 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.735967 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"78b988fb-f698-4b52-8771-2599b5441229","Type":"ContainerStarted","Data":"601ae89e5d6e0cc6ed34ee64e173defe6dfd85e39b486e67e92543d8fcfba3cd"} Jan 26 15:57:30 crc kubenswrapper[4896]: I0126 15:57:30.825033 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 26 15:57:31 crc kubenswrapper[4896]: I0126 15:57:31.272719 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 26 15:57:31 crc kubenswrapper[4896]: I0126 15:57:31.454945 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 26 15:57:31 crc kubenswrapper[4896]: W0126 15:57:31.470339 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83b2e80c_4c60_4a3e_a9f3_0ce2af747e4f.slice/crio-b91b96bdbc2af987456c33e630fcf6c3f4ecfcf0a1e88df3ba0872d75971cd66 WatchSource:0}: Error finding container b91b96bdbc2af987456c33e630fcf6c3f4ecfcf0a1e88df3ba0872d75971cd66: Status 404 returned error can't find the container with id b91b96bdbc2af987456c33e630fcf6c3f4ecfcf0a1e88df3ba0872d75971cd66 Jan 26 15:57:31 crc kubenswrapper[4896]: I0126 15:57:31.765996 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7a3e4fe3-b61e-4200-acf9-9ba170d68402","Type":"ContainerStarted","Data":"513b2d1f4a39b1cba8d5ab35d555962e945978a440bdbeaa60cdff0813205ec5"} Jan 26 15:57:31 crc kubenswrapper[4896]: I0126 15:57:31.772360 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"83b2e80c-4c60-4a3e-a9f3-0ce2af747e4f","Type":"ContainerStarted","Data":"b91b96bdbc2af987456c33e630fcf6c3f4ecfcf0a1e88df3ba0872d75971cd66"} Jan 
26 15:57:32 crc kubenswrapper[4896]: I0126 15:57:32.347697 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 15:57:32 crc kubenswrapper[4896]: I0126 15:57:32.349468 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 15:57:32 crc kubenswrapper[4896]: I0126 15:57:32.352741 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-2stlw" Jan 26 15:57:32 crc kubenswrapper[4896]: I0126 15:57:32.370741 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 15:57:32 crc kubenswrapper[4896]: I0126 15:57:32.422688 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4qcl\" (UniqueName: \"kubernetes.io/projected/68bd2b73-99a8-427d-a4bf-2648d7580be8-kube-api-access-l4qcl\") pod \"kube-state-metrics-0\" (UID: \"68bd2b73-99a8-427d-a4bf-2648d7580be8\") " pod="openstack/kube-state-metrics-0" Jan 26 15:57:32 crc kubenswrapper[4896]: I0126 15:57:32.632966 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4qcl\" (UniqueName: \"kubernetes.io/projected/68bd2b73-99a8-427d-a4bf-2648d7580be8-kube-api-access-l4qcl\") pod \"kube-state-metrics-0\" (UID: \"68bd2b73-99a8-427d-a4bf-2648d7580be8\") " pod="openstack/kube-state-metrics-0" Jan 26 15:57:32 crc kubenswrapper[4896]: I0126 15:57:32.697266 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4qcl\" (UniqueName: \"kubernetes.io/projected/68bd2b73-99a8-427d-a4bf-2648d7580be8-kube-api-access-l4qcl\") pod \"kube-state-metrics-0\" (UID: \"68bd2b73-99a8-427d-a4bf-2648d7580be8\") " pod="openstack/kube-state-metrics-0" Jan 26 15:57:32 crc kubenswrapper[4896]: I0126 15:57:32.731838 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 15:57:33 crc kubenswrapper[4896]: I0126 15:57:33.494159 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-4s75s"] Jan 26 15:57:33 crc kubenswrapper[4896]: I0126 15:57:33.495543 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-4s75s" Jan 26 15:57:33 crc kubenswrapper[4896]: I0126 15:57:33.499594 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Jan 26 15:57:33 crc kubenswrapper[4896]: I0126 15:57:33.499610 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-6qnj5" Jan 26 15:57:33 crc kubenswrapper[4896]: I0126 15:57:33.517573 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-4s75s"] Jan 26 15:57:33 crc kubenswrapper[4896]: I0126 15:57:33.635862 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s5kql" Jan 26 15:57:33 crc kubenswrapper[4896]: I0126 15:57:33.637815 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s5kql" Jan 26 15:57:33 crc kubenswrapper[4896]: I0126 15:57:33.694091 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl29n\" (UniqueName: \"kubernetes.io/projected/561e12b6-a1fb-407f-ae57-6a28f00f9093-kube-api-access-fl29n\") pod \"observability-ui-dashboards-66cbf594b5-4s75s\" (UID: \"561e12b6-a1fb-407f-ae57-6a28f00f9093\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-4s75s" Jan 26 15:57:33 crc kubenswrapper[4896]: I0126 15:57:33.694187 4896 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/561e12b6-a1fb-407f-ae57-6a28f00f9093-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-4s75s\" (UID: \"561e12b6-a1fb-407f-ae57-6a28f00f9093\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-4s75s" Jan 26 15:57:33 crc kubenswrapper[4896]: I0126 15:57:33.813098 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/561e12b6-a1fb-407f-ae57-6a28f00f9093-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-4s75s\" (UID: \"561e12b6-a1fb-407f-ae57-6a28f00f9093\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-4s75s" Jan 26 15:57:33 crc kubenswrapper[4896]: I0126 15:57:33.813451 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl29n\" (UniqueName: \"kubernetes.io/projected/561e12b6-a1fb-407f-ae57-6a28f00f9093-kube-api-access-fl29n\") pod \"observability-ui-dashboards-66cbf594b5-4s75s\" (UID: \"561e12b6-a1fb-407f-ae57-6a28f00f9093\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-4s75s" Jan 26 15:57:33 crc kubenswrapper[4896]: I0126 15:57:33.864073 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/561e12b6-a1fb-407f-ae57-6a28f00f9093-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-4s75s\" (UID: \"561e12b6-a1fb-407f-ae57-6a28f00f9093\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-4s75s" Jan 26 15:57:33 crc kubenswrapper[4896]: I0126 15:57:33.874466 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl29n\" (UniqueName: \"kubernetes.io/projected/561e12b6-a1fb-407f-ae57-6a28f00f9093-kube-api-access-fl29n\") pod \"observability-ui-dashboards-66cbf594b5-4s75s\" (UID: \"561e12b6-a1fb-407f-ae57-6a28f00f9093\") " 
pod="openshift-operators/observability-ui-dashboards-66cbf594b5-4s75s" Jan 26 15:57:33 crc kubenswrapper[4896]: I0126 15:57:33.989167 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-744899955f-8w6qt"] Jan 26 15:57:33 crc kubenswrapper[4896]: I0126 15:57:33.990610 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-744899955f-8w6qt" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.019640 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.022506 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.025085 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-744899955f-8w6qt"] Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.026708 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.026957 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.028440 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.028687 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.028829 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.028969 4896 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"prometheus-metric-storage" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.029101 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-65lrl" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.038562 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.131369 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-4s75s" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.189892 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.222705 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.222767 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2fe3af4f-2d2a-4343-9def-98c2f120b83e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2fe3af4f-2d2a-4343-9def-98c2f120b83e\") pod \"prometheus-metric-storage-0\" (UID: \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.222816 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-web-config\") pod \"prometheus-metric-storage-0\" (UID: 
\"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.222836 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/da0ad2c4-81d7-4e3c-bd6f-250b3c96e55c-console-serving-cert\") pod \"console-744899955f-8w6qt\" (UID: \"da0ad2c4-81d7-4e3c-bd6f-250b3c96e55c\") " pod="openshift-console/console-744899955f-8w6qt" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.222860 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da0ad2c4-81d7-4e3c-bd6f-250b3c96e55c-trusted-ca-bundle\") pod \"console-744899955f-8w6qt\" (UID: \"da0ad2c4-81d7-4e3c-bd6f-250b3c96e55c\") " pod="openshift-console/console-744899955f-8w6qt" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.222915 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq79t\" (UniqueName: \"kubernetes.io/projected/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-kube-api-access-sq79t\") pod \"prometheus-metric-storage-0\" (UID: \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.222935 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs4wv\" (UniqueName: \"kubernetes.io/projected/da0ad2c4-81d7-4e3c-bd6f-250b3c96e55c-kube-api-access-qs4wv\") pod \"console-744899955f-8w6qt\" (UID: \"da0ad2c4-81d7-4e3c-bd6f-250b3c96e55c\") " pod="openshift-console/console-744899955f-8w6qt" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.222956 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/da0ad2c4-81d7-4e3c-bd6f-250b3c96e55c-oauth-serving-cert\") pod \"console-744899955f-8w6qt\" (UID: \"da0ad2c4-81d7-4e3c-bd6f-250b3c96e55c\") " pod="openshift-console/console-744899955f-8w6qt" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.222981 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.223011 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/da0ad2c4-81d7-4e3c-bd6f-250b3c96e55c-console-oauth-config\") pod \"console-744899955f-8w6qt\" (UID: \"da0ad2c4-81d7-4e3c-bd6f-250b3c96e55c\") " pod="openshift-console/console-744899955f-8w6qt" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.223034 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.223087 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/da0ad2c4-81d7-4e3c-bd6f-250b3c96e55c-console-config\") pod \"console-744899955f-8w6qt\" (UID: \"da0ad2c4-81d7-4e3c-bd6f-250b3c96e55c\") " pod="openshift-console/console-744899955f-8w6qt" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 
15:57:34.223112 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/da0ad2c4-81d7-4e3c-bd6f-250b3c96e55c-service-ca\") pod \"console-744899955f-8w6qt\" (UID: \"da0ad2c4-81d7-4e3c-bd6f-250b3c96e55c\") " pod="openshift-console/console-744899955f-8w6qt" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.223143 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.223161 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.223182 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.223219 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-config\") pod \"prometheus-metric-storage-0\" (UID: \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\") " pod="openstack/prometheus-metric-storage-0" Jan 26 
15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.325753 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/da0ad2c4-81d7-4e3c-bd6f-250b3c96e55c-service-ca\") pod \"console-744899955f-8w6qt\" (UID: \"da0ad2c4-81d7-4e3c-bd6f-250b3c96e55c\") " pod="openshift-console/console-744899955f-8w6qt" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.325805 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.325834 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.325860 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.325897 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-config\") pod \"prometheus-metric-storage-0\" (UID: \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.325924 
4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.325952 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2fe3af4f-2d2a-4343-9def-98c2f120b83e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2fe3af4f-2d2a-4343-9def-98c2f120b83e\") pod \"prometheus-metric-storage-0\" (UID: \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.325990 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.326011 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/da0ad2c4-81d7-4e3c-bd6f-250b3c96e55c-console-serving-cert\") pod \"console-744899955f-8w6qt\" (UID: \"da0ad2c4-81d7-4e3c-bd6f-250b3c96e55c\") " pod="openshift-console/console-744899955f-8w6qt" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.326065 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da0ad2c4-81d7-4e3c-bd6f-250b3c96e55c-trusted-ca-bundle\") pod \"console-744899955f-8w6qt\" (UID: \"da0ad2c4-81d7-4e3c-bd6f-250b3c96e55c\") " pod="openshift-console/console-744899955f-8w6qt" Jan 26 15:57:34 crc kubenswrapper[4896]: 
I0126 15:57:34.326106 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq79t\" (UniqueName: \"kubernetes.io/projected/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-kube-api-access-sq79t\") pod \"prometheus-metric-storage-0\" (UID: \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.326170 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs4wv\" (UniqueName: \"kubernetes.io/projected/da0ad2c4-81d7-4e3c-bd6f-250b3c96e55c-kube-api-access-qs4wv\") pod \"console-744899955f-8w6qt\" (UID: \"da0ad2c4-81d7-4e3c-bd6f-250b3c96e55c\") " pod="openshift-console/console-744899955f-8w6qt" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.326212 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/da0ad2c4-81d7-4e3c-bd6f-250b3c96e55c-oauth-serving-cert\") pod \"console-744899955f-8w6qt\" (UID: \"da0ad2c4-81d7-4e3c-bd6f-250b3c96e55c\") " pod="openshift-console/console-744899955f-8w6qt" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.326232 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.326263 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/da0ad2c4-81d7-4e3c-bd6f-250b3c96e55c-console-oauth-config\") pod \"console-744899955f-8w6qt\" (UID: \"da0ad2c4-81d7-4e3c-bd6f-250b3c96e55c\") " pod="openshift-console/console-744899955f-8w6qt" Jan 26 
15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.326378 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.326428 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/da0ad2c4-81d7-4e3c-bd6f-250b3c96e55c-console-config\") pod \"console-744899955f-8w6qt\" (UID: \"da0ad2c4-81d7-4e3c-bd6f-250b3c96e55c\") " pod="openshift-console/console-744899955f-8w6qt" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.326921 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/da0ad2c4-81d7-4e3c-bd6f-250b3c96e55c-service-ca\") pod \"console-744899955f-8w6qt\" (UID: \"da0ad2c4-81d7-4e3c-bd6f-250b3c96e55c\") " pod="openshift-console/console-744899955f-8w6qt" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.327514 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/da0ad2c4-81d7-4e3c-bd6f-250b3c96e55c-console-config\") pod \"console-744899955f-8w6qt\" (UID: \"da0ad2c4-81d7-4e3c-bd6f-250b3c96e55c\") " pod="openshift-console/console-744899955f-8w6qt" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.328218 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\") " pod="openstack/prometheus-metric-storage-0" Jan 
26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.329349 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.329776 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da0ad2c4-81d7-4e3c-bd6f-250b3c96e55c-trusted-ca-bundle\") pod \"console-744899955f-8w6qt\" (UID: \"da0ad2c4-81d7-4e3c-bd6f-250b3c96e55c\") " pod="openshift-console/console-744899955f-8w6qt" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.330942 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.331314 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.336393 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\") " 
pod="openstack/prometheus-metric-storage-0" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.336789 4896 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.336831 4896 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2fe3af4f-2d2a-4343-9def-98c2f120b83e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2fe3af4f-2d2a-4343-9def-98c2f120b83e\") pod \"prometheus-metric-storage-0\" (UID: \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d76c98445a8aeb600bd17b48c80d5d093689356561d52f83a0ea51fc24e48e6c/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.337091 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/da0ad2c4-81d7-4e3c-bd6f-250b3c96e55c-oauth-serving-cert\") pod \"console-744899955f-8w6qt\" (UID: \"da0ad2c4-81d7-4e3c-bd6f-250b3c96e55c\") " pod="openshift-console/console-744899955f-8w6qt" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.337495 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/da0ad2c4-81d7-4e3c-bd6f-250b3c96e55c-console-serving-cert\") pod \"console-744899955f-8w6qt\" (UID: \"da0ad2c4-81d7-4e3c-bd6f-250b3c96e55c\") " pod="openshift-console/console-744899955f-8w6qt" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.337658 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\") " pod="openstack/prometheus-metric-storage-0" Jan 26 
15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.338212 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.338733 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/da0ad2c4-81d7-4e3c-bd6f-250b3c96e55c-console-oauth-config\") pod \"console-744899955f-8w6qt\" (UID: \"da0ad2c4-81d7-4e3c-bd6f-250b3c96e55c\") " pod="openshift-console/console-744899955f-8w6qt" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.342635 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-config\") pod \"prometheus-metric-storage-0\" (UID: \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.349012 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq79t\" (UniqueName: \"kubernetes.io/projected/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-kube-api-access-sq79t\") pod \"prometheus-metric-storage-0\" (UID: \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.357533 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs4wv\" (UniqueName: \"kubernetes.io/projected/da0ad2c4-81d7-4e3c-bd6f-250b3c96e55c-kube-api-access-qs4wv\") pod \"console-744899955f-8w6qt\" (UID: \"da0ad2c4-81d7-4e3c-bd6f-250b3c96e55c\") " pod="openshift-console/console-744899955f-8w6qt" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.405782 4896 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"pvc-2fe3af4f-2d2a-4343-9def-98c2f120b83e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2fe3af4f-2d2a-4343-9def-98c2f120b83e\") pod \"prometheus-metric-storage-0\" (UID: \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.470181 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-744899955f-8w6qt" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.498177 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 15:57:34 crc kubenswrapper[4896]: I0126 15:57:34.774532 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s5kql" podUID="7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5" containerName="registry-server" probeResult="failure" output=< Jan 26 15:57:34 crc kubenswrapper[4896]: timeout: failed to connect service ":50051" within 1s Jan 26 15:57:34 crc kubenswrapper[4896]: > Jan 26 15:57:36 crc kubenswrapper[4896]: I0126 15:57:36.823746 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-c9bzf"] Jan 26 15:57:36 crc kubenswrapper[4896]: I0126 15:57:36.825900 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-c9bzf" Jan 26 15:57:36 crc kubenswrapper[4896]: I0126 15:57:36.829198 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-4sdzp" Jan 26 15:57:36 crc kubenswrapper[4896]: I0126 15:57:36.830170 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 26 15:57:36 crc kubenswrapper[4896]: I0126 15:57:36.831439 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 26 15:57:36 crc kubenswrapper[4896]: I0126 15:57:36.838808 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f24a2e9c-671f-48a5-a5f5-55b864b17d19-var-run\") pod \"ovn-controller-c9bzf\" (UID: \"f24a2e9c-671f-48a5-a5f5-55b864b17d19\") " pod="openstack/ovn-controller-c9bzf" Jan 26 15:57:36 crc kubenswrapper[4896]: I0126 15:57:36.838860 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw8k6\" (UniqueName: \"kubernetes.io/projected/f24a2e9c-671f-48a5-a5f5-55b864b17d19-kube-api-access-fw8k6\") pod \"ovn-controller-c9bzf\" (UID: \"f24a2e9c-671f-48a5-a5f5-55b864b17d19\") " pod="openstack/ovn-controller-c9bzf" Jan 26 15:57:36 crc kubenswrapper[4896]: I0126 15:57:36.838931 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f24a2e9c-671f-48a5-a5f5-55b864b17d19-var-log-ovn\") pod \"ovn-controller-c9bzf\" (UID: \"f24a2e9c-671f-48a5-a5f5-55b864b17d19\") " pod="openstack/ovn-controller-c9bzf" Jan 26 15:57:36 crc kubenswrapper[4896]: I0126 15:57:36.838950 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f24a2e9c-671f-48a5-a5f5-55b864b17d19-ovn-controller-tls-certs\") pod \"ovn-controller-c9bzf\" (UID: \"f24a2e9c-671f-48a5-a5f5-55b864b17d19\") " pod="openstack/ovn-controller-c9bzf" Jan 26 15:57:36 crc kubenswrapper[4896]: I0126 15:57:36.838970 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f24a2e9c-671f-48a5-a5f5-55b864b17d19-scripts\") pod \"ovn-controller-c9bzf\" (UID: \"f24a2e9c-671f-48a5-a5f5-55b864b17d19\") " pod="openstack/ovn-controller-c9bzf" Jan 26 15:57:36 crc kubenswrapper[4896]: I0126 15:57:36.839043 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f24a2e9c-671f-48a5-a5f5-55b864b17d19-combined-ca-bundle\") pod \"ovn-controller-c9bzf\" (UID: \"f24a2e9c-671f-48a5-a5f5-55b864b17d19\") " pod="openstack/ovn-controller-c9bzf" Jan 26 15:57:36 crc kubenswrapper[4896]: I0126 15:57:36.839108 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f24a2e9c-671f-48a5-a5f5-55b864b17d19-var-run-ovn\") pod \"ovn-controller-c9bzf\" (UID: \"f24a2e9c-671f-48a5-a5f5-55b864b17d19\") " pod="openstack/ovn-controller-c9bzf" Jan 26 15:57:36 crc kubenswrapper[4896]: I0126 15:57:36.839226 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-c9bzf"] Jan 26 15:57:36 crc kubenswrapper[4896]: I0126 15:57:36.884392 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-hlm9m"] Jan 26 15:57:36 crc kubenswrapper[4896]: I0126 15:57:36.889215 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-hlm9m" Jan 26 15:57:36 crc kubenswrapper[4896]: I0126 15:57:36.895359 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-hlm9m"] Jan 26 15:57:36 crc kubenswrapper[4896]: I0126 15:57:36.914432 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 26 15:57:36 crc kubenswrapper[4896]: I0126 15:57:36.920775 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 26 15:57:36 crc kubenswrapper[4896]: I0126 15:57:36.933681 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 26 15:57:36 crc kubenswrapper[4896]: I0126 15:57:36.933867 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-2xpds" Jan 26 15:57:36 crc kubenswrapper[4896]: I0126 15:57:36.933958 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 26 15:57:36 crc kubenswrapper[4896]: I0126 15:57:36.934064 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 26 15:57:36 crc kubenswrapper[4896]: I0126 15:57:36.934839 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 26 15:57:36 crc kubenswrapper[4896]: I0126 15:57:36.940647 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f24a2e9c-671f-48a5-a5f5-55b864b17d19-combined-ca-bundle\") pod \"ovn-controller-c9bzf\" (UID: \"f24a2e9c-671f-48a5-a5f5-55b864b17d19\") " pod="openstack/ovn-controller-c9bzf" Jan 26 15:57:36 crc kubenswrapper[4896]: I0126 15:57:36.940757 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/97576f52-b567-4334-844b-bd9ae73a82b7-etc-ovs\") pod \"ovn-controller-ovs-hlm9m\" (UID: \"97576f52-b567-4334-844b-bd9ae73a82b7\") " pod="openstack/ovn-controller-ovs-hlm9m" Jan 26 15:57:36 crc kubenswrapper[4896]: I0126 15:57:36.941406 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f24a2e9c-671f-48a5-a5f5-55b864b17d19-var-run-ovn\") pod \"ovn-controller-c9bzf\" (UID: \"f24a2e9c-671f-48a5-a5f5-55b864b17d19\") " pod="openstack/ovn-controller-c9bzf" Jan 26 15:57:36 crc kubenswrapper[4896]: I0126 15:57:36.940857 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f24a2e9c-671f-48a5-a5f5-55b864b17d19-var-run-ovn\") pod \"ovn-controller-c9bzf\" (UID: \"f24a2e9c-671f-48a5-a5f5-55b864b17d19\") " pod="openstack/ovn-controller-c9bzf" Jan 26 15:57:36 crc kubenswrapper[4896]: I0126 15:57:36.942340 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/97576f52-b567-4334-844b-bd9ae73a82b7-var-log\") pod \"ovn-controller-ovs-hlm9m\" (UID: \"97576f52-b567-4334-844b-bd9ae73a82b7\") " pod="openstack/ovn-controller-ovs-hlm9m" Jan 26 15:57:36 crc kubenswrapper[4896]: I0126 15:57:36.942441 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f24a2e9c-671f-48a5-a5f5-55b864b17d19-var-run\") pod \"ovn-controller-c9bzf\" (UID: \"f24a2e9c-671f-48a5-a5f5-55b864b17d19\") " pod="openstack/ovn-controller-c9bzf" Jan 26 15:57:36 crc kubenswrapper[4896]: I0126 15:57:36.942490 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw8k6\" (UniqueName: \"kubernetes.io/projected/f24a2e9c-671f-48a5-a5f5-55b864b17d19-kube-api-access-fw8k6\") pod \"ovn-controller-c9bzf\" (UID: 
\"f24a2e9c-671f-48a5-a5f5-55b864b17d19\") " pod="openstack/ovn-controller-c9bzf" Jan 26 15:57:36 crc kubenswrapper[4896]: I0126 15:57:36.942535 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/97576f52-b567-4334-844b-bd9ae73a82b7-var-lib\") pod \"ovn-controller-ovs-hlm9m\" (UID: \"97576f52-b567-4334-844b-bd9ae73a82b7\") " pod="openstack/ovn-controller-ovs-hlm9m" Jan 26 15:57:36 crc kubenswrapper[4896]: I0126 15:57:36.942611 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f24a2e9c-671f-48a5-a5f5-55b864b17d19-var-log-ovn\") pod \"ovn-controller-c9bzf\" (UID: \"f24a2e9c-671f-48a5-a5f5-55b864b17d19\") " pod="openstack/ovn-controller-c9bzf" Jan 26 15:57:36 crc kubenswrapper[4896]: I0126 15:57:36.942657 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/f24a2e9c-671f-48a5-a5f5-55b864b17d19-ovn-controller-tls-certs\") pod \"ovn-controller-c9bzf\" (UID: \"f24a2e9c-671f-48a5-a5f5-55b864b17d19\") " pod="openstack/ovn-controller-c9bzf" Jan 26 15:57:36 crc kubenswrapper[4896]: I0126 15:57:36.942727 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f24a2e9c-671f-48a5-a5f5-55b864b17d19-scripts\") pod \"ovn-controller-c9bzf\" (UID: \"f24a2e9c-671f-48a5-a5f5-55b864b17d19\") " pod="openstack/ovn-controller-c9bzf" Jan 26 15:57:36 crc kubenswrapper[4896]: I0126 15:57:36.942748 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/97576f52-b567-4334-844b-bd9ae73a82b7-scripts\") pod \"ovn-controller-ovs-hlm9m\" (UID: \"97576f52-b567-4334-844b-bd9ae73a82b7\") " pod="openstack/ovn-controller-ovs-hlm9m" Jan 26 15:57:36 crc 
kubenswrapper[4896]: I0126 15:57:36.942800 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/97576f52-b567-4334-844b-bd9ae73a82b7-var-run\") pod \"ovn-controller-ovs-hlm9m\" (UID: \"97576f52-b567-4334-844b-bd9ae73a82b7\") " pod="openstack/ovn-controller-ovs-hlm9m" Jan 26 15:57:36 crc kubenswrapper[4896]: I0126 15:57:36.942897 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g52vr\" (UniqueName: \"kubernetes.io/projected/97576f52-b567-4334-844b-bd9ae73a82b7-kube-api-access-g52vr\") pod \"ovn-controller-ovs-hlm9m\" (UID: \"97576f52-b567-4334-844b-bd9ae73a82b7\") " pod="openstack/ovn-controller-ovs-hlm9m" Jan 26 15:57:36 crc kubenswrapper[4896]: I0126 15:57:36.943265 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f24a2e9c-671f-48a5-a5f5-55b864b17d19-var-log-ovn\") pod \"ovn-controller-c9bzf\" (UID: \"f24a2e9c-671f-48a5-a5f5-55b864b17d19\") " pod="openstack/ovn-controller-c9bzf" Jan 26 15:57:36 crc kubenswrapper[4896]: I0126 15:57:36.943379 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f24a2e9c-671f-48a5-a5f5-55b864b17d19-var-run\") pod \"ovn-controller-c9bzf\" (UID: \"f24a2e9c-671f-48a5-a5f5-55b864b17d19\") " pod="openstack/ovn-controller-c9bzf" Jan 26 15:57:36 crc kubenswrapper[4896]: I0126 15:57:36.945271 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f24a2e9c-671f-48a5-a5f5-55b864b17d19-scripts\") pod \"ovn-controller-c9bzf\" (UID: \"f24a2e9c-671f-48a5-a5f5-55b864b17d19\") " pod="openstack/ovn-controller-c9bzf" Jan 26 15:57:36 crc kubenswrapper[4896]: I0126 15:57:36.946497 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/f24a2e9c-671f-48a5-a5f5-55b864b17d19-ovn-controller-tls-certs\") pod \"ovn-controller-c9bzf\" (UID: \"f24a2e9c-671f-48a5-a5f5-55b864b17d19\") " pod="openstack/ovn-controller-c9bzf" Jan 26 15:57:36 crc kubenswrapper[4896]: I0126 15:57:36.948468 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f24a2e9c-671f-48a5-a5f5-55b864b17d19-combined-ca-bundle\") pod \"ovn-controller-c9bzf\" (UID: \"f24a2e9c-671f-48a5-a5f5-55b864b17d19\") " pod="openstack/ovn-controller-c9bzf" Jan 26 15:57:36 crc kubenswrapper[4896]: I0126 15:57:36.963568 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw8k6\" (UniqueName: \"kubernetes.io/projected/f24a2e9c-671f-48a5-a5f5-55b864b17d19-kube-api-access-fw8k6\") pod \"ovn-controller-c9bzf\" (UID: \"f24a2e9c-671f-48a5-a5f5-55b864b17d19\") " pod="openstack/ovn-controller-c9bzf" Jan 26 15:57:37 crc kubenswrapper[4896]: I0126 15:57:37.015771 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 26 15:57:37 crc kubenswrapper[4896]: I0126 15:57:37.044414 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g52vr\" (UniqueName: \"kubernetes.io/projected/97576f52-b567-4334-844b-bd9ae73a82b7-kube-api-access-g52vr\") pod \"ovn-controller-ovs-hlm9m\" (UID: \"97576f52-b567-4334-844b-bd9ae73a82b7\") " pod="openstack/ovn-controller-ovs-hlm9m" Jan 26 15:57:37 crc kubenswrapper[4896]: I0126 15:57:37.044796 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56852f16-c116-4ec4-b22f-16952ac363b3-config\") pod \"ovsdbserver-nb-0\" (UID: \"56852f16-c116-4ec4-b22f-16952ac363b3\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:57:37 crc kubenswrapper[4896]: I0126 15:57:37.045486 4896 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghp82\" (UniqueName: \"kubernetes.io/projected/56852f16-c116-4ec4-b22f-16952ac363b3-kube-api-access-ghp82\") pod \"ovsdbserver-nb-0\" (UID: \"56852f16-c116-4ec4-b22f-16952ac363b3\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:57:37 crc kubenswrapper[4896]: I0126 15:57:37.045525 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/97576f52-b567-4334-844b-bd9ae73a82b7-etc-ovs\") pod \"ovn-controller-ovs-hlm9m\" (UID: \"97576f52-b567-4334-844b-bd9ae73a82b7\") " pod="openstack/ovn-controller-ovs-hlm9m" Jan 26 15:57:37 crc kubenswrapper[4896]: I0126 15:57:37.045556 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-475450c4-8c9c-42ea-aec6-d030b3f7a931\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-475450c4-8c9c-42ea-aec6-d030b3f7a931\") pod \"ovsdbserver-nb-0\" (UID: \"56852f16-c116-4ec4-b22f-16952ac363b3\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:57:37 crc kubenswrapper[4896]: I0126 15:57:37.045605 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/56852f16-c116-4ec4-b22f-16952ac363b3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"56852f16-c116-4ec4-b22f-16952ac363b3\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:57:37 crc kubenswrapper[4896]: I0126 15:57:37.045651 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/97576f52-b567-4334-844b-bd9ae73a82b7-var-log\") pod \"ovn-controller-ovs-hlm9m\" (UID: \"97576f52-b567-4334-844b-bd9ae73a82b7\") " pod="openstack/ovn-controller-ovs-hlm9m" Jan 26 15:57:37 crc kubenswrapper[4896]: I0126 15:57:37.045693 4896 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/56852f16-c116-4ec4-b22f-16952ac363b3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"56852f16-c116-4ec4-b22f-16952ac363b3\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:57:37 crc kubenswrapper[4896]: I0126 15:57:37.045753 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/56852f16-c116-4ec4-b22f-16952ac363b3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"56852f16-c116-4ec4-b22f-16952ac363b3\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:57:37 crc kubenswrapper[4896]: I0126 15:57:37.045786 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/56852f16-c116-4ec4-b22f-16952ac363b3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"56852f16-c116-4ec4-b22f-16952ac363b3\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:57:37 crc kubenswrapper[4896]: I0126 15:57:37.045820 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/97576f52-b567-4334-844b-bd9ae73a82b7-var-lib\") pod \"ovn-controller-ovs-hlm9m\" (UID: \"97576f52-b567-4334-844b-bd9ae73a82b7\") " pod="openstack/ovn-controller-ovs-hlm9m" Jan 26 15:57:37 crc kubenswrapper[4896]: I0126 15:57:37.045890 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/97576f52-b567-4334-844b-bd9ae73a82b7-scripts\") pod \"ovn-controller-ovs-hlm9m\" (UID: \"97576f52-b567-4334-844b-bd9ae73a82b7\") " pod="openstack/ovn-controller-ovs-hlm9m" Jan 26 15:57:37 crc kubenswrapper[4896]: I0126 15:57:37.045930 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/97576f52-b567-4334-844b-bd9ae73a82b7-var-run\") pod \"ovn-controller-ovs-hlm9m\" (UID: \"97576f52-b567-4334-844b-bd9ae73a82b7\") " pod="openstack/ovn-controller-ovs-hlm9m" Jan 26 15:57:37 crc kubenswrapper[4896]: I0126 15:57:37.045957 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56852f16-c116-4ec4-b22f-16952ac363b3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"56852f16-c116-4ec4-b22f-16952ac363b3\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:57:37 crc kubenswrapper[4896]: I0126 15:57:37.046306 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/97576f52-b567-4334-844b-bd9ae73a82b7-etc-ovs\") pod \"ovn-controller-ovs-hlm9m\" (UID: \"97576f52-b567-4334-844b-bd9ae73a82b7\") " pod="openstack/ovn-controller-ovs-hlm9m" Jan 26 15:57:37 crc kubenswrapper[4896]: I0126 15:57:37.046366 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/97576f52-b567-4334-844b-bd9ae73a82b7-var-log\") pod \"ovn-controller-ovs-hlm9m\" (UID: \"97576f52-b567-4334-844b-bd9ae73a82b7\") " pod="openstack/ovn-controller-ovs-hlm9m" Jan 26 15:57:37 crc kubenswrapper[4896]: I0126 15:57:37.046516 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/97576f52-b567-4334-844b-bd9ae73a82b7-var-lib\") pod \"ovn-controller-ovs-hlm9m\" (UID: \"97576f52-b567-4334-844b-bd9ae73a82b7\") " pod="openstack/ovn-controller-ovs-hlm9m" Jan 26 15:57:37 crc kubenswrapper[4896]: I0126 15:57:37.046640 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/97576f52-b567-4334-844b-bd9ae73a82b7-var-run\") pod \"ovn-controller-ovs-hlm9m\" (UID: \"97576f52-b567-4334-844b-bd9ae73a82b7\") " 
pod="openstack/ovn-controller-ovs-hlm9m"
Jan 26 15:57:37 crc kubenswrapper[4896]: I0126 15:57:37.051802 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/97576f52-b567-4334-844b-bd9ae73a82b7-scripts\") pod \"ovn-controller-ovs-hlm9m\" (UID: \"97576f52-b567-4334-844b-bd9ae73a82b7\") " pod="openstack/ovn-controller-ovs-hlm9m"
Jan 26 15:57:37 crc kubenswrapper[4896]: I0126 15:57:37.074558 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g52vr\" (UniqueName: \"kubernetes.io/projected/97576f52-b567-4334-844b-bd9ae73a82b7-kube-api-access-g52vr\") pod \"ovn-controller-ovs-hlm9m\" (UID: \"97576f52-b567-4334-844b-bd9ae73a82b7\") " pod="openstack/ovn-controller-ovs-hlm9m"
Jan 26 15:57:37 crc kubenswrapper[4896]: I0126 15:57:37.153897 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghp82\" (UniqueName: \"kubernetes.io/projected/56852f16-c116-4ec4-b22f-16952ac363b3-kube-api-access-ghp82\") pod \"ovsdbserver-nb-0\" (UID: \"56852f16-c116-4ec4-b22f-16952ac363b3\") " pod="openstack/ovsdbserver-nb-0"
Jan 26 15:57:37 crc kubenswrapper[4896]: I0126 15:57:37.153987 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-475450c4-8c9c-42ea-aec6-d030b3f7a931\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-475450c4-8c9c-42ea-aec6-d030b3f7a931\") pod \"ovsdbserver-nb-0\" (UID: \"56852f16-c116-4ec4-b22f-16952ac363b3\") " pod="openstack/ovsdbserver-nb-0"
Jan 26 15:57:37 crc kubenswrapper[4896]: I0126 15:57:37.154074 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/56852f16-c116-4ec4-b22f-16952ac363b3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"56852f16-c116-4ec4-b22f-16952ac363b3\") " pod="openstack/ovsdbserver-nb-0"
Jan 26 15:57:37 crc kubenswrapper[4896]: I0126 15:57:37.154151 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/56852f16-c116-4ec4-b22f-16952ac363b3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"56852f16-c116-4ec4-b22f-16952ac363b3\") " pod="openstack/ovsdbserver-nb-0"
Jan 26 15:57:37 crc kubenswrapper[4896]: I0126 15:57:37.154216 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/56852f16-c116-4ec4-b22f-16952ac363b3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"56852f16-c116-4ec4-b22f-16952ac363b3\") " pod="openstack/ovsdbserver-nb-0"
Jan 26 15:57:37 crc kubenswrapper[4896]: I0126 15:57:37.154251 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/56852f16-c116-4ec4-b22f-16952ac363b3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"56852f16-c116-4ec4-b22f-16952ac363b3\") " pod="openstack/ovsdbserver-nb-0"
Jan 26 15:57:37 crc kubenswrapper[4896]: I0126 15:57:37.154362 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56852f16-c116-4ec4-b22f-16952ac363b3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"56852f16-c116-4ec4-b22f-16952ac363b3\") " pod="openstack/ovsdbserver-nb-0"
Jan 26 15:57:37 crc kubenswrapper[4896]: I0126 15:57:37.154464 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56852f16-c116-4ec4-b22f-16952ac363b3-config\") pod \"ovsdbserver-nb-0\" (UID: \"56852f16-c116-4ec4-b22f-16952ac363b3\") " pod="openstack/ovsdbserver-nb-0"
Jan 26 15:57:37 crc kubenswrapper[4896]: I0126 15:57:37.155216 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/56852f16-c116-4ec4-b22f-16952ac363b3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"56852f16-c116-4ec4-b22f-16952ac363b3\") " pod="openstack/ovsdbserver-nb-0"
Jan 26 15:57:37 crc kubenswrapper[4896]: I0126 15:57:37.156425 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56852f16-c116-4ec4-b22f-16952ac363b3-config\") pod \"ovsdbserver-nb-0\" (UID: \"56852f16-c116-4ec4-b22f-16952ac363b3\") " pod="openstack/ovsdbserver-nb-0"
Jan 26 15:57:37 crc kubenswrapper[4896]: I0126 15:57:37.157845 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/56852f16-c116-4ec4-b22f-16952ac363b3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"56852f16-c116-4ec4-b22f-16952ac363b3\") " pod="openstack/ovsdbserver-nb-0"
Jan 26 15:57:37 crc kubenswrapper[4896]: I0126 15:57:37.159518 4896 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 26 15:57:37 crc kubenswrapper[4896]: I0126 15:57:37.159546 4896 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-475450c4-8c9c-42ea-aec6-d030b3f7a931\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-475450c4-8c9c-42ea-aec6-d030b3f7a931\") pod \"ovsdbserver-nb-0\" (UID: \"56852f16-c116-4ec4-b22f-16952ac363b3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ab7181b7e60e80e67bed36093898d850a65ad1ed061170bd0d02ec66fdcc6f2e/globalmount\"" pod="openstack/ovsdbserver-nb-0"
Jan 26 15:57:37 crc kubenswrapper[4896]: I0126 15:57:37.172356 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/56852f16-c116-4ec4-b22f-16952ac363b3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"56852f16-c116-4ec4-b22f-16952ac363b3\") " pod="openstack/ovsdbserver-nb-0"
Jan 26 15:57:37 crc kubenswrapper[4896]: I0126 15:57:37.178968 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-c9bzf"
Jan 26 15:57:37 crc kubenswrapper[4896]: I0126 15:57:37.179006 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghp82\" (UniqueName: \"kubernetes.io/projected/56852f16-c116-4ec4-b22f-16952ac363b3-kube-api-access-ghp82\") pod \"ovsdbserver-nb-0\" (UID: \"56852f16-c116-4ec4-b22f-16952ac363b3\") " pod="openstack/ovsdbserver-nb-0"
Jan 26 15:57:37 crc kubenswrapper[4896]: I0126 15:57:37.182769 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/56852f16-c116-4ec4-b22f-16952ac363b3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"56852f16-c116-4ec4-b22f-16952ac363b3\") " pod="openstack/ovsdbserver-nb-0"
Jan 26 15:57:37 crc kubenswrapper[4896]: I0126 15:57:37.194987 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56852f16-c116-4ec4-b22f-16952ac363b3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"56852f16-c116-4ec4-b22f-16952ac363b3\") " pod="openstack/ovsdbserver-nb-0"
Jan 26 15:57:37 crc kubenswrapper[4896]: I0126 15:57:37.212117 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-hlm9m"
Jan 26 15:57:37 crc kubenswrapper[4896]: I0126 15:57:37.225541 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-475450c4-8c9c-42ea-aec6-d030b3f7a931\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-475450c4-8c9c-42ea-aec6-d030b3f7a931\") pod \"ovsdbserver-nb-0\" (UID: \"56852f16-c116-4ec4-b22f-16952ac363b3\") " pod="openstack/ovsdbserver-nb-0"
Jan 26 15:57:37 crc kubenswrapper[4896]: I0126 15:57:37.348092 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 26 15:57:39 crc kubenswrapper[4896]: I0126 15:57:39.935319 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 26 15:57:39 crc kubenswrapper[4896]: I0126 15:57:39.938316 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Jan 26 15:57:39 crc kubenswrapper[4896]: I0126 15:57:39.940222 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-4z4dv"
Jan 26 15:57:39 crc kubenswrapper[4896]: I0126 15:57:39.943170 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Jan 26 15:57:39 crc kubenswrapper[4896]: I0126 15:57:39.954904 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Jan 26 15:57:39 crc kubenswrapper[4896]: I0126 15:57:39.959778 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 26 15:57:39 crc kubenswrapper[4896]: I0126 15:57:39.963690 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Jan 26 15:57:40 crc kubenswrapper[4896]: I0126 15:57:40.130807 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6h8g\" (UniqueName: \"kubernetes.io/projected/97b874a0-24c5-4c30-ae9b-33b380c5a99b-kube-api-access-b6h8g\") pod \"ovsdbserver-sb-0\" (UID: \"97b874a0-24c5-4c30-ae9b-33b380c5a99b\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 15:57:40 crc kubenswrapper[4896]: I0126 15:57:40.131163 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/97b874a0-24c5-4c30-ae9b-33b380c5a99b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"97b874a0-24c5-4c30-ae9b-33b380c5a99b\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 15:57:40 crc kubenswrapper[4896]: I0126 15:57:40.131200 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-823928c5-5f39-4cdc-97d4-fc706d33f3d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-823928c5-5f39-4cdc-97d4-fc706d33f3d4\") pod \"ovsdbserver-sb-0\" (UID: \"97b874a0-24c5-4c30-ae9b-33b380c5a99b\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 15:57:40 crc kubenswrapper[4896]: I0126 15:57:40.131221 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/97b874a0-24c5-4c30-ae9b-33b380c5a99b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"97b874a0-24c5-4c30-ae9b-33b380c5a99b\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 15:57:40 crc kubenswrapper[4896]: I0126 15:57:40.131234 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/97b874a0-24c5-4c30-ae9b-33b380c5a99b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"97b874a0-24c5-4c30-ae9b-33b380c5a99b\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 15:57:40 crc kubenswrapper[4896]: I0126 15:57:40.131275 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97b874a0-24c5-4c30-ae9b-33b380c5a99b-config\") pod \"ovsdbserver-sb-0\" (UID: \"97b874a0-24c5-4c30-ae9b-33b380c5a99b\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 15:57:40 crc kubenswrapper[4896]: I0126 15:57:40.131306 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97b874a0-24c5-4c30-ae9b-33b380c5a99b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"97b874a0-24c5-4c30-ae9b-33b380c5a99b\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 15:57:40 crc kubenswrapper[4896]: I0126 15:57:40.131339 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/97b874a0-24c5-4c30-ae9b-33b380c5a99b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"97b874a0-24c5-4c30-ae9b-33b380c5a99b\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 15:57:40 crc kubenswrapper[4896]: I0126 15:57:40.234025 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/97b874a0-24c5-4c30-ae9b-33b380c5a99b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"97b874a0-24c5-4c30-ae9b-33b380c5a99b\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 15:57:40 crc kubenswrapper[4896]: I0126 15:57:40.234113 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-823928c5-5f39-4cdc-97d4-fc706d33f3d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-823928c5-5f39-4cdc-97d4-fc706d33f3d4\") pod \"ovsdbserver-sb-0\" (UID: \"97b874a0-24c5-4c30-ae9b-33b380c5a99b\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 15:57:40 crc kubenswrapper[4896]: I0126 15:57:40.234143 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/97b874a0-24c5-4c30-ae9b-33b380c5a99b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"97b874a0-24c5-4c30-ae9b-33b380c5a99b\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 15:57:40 crc kubenswrapper[4896]: I0126 15:57:40.234165 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/97b874a0-24c5-4c30-ae9b-33b380c5a99b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"97b874a0-24c5-4c30-ae9b-33b380c5a99b\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 15:57:40 crc kubenswrapper[4896]: I0126 15:57:40.234222 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97b874a0-24c5-4c30-ae9b-33b380c5a99b-config\") pod \"ovsdbserver-sb-0\" (UID: \"97b874a0-24c5-4c30-ae9b-33b380c5a99b\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 15:57:40 crc kubenswrapper[4896]: I0126 15:57:40.234266 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97b874a0-24c5-4c30-ae9b-33b380c5a99b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"97b874a0-24c5-4c30-ae9b-33b380c5a99b\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 15:57:40 crc kubenswrapper[4896]: I0126 15:57:40.234312 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/97b874a0-24c5-4c30-ae9b-33b380c5a99b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"97b874a0-24c5-4c30-ae9b-33b380c5a99b\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 15:57:40 crc kubenswrapper[4896]: I0126 15:57:40.234433 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6h8g\" (UniqueName: \"kubernetes.io/projected/97b874a0-24c5-4c30-ae9b-33b380c5a99b-kube-api-access-b6h8g\") pod \"ovsdbserver-sb-0\" (UID: \"97b874a0-24c5-4c30-ae9b-33b380c5a99b\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 15:57:40 crc kubenswrapper[4896]: I0126 15:57:40.236025 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/97b874a0-24c5-4c30-ae9b-33b380c5a99b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"97b874a0-24c5-4c30-ae9b-33b380c5a99b\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 15:57:40 crc kubenswrapper[4896]: I0126 15:57:40.242506 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97b874a0-24c5-4c30-ae9b-33b380c5a99b-config\") pod \"ovsdbserver-sb-0\" (UID: \"97b874a0-24c5-4c30-ae9b-33b380c5a99b\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 15:57:40 crc kubenswrapper[4896]: I0126 15:57:40.248499 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/97b874a0-24c5-4c30-ae9b-33b380c5a99b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"97b874a0-24c5-4c30-ae9b-33b380c5a99b\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 15:57:40 crc kubenswrapper[4896]: I0126 15:57:40.251516 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/97b874a0-24c5-4c30-ae9b-33b380c5a99b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"97b874a0-24c5-4c30-ae9b-33b380c5a99b\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 15:57:40 crc kubenswrapper[4896]: I0126 15:57:40.251983 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/97b874a0-24c5-4c30-ae9b-33b380c5a99b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"97b874a0-24c5-4c30-ae9b-33b380c5a99b\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 15:57:40 crc kubenswrapper[4896]: I0126 15:57:40.271717 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6h8g\" (UniqueName: \"kubernetes.io/projected/97b874a0-24c5-4c30-ae9b-33b380c5a99b-kube-api-access-b6h8g\") pod \"ovsdbserver-sb-0\" (UID: \"97b874a0-24c5-4c30-ae9b-33b380c5a99b\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 15:57:40 crc kubenswrapper[4896]: I0126 15:57:40.276017 4896 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 26 15:57:40 crc kubenswrapper[4896]: I0126 15:57:40.276071 4896 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-823928c5-5f39-4cdc-97d4-fc706d33f3d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-823928c5-5f39-4cdc-97d4-fc706d33f3d4\") pod \"ovsdbserver-sb-0\" (UID: \"97b874a0-24c5-4c30-ae9b-33b380c5a99b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/db3258e2e19fd850b235e22d91793363344486756fac66e04b4aa5ae932b271b/globalmount\"" pod="openstack/ovsdbserver-sb-0"
Jan 26 15:57:40 crc kubenswrapper[4896]: I0126 15:57:40.280397 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97b874a0-24c5-4c30-ae9b-33b380c5a99b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"97b874a0-24c5-4c30-ae9b-33b380c5a99b\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 15:57:40 crc kubenswrapper[4896]: I0126 15:57:40.308074 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 26 15:57:40 crc kubenswrapper[4896]: I0126 15:57:40.323154 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-823928c5-5f39-4cdc-97d4-fc706d33f3d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-823928c5-5f39-4cdc-97d4-fc706d33f3d4\") pod \"ovsdbserver-sb-0\" (UID: \"97b874a0-24c5-4c30-ae9b-33b380c5a99b\") " pod="openstack/ovsdbserver-sb-0"
Jan 26 15:57:40 crc kubenswrapper[4896]: I0126 15:57:40.622924 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Jan 26 15:57:40 crc kubenswrapper[4896]: I0126 15:57:40.670563 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-4s75s"]
Jan 26 15:57:40 crc kubenswrapper[4896]: W0126 15:57:40.706479 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod561e12b6_a1fb_407f_ae57_6a28f00f9093.slice/crio-d5af951122f8f9d79a4648d95f1fe5168efeeaa51ce2c7fc7085862541c16e52 WatchSource:0}: Error finding container d5af951122f8f9d79a4648d95f1fe5168efeeaa51ce2c7fc7085862541c16e52: Status 404 returned error can't find the container with id d5af951122f8f9d79a4648d95f1fe5168efeeaa51ce2c7fc7085862541c16e52
Jan 26 15:57:41 crc kubenswrapper[4896]: I0126 15:57:41.006238 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-c9bzf"]
Jan 26 15:57:41 crc kubenswrapper[4896]: I0126 15:57:41.023280 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-744899955f-8w6qt"]
Jan 26 15:57:41 crc kubenswrapper[4896]: W0126 15:57:41.031781 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda0ad2c4_81d7_4e3c_bd6f_250b3c96e55c.slice/crio-18fea03c557525f6fbf72543464f1bac00a41fe6ff7b8137235830de530167f7 WatchSource:0}: Error finding container 18fea03c557525f6fbf72543464f1bac00a41fe6ff7b8137235830de530167f7: Status 404 returned error can't find the container with id 18fea03c557525f6fbf72543464f1bac00a41fe6ff7b8137235830de530167f7
Jan 26 15:57:41 crc kubenswrapper[4896]: I0126 15:57:41.063927 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-4s75s" event={"ID":"561e12b6-a1fb-407f-ae57-6a28f00f9093","Type":"ContainerStarted","Data":"d5af951122f8f9d79a4648d95f1fe5168efeeaa51ce2c7fc7085862541c16e52"}
Jan 26 15:57:41 crc kubenswrapper[4896]: I0126 15:57:41.065936 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"68bd2b73-99a8-427d-a4bf-2648d7580be8","Type":"ContainerStarted","Data":"31d5b0ab67b035817394ac73de0ba888304bc8aea87a948947e65d6916753b12"}
Jan 26 15:57:41 crc kubenswrapper[4896]: I0126 15:57:41.564152 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 26 15:57:42 crc kubenswrapper[4896]: I0126 15:57:42.083423 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-c9bzf" event={"ID":"f24a2e9c-671f-48a5-a5f5-55b864b17d19","Type":"ContainerStarted","Data":"d279d54ef1f28631d2773f86353fed5107f7ba24224bbb867160a7ea01736b1f"}
Jan 26 15:57:42 crc kubenswrapper[4896]: I0126 15:57:42.093048 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2a70b903-4311-4bcf-833a-d9fdd2ab5d24","Type":"ContainerStarted","Data":"5ad692ee5d11f75d37aba10777586e86b14b770c84deb492c3f52463a0d6bccc"}
Jan 26 15:57:42 crc kubenswrapper[4896]: I0126 15:57:42.110062 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-744899955f-8w6qt" event={"ID":"da0ad2c4-81d7-4e3c-bd6f-250b3c96e55c","Type":"ContainerStarted","Data":"18fea03c557525f6fbf72543464f1bac00a41fe6ff7b8137235830de530167f7"}
Jan 26 15:57:44 crc kubenswrapper[4896]: I0126 15:57:44.218011 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-hlm9m"]
Jan 26 15:57:44 crc kubenswrapper[4896]: I0126 15:57:44.722357 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s5kql" podUID="7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5" containerName="registry-server" probeResult="failure" output=<
Jan 26 15:57:44 crc kubenswrapper[4896]: timeout: failed to connect service ":50051" within 1s
Jan 26 15:57:44 crc kubenswrapper[4896]: >
Jan 26 15:57:44 crc kubenswrapper[4896]: I0126 15:57:44.800127 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 26 15:57:45 crc kubenswrapper[4896]: I0126 15:57:45.656205 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 26 15:57:48 crc kubenswrapper[4896]: I0126 15:57:48.814341 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 15:57:48 crc kubenswrapper[4896]: I0126 15:57:48.814657 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 15:57:49 crc kubenswrapper[4896]: I0126 15:57:49.175313 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-744899955f-8w6qt" event={"ID":"da0ad2c4-81d7-4e3c-bd6f-250b3c96e55c","Type":"ContainerStarted","Data":"e5fed560cf2195f1a7c227e6a4d95ed51228df0f17419e676f2ff99d0b4aa6bc"}
Jan 26 15:57:49 crc kubenswrapper[4896]: I0126 15:57:49.199773 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-744899955f-8w6qt" podStartSLOduration=16.199751988 podStartE2EDuration="16.199751988s" podCreationTimestamp="2026-01-26 15:57:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:57:49.198017686 +0000 UTC m=+1426.979898079" watchObservedRunningTime="2026-01-26 15:57:49.199751988 +0000 UTC m=+1426.981632391"
Jan 26 15:57:52 crc kubenswrapper[4896]: W0126 15:57:52.979415 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97576f52_b567_4334_844b_bd9ae73a82b7.slice/crio-169ce54c0630baf9248742759665b48c750e0129a419b356e8800e60fa459ffe WatchSource:0}: Error finding container 169ce54c0630baf9248742759665b48c750e0129a419b356e8800e60fa459ffe: Status 404 returned error can't find the container with id 169ce54c0630baf9248742759665b48c750e0129a419b356e8800e60fa459ffe
Jan 26 15:57:53 crc kubenswrapper[4896]: I0126 15:57:53.216007 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hlm9m" event={"ID":"97576f52-b567-4334-844b-bd9ae73a82b7","Type":"ContainerStarted","Data":"169ce54c0630baf9248742759665b48c750e0129a419b356e8800e60fa459ffe"}
Jan 26 15:57:53 crc kubenswrapper[4896]: I0126 15:57:53.218154 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"56852f16-c116-4ec4-b22f-16952ac363b3","Type":"ContainerStarted","Data":"29b4ef351ef785dd29b88d51673e86f45a5794e6d5943c1a1111de0a60332cfc"}
Jan 26 15:57:53 crc kubenswrapper[4896]: I0126 15:57:53.220034 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"97b874a0-24c5-4c30-ae9b-33b380c5a99b","Type":"ContainerStarted","Data":"b46d9a7b2566939dc40a420c9d6acdf5e0d038ecec251ab76e913e74450dc7e1"}
Jan 26 15:57:54 crc kubenswrapper[4896]: E0126 15:57:54.326925 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified"
Jan 26 15:57:54 crc kubenswrapper[4896]: E0126 15:57:54.327186 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6chz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(a13f72f8-afaf-4e0f-b76b-342e5391579c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 15:57:54 crc kubenswrapper[4896]: E0126 15:57:54.328384 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="a13f72f8-afaf-4e0f-b76b-342e5391579c"
Jan 26 15:57:54 crc kubenswrapper[4896]: E0126 15:57:54.334221 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified"
Jan 26 15:57:54 crc kubenswrapper[4896]: E0126 15:57:54.334523 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k8xhd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-1_openstack(22577788-39b3-431e-9a18-7a15b8f66045): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 15:57:54 crc kubenswrapper[4896]: E0126 15:57:54.335962 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-1" podUID="22577788-39b3-431e-9a18-7a15b8f66045"
Jan 26 15:57:54 crc kubenswrapper[4896]: I0126 15:57:54.471197 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-744899955f-8w6qt"
Jan 26 15:57:54 crc kubenswrapper[4896]: I0126 15:57:54.471506 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-744899955f-8w6qt"
Jan 26 15:57:54 crc kubenswrapper[4896]: I0126 15:57:54.479298 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-744899955f-8w6qt"
Jan 26 15:57:54 crc kubenswrapper[4896]: I0126 15:57:54.690330 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s5kql" podUID="7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5" containerName="registry-server" probeResult="failure" output=<
Jan 26 15:57:54 crc kubenswrapper[4896]: timeout: failed to connect service ":50051" within 1s
Jan 26 15:57:54 crc kubenswrapper[4896]: >
Jan 26 15:57:55 crc kubenswrapper[4896]: E0126 15:57:55.244455 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="a13f72f8-afaf-4e0f-b76b-342e5391579c"
Jan 26 15:57:55 crc kubenswrapper[4896]: E0126 15:57:55.244570 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-1" podUID="22577788-39b3-431e-9a18-7a15b8f66045"
Jan 26 15:57:55 crc kubenswrapper[4896]: I0126 15:57:55.245825 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-744899955f-8w6qt"
Jan 26 15:57:55 crc kubenswrapper[4896]: I0126 15:57:55.360239 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5cfccffc99-hcln8"]
Jan 26 15:58:00 crc kubenswrapper[4896]: E0126 15:58:00.532512 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified"
Jan 26 15:58:00 crc kubenswrapper[4896]: E0126 15:58:00.533249 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ghd4l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(45b5821a-5c82-485e-ade4-f6de2aea62d7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 15:58:00 crc kubenswrapper[4896]: E0126 15:58:00.534458 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="45b5821a-5c82-485e-ade4-f6de2aea62d7"
Jan 26 15:58:01 crc kubenswrapper[4896]: E0126 15:58:01.295064 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="45b5821a-5c82-485e-ade4-f6de2aea62d7"
Jan 26 15:58:03 crc kubenswrapper[4896]: I0126 15:58:03.690817 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s5kql"
Jan 26 15:58:03 crc kubenswrapper[4896]: I0126 15:58:03.751728 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s5kql"
Jan 26 15:58:03 crc kubenswrapper[4896]: I0126 15:58:03.941437 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s5kql"]
Jan 26 15:58:05 crc kubenswrapper[4896]: I0126 15:58:05.424615 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s5kql" podUID="7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5" containerName="registry-server" containerID="cri-o://03f73d5054b07ea833a0cfcb49ff90f173d9847653265a024decf0fe7d90d217" gracePeriod=2
Jan 26 15:58:11 crc kubenswrapper[4896]: I0126 15:58:11.923276 4896 generic.go:334] "Generic (PLEG): container finished" podID="7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5" containerID="03f73d5054b07ea833a0cfcb49ff90f173d9847653265a024decf0fe7d90d217" exitCode=0
Jan 26 15:58:11 crc kubenswrapper[4896]: I0126 15:58:11.923962 4896 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-s5kql" event={"ID":"7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5","Type":"ContainerDied","Data":"03f73d5054b07ea833a0cfcb49ff90f173d9847653265a024decf0fe7d90d217"} Jan 26 15:58:13 crc kubenswrapper[4896]: E0126 15:58:13.636305 4896 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 03f73d5054b07ea833a0cfcb49ff90f173d9847653265a024decf0fe7d90d217 is running failed: container process not found" containerID="03f73d5054b07ea833a0cfcb49ff90f173d9847653265a024decf0fe7d90d217" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 15:58:13 crc kubenswrapper[4896]: E0126 15:58:13.637979 4896 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 03f73d5054b07ea833a0cfcb49ff90f173d9847653265a024decf0fe7d90d217 is running failed: container process not found" containerID="03f73d5054b07ea833a0cfcb49ff90f173d9847653265a024decf0fe7d90d217" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 15:58:13 crc kubenswrapper[4896]: E0126 15:58:13.638632 4896 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 03f73d5054b07ea833a0cfcb49ff90f173d9847653265a024decf0fe7d90d217 is running failed: container process not found" containerID="03f73d5054b07ea833a0cfcb49ff90f173d9847653265a024decf0fe7d90d217" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 15:58:13 crc kubenswrapper[4896]: E0126 15:58:13.638673 4896 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 03f73d5054b07ea833a0cfcb49ff90f173d9847653265a024decf0fe7d90d217 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-s5kql" podUID="7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5" 
containerName="registry-server" Jan 26 15:58:14 crc kubenswrapper[4896]: E0126 15:58:14.208556 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:093d2731ac848ed5fd57356b155a19d3bf7b8db96d95b09c5d0095e143f7254f" Jan 26 15:58:14 crc kubenswrapper[4896]: E0126 15:58:14.208751 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:observability-ui-dashboards,Image:registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:093d2731ac848ed5fd57356b155a19d3bf7b8db96d95b09c5d0095e143f7254f,Command:[],Args:[-port=9443 -cert=/var/serving-cert/tls.crt -key=/var/serving-cert/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:web,HostPort:0,ContainerPort:9443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serving-cert,ReadOnly:true,MountPath:/var/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fl29n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000350000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},Start
upProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod observability-ui-dashboards-66cbf594b5-4s75s_openshift-operators(561e12b6-a1fb-407f-ae57-6a28f00f9093): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 15:58:14 crc kubenswrapper[4896]: E0126 15:58:14.210019 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"observability-ui-dashboards\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-4s75s" podUID="561e12b6-a1fb-407f-ae57-6a28f00f9093" Jan 26 15:58:14 crc kubenswrapper[4896]: E0126 15:58:14.236220 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 26 15:58:14 crc kubenswrapper[4896]: E0126 15:58:14.236409 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hcpcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(78b988fb-f698-4b52-8771-2599b5441229): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:58:14 crc kubenswrapper[4896]: E0126 15:58:14.237786 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="78b988fb-f698-4b52-8771-2599b5441229" Jan 26 15:58:14 crc kubenswrapper[4896]: E0126 15:58:14.238741 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:98706c286da2c6fe28e9b8b1f32cd40bde3bda835fade711a62193fefd3575f7: Get \"https://quay.io/v2/podified-antelope-centos9/openstack-ovn-nb-db-server/blobs/sha256:98706c286da2c6fe28e9b8b1f32cd40bde3bda835fade711a62193fefd3575f7\": context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified" Jan 26 15:58:14 crc kubenswrapper[4896]: E0126 15:58:14.239098 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ovsdbserver-nb,Image:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59dhc4h87h56h645h56dh58fh564h544h588h85h67bh644h54bh56dh5dh594hd6hffh5f5h8h5f8h7h644h5ffh664h699h546h54bh5d8h574hc5q,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-nb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ghp82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecActi
on{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(56852f16-c116-4ec4-b22f-16952ac363b3): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:98706c286da2c6fe28e9b8b1f32cd40bde3bda835fade711a62193fefd3575f7: Get \"https://quay.io/v2/podified-antelope-centos9/openstack-ovn-nb-db-server/blobs/sha256:98706c286da2c6fe28e9b8b1f32cd40bde3bda835fade711a62193fefd3575f7\": context canceled" logger="UnhandledError" Jan 26 15:58:14 crc kubenswrapper[4896]: E0126 15:58:14.317437 4896 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"observability-ui-dashboards\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:093d2731ac848ed5fd57356b155a19d3bf7b8db96d95b09c5d0095e143f7254f\\\"\"" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-4s75s" podUID="561e12b6-a1fb-407f-ae57-6a28f00f9093" Jan 26 15:58:14 crc kubenswrapper[4896]: E0126 15:58:14.988595 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Jan 26 15:58:14 crc kubenswrapper[4896]: E0126 15:58:14.988787 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n684h5d8h679h5d7h67fh65h5h5c5h674h5bdh654h685h5dch59h584h664h99hchc9h5c6hdch697h5f5h677h5bh577h695h575hcfhc4h64h89q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/
kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7g2xb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
memcached-0_openstack(83b2e80c-4c60-4a3e-a9f3-0ce2af747e4f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:58:14 crc kubenswrapper[4896]: E0126 15:58:14.991691 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="83b2e80c-4c60-4a3e-a9f3-0ce2af747e4f" Jan 26 15:58:15 crc kubenswrapper[4896]: E0126 15:58:15.326148 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="83b2e80c-4c60-4a3e-a9f3-0ce2af747e4f" Jan 26 15:58:16 crc kubenswrapper[4896]: E0126 15:58:16.540788 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified" Jan 26 15:58:16 crc kubenswrapper[4896]: E0126 15:58:16.541419 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:ovsdb-server-init,Image:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,Command:[/usr/local/bin/container-scripts/init-ovsdb-server.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n568h677hddh5fchb6h659h669h67bhb4h567hf6h57bh694h5fbh57fh688h5cbh65dh656h685hbbh567h6bh544h68ch68dh55dhb9hdh655hcbh5fdq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-ovs,ReadOnly:false,MountPath:/etc/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log,ReadOnly:false,MountPath:/var/log/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib,ReadOnly:false,MountPath:/var/lib/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g52vr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN 
SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-ovs-hlm9m_openstack(97576f52-b567-4334-844b-bd9ae73a82b7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:58:16 crc kubenswrapper[4896]: E0126 15:58:16.542701 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-ovs-hlm9m" podUID="97576f52-b567-4334-844b-bd9ae73a82b7" Jan 26 15:58:17 crc kubenswrapper[4896]: E0126 15:58:17.341837 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified\\\"\"" pod="openstack/ovn-controller-ovs-hlm9m" podUID="97576f52-b567-4334-844b-bd9ae73a82b7" Jan 26 15:58:17 crc kubenswrapper[4896]: E0126 15:58:17.629286 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 26 15:58:17 crc kubenswrapper[4896]: E0126 15:58:17.629461 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d 
--hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c7wzv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-9jvtx_openstack(6137dcc5-9fcb-4c50-9de2-8e8e32a77c63): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:58:17 crc kubenswrapper[4896]: E0126 
15:58:17.630704 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-9jvtx" podUID="6137dcc5-9fcb-4c50-9de2-8e8e32a77c63" Jan 26 15:58:17 crc kubenswrapper[4896]: E0126 15:58:17.656276 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 26 15:58:17 crc kubenswrapper[4896]: E0126 15:58:17.656829 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tlhvg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-lj4n6_openstack(76be1e7b-2d32-4268-befa-a064858bb503): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:58:17 crc kubenswrapper[4896]: E0126 15:58:17.658116 4896 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-lj4n6" podUID="76be1e7b-2d32-4268-befa-a064858bb503" Jan 26 15:58:17 crc kubenswrapper[4896]: E0126 15:58:17.758775 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 26 15:58:17 crc kubenswrapper[4896]: E0126 15:58:17.759302 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7d4rc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-ltb5t_openstack(e54f72d2-f23b-4fdd-ba0f-6a7a806c3985): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:58:17 crc kubenswrapper[4896]: E0126 15:58:17.760827 4896 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-ltb5t" podUID="e54f72d2-f23b-4fdd-ba0f-6a7a806c3985" Jan 26 15:58:17 crc kubenswrapper[4896]: E0126 15:58:17.791340 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 26 15:58:17 crc kubenswrapper[4896]: E0126 15:58:17.791611 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r9m94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-zmgnm_openstack(153f6a72-6423-4be2-b387-709f64bee0ee): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:58:17 crc kubenswrapper[4896]: E0126 15:58:17.792858 4896 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-zmgnm" podUID="153f6a72-6423-4be2-b387-709f64bee0ee" Jan 26 15:58:17 crc kubenswrapper[4896]: I0126 15:58:17.814341 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s5kql" Jan 26 15:58:17 crc kubenswrapper[4896]: I0126 15:58:17.841095 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5-utilities\") pod \"7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5\" (UID: \"7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5\") " Jan 26 15:58:17 crc kubenswrapper[4896]: I0126 15:58:17.841163 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xjhg\" (UniqueName: \"kubernetes.io/projected/7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5-kube-api-access-9xjhg\") pod \"7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5\" (UID: \"7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5\") " Jan 26 15:58:17 crc kubenswrapper[4896]: I0126 15:58:17.841196 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5-catalog-content\") pod \"7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5\" (UID: \"7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5\") " Jan 26 15:58:17 crc kubenswrapper[4896]: I0126 15:58:17.842343 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5-utilities" (OuterVolumeSpecName: "utilities") pod "7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5" (UID: "7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:58:17 crc kubenswrapper[4896]: I0126 15:58:17.855590 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5-kube-api-access-9xjhg" (OuterVolumeSpecName: "kube-api-access-9xjhg") pod "7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5" (UID: "7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5"). InnerVolumeSpecName "kube-api-access-9xjhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:58:17 crc kubenswrapper[4896]: I0126 15:58:17.943203 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:17 crc kubenswrapper[4896]: I0126 15:58:17.943233 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xjhg\" (UniqueName: \"kubernetes.io/projected/7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5-kube-api-access-9xjhg\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:18 crc kubenswrapper[4896]: I0126 15:58:18.070990 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5" (UID: "7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:58:18 crc kubenswrapper[4896]: I0126 15:58:18.163021 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:18 crc kubenswrapper[4896]: I0126 15:58:18.358041 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s5kql" Jan 26 15:58:18 crc kubenswrapper[4896]: I0126 15:58:18.359803 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5kql" event={"ID":"7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5","Type":"ContainerDied","Data":"f928caace627a2c582d3b11f00c8cc273575159db8df3883c26d665963af6afe"} Jan 26 15:58:18 crc kubenswrapper[4896]: I0126 15:58:18.359891 4896 scope.go:117] "RemoveContainer" containerID="03f73d5054b07ea833a0cfcb49ff90f173d9847653265a024decf0fe7d90d217" Jan 26 15:58:18 crc kubenswrapper[4896]: E0126 15:58:18.362212 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-zmgnm" podUID="153f6a72-6423-4be2-b387-709f64bee0ee" Jan 26 15:58:18 crc kubenswrapper[4896]: E0126 15:58:18.366219 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-lj4n6" podUID="76be1e7b-2d32-4268-befa-a064858bb503" Jan 26 15:58:18 crc kubenswrapper[4896]: I0126 15:58:18.511155 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s5kql"] Jan 26 15:58:18 crc kubenswrapper[4896]: I0126 15:58:18.523489 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s5kql"] Jan 26 15:58:18 crc kubenswrapper[4896]: E0126 15:58:18.590242 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified" Jan 26 15:58:18 crc 
kubenswrapper[4896]: E0126 15:58:18.591272 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key --ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n568h677hddh5fchb6h659h669h67bhb4h567hf6h57bh694h5fbh57fh688h5cbh65dh656h685hbbh567h6bh544h68ch68dh55dhb9hdh655hcbh5fdq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPa
thExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fw8k6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-c9bzf_openstack(f24a2e9c-671f-48a5-a5f5-55b864b17d19): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:58:18 crc kubenswrapper[4896]: E0126 15:58:18.592539 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc 
error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-c9bzf" podUID="f24a2e9c-671f-48a5-a5f5-55b864b17d19" Jan 26 15:58:18 crc kubenswrapper[4896]: I0126 15:58:18.774959 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5" path="/var/lib/kubelet/pods/7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5/volumes" Jan 26 15:58:18 crc kubenswrapper[4896]: I0126 15:58:18.813616 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:58:18 crc kubenswrapper[4896]: I0126 15:58:18.813683 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:58:18 crc kubenswrapper[4896]: E0126 15:58:18.929476 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified" Jan 26 15:58:18 crc kubenswrapper[4896]: E0126 15:58:18.929709 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ovsdbserver-sb,Image:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nddh5bfhd7h6dhf6h58ch697h64bh546h5c8h5dh54dhc4h66dh88hdbh689h675hf4h57dhc8h5d5h5bfh56ch77h66bh94h64bh657h58dh87h78q,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-sb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b6h8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction
{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-sb-0_openstack(97b874a0-24c5-4c30-ae9b-33b380c5a99b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:58:19 crc kubenswrapper[4896]: E0126 15:58:19.365982 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified\\\"\"" pod="openstack/ovn-controller-c9bzf" podUID="f24a2e9c-671f-48a5-a5f5-55b864b17d19" Jan 26 15:58:19 crc kubenswrapper[4896]: I0126 15:58:19.475699 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-9jvtx" Jan 26 15:58:19 crc kubenswrapper[4896]: I0126 15:58:19.484407 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-ltb5t" Jan 26 15:58:19 crc kubenswrapper[4896]: I0126 15:58:19.606131 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7d4rc\" (UniqueName: \"kubernetes.io/projected/e54f72d2-f23b-4fdd-ba0f-6a7a806c3985-kube-api-access-7d4rc\") pod \"e54f72d2-f23b-4fdd-ba0f-6a7a806c3985\" (UID: \"e54f72d2-f23b-4fdd-ba0f-6a7a806c3985\") " Jan 26 15:58:19 crc kubenswrapper[4896]: I0126 15:58:19.606247 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e54f72d2-f23b-4fdd-ba0f-6a7a806c3985-dns-svc\") pod \"e54f72d2-f23b-4fdd-ba0f-6a7a806c3985\" (UID: \"e54f72d2-f23b-4fdd-ba0f-6a7a806c3985\") " Jan 26 15:58:19 crc kubenswrapper[4896]: I0126 15:58:19.606309 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e54f72d2-f23b-4fdd-ba0f-6a7a806c3985-config\") pod \"e54f72d2-f23b-4fdd-ba0f-6a7a806c3985\" (UID: \"e54f72d2-f23b-4fdd-ba0f-6a7a806c3985\") " Jan 26 15:58:19 crc kubenswrapper[4896]: I0126 15:58:19.606389 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6137dcc5-9fcb-4c50-9de2-8e8e32a77c63-config\") pod \"6137dcc5-9fcb-4c50-9de2-8e8e32a77c63\" (UID: \"6137dcc5-9fcb-4c50-9de2-8e8e32a77c63\") " Jan 26 15:58:19 crc kubenswrapper[4896]: I0126 15:58:19.606509 
4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7wzv\" (UniqueName: \"kubernetes.io/projected/6137dcc5-9fcb-4c50-9de2-8e8e32a77c63-kube-api-access-c7wzv\") pod \"6137dcc5-9fcb-4c50-9de2-8e8e32a77c63\" (UID: \"6137dcc5-9fcb-4c50-9de2-8e8e32a77c63\") " Jan 26 15:58:19 crc kubenswrapper[4896]: I0126 15:58:19.607550 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e54f72d2-f23b-4fdd-ba0f-6a7a806c3985-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e54f72d2-f23b-4fdd-ba0f-6a7a806c3985" (UID: "e54f72d2-f23b-4fdd-ba0f-6a7a806c3985"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:58:19 crc kubenswrapper[4896]: I0126 15:58:19.607668 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e54f72d2-f23b-4fdd-ba0f-6a7a806c3985-config" (OuterVolumeSpecName: "config") pod "e54f72d2-f23b-4fdd-ba0f-6a7a806c3985" (UID: "e54f72d2-f23b-4fdd-ba0f-6a7a806c3985"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:58:19 crc kubenswrapper[4896]: I0126 15:58:19.608278 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6137dcc5-9fcb-4c50-9de2-8e8e32a77c63-config" (OuterVolumeSpecName: "config") pod "6137dcc5-9fcb-4c50-9de2-8e8e32a77c63" (UID: "6137dcc5-9fcb-4c50-9de2-8e8e32a77c63"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:58:19 crc kubenswrapper[4896]: I0126 15:58:19.612616 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e54f72d2-f23b-4fdd-ba0f-6a7a806c3985-kube-api-access-7d4rc" (OuterVolumeSpecName: "kube-api-access-7d4rc") pod "e54f72d2-f23b-4fdd-ba0f-6a7a806c3985" (UID: "e54f72d2-f23b-4fdd-ba0f-6a7a806c3985"). InnerVolumeSpecName "kube-api-access-7d4rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:58:19 crc kubenswrapper[4896]: I0126 15:58:19.612956 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6137dcc5-9fcb-4c50-9de2-8e8e32a77c63-kube-api-access-c7wzv" (OuterVolumeSpecName: "kube-api-access-c7wzv") pod "6137dcc5-9fcb-4c50-9de2-8e8e32a77c63" (UID: "6137dcc5-9fcb-4c50-9de2-8e8e32a77c63"). InnerVolumeSpecName "kube-api-access-c7wzv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:58:19 crc kubenswrapper[4896]: I0126 15:58:19.709356 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7d4rc\" (UniqueName: \"kubernetes.io/projected/e54f72d2-f23b-4fdd-ba0f-6a7a806c3985-kube-api-access-7d4rc\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:19 crc kubenswrapper[4896]: I0126 15:58:19.709399 4896 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e54f72d2-f23b-4fdd-ba0f-6a7a806c3985-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:19 crc kubenswrapper[4896]: I0126 15:58:19.709412 4896 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e54f72d2-f23b-4fdd-ba0f-6a7a806c3985-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:19 crc kubenswrapper[4896]: I0126 15:58:19.709423 4896 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6137dcc5-9fcb-4c50-9de2-8e8e32a77c63-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:19 crc kubenswrapper[4896]: I0126 15:58:19.709440 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7wzv\" (UniqueName: \"kubernetes.io/projected/6137dcc5-9fcb-4c50-9de2-8e8e32a77c63-kube-api-access-c7wzv\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:20 crc kubenswrapper[4896]: I0126 15:58:20.379455 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-78dd6ddcc-ltb5t" event={"ID":"e54f72d2-f23b-4fdd-ba0f-6a7a806c3985","Type":"ContainerDied","Data":"3ddf394a8b3e0c157dc23f708f99ab911c616f4e92992a0c4b41fd70b776c93c"} Jan 26 15:58:20 crc kubenswrapper[4896]: I0126 15:58:20.379628 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-ltb5t" Jan 26 15:58:20 crc kubenswrapper[4896]: I0126 15:58:20.384661 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-9jvtx" event={"ID":"6137dcc5-9fcb-4c50-9de2-8e8e32a77c63","Type":"ContainerDied","Data":"7af0c36ad976b00d03da7dcc080d17402a1e8a776d6e5e4b168b9b4b36e47999"} Jan 26 15:58:20 crc kubenswrapper[4896]: I0126 15:58:20.384749 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-9jvtx" Jan 26 15:58:20 crc kubenswrapper[4896]: I0126 15:58:20.433343 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5cfccffc99-hcln8" podUID="d87cc614-885a-440d-8310-bd22b599a383" containerName="console" containerID="cri-o://5ea2589e4686a46d835d293ed5cfd303adf11a3abb4e1a254518f7daa1744cb3" gracePeriod=15 Jan 26 15:58:20 crc kubenswrapper[4896]: I0126 15:58:20.478786 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ltb5t"] Jan 26 15:58:20 crc kubenswrapper[4896]: I0126 15:58:20.492462 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ltb5t"] Jan 26 15:58:20 crc kubenswrapper[4896]: I0126 15:58:20.514992 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9jvtx"] Jan 26 15:58:20 crc kubenswrapper[4896]: I0126 15:58:20.524805 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9jvtx"] Jan 26 15:58:20 crc kubenswrapper[4896]: I0126 15:58:20.739213 4896 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/ovn-controller-metrics-5bgbx"] Jan 26 15:58:20 crc kubenswrapper[4896]: E0126 15:58:20.740095 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5" containerName="registry-server" Jan 26 15:58:20 crc kubenswrapper[4896]: I0126 15:58:20.740118 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5" containerName="registry-server" Jan 26 15:58:20 crc kubenswrapper[4896]: E0126 15:58:20.740137 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5" containerName="extract-content" Jan 26 15:58:20 crc kubenswrapper[4896]: I0126 15:58:20.740144 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5" containerName="extract-content" Jan 26 15:58:20 crc kubenswrapper[4896]: E0126 15:58:20.740153 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5" containerName="extract-utilities" Jan 26 15:58:20 crc kubenswrapper[4896]: I0126 15:58:20.740159 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5" containerName="extract-utilities" Jan 26 15:58:20 crc kubenswrapper[4896]: I0126 15:58:20.740385 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a4855ce-14a2-44a6-b7dc-2dd0f95d32e5" containerName="registry-server" Jan 26 15:58:20 crc kubenswrapper[4896]: I0126 15:58:20.741404 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-5bgbx" Jan 26 15:58:20 crc kubenswrapper[4896]: I0126 15:58:20.744412 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 26 15:58:20 crc kubenswrapper[4896]: I0126 15:58:20.771882 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6137dcc5-9fcb-4c50-9de2-8e8e32a77c63" path="/var/lib/kubelet/pods/6137dcc5-9fcb-4c50-9de2-8e8e32a77c63/volumes" Jan 26 15:58:20 crc kubenswrapper[4896]: I0126 15:58:20.772321 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e54f72d2-f23b-4fdd-ba0f-6a7a806c3985" path="/var/lib/kubelet/pods/e54f72d2-f23b-4fdd-ba0f-6a7a806c3985/volumes" Jan 26 15:58:20 crc kubenswrapper[4896]: I0126 15:58:20.772723 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-5bgbx"] Jan 26 15:58:20 crc kubenswrapper[4896]: I0126 15:58:20.844303 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f6152e32-f156-409d-99fb-13e07813a47e-ovs-rundir\") pod \"ovn-controller-metrics-5bgbx\" (UID: \"f6152e32-f156-409d-99fb-13e07813a47e\") " pod="openstack/ovn-controller-metrics-5bgbx" Jan 26 15:58:20 crc kubenswrapper[4896]: I0126 15:58:20.844427 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6152e32-f156-409d-99fb-13e07813a47e-config\") pod \"ovn-controller-metrics-5bgbx\" (UID: \"f6152e32-f156-409d-99fb-13e07813a47e\") " pod="openstack/ovn-controller-metrics-5bgbx" Jan 26 15:58:20 crc kubenswrapper[4896]: I0126 15:58:20.844464 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6152e32-f156-409d-99fb-13e07813a47e-metrics-certs-tls-certs\") pod 
\"ovn-controller-metrics-5bgbx\" (UID: \"f6152e32-f156-409d-99fb-13e07813a47e\") " pod="openstack/ovn-controller-metrics-5bgbx" Jan 26 15:58:20 crc kubenswrapper[4896]: I0126 15:58:20.844500 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f6152e32-f156-409d-99fb-13e07813a47e-ovn-rundir\") pod \"ovn-controller-metrics-5bgbx\" (UID: \"f6152e32-f156-409d-99fb-13e07813a47e\") " pod="openstack/ovn-controller-metrics-5bgbx" Jan 26 15:58:20 crc kubenswrapper[4896]: I0126 15:58:20.844601 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5m7d\" (UniqueName: \"kubernetes.io/projected/f6152e32-f156-409d-99fb-13e07813a47e-kube-api-access-z5m7d\") pod \"ovn-controller-metrics-5bgbx\" (UID: \"f6152e32-f156-409d-99fb-13e07813a47e\") " pod="openstack/ovn-controller-metrics-5bgbx" Jan 26 15:58:20 crc kubenswrapper[4896]: I0126 15:58:20.844662 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6152e32-f156-409d-99fb-13e07813a47e-combined-ca-bundle\") pod \"ovn-controller-metrics-5bgbx\" (UID: \"f6152e32-f156-409d-99fb-13e07813a47e\") " pod="openstack/ovn-controller-metrics-5bgbx" Jan 26 15:58:20 crc kubenswrapper[4896]: I0126 15:58:20.928542 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-zmgnm"] Jan 26 15:58:20 crc kubenswrapper[4896]: I0126 15:58:20.949530 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5m7d\" (UniqueName: \"kubernetes.io/projected/f6152e32-f156-409d-99fb-13e07813a47e-kube-api-access-z5m7d\") pod \"ovn-controller-metrics-5bgbx\" (UID: \"f6152e32-f156-409d-99fb-13e07813a47e\") " pod="openstack/ovn-controller-metrics-5bgbx" Jan 26 15:58:20 crc kubenswrapper[4896]: I0126 15:58:20.949614 4896 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6152e32-f156-409d-99fb-13e07813a47e-combined-ca-bundle\") pod \"ovn-controller-metrics-5bgbx\" (UID: \"f6152e32-f156-409d-99fb-13e07813a47e\") " pod="openstack/ovn-controller-metrics-5bgbx" Jan 26 15:58:20 crc kubenswrapper[4896]: I0126 15:58:20.949687 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f6152e32-f156-409d-99fb-13e07813a47e-ovs-rundir\") pod \"ovn-controller-metrics-5bgbx\" (UID: \"f6152e32-f156-409d-99fb-13e07813a47e\") " pod="openstack/ovn-controller-metrics-5bgbx" Jan 26 15:58:20 crc kubenswrapper[4896]: I0126 15:58:20.949797 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6152e32-f156-409d-99fb-13e07813a47e-config\") pod \"ovn-controller-metrics-5bgbx\" (UID: \"f6152e32-f156-409d-99fb-13e07813a47e\") " pod="openstack/ovn-controller-metrics-5bgbx" Jan 26 15:58:20 crc kubenswrapper[4896]: I0126 15:58:20.949819 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6152e32-f156-409d-99fb-13e07813a47e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-5bgbx\" (UID: \"f6152e32-f156-409d-99fb-13e07813a47e\") " pod="openstack/ovn-controller-metrics-5bgbx" Jan 26 15:58:20 crc kubenswrapper[4896]: I0126 15:58:20.949842 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f6152e32-f156-409d-99fb-13e07813a47e-ovn-rundir\") pod \"ovn-controller-metrics-5bgbx\" (UID: \"f6152e32-f156-409d-99fb-13e07813a47e\") " pod="openstack/ovn-controller-metrics-5bgbx" Jan 26 15:58:20 crc kubenswrapper[4896]: I0126 15:58:20.950176 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f6152e32-f156-409d-99fb-13e07813a47e-ovn-rundir\") pod \"ovn-controller-metrics-5bgbx\" (UID: \"f6152e32-f156-409d-99fb-13e07813a47e\") " pod="openstack/ovn-controller-metrics-5bgbx" Jan 26 15:58:20 crc kubenswrapper[4896]: I0126 15:58:20.950216 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f6152e32-f156-409d-99fb-13e07813a47e-ovs-rundir\") pod \"ovn-controller-metrics-5bgbx\" (UID: \"f6152e32-f156-409d-99fb-13e07813a47e\") " pod="openstack/ovn-controller-metrics-5bgbx" Jan 26 15:58:20 crc kubenswrapper[4896]: I0126 15:58:20.950949 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6152e32-f156-409d-99fb-13e07813a47e-config\") pod \"ovn-controller-metrics-5bgbx\" (UID: \"f6152e32-f156-409d-99fb-13e07813a47e\") " pod="openstack/ovn-controller-metrics-5bgbx" Jan 26 15:58:20 crc kubenswrapper[4896]: I0126 15:58:20.955487 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6152e32-f156-409d-99fb-13e07813a47e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-5bgbx\" (UID: \"f6152e32-f156-409d-99fb-13e07813a47e\") " pod="openstack/ovn-controller-metrics-5bgbx" Jan 26 15:58:20 crc kubenswrapper[4896]: I0126 15:58:20.955794 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6152e32-f156-409d-99fb-13e07813a47e-combined-ca-bundle\") pod \"ovn-controller-metrics-5bgbx\" (UID: \"f6152e32-f156-409d-99fb-13e07813a47e\") " pod="openstack/ovn-controller-metrics-5bgbx" Jan 26 15:58:20 crc kubenswrapper[4896]: I0126 15:58:20.990132 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-f8swm"] Jan 26 15:58:20 crc kubenswrapper[4896]: I0126 15:58:20.995411 4896 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5m7d\" (UniqueName: \"kubernetes.io/projected/f6152e32-f156-409d-99fb-13e07813a47e-kube-api-access-z5m7d\") pod \"ovn-controller-metrics-5bgbx\" (UID: \"f6152e32-f156-409d-99fb-13e07813a47e\") " pod="openstack/ovn-controller-metrics-5bgbx" Jan 26 15:58:21 crc kubenswrapper[4896]: I0126 15:58:21.015991 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-f8swm" Jan 26 15:58:21 crc kubenswrapper[4896]: I0126 15:58:21.019394 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 26 15:58:21 crc kubenswrapper[4896]: I0126 15:58:21.205313 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-5bgbx" Jan 26 15:58:21 crc kubenswrapper[4896]: I0126 15:58:21.236512 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-f8swm"] Jan 26 15:58:21 crc kubenswrapper[4896]: I0126 15:58:21.307970 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/836d19fc-dc6e-4b0e-98c3-5599dabebb44-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-f8swm\" (UID: \"836d19fc-dc6e-4b0e-98c3-5599dabebb44\") " pod="openstack/dnsmasq-dns-7fd796d7df-f8swm" Jan 26 15:58:21 crc kubenswrapper[4896]: I0126 15:58:21.308012 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/836d19fc-dc6e-4b0e-98c3-5599dabebb44-config\") pod \"dnsmasq-dns-7fd796d7df-f8swm\" (UID: \"836d19fc-dc6e-4b0e-98c3-5599dabebb44\") " pod="openstack/dnsmasq-dns-7fd796d7df-f8swm" Jan 26 15:58:21 crc kubenswrapper[4896]: I0126 15:58:21.308056 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/836d19fc-dc6e-4b0e-98c3-5599dabebb44-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-f8swm\" (UID: \"836d19fc-dc6e-4b0e-98c3-5599dabebb44\") " pod="openstack/dnsmasq-dns-7fd796d7df-f8swm" Jan 26 15:58:21 crc kubenswrapper[4896]: I0126 15:58:21.308102 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45fzk\" (UniqueName: \"kubernetes.io/projected/836d19fc-dc6e-4b0e-98c3-5599dabebb44-kube-api-access-45fzk\") pod \"dnsmasq-dns-7fd796d7df-f8swm\" (UID: \"836d19fc-dc6e-4b0e-98c3-5599dabebb44\") " pod="openstack/dnsmasq-dns-7fd796d7df-f8swm" Jan 26 15:58:21 crc kubenswrapper[4896]: I0126 15:58:21.398228 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5cfccffc99-hcln8_d87cc614-885a-440d-8310-bd22b599a383/console/0.log" Jan 26 15:58:21 crc kubenswrapper[4896]: I0126 15:58:21.398697 4896 generic.go:334] "Generic (PLEG): container finished" podID="d87cc614-885a-440d-8310-bd22b599a383" containerID="5ea2589e4686a46d835d293ed5cfd303adf11a3abb4e1a254518f7daa1744cb3" exitCode=2 Jan 26 15:58:21 crc kubenswrapper[4896]: I0126 15:58:21.398745 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5cfccffc99-hcln8" event={"ID":"d87cc614-885a-440d-8310-bd22b599a383","Type":"ContainerDied","Data":"5ea2589e4686a46d835d293ed5cfd303adf11a3abb4e1a254518f7daa1744cb3"} Jan 26 15:58:21 crc kubenswrapper[4896]: I0126 15:58:21.410395 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/836d19fc-dc6e-4b0e-98c3-5599dabebb44-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-f8swm\" (UID: \"836d19fc-dc6e-4b0e-98c3-5599dabebb44\") " pod="openstack/dnsmasq-dns-7fd796d7df-f8swm" Jan 26 15:58:21 crc kubenswrapper[4896]: I0126 15:58:21.410463 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/836d19fc-dc6e-4b0e-98c3-5599dabebb44-config\") pod \"dnsmasq-dns-7fd796d7df-f8swm\" (UID: \"836d19fc-dc6e-4b0e-98c3-5599dabebb44\") " pod="openstack/dnsmasq-dns-7fd796d7df-f8swm" Jan 26 15:58:21 crc kubenswrapper[4896]: I0126 15:58:21.410522 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/836d19fc-dc6e-4b0e-98c3-5599dabebb44-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-f8swm\" (UID: \"836d19fc-dc6e-4b0e-98c3-5599dabebb44\") " pod="openstack/dnsmasq-dns-7fd796d7df-f8swm" Jan 26 15:58:21 crc kubenswrapper[4896]: I0126 15:58:21.410612 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45fzk\" (UniqueName: \"kubernetes.io/projected/836d19fc-dc6e-4b0e-98c3-5599dabebb44-kube-api-access-45fzk\") pod \"dnsmasq-dns-7fd796d7df-f8swm\" (UID: \"836d19fc-dc6e-4b0e-98c3-5599dabebb44\") " pod="openstack/dnsmasq-dns-7fd796d7df-f8swm" Jan 26 15:58:21 crc kubenswrapper[4896]: I0126 15:58:21.411932 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/836d19fc-dc6e-4b0e-98c3-5599dabebb44-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-f8swm\" (UID: \"836d19fc-dc6e-4b0e-98c3-5599dabebb44\") " pod="openstack/dnsmasq-dns-7fd796d7df-f8swm" Jan 26 15:58:21 crc kubenswrapper[4896]: I0126 15:58:21.412556 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/836d19fc-dc6e-4b0e-98c3-5599dabebb44-config\") pod \"dnsmasq-dns-7fd796d7df-f8swm\" (UID: \"836d19fc-dc6e-4b0e-98c3-5599dabebb44\") " pod="openstack/dnsmasq-dns-7fd796d7df-f8swm" Jan 26 15:58:21 crc kubenswrapper[4896]: I0126 15:58:21.413347 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/836d19fc-dc6e-4b0e-98c3-5599dabebb44-ovsdbserver-nb\") pod 
\"dnsmasq-dns-7fd796d7df-f8swm\" (UID: \"836d19fc-dc6e-4b0e-98c3-5599dabebb44\") " pod="openstack/dnsmasq-dns-7fd796d7df-f8swm" Jan 26 15:58:21 crc kubenswrapper[4896]: I0126 15:58:21.433115 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45fzk\" (UniqueName: \"kubernetes.io/projected/836d19fc-dc6e-4b0e-98c3-5599dabebb44-kube-api-access-45fzk\") pod \"dnsmasq-dns-7fd796d7df-f8swm\" (UID: \"836d19fc-dc6e-4b0e-98c3-5599dabebb44\") " pod="openstack/dnsmasq-dns-7fd796d7df-f8swm" Jan 26 15:58:21 crc kubenswrapper[4896]: I0126 15:58:21.564465 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-f8swm" Jan 26 15:58:21 crc kubenswrapper[4896]: I0126 15:58:21.611775 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-lj4n6"] Jan 26 15:58:21 crc kubenswrapper[4896]: I0126 15:58:21.636737 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-9x6sn"] Jan 26 15:58:21 crc kubenswrapper[4896]: I0126 15:58:21.639318 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-9x6sn" Jan 26 15:58:21 crc kubenswrapper[4896]: I0126 15:58:21.641477 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 26 15:58:21 crc kubenswrapper[4896]: I0126 15:58:21.679965 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-9x6sn"] Jan 26 15:58:21 crc kubenswrapper[4896]: I0126 15:58:21.827544 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/309b8de4-298f-4828-9197-b06d2a7ddcf9-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-9x6sn\" (UID: \"309b8de4-298f-4828-9197-b06d2a7ddcf9\") " pod="openstack/dnsmasq-dns-86db49b7ff-9x6sn" Jan 26 15:58:21 crc kubenswrapper[4896]: I0126 15:58:21.827761 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/309b8de4-298f-4828-9197-b06d2a7ddcf9-config\") pod \"dnsmasq-dns-86db49b7ff-9x6sn\" (UID: \"309b8de4-298f-4828-9197-b06d2a7ddcf9\") " pod="openstack/dnsmasq-dns-86db49b7ff-9x6sn" Jan 26 15:58:21 crc kubenswrapper[4896]: I0126 15:58:21.827834 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/309b8de4-298f-4828-9197-b06d2a7ddcf9-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-9x6sn\" (UID: \"309b8de4-298f-4828-9197-b06d2a7ddcf9\") " pod="openstack/dnsmasq-dns-86db49b7ff-9x6sn" Jan 26 15:58:21 crc kubenswrapper[4896]: I0126 15:58:21.827857 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/309b8de4-298f-4828-9197-b06d2a7ddcf9-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-9x6sn\" (UID: \"309b8de4-298f-4828-9197-b06d2a7ddcf9\") " pod="openstack/dnsmasq-dns-86db49b7ff-9x6sn" 
Jan 26 15:58:21 crc kubenswrapper[4896]: I0126 15:58:21.828314 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrj5x\" (UniqueName: \"kubernetes.io/projected/309b8de4-298f-4828-9197-b06d2a7ddcf9-kube-api-access-vrj5x\") pod \"dnsmasq-dns-86db49b7ff-9x6sn\" (UID: \"309b8de4-298f-4828-9197-b06d2a7ddcf9\") " pod="openstack/dnsmasq-dns-86db49b7ff-9x6sn" Jan 26 15:58:21 crc kubenswrapper[4896]: I0126 15:58:21.933170 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/309b8de4-298f-4828-9197-b06d2a7ddcf9-config\") pod \"dnsmasq-dns-86db49b7ff-9x6sn\" (UID: \"309b8de4-298f-4828-9197-b06d2a7ddcf9\") " pod="openstack/dnsmasq-dns-86db49b7ff-9x6sn" Jan 26 15:58:21 crc kubenswrapper[4896]: I0126 15:58:21.933257 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/309b8de4-298f-4828-9197-b06d2a7ddcf9-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-9x6sn\" (UID: \"309b8de4-298f-4828-9197-b06d2a7ddcf9\") " pod="openstack/dnsmasq-dns-86db49b7ff-9x6sn" Jan 26 15:58:21 crc kubenswrapper[4896]: I0126 15:58:21.933282 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/309b8de4-298f-4828-9197-b06d2a7ddcf9-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-9x6sn\" (UID: \"309b8de4-298f-4828-9197-b06d2a7ddcf9\") " pod="openstack/dnsmasq-dns-86db49b7ff-9x6sn" Jan 26 15:58:21 crc kubenswrapper[4896]: I0126 15:58:21.933416 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrj5x\" (UniqueName: \"kubernetes.io/projected/309b8de4-298f-4828-9197-b06d2a7ddcf9-kube-api-access-vrj5x\") pod \"dnsmasq-dns-86db49b7ff-9x6sn\" (UID: \"309b8de4-298f-4828-9197-b06d2a7ddcf9\") " pod="openstack/dnsmasq-dns-86db49b7ff-9x6sn" Jan 26 15:58:21 crc 
kubenswrapper[4896]: I0126 15:58:21.933473 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/309b8de4-298f-4828-9197-b06d2a7ddcf9-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-9x6sn\" (UID: \"309b8de4-298f-4828-9197-b06d2a7ddcf9\") " pod="openstack/dnsmasq-dns-86db49b7ff-9x6sn" Jan 26 15:58:21 crc kubenswrapper[4896]: I0126 15:58:21.935025 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/309b8de4-298f-4828-9197-b06d2a7ddcf9-config\") pod \"dnsmasq-dns-86db49b7ff-9x6sn\" (UID: \"309b8de4-298f-4828-9197-b06d2a7ddcf9\") " pod="openstack/dnsmasq-dns-86db49b7ff-9x6sn" Jan 26 15:58:21 crc kubenswrapper[4896]: I0126 15:58:21.935261 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/309b8de4-298f-4828-9197-b06d2a7ddcf9-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-9x6sn\" (UID: \"309b8de4-298f-4828-9197-b06d2a7ddcf9\") " pod="openstack/dnsmasq-dns-86db49b7ff-9x6sn" Jan 26 15:58:21 crc kubenswrapper[4896]: I0126 15:58:21.937055 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/309b8de4-298f-4828-9197-b06d2a7ddcf9-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-9x6sn\" (UID: \"309b8de4-298f-4828-9197-b06d2a7ddcf9\") " pod="openstack/dnsmasq-dns-86db49b7ff-9x6sn" Jan 26 15:58:21 crc kubenswrapper[4896]: I0126 15:58:21.941327 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/309b8de4-298f-4828-9197-b06d2a7ddcf9-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-9x6sn\" (UID: \"309b8de4-298f-4828-9197-b06d2a7ddcf9\") " pod="openstack/dnsmasq-dns-86db49b7ff-9x6sn" Jan 26 15:58:21 crc kubenswrapper[4896]: I0126 15:58:21.953514 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-vrj5x\" (UniqueName: \"kubernetes.io/projected/309b8de4-298f-4828-9197-b06d2a7ddcf9-kube-api-access-vrj5x\") pod \"dnsmasq-dns-86db49b7ff-9x6sn\" (UID: \"309b8de4-298f-4828-9197-b06d2a7ddcf9\") " pod="openstack/dnsmasq-dns-86db49b7ff-9x6sn" Jan 26 15:58:22 crc kubenswrapper[4896]: I0126 15:58:22.030601 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-9x6sn" Jan 26 15:58:22 crc kubenswrapper[4896]: I0126 15:58:22.178898 4896 scope.go:117] "RemoveContainer" containerID="77dc8bc0bdcf136dcd23e45fe8de9d93b8d40b5d265cbe4d2da4aa85d122457b" Jan 26 15:58:22 crc kubenswrapper[4896]: I0126 15:58:22.594099 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-zmgnm" Jan 26 15:58:22 crc kubenswrapper[4896]: I0126 15:58:22.611024 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-lj4n6" event={"ID":"76be1e7b-2d32-4268-befa-a064858bb503","Type":"ContainerDied","Data":"b03c7c832caaf2c765115b5a73f902f65f95b51e8bb98e75e46796ff5cce40d7"} Jan 26 15:58:22 crc kubenswrapper[4896]: I0126 15:58:22.611079 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b03c7c832caaf2c765115b5a73f902f65f95b51e8bb98e75e46796ff5cce40d7" Jan 26 15:58:22 crc kubenswrapper[4896]: I0126 15:58:22.614829 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-lj4n6" Jan 26 15:58:22 crc kubenswrapper[4896]: I0126 15:58:22.617764 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-zmgnm" event={"ID":"153f6a72-6423-4be2-b387-709f64bee0ee","Type":"ContainerDied","Data":"3c584fc9e93e93dfa34bae1340d58799432cedfb3fb5fb50f959c92182639632"} Jan 26 15:58:22 crc kubenswrapper[4896]: I0126 15:58:22.617803 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-zmgnm" Jan 26 15:58:22 crc kubenswrapper[4896]: I0126 15:58:22.709240 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/153f6a72-6423-4be2-b387-709f64bee0ee-dns-svc\") pod \"153f6a72-6423-4be2-b387-709f64bee0ee\" (UID: \"153f6a72-6423-4be2-b387-709f64bee0ee\") " Jan 26 15:58:22 crc kubenswrapper[4896]: I0126 15:58:22.709404 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/153f6a72-6423-4be2-b387-709f64bee0ee-config\") pod \"153f6a72-6423-4be2-b387-709f64bee0ee\" (UID: \"153f6a72-6423-4be2-b387-709f64bee0ee\") " Jan 26 15:58:22 crc kubenswrapper[4896]: I0126 15:58:22.709435 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9m94\" (UniqueName: \"kubernetes.io/projected/153f6a72-6423-4be2-b387-709f64bee0ee-kube-api-access-r9m94\") pod \"153f6a72-6423-4be2-b387-709f64bee0ee\" (UID: \"153f6a72-6423-4be2-b387-709f64bee0ee\") " Jan 26 15:58:22 crc kubenswrapper[4896]: I0126 15:58:22.711026 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/153f6a72-6423-4be2-b387-709f64bee0ee-config" (OuterVolumeSpecName: "config") pod "153f6a72-6423-4be2-b387-709f64bee0ee" (UID: "153f6a72-6423-4be2-b387-709f64bee0ee"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:58:22 crc kubenswrapper[4896]: I0126 15:58:22.711044 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/153f6a72-6423-4be2-b387-709f64bee0ee-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "153f6a72-6423-4be2-b387-709f64bee0ee" (UID: "153f6a72-6423-4be2-b387-709f64bee0ee"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:58:22 crc kubenswrapper[4896]: I0126 15:58:22.737450 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/153f6a72-6423-4be2-b387-709f64bee0ee-kube-api-access-r9m94" (OuterVolumeSpecName: "kube-api-access-r9m94") pod "153f6a72-6423-4be2-b387-709f64bee0ee" (UID: "153f6a72-6423-4be2-b387-709f64bee0ee"). InnerVolumeSpecName "kube-api-access-r9m94". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:58:22 crc kubenswrapper[4896]: I0126 15:58:22.812054 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76be1e7b-2d32-4268-befa-a064858bb503-config\") pod \"76be1e7b-2d32-4268-befa-a064858bb503\" (UID: \"76be1e7b-2d32-4268-befa-a064858bb503\") " Jan 26 15:58:22 crc kubenswrapper[4896]: I0126 15:58:22.812524 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlhvg\" (UniqueName: \"kubernetes.io/projected/76be1e7b-2d32-4268-befa-a064858bb503-kube-api-access-tlhvg\") pod \"76be1e7b-2d32-4268-befa-a064858bb503\" (UID: \"76be1e7b-2d32-4268-befa-a064858bb503\") " Jan 26 15:58:22 crc kubenswrapper[4896]: I0126 15:58:22.812768 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76be1e7b-2d32-4268-befa-a064858bb503-dns-svc\") pod \"76be1e7b-2d32-4268-befa-a064858bb503\" (UID: \"76be1e7b-2d32-4268-befa-a064858bb503\") " Jan 26 15:58:22 crc kubenswrapper[4896]: I0126 15:58:22.814227 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76be1e7b-2d32-4268-befa-a064858bb503-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "76be1e7b-2d32-4268-befa-a064858bb503" (UID: "76be1e7b-2d32-4268-befa-a064858bb503"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:58:22 crc kubenswrapper[4896]: I0126 15:58:22.819347 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76be1e7b-2d32-4268-befa-a064858bb503-kube-api-access-tlhvg" (OuterVolumeSpecName: "kube-api-access-tlhvg") pod "76be1e7b-2d32-4268-befa-a064858bb503" (UID: "76be1e7b-2d32-4268-befa-a064858bb503"). InnerVolumeSpecName "kube-api-access-tlhvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:58:22 crc kubenswrapper[4896]: I0126 15:58:22.819445 4896 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/153f6a72-6423-4be2-b387-709f64bee0ee-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:22 crc kubenswrapper[4896]: I0126 15:58:22.819508 4896 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/153f6a72-6423-4be2-b387-709f64bee0ee-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:22 crc kubenswrapper[4896]: I0126 15:58:22.820475 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9m94\" (UniqueName: \"kubernetes.io/projected/153f6a72-6423-4be2-b387-709f64bee0ee-kube-api-access-r9m94\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:22 crc kubenswrapper[4896]: I0126 15:58:22.820916 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76be1e7b-2d32-4268-befa-a064858bb503-config" (OuterVolumeSpecName: "config") pod "76be1e7b-2d32-4268-befa-a064858bb503" (UID: "76be1e7b-2d32-4268-befa-a064858bb503"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:58:22 crc kubenswrapper[4896]: I0126 15:58:22.922100 4896 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76be1e7b-2d32-4268-befa-a064858bb503-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:22 crc kubenswrapper[4896]: I0126 15:58:22.922140 4896 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76be1e7b-2d32-4268-befa-a064858bb503-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:22 crc kubenswrapper[4896]: I0126 15:58:22.922152 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlhvg\" (UniqueName: \"kubernetes.io/projected/76be1e7b-2d32-4268-befa-a064858bb503-kube-api-access-tlhvg\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:22 crc kubenswrapper[4896]: I0126 15:58:22.975613 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-zmgnm"] Jan 26 15:58:22 crc kubenswrapper[4896]: I0126 15:58:22.992818 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-zmgnm"] Jan 26 15:58:23 crc kubenswrapper[4896]: I0126 15:58:23.628360 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-lj4n6" Jan 26 15:58:23 crc kubenswrapper[4896]: I0126 15:58:23.701195 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-lj4n6"] Jan 26 15:58:23 crc kubenswrapper[4896]: I0126 15:58:23.715770 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-lj4n6"] Jan 26 15:58:24 crc kubenswrapper[4896]: I0126 15:58:24.009667 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5cfccffc99-hcln8_d87cc614-885a-440d-8310-bd22b599a383/console/0.log" Jan 26 15:58:24 crc kubenswrapper[4896]: I0126 15:58:24.009757 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5cfccffc99-hcln8" Jan 26 15:58:24 crc kubenswrapper[4896]: I0126 15:58:24.204793 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlz2l\" (UniqueName: \"kubernetes.io/projected/d87cc614-885a-440d-8310-bd22b599a383-kube-api-access-nlz2l\") pod \"d87cc614-885a-440d-8310-bd22b599a383\" (UID: \"d87cc614-885a-440d-8310-bd22b599a383\") " Jan 26 15:58:24 crc kubenswrapper[4896]: I0126 15:58:24.204895 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d87cc614-885a-440d-8310-bd22b599a383-oauth-serving-cert\") pod \"d87cc614-885a-440d-8310-bd22b599a383\" (UID: \"d87cc614-885a-440d-8310-bd22b599a383\") " Jan 26 15:58:24 crc kubenswrapper[4896]: I0126 15:58:24.205010 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d87cc614-885a-440d-8310-bd22b599a383-console-serving-cert\") pod \"d87cc614-885a-440d-8310-bd22b599a383\" (UID: \"d87cc614-885a-440d-8310-bd22b599a383\") " Jan 26 15:58:24 crc kubenswrapper[4896]: I0126 15:58:24.205079 4896 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d87cc614-885a-440d-8310-bd22b599a383-service-ca\") pod \"d87cc614-885a-440d-8310-bd22b599a383\" (UID: \"d87cc614-885a-440d-8310-bd22b599a383\") " Jan 26 15:58:24 crc kubenswrapper[4896]: I0126 15:58:24.205128 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d87cc614-885a-440d-8310-bd22b599a383-trusted-ca-bundle\") pod \"d87cc614-885a-440d-8310-bd22b599a383\" (UID: \"d87cc614-885a-440d-8310-bd22b599a383\") " Jan 26 15:58:24 crc kubenswrapper[4896]: I0126 15:58:24.205197 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d87cc614-885a-440d-8310-bd22b599a383-console-config\") pod \"d87cc614-885a-440d-8310-bd22b599a383\" (UID: \"d87cc614-885a-440d-8310-bd22b599a383\") " Jan 26 15:58:24 crc kubenswrapper[4896]: I0126 15:58:24.205266 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d87cc614-885a-440d-8310-bd22b599a383-console-oauth-config\") pod \"d87cc614-885a-440d-8310-bd22b599a383\" (UID: \"d87cc614-885a-440d-8310-bd22b599a383\") " Jan 26 15:58:24 crc kubenswrapper[4896]: I0126 15:58:24.205982 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d87cc614-885a-440d-8310-bd22b599a383-console-config" (OuterVolumeSpecName: "console-config") pod "d87cc614-885a-440d-8310-bd22b599a383" (UID: "d87cc614-885a-440d-8310-bd22b599a383"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:58:24 crc kubenswrapper[4896]: I0126 15:58:24.206005 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d87cc614-885a-440d-8310-bd22b599a383-service-ca" (OuterVolumeSpecName: "service-ca") pod "d87cc614-885a-440d-8310-bd22b599a383" (UID: "d87cc614-885a-440d-8310-bd22b599a383"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:58:24 crc kubenswrapper[4896]: I0126 15:58:24.206015 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d87cc614-885a-440d-8310-bd22b599a383-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d87cc614-885a-440d-8310-bd22b599a383" (UID: "d87cc614-885a-440d-8310-bd22b599a383"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:58:24 crc kubenswrapper[4896]: I0126 15:58:24.206105 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d87cc614-885a-440d-8310-bd22b599a383-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "d87cc614-885a-440d-8310-bd22b599a383" (UID: "d87cc614-885a-440d-8310-bd22b599a383"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:58:24 crc kubenswrapper[4896]: I0126 15:58:24.210735 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d87cc614-885a-440d-8310-bd22b599a383-kube-api-access-nlz2l" (OuterVolumeSpecName: "kube-api-access-nlz2l") pod "d87cc614-885a-440d-8310-bd22b599a383" (UID: "d87cc614-885a-440d-8310-bd22b599a383"). InnerVolumeSpecName "kube-api-access-nlz2l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:58:24 crc kubenswrapper[4896]: I0126 15:58:24.210948 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d87cc614-885a-440d-8310-bd22b599a383-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "d87cc614-885a-440d-8310-bd22b599a383" (UID: "d87cc614-885a-440d-8310-bd22b599a383"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:58:24 crc kubenswrapper[4896]: I0126 15:58:24.211108 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d87cc614-885a-440d-8310-bd22b599a383-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "d87cc614-885a-440d-8310-bd22b599a383" (UID: "d87cc614-885a-440d-8310-bd22b599a383"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:58:24 crc kubenswrapper[4896]: I0126 15:58:24.308384 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlz2l\" (UniqueName: \"kubernetes.io/projected/d87cc614-885a-440d-8310-bd22b599a383-kube-api-access-nlz2l\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:24 crc kubenswrapper[4896]: I0126 15:58:24.308803 4896 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d87cc614-885a-440d-8310-bd22b599a383-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:24 crc kubenswrapper[4896]: I0126 15:58:24.308818 4896 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d87cc614-885a-440d-8310-bd22b599a383-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:24 crc kubenswrapper[4896]: I0126 15:58:24.308831 4896 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/d87cc614-885a-440d-8310-bd22b599a383-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:24 crc kubenswrapper[4896]: I0126 15:58:24.308846 4896 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d87cc614-885a-440d-8310-bd22b599a383-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:24 crc kubenswrapper[4896]: I0126 15:58:24.308858 4896 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d87cc614-885a-440d-8310-bd22b599a383-console-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:24 crc kubenswrapper[4896]: I0126 15:58:24.308871 4896 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d87cc614-885a-440d-8310-bd22b599a383-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:24 crc kubenswrapper[4896]: I0126 15:58:24.490637 4896 scope.go:117] "RemoveContainer" containerID="977ce17de12fe539502bd776d098502b2331f7abf786a87b9b738b6e6b19cabb" Jan 26 15:58:24 crc kubenswrapper[4896]: I0126 15:58:24.640564 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5cfccffc99-hcln8_d87cc614-885a-440d-8310-bd22b599a383/console/0.log" Jan 26 15:58:24 crc kubenswrapper[4896]: I0126 15:58:24.640711 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5cfccffc99-hcln8" event={"ID":"d87cc614-885a-440d-8310-bd22b599a383","Type":"ContainerDied","Data":"a33936c88a2a578362a964e1064490121b7ee517ebfed1c72251932336b2fea1"} Jan 26 15:58:24 crc kubenswrapper[4896]: I0126 15:58:24.640733 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5cfccffc99-hcln8" Jan 26 15:58:24 crc kubenswrapper[4896]: I0126 15:58:24.790035 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="153f6a72-6423-4be2-b387-709f64bee0ee" path="/var/lib/kubelet/pods/153f6a72-6423-4be2-b387-709f64bee0ee/volumes" Jan 26 15:58:24 crc kubenswrapper[4896]: I0126 15:58:24.791024 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76be1e7b-2d32-4268-befa-a064858bb503" path="/var/lib/kubelet/pods/76be1e7b-2d32-4268-befa-a064858bb503/volumes" Jan 26 15:58:24 crc kubenswrapper[4896]: I0126 15:58:24.791543 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5cfccffc99-hcln8"] Jan 26 15:58:24 crc kubenswrapper[4896]: I0126 15:58:24.792724 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5cfccffc99-hcln8"] Jan 26 15:58:24 crc kubenswrapper[4896]: I0126 15:58:24.806519 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-5bgbx"] Jan 26 15:58:25 crc kubenswrapper[4896]: I0126 15:58:25.055690 4896 scope.go:117] "RemoveContainer" containerID="5ea2589e4686a46d835d293ed5cfd303adf11a3abb4e1a254518f7daa1744cb3" Jan 26 15:58:25 crc kubenswrapper[4896]: W0126 15:58:25.063158 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6152e32_f156_409d_99fb_13e07813a47e.slice/crio-2ea37afe0520d739f4f6577abed91b6e053c10765e8449a897ee6a6ebdeb071e WatchSource:0}: Error finding container 2ea37afe0520d739f4f6577abed91b6e053c10765e8449a897ee6a6ebdeb071e: Status 404 returned error can't find the container with id 2ea37afe0520d739f4f6577abed91b6e053c10765e8449a897ee6a6ebdeb071e Jan 26 15:58:27 crc kubenswrapper[4896]: E0126 15:58:25.340747 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: reading 
blob sha256:4aa0ea1413d37a58615488592a0b827ea4b2e48fa5a77cf707d0e35f025e613f: Get \"https://registry.k8s.io/v2/kube-state-metrics/kube-state-metrics/blobs/sha256:4aa0ea1413d37a58615488592a0b827ea4b2e48fa5a77cf707d0e35f025e613f\": context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 26 15:58:27 crc kubenswrapper[4896]: E0126 15:58:25.341491 4896 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:4aa0ea1413d37a58615488592a0b827ea4b2e48fa5a77cf707d0e35f025e613f: Get \"https://registry.k8s.io/v2/kube-state-metrics/kube-state-metrics/blobs/sha256:4aa0ea1413d37a58615488592a0b827ea4b2e48fa5a77cf707d0e35f025e613f\": context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 26 15:58:27 crc kubenswrapper[4896]: E0126 15:58:25.341682 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l4qcl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(68bd2b73-99a8-427d-a4bf-2648d7580be8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:4aa0ea1413d37a58615488592a0b827ea4b2e48fa5a77cf707d0e35f025e613f: Get \"https://registry.k8s.io/v2/kube-state-metrics/kube-state-metrics/blobs/sha256:4aa0ea1413d37a58615488592a0b827ea4b2e48fa5a77cf707d0e35f025e613f\": context canceled" logger="UnhandledError" Jan 26 15:58:27 crc kubenswrapper[4896]: E0126 15:58:25.343873 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:4aa0ea1413d37a58615488592a0b827ea4b2e48fa5a77cf707d0e35f025e613f: Get 
\\\"https://registry.k8s.io/v2/kube-state-metrics/kube-state-metrics/blobs/sha256:4aa0ea1413d37a58615488592a0b827ea4b2e48fa5a77cf707d0e35f025e613f\\\": context canceled\"" pod="openstack/kube-state-metrics-0" podUID="68bd2b73-99a8-427d-a4bf-2648d7580be8" Jan 26 15:58:27 crc kubenswrapper[4896]: I0126 15:58:26.037557 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-5bgbx" event={"ID":"f6152e32-f156-409d-99fb-13e07813a47e","Type":"ContainerStarted","Data":"2ea37afe0520d739f4f6577abed91b6e053c10765e8449a897ee6a6ebdeb071e"} Jan 26 15:58:27 crc kubenswrapper[4896]: I0126 15:58:26.040610 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7a3e4fe3-b61e-4200-acf9-9ba170d68402","Type":"ContainerStarted","Data":"15edfd1666e77d0cebeb91324e37863fd13092f1497839db2cd84aa82275756a"} Jan 26 15:58:27 crc kubenswrapper[4896]: I0126 15:58:26.045197 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"78b988fb-f698-4b52-8771-2599b5441229","Type":"ContainerStarted","Data":"264de8e557cd8b7888a6e3c3caf9fe7057ee4c62a5ae54935c68c6a856347bbd"} Jan 26 15:58:27 crc kubenswrapper[4896]: E0126 15:58:26.050625 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="68bd2b73-99a8-427d-a4bf-2648d7580be8" Jan 26 15:58:27 crc kubenswrapper[4896]: E0126 15:58:26.553463 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-sb-0" podUID="97b874a0-24c5-4c30-ae9b-33b380c5a99b" Jan 26 15:58:27 crc kubenswrapper[4896]: E0126 15:58:26.565671 4896 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:98706c286da2c6fe28e9b8b1f32cd40bde3bda835fade711a62193fefd3575f7: Get \\\"https://quay.io/v2/podified-antelope-centos9/openstack-ovn-nb-db-server/blobs/sha256:98706c286da2c6fe28e9b8b1f32cd40bde3bda835fade711a62193fefd3575f7\\\": context canceled\"" pod="openstack/ovsdbserver-nb-0" podUID="56852f16-c116-4ec4-b22f-16952ac363b3" Jan 26 15:58:27 crc kubenswrapper[4896]: I0126 15:58:26.772045 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d87cc614-885a-440d-8310-bd22b599a383" path="/var/lib/kubelet/pods/d87cc614-885a-440d-8310-bd22b599a383/volumes" Jan 26 15:58:27 crc kubenswrapper[4896]: I0126 15:58:27.063512 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"56852f16-c116-4ec4-b22f-16952ac363b3","Type":"ContainerStarted","Data":"d912e275b773d486c727143f8bf8de731368bcb14e600233bca544d62f1c7328"} Jan 26 15:58:27 crc kubenswrapper[4896]: I0126 15:58:27.066743 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"97b874a0-24c5-4c30-ae9b-33b380c5a99b","Type":"ContainerStarted","Data":"7ab91203aa7661dcf0eacca39d722ba931c3d18c09a4b304d302a2ef17782241"} Jan 26 15:58:27 crc kubenswrapper[4896]: I0126 15:58:27.068132 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-5bgbx" event={"ID":"f6152e32-f156-409d-99fb-13e07813a47e","Type":"ContainerStarted","Data":"aa18e996257a6ce1948531d5b75cd838a64e07d519cb0a11152281110698630b"} Jan 26 15:58:27 crc kubenswrapper[4896]: E0126 15:58:27.069312 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified\\\"\"" 
pod="openstack/ovsdbserver-sb-0" podUID="97b874a0-24c5-4c30-ae9b-33b380c5a99b" Jan 26 15:58:27 crc kubenswrapper[4896]: I0126 15:58:27.144659 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-5bgbx" podStartSLOduration=7.144634668 podStartE2EDuration="7.144634668s" podCreationTimestamp="2026-01-26 15:58:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:58:27.130563267 +0000 UTC m=+1464.912443680" watchObservedRunningTime="2026-01-26 15:58:27.144634668 +0000 UTC m=+1464.926515061" Jan 26 15:58:27 crc kubenswrapper[4896]: I0126 15:58:27.861744 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-f8swm"] Jan 26 15:58:27 crc kubenswrapper[4896]: W0126 15:58:27.882990 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod836d19fc_dc6e_4b0e_98c3_5599dabebb44.slice/crio-81a152b637b118071a3a2315114fe5cd96a0d3f67641ad4cb204453404c732c0 WatchSource:0}: Error finding container 81a152b637b118071a3a2315114fe5cd96a0d3f67641ad4cb204453404c732c0: Status 404 returned error can't find the container with id 81a152b637b118071a3a2315114fe5cd96a0d3f67641ad4cb204453404c732c0 Jan 26 15:58:27 crc kubenswrapper[4896]: I0126 15:58:27.979124 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-9x6sn"] Jan 26 15:58:28 crc kubenswrapper[4896]: I0126 15:58:28.090630 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"45b5821a-5c82-485e-ade4-f6de2aea62d7","Type":"ContainerStarted","Data":"758ae79e3f6e71ff84c487f54b312867f854bbbb4949ec8d0a1f4ca6a56ee855"} Jan 26 15:58:28 crc kubenswrapper[4896]: I0126 15:58:28.094548 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-f8swm" 
event={"ID":"836d19fc-dc6e-4b0e-98c3-5599dabebb44","Type":"ContainerStarted","Data":"81a152b637b118071a3a2315114fe5cd96a0d3f67641ad4cb204453404c732c0"} Jan 26 15:58:28 crc kubenswrapper[4896]: I0126 15:58:28.111492 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a13f72f8-afaf-4e0f-b76b-342e5391579c","Type":"ContainerStarted","Data":"5285cbca490498b0756067eefe09f89d22391cf62f12a87dd3c066307f0e869f"} Jan 26 15:58:28 crc kubenswrapper[4896]: I0126 15:58:28.121046 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"dc8f497b-3dfe-4cfc-aac0-34145dd221ed","Type":"ContainerStarted","Data":"8ee4ce832a158d9875bc4598e8a7f21961800b70d3c724cc7128fbb12e1524fe"} Jan 26 15:58:28 crc kubenswrapper[4896]: I0126 15:58:28.125404 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"22577788-39b3-431e-9a18-7a15b8f66045","Type":"ContainerStarted","Data":"866e250b3dc594a32f2d37390a2c3e08821f48734dcc9202ca6c3e16478395fd"} Jan 26 15:58:28 crc kubenswrapper[4896]: E0126 15:58:28.127356 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="97b874a0-24c5-4c30-ae9b-33b380c5a99b" Jan 26 15:58:29 crc kubenswrapper[4896]: W0126 15:58:29.591214 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod309b8de4_298f_4828_9197_b06d2a7ddcf9.slice/crio-bb721a4faacf795fa60d6336fb2d1c711f7ef2be252e002bde2cba316d02376f WatchSource:0}: Error finding container bb721a4faacf795fa60d6336fb2d1c711f7ef2be252e002bde2cba316d02376f: Status 404 returned error can't find the container with id 
bb721a4faacf795fa60d6336fb2d1c711f7ef2be252e002bde2cba316d02376f Jan 26 15:58:30 crc kubenswrapper[4896]: I0126 15:58:30.158925 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-9x6sn" event={"ID":"309b8de4-298f-4828-9197-b06d2a7ddcf9","Type":"ContainerStarted","Data":"bb721a4faacf795fa60d6336fb2d1c711f7ef2be252e002bde2cba316d02376f"} Jan 26 15:58:31 crc kubenswrapper[4896]: I0126 15:58:31.170789 4896 generic.go:334] "Generic (PLEG): container finished" podID="309b8de4-298f-4828-9197-b06d2a7ddcf9" containerID="7649f005c1e2b69498d72df567b2a649b94b45e4790cafcf34fac285ab6a49cc" exitCode=0 Jan 26 15:58:31 crc kubenswrapper[4896]: I0126 15:58:31.170868 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-9x6sn" event={"ID":"309b8de4-298f-4828-9197-b06d2a7ddcf9","Type":"ContainerDied","Data":"7649f005c1e2b69498d72df567b2a649b94b45e4790cafcf34fac285ab6a49cc"} Jan 26 15:58:31 crc kubenswrapper[4896]: I0126 15:58:31.175135 4896 generic.go:334] "Generic (PLEG): container finished" podID="836d19fc-dc6e-4b0e-98c3-5599dabebb44" containerID="7e3eeeb878a0f01669a696db0f4389e95731f56a14db6f3fcd9ece9265043033" exitCode=0 Jan 26 15:58:31 crc kubenswrapper[4896]: I0126 15:58:31.175295 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-f8swm" event={"ID":"836d19fc-dc6e-4b0e-98c3-5599dabebb44","Type":"ContainerDied","Data":"7e3eeeb878a0f01669a696db0f4389e95731f56a14db6f3fcd9ece9265043033"} Jan 26 15:58:31 crc kubenswrapper[4896]: I0126 15:58:31.177428 4896 generic.go:334] "Generic (PLEG): container finished" podID="97576f52-b567-4334-844b-bd9ae73a82b7" containerID="4c376f1f5982e00d4e6025be5df330e611374711cd301c8de980b021db00c1e0" exitCode=0 Jan 26 15:58:31 crc kubenswrapper[4896]: I0126 15:58:31.177491 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hlm9m" 
event={"ID":"97576f52-b567-4334-844b-bd9ae73a82b7","Type":"ContainerDied","Data":"4c376f1f5982e00d4e6025be5df330e611374711cd301c8de980b021db00c1e0"} Jan 26 15:58:31 crc kubenswrapper[4896]: I0126 15:58:31.183571 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"83b2e80c-4c60-4a3e-a9f3-0ce2af747e4f","Type":"ContainerStarted","Data":"84ca7a0e96b50171fda20de71a0f5ac31fcc16e1f7b4a88c8be0b8cbc0c2d55a"} Jan 26 15:58:31 crc kubenswrapper[4896]: I0126 15:58:31.184809 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 26 15:58:31 crc kubenswrapper[4896]: I0126 15:58:31.188676 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"56852f16-c116-4ec4-b22f-16952ac363b3","Type":"ContainerStarted","Data":"12febfa80e8870b09a1211f807fc9f6ca7c621071ea581127abd74743044afde"} Jan 26 15:58:31 crc kubenswrapper[4896]: I0126 15:58:31.372983 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 26 15:58:31 crc kubenswrapper[4896]: I0126 15:58:31.375702 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=19.507383013 podStartE2EDuration="56.375682979s" podCreationTimestamp="2026-01-26 15:57:35 +0000 UTC" firstStartedPulling="2026-01-26 15:57:52.992492134 +0000 UTC m=+1430.774372527" lastFinishedPulling="2026-01-26 15:58:29.86079211 +0000 UTC m=+1467.642672493" observedRunningTime="2026-01-26 15:58:31.255529328 +0000 UTC m=+1469.037409731" watchObservedRunningTime="2026-01-26 15:58:31.375682979 +0000 UTC m=+1469.157563372" Jan 26 15:58:31 crc kubenswrapper[4896]: I0126 15:58:31.401829 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.499316858 podStartE2EDuration="1m1.401806971s" podCreationTimestamp="2026-01-26 15:57:30 +0000 UTC" 
firstStartedPulling="2026-01-26 15:57:31.481819376 +0000 UTC m=+1409.263699769" lastFinishedPulling="2026-01-26 15:58:30.384309479 +0000 UTC m=+1468.166189882" observedRunningTime="2026-01-26 15:58:31.393938981 +0000 UTC m=+1469.175819394" watchObservedRunningTime="2026-01-26 15:58:31.401806971 +0000 UTC m=+1469.183687364" Jan 26 15:58:32 crc kubenswrapper[4896]: I0126 15:58:32.201002 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hlm9m" event={"ID":"97576f52-b567-4334-844b-bd9ae73a82b7","Type":"ContainerStarted","Data":"bcb1e971067a8def53a22b0ff1ee06d15abac6e8a785fafcd50be6b43483589e"} Jan 26 15:58:32 crc kubenswrapper[4896]: I0126 15:58:32.201369 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hlm9m" event={"ID":"97576f52-b567-4334-844b-bd9ae73a82b7","Type":"ContainerStarted","Data":"317f7d720e1748fff585c7b732143903046bfc146dd8928d8615e234de1464b8"} Jan 26 15:58:32 crc kubenswrapper[4896]: I0126 15:58:32.201779 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-hlm9m" Jan 26 15:58:32 crc kubenswrapper[4896]: I0126 15:58:32.201855 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-hlm9m" Jan 26 15:58:32 crc kubenswrapper[4896]: I0126 15:58:32.203527 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-4s75s" event={"ID":"561e12b6-a1fb-407f-ae57-6a28f00f9093","Type":"ContainerStarted","Data":"8b5c022041be6e16b8933b55a1c410a13d8ecbaafdf46d8f7623f1cdfb02fd56"} Jan 26 15:58:32 crc kubenswrapper[4896]: I0126 15:58:32.205682 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-9x6sn" event={"ID":"309b8de4-298f-4828-9197-b06d2a7ddcf9","Type":"ContainerStarted","Data":"4a2d90d69e8945e964d3de1b70deaa03795abea100b435de502eb527f06d0ff1"} Jan 26 15:58:32 crc kubenswrapper[4896]: I0126 
15:58:32.205945 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-9x6sn" Jan 26 15:58:32 crc kubenswrapper[4896]: I0126 15:58:32.209228 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-f8swm" event={"ID":"836d19fc-dc6e-4b0e-98c3-5599dabebb44","Type":"ContainerStarted","Data":"8f6d2d7d1630ec64151dba49488c4a8c070d1a03fc83b0b9640daaf25d6926f5"} Jan 26 15:58:32 crc kubenswrapper[4896]: I0126 15:58:32.234551 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-hlm9m" podStartSLOduration=19.355188434 podStartE2EDuration="56.234527508s" podCreationTimestamp="2026-01-26 15:57:36 +0000 UTC" firstStartedPulling="2026-01-26 15:57:52.983323791 +0000 UTC m=+1430.765204194" lastFinishedPulling="2026-01-26 15:58:29.862662865 +0000 UTC m=+1467.644543268" observedRunningTime="2026-01-26 15:58:32.225902599 +0000 UTC m=+1470.007783002" watchObservedRunningTime="2026-01-26 15:58:32.234527508 +0000 UTC m=+1470.016407901" Jan 26 15:58:32 crc kubenswrapper[4896]: I0126 15:58:32.253196 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-9x6sn" podStartSLOduration=10.69992202 podStartE2EDuration="11.253172649s" podCreationTimestamp="2026-01-26 15:58:21 +0000 UTC" firstStartedPulling="2026-01-26 15:58:29.60068645 +0000 UTC m=+1467.382566843" lastFinishedPulling="2026-01-26 15:58:30.153937079 +0000 UTC m=+1467.935817472" observedRunningTime="2026-01-26 15:58:32.242563113 +0000 UTC m=+1470.024443506" watchObservedRunningTime="2026-01-26 15:58:32.253172649 +0000 UTC m=+1470.035053042" Jan 26 15:58:32 crc kubenswrapper[4896]: I0126 15:58:32.263688 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-4s75s" podStartSLOduration=8.869205879 podStartE2EDuration="59.263660664s" 
podCreationTimestamp="2026-01-26 15:57:33 +0000 UTC" firstStartedPulling="2026-01-26 15:57:40.709882822 +0000 UTC m=+1418.491763215" lastFinishedPulling="2026-01-26 15:58:31.104337607 +0000 UTC m=+1468.886218000" observedRunningTime="2026-01-26 15:58:32.25565035 +0000 UTC m=+1470.037530763" watchObservedRunningTime="2026-01-26 15:58:32.263660664 +0000 UTC m=+1470.045541057" Jan 26 15:58:32 crc kubenswrapper[4896]: I0126 15:58:32.282937 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-f8swm" podStartSLOduration=10.309743604 podStartE2EDuration="12.282918891s" podCreationTimestamp="2026-01-26 15:58:20 +0000 UTC" firstStartedPulling="2026-01-26 15:58:27.887650783 +0000 UTC m=+1465.669531176" lastFinishedPulling="2026-01-26 15:58:29.86082607 +0000 UTC m=+1467.642706463" observedRunningTime="2026-01-26 15:58:32.280308427 +0000 UTC m=+1470.062188820" watchObservedRunningTime="2026-01-26 15:58:32.282918891 +0000 UTC m=+1470.064799284" Jan 26 15:58:32 crc kubenswrapper[4896]: I0126 15:58:32.348371 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 26 15:58:33 crc kubenswrapper[4896]: I0126 15:58:33.218296 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-f8swm" Jan 26 15:58:34 crc kubenswrapper[4896]: I0126 15:58:34.408252 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 26 15:58:35 crc kubenswrapper[4896]: I0126 15:58:35.293813 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 26 15:58:35 crc kubenswrapper[4896]: I0126 15:58:35.828788 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 26 15:58:36 crc kubenswrapper[4896]: I0126 15:58:36.253045 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-c9bzf" event={"ID":"f24a2e9c-671f-48a5-a5f5-55b864b17d19","Type":"ContainerStarted","Data":"a6ef15b4a91282a8e599ce2e5bd5379cc30356466ac251335837ce2c00c5fd39"} Jan 26 15:58:36 crc kubenswrapper[4896]: I0126 15:58:36.253628 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-c9bzf" Jan 26 15:58:36 crc kubenswrapper[4896]: I0126 15:58:36.255028 4896 generic.go:334] "Generic (PLEG): container finished" podID="78b988fb-f698-4b52-8771-2599b5441229" containerID="264de8e557cd8b7888a6e3c3caf9fe7057ee4c62a5ae54935c68c6a856347bbd" exitCode=0 Jan 26 15:58:36 crc kubenswrapper[4896]: I0126 15:58:36.255083 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"78b988fb-f698-4b52-8771-2599b5441229","Type":"ContainerDied","Data":"264de8e557cd8b7888a6e3c3caf9fe7057ee4c62a5ae54935c68c6a856347bbd"} Jan 26 15:58:36 crc kubenswrapper[4896]: I0126 15:58:36.305316 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-c9bzf" podStartSLOduration=5.670936085 podStartE2EDuration="1m0.305289518s" podCreationTimestamp="2026-01-26 15:57:36 +0000 UTC" firstStartedPulling="2026-01-26 15:57:41.084059201 +0000 UTC m=+1418.865939594" lastFinishedPulling="2026-01-26 15:58:35.718412634 +0000 UTC m=+1473.500293027" observedRunningTime="2026-01-26 15:58:36.298942344 +0000 UTC m=+1474.080822747" watchObservedRunningTime="2026-01-26 15:58:36.305289518 +0000 UTC m=+1474.087169911" Jan 26 15:58:36 crc kubenswrapper[4896]: I0126 15:58:36.570154 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-f8swm" Jan 26 15:58:37 crc kubenswrapper[4896]: I0126 15:58:37.032837 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-9x6sn" Jan 26 15:58:37 crc kubenswrapper[4896]: I0126 15:58:37.094549 4896 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-f8swm"] Jan 26 15:58:37 crc kubenswrapper[4896]: I0126 15:58:37.264974 4896 generic.go:334] "Generic (PLEG): container finished" podID="7a3e4fe3-b61e-4200-acf9-9ba170d68402" containerID="15edfd1666e77d0cebeb91324e37863fd13092f1497839db2cd84aa82275756a" exitCode=0 Jan 26 15:58:37 crc kubenswrapper[4896]: I0126 15:58:37.265050 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7a3e4fe3-b61e-4200-acf9-9ba170d68402","Type":"ContainerDied","Data":"15edfd1666e77d0cebeb91324e37863fd13092f1497839db2cd84aa82275756a"} Jan 26 15:58:37 crc kubenswrapper[4896]: I0126 15:58:37.270132 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"78b988fb-f698-4b52-8771-2599b5441229","Type":"ContainerStarted","Data":"a8efdfc12a33df6ffd538c59585b66e7a30fa11b5639b8c27349e5b0215dad71"} Jan 26 15:58:37 crc kubenswrapper[4896]: I0126 15:58:37.270514 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-f8swm" podUID="836d19fc-dc6e-4b0e-98c3-5599dabebb44" containerName="dnsmasq-dns" containerID="cri-o://8f6d2d7d1630ec64151dba49488c4a8c070d1a03fc83b0b9640daaf25d6926f5" gracePeriod=10 Jan 26 15:58:37 crc kubenswrapper[4896]: I0126 15:58:37.344122 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371966.510677 podStartE2EDuration="1m10.344099235s" podCreationTimestamp="2026-01-26 15:57:27 +0000 UTC" firstStartedPulling="2026-01-26 15:57:30.351959208 +0000 UTC m=+1408.133839601" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:58:37.337737112 +0000 UTC m=+1475.119617505" watchObservedRunningTime="2026-01-26 15:58:37.344099235 +0000 UTC m=+1475.125979628" Jan 26 15:58:38 crc kubenswrapper[4896]: I0126 15:58:38.301930 4896 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7a3e4fe3-b61e-4200-acf9-9ba170d68402","Type":"ContainerStarted","Data":"d36670cfb6ea4ae934a7268163c70f6446197b9244bdd24d41bc939c9117a2dc"} Jan 26 15:58:38 crc kubenswrapper[4896]: I0126 15:58:38.306869 4896 generic.go:334] "Generic (PLEG): container finished" podID="836d19fc-dc6e-4b0e-98c3-5599dabebb44" containerID="8f6d2d7d1630ec64151dba49488c4a8c070d1a03fc83b0b9640daaf25d6926f5" exitCode=0 Jan 26 15:58:38 crc kubenswrapper[4896]: I0126 15:58:38.306948 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-f8swm" event={"ID":"836d19fc-dc6e-4b0e-98c3-5599dabebb44","Type":"ContainerDied","Data":"8f6d2d7d1630ec64151dba49488c4a8c070d1a03fc83b0b9640daaf25d6926f5"} Jan 26 15:58:38 crc kubenswrapper[4896]: I0126 15:58:38.308544 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-f8swm" event={"ID":"836d19fc-dc6e-4b0e-98c3-5599dabebb44","Type":"ContainerDied","Data":"81a152b637b118071a3a2315114fe5cd96a0d3f67641ad4cb204453404c732c0"} Jan 26 15:58:38 crc kubenswrapper[4896]: I0126 15:58:38.308573 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81a152b637b118071a3a2315114fe5cd96a0d3f67641ad4cb204453404c732c0" Jan 26 15:58:38 crc kubenswrapper[4896]: I0126 15:58:38.319203 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-f8swm"
Jan 26 15:58:38 crc kubenswrapper[4896]: I0126 15:58:38.336487 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=24.118932535 podStartE2EDuration="1m9.33646167s" podCreationTimestamp="2026-01-26 15:57:29 +0000 UTC" firstStartedPulling="2026-01-26 15:57:31.310549101 +0000 UTC m=+1409.092429494" lastFinishedPulling="2026-01-26 15:58:16.528078236 +0000 UTC m=+1454.309958629" observedRunningTime="2026-01-26 15:58:38.332047043 +0000 UTC m=+1476.113927436" watchObservedRunningTime="2026-01-26 15:58:38.33646167 +0000 UTC m=+1476.118342063"
Jan 26 15:58:38 crc kubenswrapper[4896]: I0126 15:58:38.402008 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/836d19fc-dc6e-4b0e-98c3-5599dabebb44-dns-svc\") pod \"836d19fc-dc6e-4b0e-98c3-5599dabebb44\" (UID: \"836d19fc-dc6e-4b0e-98c3-5599dabebb44\") "
Jan 26 15:58:38 crc kubenswrapper[4896]: I0126 15:58:38.402135 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45fzk\" (UniqueName: \"kubernetes.io/projected/836d19fc-dc6e-4b0e-98c3-5599dabebb44-kube-api-access-45fzk\") pod \"836d19fc-dc6e-4b0e-98c3-5599dabebb44\" (UID: \"836d19fc-dc6e-4b0e-98c3-5599dabebb44\") "
Jan 26 15:58:38 crc kubenswrapper[4896]: I0126 15:58:38.402185 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/836d19fc-dc6e-4b0e-98c3-5599dabebb44-ovsdbserver-nb\") pod \"836d19fc-dc6e-4b0e-98c3-5599dabebb44\" (UID: \"836d19fc-dc6e-4b0e-98c3-5599dabebb44\") "
Jan 26 15:58:38 crc kubenswrapper[4896]: I0126 15:58:38.402390 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/836d19fc-dc6e-4b0e-98c3-5599dabebb44-config\") pod \"836d19fc-dc6e-4b0e-98c3-5599dabebb44\" (UID: \"836d19fc-dc6e-4b0e-98c3-5599dabebb44\") "
Jan 26 15:58:38 crc kubenswrapper[4896]: I0126 15:58:38.411697 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/836d19fc-dc6e-4b0e-98c3-5599dabebb44-kube-api-access-45fzk" (OuterVolumeSpecName: "kube-api-access-45fzk") pod "836d19fc-dc6e-4b0e-98c3-5599dabebb44" (UID: "836d19fc-dc6e-4b0e-98c3-5599dabebb44"). InnerVolumeSpecName "kube-api-access-45fzk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:58:38 crc kubenswrapper[4896]: I0126 15:58:38.458054 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/836d19fc-dc6e-4b0e-98c3-5599dabebb44-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "836d19fc-dc6e-4b0e-98c3-5599dabebb44" (UID: "836d19fc-dc6e-4b0e-98c3-5599dabebb44"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:58:38 crc kubenswrapper[4896]: I0126 15:58:38.461100 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/836d19fc-dc6e-4b0e-98c3-5599dabebb44-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "836d19fc-dc6e-4b0e-98c3-5599dabebb44" (UID: "836d19fc-dc6e-4b0e-98c3-5599dabebb44"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:58:38 crc kubenswrapper[4896]: I0126 15:58:38.470133 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/836d19fc-dc6e-4b0e-98c3-5599dabebb44-config" (OuterVolumeSpecName: "config") pod "836d19fc-dc6e-4b0e-98c3-5599dabebb44" (UID: "836d19fc-dc6e-4b0e-98c3-5599dabebb44"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 15:58:38 crc kubenswrapper[4896]: I0126 15:58:38.506342 4896 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/836d19fc-dc6e-4b0e-98c3-5599dabebb44-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 26 15:58:38 crc kubenswrapper[4896]: I0126 15:58:38.506403 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45fzk\" (UniqueName: \"kubernetes.io/projected/836d19fc-dc6e-4b0e-98c3-5599dabebb44-kube-api-access-45fzk\") on node \"crc\" DevicePath \"\""
Jan 26 15:58:38 crc kubenswrapper[4896]: I0126 15:58:38.506422 4896 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/836d19fc-dc6e-4b0e-98c3-5599dabebb44-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 26 15:58:38 crc kubenswrapper[4896]: I0126 15:58:38.506435 4896 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/836d19fc-dc6e-4b0e-98c3-5599dabebb44-config\") on node \"crc\" DevicePath \"\""
Jan 26 15:58:39 crc kubenswrapper[4896]: I0126 15:58:39.315244 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-f8swm"
Jan 26 15:58:39 crc kubenswrapper[4896]: I0126 15:58:39.343621 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-f8swm"]
Jan 26 15:58:39 crc kubenswrapper[4896]: I0126 15:58:39.352817 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-f8swm"]
Jan 26 15:58:39 crc kubenswrapper[4896]: I0126 15:58:39.613795 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Jan 26 15:58:39 crc kubenswrapper[4896]: I0126 15:58:39.613938 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Jan 26 15:58:40 crc kubenswrapper[4896]: I0126 15:58:40.729798 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Jan 26 15:58:40 crc kubenswrapper[4896]: I0126 15:58:40.730383 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Jan 26 15:58:40 crc kubenswrapper[4896]: I0126 15:58:40.771694 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="836d19fc-dc6e-4b0e-98c3-5599dabebb44" path="/var/lib/kubelet/pods/836d19fc-dc6e-4b0e-98c3-5599dabebb44/volumes"
Jan 26 15:58:41 crc kubenswrapper[4896]: I0126 15:58:41.335477 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"68bd2b73-99a8-427d-a4bf-2648d7580be8","Type":"ContainerStarted","Data":"86a55a27c1373ebace03ff2bab44b3b29b579e2f54a58f9bfb8d2f013fdbff97"}
Jan 26 15:58:41 crc kubenswrapper[4896]: I0126 15:58:41.337244 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 26 15:58:42 crc kubenswrapper[4896]: I0126 15:58:42.187376 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Jan 26 15:58:42 crc kubenswrapper[4896]: I0126 15:58:42.209446 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=10.256656794 podStartE2EDuration="1m10.209425308s" podCreationTimestamp="2026-01-26 15:57:32 +0000 UTC" firstStartedPulling="2026-01-26 15:57:40.335945587 +0000 UTC m=+1418.117825980" lastFinishedPulling="2026-01-26 15:58:40.288714101 +0000 UTC m=+1478.070594494" observedRunningTime="2026-01-26 15:58:41.355783684 +0000 UTC m=+1479.137664087" watchObservedRunningTime="2026-01-26 15:58:42.209425308 +0000 UTC m=+1479.991305701"
Jan 26 15:58:42 crc kubenswrapper[4896]: I0126 15:58:42.270915 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Jan 26 15:58:42 crc kubenswrapper[4896]: I0126 15:58:42.361318 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2a70b903-4311-4bcf-833a-d9fdd2ab5d24","Type":"ContainerStarted","Data":"cba41fd06ce57f906fe4acca16da8d2d6dde54a1f486dd87a9a6a8b54bd1526b"}
Jan 26 15:58:42 crc kubenswrapper[4896]: I0126 15:58:42.849431 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-5m2nv"]
Jan 26 15:58:42 crc kubenswrapper[4896]: E0126 15:58:42.849940 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d87cc614-885a-440d-8310-bd22b599a383" containerName="console"
Jan 26 15:58:42 crc kubenswrapper[4896]: I0126 15:58:42.849968 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="d87cc614-885a-440d-8310-bd22b599a383" containerName="console"
Jan 26 15:58:42 crc kubenswrapper[4896]: E0126 15:58:42.850034 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="836d19fc-dc6e-4b0e-98c3-5599dabebb44" containerName="dnsmasq-dns"
Jan 26 15:58:42 crc kubenswrapper[4896]: I0126 15:58:42.850043 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="836d19fc-dc6e-4b0e-98c3-5599dabebb44" containerName="dnsmasq-dns"
Jan 26 15:58:42 crc kubenswrapper[4896]: E0126 15:58:42.850054 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="836d19fc-dc6e-4b0e-98c3-5599dabebb44" containerName="init"
Jan 26 15:58:42 crc kubenswrapper[4896]: I0126 15:58:42.850062 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="836d19fc-dc6e-4b0e-98c3-5599dabebb44" containerName="init"
Jan 26 15:58:42 crc kubenswrapper[4896]: I0126 15:58:42.850312 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="836d19fc-dc6e-4b0e-98c3-5599dabebb44" containerName="dnsmasq-dns"
Jan 26 15:58:42 crc kubenswrapper[4896]: I0126 15:58:42.850344 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="d87cc614-885a-440d-8310-bd22b599a383" containerName="console"
Jan 26 15:58:42 crc kubenswrapper[4896]: I0126 15:58:42.855446 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-5m2nv"
Jan 26 15:58:42 crc kubenswrapper[4896]: I0126 15:58:42.872436 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-5m2nv"]
Jan 26 15:58:42 crc kubenswrapper[4896]: I0126 15:58:42.937804 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-hjhkk"]
Jan 26 15:58:42 crc kubenswrapper[4896]: I0126 15:58:42.940390 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-hjhkk"
Jan 26 15:58:42 crc kubenswrapper[4896]: I0126 15:58:42.978833 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-hjhkk"]
Jan 26 15:58:42 crc kubenswrapper[4896]: I0126 15:58:42.982345 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrt4z\" (UniqueName: \"kubernetes.io/projected/96a6a95e-2fb8-4a8c-80ca-a6fe38c4120c-kube-api-access-vrt4z\") pod \"mysqld-exporter-openstack-db-create-5m2nv\" (UID: \"96a6a95e-2fb8-4a8c-80ca-a6fe38c4120c\") " pod="openstack/mysqld-exporter-openstack-db-create-5m2nv"
Jan 26 15:58:42 crc kubenswrapper[4896]: I0126 15:58:42.982526 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96a6a95e-2fb8-4a8c-80ca-a6fe38c4120c-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-5m2nv\" (UID: \"96a6a95e-2fb8-4a8c-80ca-a6fe38c4120c\") " pod="openstack/mysqld-exporter-openstack-db-create-5m2nv"
Jan 26 15:58:43 crc kubenswrapper[4896]: I0126 15:58:43.061956 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-996c-account-create-update-qfp2k"]
Jan 26 15:58:43 crc kubenswrapper[4896]: I0126 15:58:43.063758 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-996c-account-create-update-qfp2k"
Jan 26 15:58:43 crc kubenswrapper[4896]: I0126 15:58:43.069404 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Jan 26 15:58:43 crc kubenswrapper[4896]: I0126 15:58:43.071424 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret"
Jan 26 15:58:43 crc kubenswrapper[4896]: I0126 15:58:43.086314 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96a6a95e-2fb8-4a8c-80ca-a6fe38c4120c-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-5m2nv\" (UID: \"96a6a95e-2fb8-4a8c-80ca-a6fe38c4120c\") " pod="openstack/mysqld-exporter-openstack-db-create-5m2nv"
Jan 26 15:58:43 crc kubenswrapper[4896]: I0126 15:58:43.086440 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf2859fd-5b7b-45fa-ae36-18244c995e05-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-hjhkk\" (UID: \"bf2859fd-5b7b-45fa-ae36-18244c995e05\") " pod="openstack/dnsmasq-dns-698758b865-hjhkk"
Jan 26 15:58:43 crc kubenswrapper[4896]: I0126 15:58:43.086497 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf2859fd-5b7b-45fa-ae36-18244c995e05-config\") pod \"dnsmasq-dns-698758b865-hjhkk\" (UID: \"bf2859fd-5b7b-45fa-ae36-18244c995e05\") " pod="openstack/dnsmasq-dns-698758b865-hjhkk"
Jan 26 15:58:43 crc kubenswrapper[4896]: I0126 15:58:43.086607 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrt4z\" (UniqueName: \"kubernetes.io/projected/96a6a95e-2fb8-4a8c-80ca-a6fe38c4120c-kube-api-access-vrt4z\") pod \"mysqld-exporter-openstack-db-create-5m2nv\" (UID: \"96a6a95e-2fb8-4a8c-80ca-a6fe38c4120c\") " pod="openstack/mysqld-exporter-openstack-db-create-5m2nv"
Jan 26 15:58:43 crc kubenswrapper[4896]: I0126 15:58:43.086696 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf2859fd-5b7b-45fa-ae36-18244c995e05-dns-svc\") pod \"dnsmasq-dns-698758b865-hjhkk\" (UID: \"bf2859fd-5b7b-45fa-ae36-18244c995e05\") " pod="openstack/dnsmasq-dns-698758b865-hjhkk"
Jan 26 15:58:43 crc kubenswrapper[4896]: I0126 15:58:43.086807 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sc6n\" (UniqueName: \"kubernetes.io/projected/bf2859fd-5b7b-45fa-ae36-18244c995e05-kube-api-access-4sc6n\") pod \"dnsmasq-dns-698758b865-hjhkk\" (UID: \"bf2859fd-5b7b-45fa-ae36-18244c995e05\") " pod="openstack/dnsmasq-dns-698758b865-hjhkk"
Jan 26 15:58:43 crc kubenswrapper[4896]: I0126 15:58:43.086842 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf2859fd-5b7b-45fa-ae36-18244c995e05-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-hjhkk\" (UID: \"bf2859fd-5b7b-45fa-ae36-18244c995e05\") " pod="openstack/dnsmasq-dns-698758b865-hjhkk"
Jan 26 15:58:43 crc kubenswrapper[4896]: I0126 15:58:43.090082 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96a6a95e-2fb8-4a8c-80ca-a6fe38c4120c-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-5m2nv\" (UID: \"96a6a95e-2fb8-4a8c-80ca-a6fe38c4120c\") " pod="openstack/mysqld-exporter-openstack-db-create-5m2nv"
Jan 26 15:58:43 crc kubenswrapper[4896]: I0126 15:58:43.099460 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-996c-account-create-update-qfp2k"]
Jan 26 15:58:43 crc kubenswrapper[4896]: I0126 15:58:43.133005 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrt4z\" (UniqueName: \"kubernetes.io/projected/96a6a95e-2fb8-4a8c-80ca-a6fe38c4120c-kube-api-access-vrt4z\") pod \"mysqld-exporter-openstack-db-create-5m2nv\" (UID: \"96a6a95e-2fb8-4a8c-80ca-a6fe38c4120c\") " pod="openstack/mysqld-exporter-openstack-db-create-5m2nv"
Jan 26 15:58:43 crc kubenswrapper[4896]: I0126 15:58:43.190016 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sc6n\" (UniqueName: \"kubernetes.io/projected/bf2859fd-5b7b-45fa-ae36-18244c995e05-kube-api-access-4sc6n\") pod \"dnsmasq-dns-698758b865-hjhkk\" (UID: \"bf2859fd-5b7b-45fa-ae36-18244c995e05\") " pod="openstack/dnsmasq-dns-698758b865-hjhkk"
Jan 26 15:58:43 crc kubenswrapper[4896]: I0126 15:58:43.190081 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf2859fd-5b7b-45fa-ae36-18244c995e05-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-hjhkk\" (UID: \"bf2859fd-5b7b-45fa-ae36-18244c995e05\") " pod="openstack/dnsmasq-dns-698758b865-hjhkk"
Jan 26 15:58:43 crc kubenswrapper[4896]: I0126 15:58:43.190200 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf2859fd-5b7b-45fa-ae36-18244c995e05-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-hjhkk\" (UID: \"bf2859fd-5b7b-45fa-ae36-18244c995e05\") " pod="openstack/dnsmasq-dns-698758b865-hjhkk"
Jan 26 15:58:43 crc kubenswrapper[4896]: I0126 15:58:43.190268 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf2859fd-5b7b-45fa-ae36-18244c995e05-config\") pod \"dnsmasq-dns-698758b865-hjhkk\" (UID: \"bf2859fd-5b7b-45fa-ae36-18244c995e05\") " pod="openstack/dnsmasq-dns-698758b865-hjhkk"
Jan 26 15:58:43 crc kubenswrapper[4896]: I0126 15:58:43.190331 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgv76\" (UniqueName: \"kubernetes.io/projected/960a0ea6-5b48-4b87-9253-ca2c6d153b02-kube-api-access-mgv76\") pod \"mysqld-exporter-996c-account-create-update-qfp2k\" (UID: \"960a0ea6-5b48-4b87-9253-ca2c6d153b02\") " pod="openstack/mysqld-exporter-996c-account-create-update-qfp2k"
Jan 26 15:58:43 crc kubenswrapper[4896]: I0126 15:58:43.190475 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf2859fd-5b7b-45fa-ae36-18244c995e05-dns-svc\") pod \"dnsmasq-dns-698758b865-hjhkk\" (UID: \"bf2859fd-5b7b-45fa-ae36-18244c995e05\") " pod="openstack/dnsmasq-dns-698758b865-hjhkk"
Jan 26 15:58:43 crc kubenswrapper[4896]: I0126 15:58:43.190527 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/960a0ea6-5b48-4b87-9253-ca2c6d153b02-operator-scripts\") pod \"mysqld-exporter-996c-account-create-update-qfp2k\" (UID: \"960a0ea6-5b48-4b87-9253-ca2c6d153b02\") " pod="openstack/mysqld-exporter-996c-account-create-update-qfp2k"
Jan 26 15:58:43 crc kubenswrapper[4896]: I0126 15:58:43.192010 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf2859fd-5b7b-45fa-ae36-18244c995e05-config\") pod \"dnsmasq-dns-698758b865-hjhkk\" (UID: \"bf2859fd-5b7b-45fa-ae36-18244c995e05\") " pod="openstack/dnsmasq-dns-698758b865-hjhkk"
Jan 26 15:58:43 crc kubenswrapper[4896]: I0126 15:58:43.192691 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf2859fd-5b7b-45fa-ae36-18244c995e05-dns-svc\") pod \"dnsmasq-dns-698758b865-hjhkk\" (UID: \"bf2859fd-5b7b-45fa-ae36-18244c995e05\") " pod="openstack/dnsmasq-dns-698758b865-hjhkk"
Jan 26 15:58:43 crc kubenswrapper[4896]: I0126 15:58:43.192806 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf2859fd-5b7b-45fa-ae36-18244c995e05-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-hjhkk\" (UID: \"bf2859fd-5b7b-45fa-ae36-18244c995e05\") " pod="openstack/dnsmasq-dns-698758b865-hjhkk"
Jan 26 15:58:43 crc kubenswrapper[4896]: I0126 15:58:43.193527 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf2859fd-5b7b-45fa-ae36-18244c995e05-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-hjhkk\" (UID: \"bf2859fd-5b7b-45fa-ae36-18244c995e05\") " pod="openstack/dnsmasq-dns-698758b865-hjhkk"
Jan 26 15:58:43 crc kubenswrapper[4896]: I0126 15:58:43.194792 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-5m2nv"
Jan 26 15:58:43 crc kubenswrapper[4896]: I0126 15:58:43.239021 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sc6n\" (UniqueName: \"kubernetes.io/projected/bf2859fd-5b7b-45fa-ae36-18244c995e05-kube-api-access-4sc6n\") pod \"dnsmasq-dns-698758b865-hjhkk\" (UID: \"bf2859fd-5b7b-45fa-ae36-18244c995e05\") " pod="openstack/dnsmasq-dns-698758b865-hjhkk"
Jan 26 15:58:43 crc kubenswrapper[4896]: I0126 15:58:43.372513 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-hjhkk"
Jan 26 15:58:43 crc kubenswrapper[4896]: I0126 15:58:43.374024 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/960a0ea6-5b48-4b87-9253-ca2c6d153b02-operator-scripts\") pod \"mysqld-exporter-996c-account-create-update-qfp2k\" (UID: \"960a0ea6-5b48-4b87-9253-ca2c6d153b02\") " pod="openstack/mysqld-exporter-996c-account-create-update-qfp2k"
Jan 26 15:58:43 crc kubenswrapper[4896]: I0126 15:58:43.374286 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgv76\" (UniqueName: \"kubernetes.io/projected/960a0ea6-5b48-4b87-9253-ca2c6d153b02-kube-api-access-mgv76\") pod \"mysqld-exporter-996c-account-create-update-qfp2k\" (UID: \"960a0ea6-5b48-4b87-9253-ca2c6d153b02\") " pod="openstack/mysqld-exporter-996c-account-create-update-qfp2k"
Jan 26 15:58:43 crc kubenswrapper[4896]: I0126 15:58:43.434969 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/960a0ea6-5b48-4b87-9253-ca2c6d153b02-operator-scripts\") pod \"mysqld-exporter-996c-account-create-update-qfp2k\" (UID: \"960a0ea6-5b48-4b87-9253-ca2c6d153b02\") " pod="openstack/mysqld-exporter-996c-account-create-update-qfp2k"
Jan 26 15:58:43 crc kubenswrapper[4896]: I0126 15:58:43.462723 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Jan 26 15:58:43 crc kubenswrapper[4896]: I0126 15:58:43.469865 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgv76\" (UniqueName: \"kubernetes.io/projected/960a0ea6-5b48-4b87-9253-ca2c6d153b02-kube-api-access-mgv76\") pod \"mysqld-exporter-996c-account-create-update-qfp2k\" (UID: \"960a0ea6-5b48-4b87-9253-ca2c6d153b02\") " pod="openstack/mysqld-exporter-996c-account-create-update-qfp2k"
Jan 26 15:58:43 crc kubenswrapper[4896]: I0126 15:58:43.712864 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-996c-account-create-update-qfp2k"
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.152380 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-5m2nv"]
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.352917 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.386266 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.387086 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.388074 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-bbppj"]
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.389989 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-bbppj"
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.420088 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.420531 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-dvcmq"
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.420695 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.420817 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.420917 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.421208 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.421466 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.421985 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-bbppj"]
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.519670 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/56f3d7e7-114a-4790-ac11-1d5d191bdf40-cache\") pod \"swift-storage-0\" (UID: \"56f3d7e7-114a-4790-ac11-1d5d191bdf40\") " pod="openstack/swift-storage-0"
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.519781 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ff5abeb5-5a6e-48b2-920f-fb1a55c83023-scripts\") pod \"swift-ring-rebalance-bbppj\" (UID: \"ff5abeb5-5a6e-48b2-920f-fb1a55c83023\") " pod="openstack/swift-ring-rebalance-bbppj"
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.519829 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56f3d7e7-114a-4790-ac11-1d5d191bdf40-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"56f3d7e7-114a-4790-ac11-1d5d191bdf40\") " pod="openstack/swift-storage-0"
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.519883 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ff5abeb5-5a6e-48b2-920f-fb1a55c83023-dispersionconf\") pod \"swift-ring-rebalance-bbppj\" (UID: \"ff5abeb5-5a6e-48b2-920f-fb1a55c83023\") " pod="openstack/swift-ring-rebalance-bbppj"
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.519909 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ff5abeb5-5a6e-48b2-920f-fb1a55c83023-swiftconf\") pod \"swift-ring-rebalance-bbppj\" (UID: \"ff5abeb5-5a6e-48b2-920f-fb1a55c83023\") " pod="openstack/swift-ring-rebalance-bbppj"
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.519956 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ff5abeb5-5a6e-48b2-920f-fb1a55c83023-etc-swift\") pod \"swift-ring-rebalance-bbppj\" (UID: \"ff5abeb5-5a6e-48b2-920f-fb1a55c83023\") " pod="openstack/swift-ring-rebalance-bbppj"
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.519991 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l6xz\" (UniqueName: \"kubernetes.io/projected/56f3d7e7-114a-4790-ac11-1d5d191bdf40-kube-api-access-6l6xz\") pod \"swift-storage-0\" (UID: \"56f3d7e7-114a-4790-ac11-1d5d191bdf40\") " pod="openstack/swift-storage-0"
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.520034 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/56f3d7e7-114a-4790-ac11-1d5d191bdf40-etc-swift\") pod \"swift-storage-0\" (UID: \"56f3d7e7-114a-4790-ac11-1d5d191bdf40\") " pod="openstack/swift-storage-0"
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.520082 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d159f1c2-3822-4c15-823c-bff4ce136ffe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d159f1c2-3822-4c15-823c-bff4ce136ffe\") pod \"swift-storage-0\" (UID: \"56f3d7e7-114a-4790-ac11-1d5d191bdf40\") " pod="openstack/swift-storage-0"
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.520110 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ff5abeb5-5a6e-48b2-920f-fb1a55c83023-ring-data-devices\") pod \"swift-ring-rebalance-bbppj\" (UID: \"ff5abeb5-5a6e-48b2-920f-fb1a55c83023\") " pod="openstack/swift-ring-rebalance-bbppj"
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.520196 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mrm5\" (UniqueName: \"kubernetes.io/projected/ff5abeb5-5a6e-48b2-920f-fb1a55c83023-kube-api-access-4mrm5\") pod \"swift-ring-rebalance-bbppj\" (UID: \"ff5abeb5-5a6e-48b2-920f-fb1a55c83023\") " pod="openstack/swift-ring-rebalance-bbppj"
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.520267 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff5abeb5-5a6e-48b2-920f-fb1a55c83023-combined-ca-bundle\") pod \"swift-ring-rebalance-bbppj\" (UID: \"ff5abeb5-5a6e-48b2-920f-fb1a55c83023\") " pod="openstack/swift-ring-rebalance-bbppj"
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.520334 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/56f3d7e7-114a-4790-ac11-1d5d191bdf40-lock\") pod \"swift-storage-0\" (UID: \"56f3d7e7-114a-4790-ac11-1d5d191bdf40\") " pod="openstack/swift-storage-0"
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.522225 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-hjhkk"]
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.541984 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-996c-account-create-update-qfp2k"]
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.597512 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-5m2nv" event={"ID":"96a6a95e-2fb8-4a8c-80ca-a6fe38c4120c","Type":"ContainerStarted","Data":"556f73d6bc4d7f40c5e8674670b1cf35887daa8659e7324027b1f4a38891142e"}
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.605383 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-hjhkk" event={"ID":"bf2859fd-5b7b-45fa-ae36-18244c995e05","Type":"ContainerStarted","Data":"fc191178ee55b9959a0d8ce035dd31e6e14afdd8fd4e4f4f95158825be5fe051"}
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.617407 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-996c-account-create-update-qfp2k" event={"ID":"960a0ea6-5b48-4b87-9253-ca2c6d153b02","Type":"ContainerStarted","Data":"94e713aaf2a2e766dc1d30cbace00bb2462bf9047d8a0586979752b28ea4cf7e"}
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.622565 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/56f3d7e7-114a-4790-ac11-1d5d191bdf40-etc-swift\") pod \"swift-storage-0\" (UID: \"56f3d7e7-114a-4790-ac11-1d5d191bdf40\") " pod="openstack/swift-storage-0"
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.622653 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d159f1c2-3822-4c15-823c-bff4ce136ffe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d159f1c2-3822-4c15-823c-bff4ce136ffe\") pod \"swift-storage-0\" (UID: \"56f3d7e7-114a-4790-ac11-1d5d191bdf40\") " pod="openstack/swift-storage-0"
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.622718 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ff5abeb5-5a6e-48b2-920f-fb1a55c83023-ring-data-devices\") pod \"swift-ring-rebalance-bbppj\" (UID: \"ff5abeb5-5a6e-48b2-920f-fb1a55c83023\") " pod="openstack/swift-ring-rebalance-bbppj"
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.622765 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mrm5\" (UniqueName: \"kubernetes.io/projected/ff5abeb5-5a6e-48b2-920f-fb1a55c83023-kube-api-access-4mrm5\") pod \"swift-ring-rebalance-bbppj\" (UID: \"ff5abeb5-5a6e-48b2-920f-fb1a55c83023\") " pod="openstack/swift-ring-rebalance-bbppj"
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.622815 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff5abeb5-5a6e-48b2-920f-fb1a55c83023-combined-ca-bundle\") pod \"swift-ring-rebalance-bbppj\" (UID: \"ff5abeb5-5a6e-48b2-920f-fb1a55c83023\") " pod="openstack/swift-ring-rebalance-bbppj"
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.622855 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/56f3d7e7-114a-4790-ac11-1d5d191bdf40-lock\") pod \"swift-storage-0\" (UID: \"56f3d7e7-114a-4790-ac11-1d5d191bdf40\") " pod="openstack/swift-storage-0"
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.622901 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/56f3d7e7-114a-4790-ac11-1d5d191bdf40-cache\") pod \"swift-storage-0\" (UID: \"56f3d7e7-114a-4790-ac11-1d5d191bdf40\") " pod="openstack/swift-storage-0"
Jan 26 15:58:44 crc kubenswrapper[4896]: E0126 15:58:44.622920 4896 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 26 15:58:44 crc kubenswrapper[4896]: E0126 15:58:44.622962 4896 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 26 15:58:44 crc kubenswrapper[4896]: E0126 15:58:44.623033 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/56f3d7e7-114a-4790-ac11-1d5d191bdf40-etc-swift podName:56f3d7e7-114a-4790-ac11-1d5d191bdf40 nodeName:}" failed. No retries permitted until 2026-01-26 15:58:45.123009282 +0000 UTC m=+1482.904889755 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/56f3d7e7-114a-4790-ac11-1d5d191bdf40-etc-swift") pod "swift-storage-0" (UID: "56f3d7e7-114a-4790-ac11-1d5d191bdf40") : configmap "swift-ring-files" not found
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.622942 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ff5abeb5-5a6e-48b2-920f-fb1a55c83023-scripts\") pod \"swift-ring-rebalance-bbppj\" (UID: \"ff5abeb5-5a6e-48b2-920f-fb1a55c83023\") " pod="openstack/swift-ring-rebalance-bbppj"
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.623433 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56f3d7e7-114a-4790-ac11-1d5d191bdf40-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"56f3d7e7-114a-4790-ac11-1d5d191bdf40\") " pod="openstack/swift-storage-0"
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.623550 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ff5abeb5-5a6e-48b2-920f-fb1a55c83023-dispersionconf\") pod \"swift-ring-rebalance-bbppj\" (UID: \"ff5abeb5-5a6e-48b2-920f-fb1a55c83023\") " pod="openstack/swift-ring-rebalance-bbppj"
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.623565 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ff5abeb5-5a6e-48b2-920f-fb1a55c83023-scripts\") pod \"swift-ring-rebalance-bbppj\" (UID: \"ff5abeb5-5a6e-48b2-920f-fb1a55c83023\") " pod="openstack/swift-ring-rebalance-bbppj"
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.623617 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ff5abeb5-5a6e-48b2-920f-fb1a55c83023-swiftconf\") pod \"swift-ring-rebalance-bbppj\" (UID: \"ff5abeb5-5a6e-48b2-920f-fb1a55c83023\") " pod="openstack/swift-ring-rebalance-bbppj"
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.624632 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/56f3d7e7-114a-4790-ac11-1d5d191bdf40-lock\") pod \"swift-storage-0\" (UID: \"56f3d7e7-114a-4790-ac11-1d5d191bdf40\") " pod="openstack/swift-storage-0"
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.624859 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ff5abeb5-5a6e-48b2-920f-fb1a55c83023-ring-data-devices\") pod \"swift-ring-rebalance-bbppj\" (UID: \"ff5abeb5-5a6e-48b2-920f-fb1a55c83023\") " pod="openstack/swift-ring-rebalance-bbppj"
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.624950 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/56f3d7e7-114a-4790-ac11-1d5d191bdf40-cache\") pod \"swift-storage-0\" (UID: \"56f3d7e7-114a-4790-ac11-1d5d191bdf40\") " pod="openstack/swift-storage-0"
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.625053 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ff5abeb5-5a6e-48b2-920f-fb1a55c83023-etc-swift\") pod \"swift-ring-rebalance-bbppj\" (UID: \"ff5abeb5-5a6e-48b2-920f-fb1a55c83023\") " pod="openstack/swift-ring-rebalance-bbppj"
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.625107 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l6xz\" (UniqueName: \"kubernetes.io/projected/56f3d7e7-114a-4790-ac11-1d5d191bdf40-kube-api-access-6l6xz\") pod \"swift-storage-0\" (UID: \"56f3d7e7-114a-4790-ac11-1d5d191bdf40\") " pod="openstack/swift-storage-0"
Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.625759 4896
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ff5abeb5-5a6e-48b2-920f-fb1a55c83023-etc-swift\") pod \"swift-ring-rebalance-bbppj\" (UID: \"ff5abeb5-5a6e-48b2-920f-fb1a55c83023\") " pod="openstack/swift-ring-rebalance-bbppj" Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.634310 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff5abeb5-5a6e-48b2-920f-fb1a55c83023-combined-ca-bundle\") pod \"swift-ring-rebalance-bbppj\" (UID: \"ff5abeb5-5a6e-48b2-920f-fb1a55c83023\") " pod="openstack/swift-ring-rebalance-bbppj" Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.635640 4896 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.635711 4896 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d159f1c2-3822-4c15-823c-bff4ce136ffe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d159f1c2-3822-4c15-823c-bff4ce136ffe\") pod \"swift-storage-0\" (UID: \"56f3d7e7-114a-4790-ac11-1d5d191bdf40\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c00ed1564091ba526f2be2fb0163a54ca1ec1335f75d739d142a31e015df3a32/globalmount\"" pod="openstack/swift-storage-0" Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.642600 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ff5abeb5-5a6e-48b2-920f-fb1a55c83023-dispersionconf\") pod \"swift-ring-rebalance-bbppj\" (UID: \"ff5abeb5-5a6e-48b2-920f-fb1a55c83023\") " pod="openstack/swift-ring-rebalance-bbppj" Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.642623 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/56f3d7e7-114a-4790-ac11-1d5d191bdf40-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"56f3d7e7-114a-4790-ac11-1d5d191bdf40\") " pod="openstack/swift-storage-0" Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.644252 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l6xz\" (UniqueName: \"kubernetes.io/projected/56f3d7e7-114a-4790-ac11-1d5d191bdf40-kube-api-access-6l6xz\") pod \"swift-storage-0\" (UID: \"56f3d7e7-114a-4790-ac11-1d5d191bdf40\") " pod="openstack/swift-storage-0" Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.648199 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mrm5\" (UniqueName: \"kubernetes.io/projected/ff5abeb5-5a6e-48b2-920f-fb1a55c83023-kube-api-access-4mrm5\") pod \"swift-ring-rebalance-bbppj\" (UID: \"ff5abeb5-5a6e-48b2-920f-fb1a55c83023\") " pod="openstack/swift-ring-rebalance-bbppj" Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.650384 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ff5abeb5-5a6e-48b2-920f-fb1a55c83023-swiftconf\") pod \"swift-ring-rebalance-bbppj\" (UID: \"ff5abeb5-5a6e-48b2-920f-fb1a55c83023\") " pod="openstack/swift-ring-rebalance-bbppj" Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.722725 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d159f1c2-3822-4c15-823c-bff4ce136ffe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d159f1c2-3822-4c15-823c-bff4ce136ffe\") pod \"swift-storage-0\" (UID: \"56f3d7e7-114a-4790-ac11-1d5d191bdf40\") " pod="openstack/swift-storage-0" Jan 26 15:58:44 crc kubenswrapper[4896]: I0126 15:58:44.884172 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-bbppj" Jan 26 15:58:45 crc kubenswrapper[4896]: I0126 15:58:45.140402 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/56f3d7e7-114a-4790-ac11-1d5d191bdf40-etc-swift\") pod \"swift-storage-0\" (UID: \"56f3d7e7-114a-4790-ac11-1d5d191bdf40\") " pod="openstack/swift-storage-0" Jan 26 15:58:45 crc kubenswrapper[4896]: E0126 15:58:45.140796 4896 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 15:58:45 crc kubenswrapper[4896]: E0126 15:58:45.140825 4896 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 15:58:45 crc kubenswrapper[4896]: E0126 15:58:45.140887 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/56f3d7e7-114a-4790-ac11-1d5d191bdf40-etc-swift podName:56f3d7e7-114a-4790-ac11-1d5d191bdf40 nodeName:}" failed. No retries permitted until 2026-01-26 15:58:46.140867334 +0000 UTC m=+1483.922747727 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/56f3d7e7-114a-4790-ac11-1d5d191bdf40-etc-swift") pod "swift-storage-0" (UID: "56f3d7e7-114a-4790-ac11-1d5d191bdf40") : configmap "swift-ring-files" not found Jan 26 15:58:45 crc kubenswrapper[4896]: I0126 15:58:45.394632 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-bbppj"] Jan 26 15:58:45 crc kubenswrapper[4896]: I0126 15:58:45.627673 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bbppj" event={"ID":"ff5abeb5-5a6e-48b2-920f-fb1a55c83023","Type":"ContainerStarted","Data":"9f158b9568376d915f4a8de983d7d0075bbd6bffff182c8a2b9f7f3bc9c352d9"} Jan 26 15:58:45 crc kubenswrapper[4896]: I0126 15:58:45.630174 4896 generic.go:334] "Generic (PLEG): container finished" podID="96a6a95e-2fb8-4a8c-80ca-a6fe38c4120c" containerID="acecf0179040ec537f822df704883ead9f17c823a898c9c0e776878380f580a5" exitCode=0 Jan 26 15:58:45 crc kubenswrapper[4896]: I0126 15:58:45.630365 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-5m2nv" event={"ID":"96a6a95e-2fb8-4a8c-80ca-a6fe38c4120c","Type":"ContainerDied","Data":"acecf0179040ec537f822df704883ead9f17c823a898c9c0e776878380f580a5"} Jan 26 15:58:45 crc kubenswrapper[4896]: I0126 15:58:45.633367 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"97b874a0-24c5-4c30-ae9b-33b380c5a99b","Type":"ContainerStarted","Data":"90de82414a2eb15546caa07468b646218eb727986b6104c357a97358d4558898"} Jan 26 15:58:45 crc kubenswrapper[4896]: I0126 15:58:45.636593 4896 generic.go:334] "Generic (PLEG): container finished" podID="bf2859fd-5b7b-45fa-ae36-18244c995e05" containerID="26df70834d4bdb4bd42720d5b561f1b2b3417cc8738d5fbdc22d2c29204f2e5a" exitCode=0 Jan 26 15:58:45 crc kubenswrapper[4896]: I0126 15:58:45.636665 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-698758b865-hjhkk" event={"ID":"bf2859fd-5b7b-45fa-ae36-18244c995e05","Type":"ContainerDied","Data":"26df70834d4bdb4bd42720d5b561f1b2b3417cc8738d5fbdc22d2c29204f2e5a"} Jan 26 15:58:45 crc kubenswrapper[4896]: I0126 15:58:45.641355 4896 generic.go:334] "Generic (PLEG): container finished" podID="960a0ea6-5b48-4b87-9253-ca2c6d153b02" containerID="bbd4d0c3c4cf96143c5770748dfd3d05a9e82985a5de9a599ffbeb532d588a30" exitCode=0 Jan 26 15:58:45 crc kubenswrapper[4896]: I0126 15:58:45.641413 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-996c-account-create-update-qfp2k" event={"ID":"960a0ea6-5b48-4b87-9253-ca2c6d153b02","Type":"ContainerDied","Data":"bbd4d0c3c4cf96143c5770748dfd3d05a9e82985a5de9a599ffbeb532d588a30"} Jan 26 15:58:45 crc kubenswrapper[4896]: I0126 15:58:45.721979 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=15.386525727 podStartE2EDuration="1m7.721956537s" podCreationTimestamp="2026-01-26 15:57:38 +0000 UTC" firstStartedPulling="2026-01-26 15:57:52.247825229 +0000 UTC m=+1430.029705632" lastFinishedPulling="2026-01-26 15:58:44.583256049 +0000 UTC m=+1482.365136442" observedRunningTime="2026-01-26 15:58:45.719325113 +0000 UTC m=+1483.501205506" watchObservedRunningTime="2026-01-26 15:58:45.721956537 +0000 UTC m=+1483.503836930" Jan 26 15:58:46 crc kubenswrapper[4896]: I0126 15:58:46.148365 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-vzv62"] Jan 26 15:58:46 crc kubenswrapper[4896]: I0126 15:58:46.150362 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-vzv62" Jan 26 15:58:46 crc kubenswrapper[4896]: I0126 15:58:46.165889 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/56f3d7e7-114a-4790-ac11-1d5d191bdf40-etc-swift\") pod \"swift-storage-0\" (UID: \"56f3d7e7-114a-4790-ac11-1d5d191bdf40\") " pod="openstack/swift-storage-0" Jan 26 15:58:46 crc kubenswrapper[4896]: E0126 15:58:46.166150 4896 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 15:58:46 crc kubenswrapper[4896]: E0126 15:58:46.166195 4896 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 15:58:46 crc kubenswrapper[4896]: E0126 15:58:46.166278 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/56f3d7e7-114a-4790-ac11-1d5d191bdf40-etc-swift podName:56f3d7e7-114a-4790-ac11-1d5d191bdf40 nodeName:}" failed. No retries permitted until 2026-01-26 15:58:48.166253167 +0000 UTC m=+1485.948133560 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/56f3d7e7-114a-4790-ac11-1d5d191bdf40-etc-swift") pod "swift-storage-0" (UID: "56f3d7e7-114a-4790-ac11-1d5d191bdf40") : configmap "swift-ring-files" not found Jan 26 15:58:46 crc kubenswrapper[4896]: I0126 15:58:46.184350 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-3dea-account-create-update-8rw6p"] Jan 26 15:58:46 crc kubenswrapper[4896]: I0126 15:58:46.186118 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-3dea-account-create-update-8rw6p" Jan 26 15:58:46 crc kubenswrapper[4896]: I0126 15:58:46.200490 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 26 15:58:46 crc kubenswrapper[4896]: I0126 15:58:46.208555 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-vzv62"] Jan 26 15:58:46 crc kubenswrapper[4896]: I0126 15:58:46.223701 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-3dea-account-create-update-8rw6p"] Jan 26 15:58:46 crc kubenswrapper[4896]: I0126 15:58:46.269095 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49d9681a-9fc0-4e0e-9d65-637d402f4c30-operator-scripts\") pod \"glance-db-create-vzv62\" (UID: \"49d9681a-9fc0-4e0e-9d65-637d402f4c30\") " pod="openstack/glance-db-create-vzv62" Jan 26 15:58:46 crc kubenswrapper[4896]: I0126 15:58:46.269168 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcsfr\" (UniqueName: \"kubernetes.io/projected/23a5ea00-af46-46fb-a058-05504ad72b95-kube-api-access-hcsfr\") pod \"glance-3dea-account-create-update-8rw6p\" (UID: \"23a5ea00-af46-46fb-a058-05504ad72b95\") " pod="openstack/glance-3dea-account-create-update-8rw6p" Jan 26 15:58:46 crc kubenswrapper[4896]: I0126 15:58:46.269216 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kgpq\" (UniqueName: \"kubernetes.io/projected/49d9681a-9fc0-4e0e-9d65-637d402f4c30-kube-api-access-7kgpq\") pod \"glance-db-create-vzv62\" (UID: \"49d9681a-9fc0-4e0e-9d65-637d402f4c30\") " pod="openstack/glance-db-create-vzv62" Jan 26 15:58:46 crc kubenswrapper[4896]: I0126 15:58:46.269629 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23a5ea00-af46-46fb-a058-05504ad72b95-operator-scripts\") pod \"glance-3dea-account-create-update-8rw6p\" (UID: \"23a5ea00-af46-46fb-a058-05504ad72b95\") " pod="openstack/glance-3dea-account-create-update-8rw6p" Jan 26 15:58:46 crc kubenswrapper[4896]: I0126 15:58:46.374121 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49d9681a-9fc0-4e0e-9d65-637d402f4c30-operator-scripts\") pod \"glance-db-create-vzv62\" (UID: \"49d9681a-9fc0-4e0e-9d65-637d402f4c30\") " pod="openstack/glance-db-create-vzv62" Jan 26 15:58:46 crc kubenswrapper[4896]: I0126 15:58:46.374330 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcsfr\" (UniqueName: \"kubernetes.io/projected/23a5ea00-af46-46fb-a058-05504ad72b95-kube-api-access-hcsfr\") pod \"glance-3dea-account-create-update-8rw6p\" (UID: \"23a5ea00-af46-46fb-a058-05504ad72b95\") " pod="openstack/glance-3dea-account-create-update-8rw6p" Jan 26 15:58:46 crc kubenswrapper[4896]: I0126 15:58:46.374433 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kgpq\" (UniqueName: \"kubernetes.io/projected/49d9681a-9fc0-4e0e-9d65-637d402f4c30-kube-api-access-7kgpq\") pod \"glance-db-create-vzv62\" (UID: \"49d9681a-9fc0-4e0e-9d65-637d402f4c30\") " pod="openstack/glance-db-create-vzv62" Jan 26 15:58:46 crc kubenswrapper[4896]: I0126 15:58:46.374675 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23a5ea00-af46-46fb-a058-05504ad72b95-operator-scripts\") pod \"glance-3dea-account-create-update-8rw6p\" (UID: \"23a5ea00-af46-46fb-a058-05504ad72b95\") " pod="openstack/glance-3dea-account-create-update-8rw6p" Jan 26 15:58:46 crc kubenswrapper[4896]: I0126 15:58:46.375190 4896 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49d9681a-9fc0-4e0e-9d65-637d402f4c30-operator-scripts\") pod \"glance-db-create-vzv62\" (UID: \"49d9681a-9fc0-4e0e-9d65-637d402f4c30\") " pod="openstack/glance-db-create-vzv62" Jan 26 15:58:46 crc kubenswrapper[4896]: I0126 15:58:46.376199 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23a5ea00-af46-46fb-a058-05504ad72b95-operator-scripts\") pod \"glance-3dea-account-create-update-8rw6p\" (UID: \"23a5ea00-af46-46fb-a058-05504ad72b95\") " pod="openstack/glance-3dea-account-create-update-8rw6p" Jan 26 15:58:46 crc kubenswrapper[4896]: I0126 15:58:46.417435 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcsfr\" (UniqueName: \"kubernetes.io/projected/23a5ea00-af46-46fb-a058-05504ad72b95-kube-api-access-hcsfr\") pod \"glance-3dea-account-create-update-8rw6p\" (UID: \"23a5ea00-af46-46fb-a058-05504ad72b95\") " pod="openstack/glance-3dea-account-create-update-8rw6p" Jan 26 15:58:46 crc kubenswrapper[4896]: I0126 15:58:46.432661 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kgpq\" (UniqueName: \"kubernetes.io/projected/49d9681a-9fc0-4e0e-9d65-637d402f4c30-kube-api-access-7kgpq\") pod \"glance-db-create-vzv62\" (UID: \"49d9681a-9fc0-4e0e-9d65-637d402f4c30\") " pod="openstack/glance-db-create-vzv62" Jan 26 15:58:46 crc kubenswrapper[4896]: I0126 15:58:46.484506 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-vzv62" Jan 26 15:58:46 crc kubenswrapper[4896]: I0126 15:58:46.518334 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-3dea-account-create-update-8rw6p" Jan 26 15:58:46 crc kubenswrapper[4896]: I0126 15:58:46.624631 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 26 15:58:46 crc kubenswrapper[4896]: I0126 15:58:46.673405 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-hjhkk" event={"ID":"bf2859fd-5b7b-45fa-ae36-18244c995e05","Type":"ContainerStarted","Data":"0587f3f932f5e6790fa44a5fe0230b12541270579c288c66dcb316cf0a43504e"} Jan 26 15:58:46 crc kubenswrapper[4896]: I0126 15:58:46.674110 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-hjhkk" Jan 26 15:58:46 crc kubenswrapper[4896]: I0126 15:58:46.709941 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-hjhkk" podStartSLOduration=4.709920745 podStartE2EDuration="4.709920745s" podCreationTimestamp="2026-01-26 15:58:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:58:46.706810839 +0000 UTC m=+1484.488691232" watchObservedRunningTime="2026-01-26 15:58:46.709920745 +0000 UTC m=+1484.491801138" Jan 26 15:58:47 crc kubenswrapper[4896]: I0126 15:58:47.242045 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-vzv62"] Jan 26 15:58:47 crc kubenswrapper[4896]: W0126 15:58:47.270675 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49d9681a_9fc0_4e0e_9d65_637d402f4c30.slice/crio-d808c60f429a90daa9a15a8e41aa8cc41408284406305e17b636c602dbb35780 WatchSource:0}: Error finding container d808c60f429a90daa9a15a8e41aa8cc41408284406305e17b636c602dbb35780: Status 404 returned error can't find the container with id 
d808c60f429a90daa9a15a8e41aa8cc41408284406305e17b636c602dbb35780 Jan 26 15:58:47 crc kubenswrapper[4896]: I0126 15:58:47.380230 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-3dea-account-create-update-8rw6p"] Jan 26 15:58:47 crc kubenswrapper[4896]: I0126 15:58:47.598702 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-5m2nv" Jan 26 15:58:47 crc kubenswrapper[4896]: I0126 15:58:47.609345 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-996c-account-create-update-qfp2k" Jan 26 15:58:47 crc kubenswrapper[4896]: I0126 15:58:47.682318 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3dea-account-create-update-8rw6p" event={"ID":"23a5ea00-af46-46fb-a058-05504ad72b95","Type":"ContainerStarted","Data":"130437caa5406c8e049eb70e800db81d0323f014036ad23e0e6e7d8b2c2c6d9a"} Jan 26 15:58:47 crc kubenswrapper[4896]: I0126 15:58:47.685234 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vzv62" event={"ID":"49d9681a-9fc0-4e0e-9d65-637d402f4c30","Type":"ContainerStarted","Data":"2b2a1864daa92e7d43f92e51a3e9bdfd27d6b26439f5420a9fbb9c6506ce0257"} Jan 26 15:58:47 crc kubenswrapper[4896]: I0126 15:58:47.685283 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vzv62" event={"ID":"49d9681a-9fc0-4e0e-9d65-637d402f4c30","Type":"ContainerStarted","Data":"d808c60f429a90daa9a15a8e41aa8cc41408284406305e17b636c602dbb35780"} Jan 26 15:58:47 crc kubenswrapper[4896]: I0126 15:58:47.687548 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-5m2nv" event={"ID":"96a6a95e-2fb8-4a8c-80ca-a6fe38c4120c","Type":"ContainerDied","Data":"556f73d6bc4d7f40c5e8674670b1cf35887daa8659e7324027b1f4a38891142e"} Jan 26 15:58:47 crc kubenswrapper[4896]: I0126 15:58:47.687640 4896 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="556f73d6bc4d7f40c5e8674670b1cf35887daa8659e7324027b1f4a38891142e" Jan 26 15:58:47 crc kubenswrapper[4896]: I0126 15:58:47.687571 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-5m2nv" Jan 26 15:58:47 crc kubenswrapper[4896]: I0126 15:58:47.692893 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-996c-account-create-update-qfp2k" event={"ID":"960a0ea6-5b48-4b87-9253-ca2c6d153b02","Type":"ContainerDied","Data":"94e713aaf2a2e766dc1d30cbace00bb2462bf9047d8a0586979752b28ea4cf7e"} Jan 26 15:58:47 crc kubenswrapper[4896]: I0126 15:58:47.692920 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-996c-account-create-update-qfp2k" Jan 26 15:58:47 crc kubenswrapper[4896]: I0126 15:58:47.692947 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94e713aaf2a2e766dc1d30cbace00bb2462bf9047d8a0586979752b28ea4cf7e" Jan 26 15:58:47 crc kubenswrapper[4896]: I0126 15:58:47.711974 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgv76\" (UniqueName: \"kubernetes.io/projected/960a0ea6-5b48-4b87-9253-ca2c6d153b02-kube-api-access-mgv76\") pod \"960a0ea6-5b48-4b87-9253-ca2c6d153b02\" (UID: \"960a0ea6-5b48-4b87-9253-ca2c6d153b02\") " Jan 26 15:58:47 crc kubenswrapper[4896]: I0126 15:58:47.712179 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/960a0ea6-5b48-4b87-9253-ca2c6d153b02-operator-scripts\") pod \"960a0ea6-5b48-4b87-9253-ca2c6d153b02\" (UID: \"960a0ea6-5b48-4b87-9253-ca2c6d153b02\") " Jan 26 15:58:47 crc kubenswrapper[4896]: I0126 15:58:47.712346 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-vrt4z\" (UniqueName: \"kubernetes.io/projected/96a6a95e-2fb8-4a8c-80ca-a6fe38c4120c-kube-api-access-vrt4z\") pod \"96a6a95e-2fb8-4a8c-80ca-a6fe38c4120c\" (UID: \"96a6a95e-2fb8-4a8c-80ca-a6fe38c4120c\") " Jan 26 15:58:47 crc kubenswrapper[4896]: I0126 15:58:47.712402 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96a6a95e-2fb8-4a8c-80ca-a6fe38c4120c-operator-scripts\") pod \"96a6a95e-2fb8-4a8c-80ca-a6fe38c4120c\" (UID: \"96a6a95e-2fb8-4a8c-80ca-a6fe38c4120c\") " Jan 26 15:58:47 crc kubenswrapper[4896]: I0126 15:58:47.713903 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/960a0ea6-5b48-4b87-9253-ca2c6d153b02-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "960a0ea6-5b48-4b87-9253-ca2c6d153b02" (UID: "960a0ea6-5b48-4b87-9253-ca2c6d153b02"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:58:47 crc kubenswrapper[4896]: I0126 15:58:47.726598 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96a6a95e-2fb8-4a8c-80ca-a6fe38c4120c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "96a6a95e-2fb8-4a8c-80ca-a6fe38c4120c" (UID: "96a6a95e-2fb8-4a8c-80ca-a6fe38c4120c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:58:47 crc kubenswrapper[4896]: I0126 15:58:47.746875 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96a6a95e-2fb8-4a8c-80ca-a6fe38c4120c-kube-api-access-vrt4z" (OuterVolumeSpecName: "kube-api-access-vrt4z") pod "96a6a95e-2fb8-4a8c-80ca-a6fe38c4120c" (UID: "96a6a95e-2fb8-4a8c-80ca-a6fe38c4120c"). InnerVolumeSpecName "kube-api-access-vrt4z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:58:47 crc kubenswrapper[4896]: I0126 15:58:47.799729 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/960a0ea6-5b48-4b87-9253-ca2c6d153b02-kube-api-access-mgv76" (OuterVolumeSpecName: "kube-api-access-mgv76") pod "960a0ea6-5b48-4b87-9253-ca2c6d153b02" (UID: "960a0ea6-5b48-4b87-9253-ca2c6d153b02"). InnerVolumeSpecName "kube-api-access-mgv76". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:58:47 crc kubenswrapper[4896]: I0126 15:58:47.821734 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgv76\" (UniqueName: \"kubernetes.io/projected/960a0ea6-5b48-4b87-9253-ca2c6d153b02-kube-api-access-mgv76\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:47 crc kubenswrapper[4896]: I0126 15:58:47.821774 4896 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/960a0ea6-5b48-4b87-9253-ca2c6d153b02-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:47 crc kubenswrapper[4896]: I0126 15:58:47.821785 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrt4z\" (UniqueName: \"kubernetes.io/projected/96a6a95e-2fb8-4a8c-80ca-a6fe38c4120c-kube-api-access-vrt4z\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:47 crc kubenswrapper[4896]: I0126 15:58:47.821794 4896 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96a6a95e-2fb8-4a8c-80ca-a6fe38c4120c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:47 crc kubenswrapper[4896]: I0126 15:58:47.913601 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-sbkmx"] Jan 26 15:58:47 crc kubenswrapper[4896]: E0126 15:58:47.914255 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="960a0ea6-5b48-4b87-9253-ca2c6d153b02" 
containerName="mariadb-account-create-update" Jan 26 15:58:47 crc kubenswrapper[4896]: I0126 15:58:47.914286 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="960a0ea6-5b48-4b87-9253-ca2c6d153b02" containerName="mariadb-account-create-update" Jan 26 15:58:47 crc kubenswrapper[4896]: E0126 15:58:47.914331 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96a6a95e-2fb8-4a8c-80ca-a6fe38c4120c" containerName="mariadb-database-create" Jan 26 15:58:47 crc kubenswrapper[4896]: I0126 15:58:47.914339 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="96a6a95e-2fb8-4a8c-80ca-a6fe38c4120c" containerName="mariadb-database-create" Jan 26 15:58:47 crc kubenswrapper[4896]: I0126 15:58:47.914630 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="960a0ea6-5b48-4b87-9253-ca2c6d153b02" containerName="mariadb-account-create-update" Jan 26 15:58:47 crc kubenswrapper[4896]: I0126 15:58:47.914673 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="96a6a95e-2fb8-4a8c-80ca-a6fe38c4120c" containerName="mariadb-database-create" Jan 26 15:58:47 crc kubenswrapper[4896]: I0126 15:58:47.915668 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-sbkmx" Jan 26 15:58:47 crc kubenswrapper[4896]: I0126 15:58:47.917996 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 26 15:58:47 crc kubenswrapper[4896]: I0126 15:58:47.938067 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-sbkmx"] Jan 26 15:58:48 crc kubenswrapper[4896]: I0126 15:58:48.027123 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg4xx\" (UniqueName: \"kubernetes.io/projected/706a0a95-3212-477f-98de-61846e43ef58-kube-api-access-tg4xx\") pod \"root-account-create-update-sbkmx\" (UID: \"706a0a95-3212-477f-98de-61846e43ef58\") " pod="openstack/root-account-create-update-sbkmx" Jan 26 15:58:48 crc kubenswrapper[4896]: I0126 15:58:48.027425 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/706a0a95-3212-477f-98de-61846e43ef58-operator-scripts\") pod \"root-account-create-update-sbkmx\" (UID: \"706a0a95-3212-477f-98de-61846e43ef58\") " pod="openstack/root-account-create-update-sbkmx" Jan 26 15:58:48 crc kubenswrapper[4896]: I0126 15:58:48.131552 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/706a0a95-3212-477f-98de-61846e43ef58-operator-scripts\") pod \"root-account-create-update-sbkmx\" (UID: \"706a0a95-3212-477f-98de-61846e43ef58\") " pod="openstack/root-account-create-update-sbkmx" Jan 26 15:58:48 crc kubenswrapper[4896]: I0126 15:58:48.131948 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg4xx\" (UniqueName: \"kubernetes.io/projected/706a0a95-3212-477f-98de-61846e43ef58-kube-api-access-tg4xx\") pod \"root-account-create-update-sbkmx\" (UID: 
\"706a0a95-3212-477f-98de-61846e43ef58\") " pod="openstack/root-account-create-update-sbkmx" Jan 26 15:58:48 crc kubenswrapper[4896]: I0126 15:58:48.133039 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/706a0a95-3212-477f-98de-61846e43ef58-operator-scripts\") pod \"root-account-create-update-sbkmx\" (UID: \"706a0a95-3212-477f-98de-61846e43ef58\") " pod="openstack/root-account-create-update-sbkmx" Jan 26 15:58:48 crc kubenswrapper[4896]: I0126 15:58:48.149335 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg4xx\" (UniqueName: \"kubernetes.io/projected/706a0a95-3212-477f-98de-61846e43ef58-kube-api-access-tg4xx\") pod \"root-account-create-update-sbkmx\" (UID: \"706a0a95-3212-477f-98de-61846e43ef58\") " pod="openstack/root-account-create-update-sbkmx" Jan 26 15:58:48 crc kubenswrapper[4896]: I0126 15:58:48.234010 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/56f3d7e7-114a-4790-ac11-1d5d191bdf40-etc-swift\") pod \"swift-storage-0\" (UID: \"56f3d7e7-114a-4790-ac11-1d5d191bdf40\") " pod="openstack/swift-storage-0" Jan 26 15:58:48 crc kubenswrapper[4896]: E0126 15:58:48.234492 4896 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 15:58:48 crc kubenswrapper[4896]: E0126 15:58:48.234522 4896 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 15:58:48 crc kubenswrapper[4896]: E0126 15:58:48.234598 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/56f3d7e7-114a-4790-ac11-1d5d191bdf40-etc-swift podName:56f3d7e7-114a-4790-ac11-1d5d191bdf40 nodeName:}" failed. 
No retries permitted until 2026-01-26 15:58:52.2345646 +0000 UTC m=+1490.016444993 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/56f3d7e7-114a-4790-ac11-1d5d191bdf40-etc-swift") pod "swift-storage-0" (UID: "56f3d7e7-114a-4790-ac11-1d5d191bdf40") : configmap "swift-ring-files" not found Jan 26 15:58:48 crc kubenswrapper[4896]: I0126 15:58:48.249033 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-sbkmx" Jan 26 15:58:48 crc kubenswrapper[4896]: I0126 15:58:48.728904 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-vzv62" podStartSLOduration=2.7288848420000003 podStartE2EDuration="2.728884842s" podCreationTimestamp="2026-01-26 15:58:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:58:48.721685866 +0000 UTC m=+1486.503566269" watchObservedRunningTime="2026-01-26 15:58:48.728884842 +0000 UTC m=+1486.510765225" Jan 26 15:58:48 crc kubenswrapper[4896]: I0126 15:58:48.813288 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:58:48 crc kubenswrapper[4896]: I0126 15:58:48.813368 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:58:48 crc kubenswrapper[4896]: I0126 15:58:48.813427 4896 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" Jan 26 15:58:48 crc kubenswrapper[4896]: I0126 15:58:48.814501 4896 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"31b4b66030421161e61a26b7176eb82897b0c0be510c967b21910fd56f2d129b"} pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 15:58:48 crc kubenswrapper[4896]: I0126 15:58:48.814601 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" containerID="cri-o://31b4b66030421161e61a26b7176eb82897b0c0be510c967b21910fd56f2d129b" gracePeriod=600 Jan 26 15:58:49 crc kubenswrapper[4896]: I0126 15:58:49.664776 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 26 15:58:49 crc kubenswrapper[4896]: I0126 15:58:49.665290 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 26 15:58:49 crc kubenswrapper[4896]: I0126 15:58:49.716132 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3dea-account-create-update-8rw6p" event={"ID":"23a5ea00-af46-46fb-a058-05504ad72b95","Type":"ContainerStarted","Data":"8724dcfe44d9456fb09eda6a5c4ef5cfd8f08fcdf21528937126bd9990dacd1d"} Jan 26 15:58:49 crc kubenswrapper[4896]: I0126 15:58:49.717234 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 26 15:58:49 crc kubenswrapper[4896]: I0126 15:58:49.722002 4896 generic.go:334] "Generic (PLEG): container finished" podID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerID="31b4b66030421161e61a26b7176eb82897b0c0be510c967b21910fd56f2d129b" exitCode=0 
Jan 26 15:58:49 crc kubenswrapper[4896]: I0126 15:58:49.722064 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerDied","Data":"31b4b66030421161e61a26b7176eb82897b0c0be510c967b21910fd56f2d129b"} Jan 26 15:58:49 crc kubenswrapper[4896]: I0126 15:58:49.722105 4896 scope.go:117] "RemoveContainer" containerID="6d22d796865491856408b3b04b5cee06c82fc1e5c08ee0eac7e9beca91027529" Jan 26 15:58:49 crc kubenswrapper[4896]: I0126 15:58:49.749148 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-3dea-account-create-update-8rw6p" podStartSLOduration=3.74912497 podStartE2EDuration="3.74912497s" podCreationTimestamp="2026-01-26 15:58:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:58:49.73175944 +0000 UTC m=+1487.513639843" watchObservedRunningTime="2026-01-26 15:58:49.74912497 +0000 UTC m=+1487.531005363" Jan 26 15:58:49 crc kubenswrapper[4896]: I0126 15:58:49.918661 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 26 15:58:49 crc kubenswrapper[4896]: I0126 15:58:49.924399 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 26 15:58:49 crc kubenswrapper[4896]: I0126 15:58:49.931179 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 26 15:58:49 crc kubenswrapper[4896]: I0126 15:58:49.931778 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-7lrrm" Jan 26 15:58:49 crc kubenswrapper[4896]: I0126 15:58:49.931963 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 26 15:58:49 crc kubenswrapper[4896]: I0126 15:58:49.932632 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 26 15:58:49 crc kubenswrapper[4896]: I0126 15:58:49.958794 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 26 15:58:49 crc kubenswrapper[4896]: I0126 15:58:49.979173 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7600dca1-3435-4fcf-aab5-54c683d3ac33-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"7600dca1-3435-4fcf-aab5-54c683d3ac33\") " pod="openstack/ovn-northd-0" Jan 26 15:58:49 crc kubenswrapper[4896]: I0126 15:58:49.979305 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7600dca1-3435-4fcf-aab5-54c683d3ac33-scripts\") pod \"ovn-northd-0\" (UID: \"7600dca1-3435-4fcf-aab5-54c683d3ac33\") " pod="openstack/ovn-northd-0" Jan 26 15:58:49 crc kubenswrapper[4896]: I0126 15:58:49.979337 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg69w\" (UniqueName: \"kubernetes.io/projected/7600dca1-3435-4fcf-aab5-54c683d3ac33-kube-api-access-kg69w\") pod \"ovn-northd-0\" (UID: \"7600dca1-3435-4fcf-aab5-54c683d3ac33\") " 
pod="openstack/ovn-northd-0" Jan 26 15:58:49 crc kubenswrapper[4896]: I0126 15:58:49.979743 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7600dca1-3435-4fcf-aab5-54c683d3ac33-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7600dca1-3435-4fcf-aab5-54c683d3ac33\") " pod="openstack/ovn-northd-0" Jan 26 15:58:49 crc kubenswrapper[4896]: I0126 15:58:49.979931 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7600dca1-3435-4fcf-aab5-54c683d3ac33-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"7600dca1-3435-4fcf-aab5-54c683d3ac33\") " pod="openstack/ovn-northd-0" Jan 26 15:58:49 crc kubenswrapper[4896]: I0126 15:58:49.980305 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7600dca1-3435-4fcf-aab5-54c683d3ac33-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7600dca1-3435-4fcf-aab5-54c683d3ac33\") " pod="openstack/ovn-northd-0" Jan 26 15:58:49 crc kubenswrapper[4896]: I0126 15:58:49.980374 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7600dca1-3435-4fcf-aab5-54c683d3ac33-config\") pod \"ovn-northd-0\" (UID: \"7600dca1-3435-4fcf-aab5-54c683d3ac33\") " pod="openstack/ovn-northd-0" Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.082186 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7600dca1-3435-4fcf-aab5-54c683d3ac33-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"7600dca1-3435-4fcf-aab5-54c683d3ac33\") " pod="openstack/ovn-northd-0" Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.082285 4896 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7600dca1-3435-4fcf-aab5-54c683d3ac33-scripts\") pod \"ovn-northd-0\" (UID: \"7600dca1-3435-4fcf-aab5-54c683d3ac33\") " pod="openstack/ovn-northd-0" Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.082305 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg69w\" (UniqueName: \"kubernetes.io/projected/7600dca1-3435-4fcf-aab5-54c683d3ac33-kube-api-access-kg69w\") pod \"ovn-northd-0\" (UID: \"7600dca1-3435-4fcf-aab5-54c683d3ac33\") " pod="openstack/ovn-northd-0" Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.082349 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7600dca1-3435-4fcf-aab5-54c683d3ac33-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7600dca1-3435-4fcf-aab5-54c683d3ac33\") " pod="openstack/ovn-northd-0" Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.082413 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7600dca1-3435-4fcf-aab5-54c683d3ac33-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"7600dca1-3435-4fcf-aab5-54c683d3ac33\") " pod="openstack/ovn-northd-0" Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.082530 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7600dca1-3435-4fcf-aab5-54c683d3ac33-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7600dca1-3435-4fcf-aab5-54c683d3ac33\") " pod="openstack/ovn-northd-0" Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.082562 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7600dca1-3435-4fcf-aab5-54c683d3ac33-config\") pod \"ovn-northd-0\" (UID: 
\"7600dca1-3435-4fcf-aab5-54c683d3ac33\") " pod="openstack/ovn-northd-0" Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.083255 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7600dca1-3435-4fcf-aab5-54c683d3ac33-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7600dca1-3435-4fcf-aab5-54c683d3ac33\") " pod="openstack/ovn-northd-0" Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.085120 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7600dca1-3435-4fcf-aab5-54c683d3ac33-config\") pod \"ovn-northd-0\" (UID: \"7600dca1-3435-4fcf-aab5-54c683d3ac33\") " pod="openstack/ovn-northd-0" Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.085379 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7600dca1-3435-4fcf-aab5-54c683d3ac33-scripts\") pod \"ovn-northd-0\" (UID: \"7600dca1-3435-4fcf-aab5-54c683d3ac33\") " pod="openstack/ovn-northd-0" Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.088651 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7600dca1-3435-4fcf-aab5-54c683d3ac33-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"7600dca1-3435-4fcf-aab5-54c683d3ac33\") " pod="openstack/ovn-northd-0" Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.092198 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7600dca1-3435-4fcf-aab5-54c683d3ac33-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"7600dca1-3435-4fcf-aab5-54c683d3ac33\") " pod="openstack/ovn-northd-0" Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.100430 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7600dca1-3435-4fcf-aab5-54c683d3ac33-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7600dca1-3435-4fcf-aab5-54c683d3ac33\") " pod="openstack/ovn-northd-0" Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.102086 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg69w\" (UniqueName: \"kubernetes.io/projected/7600dca1-3435-4fcf-aab5-54c683d3ac33-kube-api-access-kg69w\") pod \"ovn-northd-0\" (UID: \"7600dca1-3435-4fcf-aab5-54c683d3ac33\") " pod="openstack/ovn-northd-0" Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.265075 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.370610 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-242n2"] Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.372493 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-242n2" Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.383111 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-242n2"] Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.492305 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45b87b14-02cf-4779-b5ad-964c37378a78-operator-scripts\") pod \"keystone-db-create-242n2\" (UID: \"45b87b14-02cf-4779-b5ad-964c37378a78\") " pod="openstack/keystone-db-create-242n2" Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.492523 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb68h\" (UniqueName: \"kubernetes.io/projected/45b87b14-02cf-4779-b5ad-964c37378a78-kube-api-access-hb68h\") pod \"keystone-db-create-242n2\" (UID: \"45b87b14-02cf-4779-b5ad-964c37378a78\") " 
pod="openstack/keystone-db-create-242n2" Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.507339 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-b50a-account-create-update-dlv24"] Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.508980 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b50a-account-create-update-dlv24" Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.510962 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.518535 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b50a-account-create-update-dlv24"] Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.594521 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp44l\" (UniqueName: \"kubernetes.io/projected/d0b4fb8a-de2a-43d2-b233-6ff17febcd58-kube-api-access-vp44l\") pod \"keystone-b50a-account-create-update-dlv24\" (UID: \"d0b4fb8a-de2a-43d2-b233-6ff17febcd58\") " pod="openstack/keystone-b50a-account-create-update-dlv24" Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.594663 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45b87b14-02cf-4779-b5ad-964c37378a78-operator-scripts\") pod \"keystone-db-create-242n2\" (UID: \"45b87b14-02cf-4779-b5ad-964c37378a78\") " pod="openstack/keystone-db-create-242n2" Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.594750 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb68h\" (UniqueName: \"kubernetes.io/projected/45b87b14-02cf-4779-b5ad-964c37378a78-kube-api-access-hb68h\") pod \"keystone-db-create-242n2\" (UID: \"45b87b14-02cf-4779-b5ad-964c37378a78\") " pod="openstack/keystone-db-create-242n2" Jan 26 15:58:50 crc 
kubenswrapper[4896]: I0126 15:58:50.594782 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0b4fb8a-de2a-43d2-b233-6ff17febcd58-operator-scripts\") pod \"keystone-b50a-account-create-update-dlv24\" (UID: \"d0b4fb8a-de2a-43d2-b233-6ff17febcd58\") " pod="openstack/keystone-b50a-account-create-update-dlv24" Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.595510 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45b87b14-02cf-4779-b5ad-964c37378a78-operator-scripts\") pod \"keystone-db-create-242n2\" (UID: \"45b87b14-02cf-4779-b5ad-964c37378a78\") " pod="openstack/keystone-db-create-242n2" Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.627983 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb68h\" (UniqueName: \"kubernetes.io/projected/45b87b14-02cf-4779-b5ad-964c37378a78-kube-api-access-hb68h\") pod \"keystone-db-create-242n2\" (UID: \"45b87b14-02cf-4779-b5ad-964c37378a78\") " pod="openstack/keystone-db-create-242n2" Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.698458 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0b4fb8a-de2a-43d2-b233-6ff17febcd58-operator-scripts\") pod \"keystone-b50a-account-create-update-dlv24\" (UID: \"d0b4fb8a-de2a-43d2-b233-6ff17febcd58\") " pod="openstack/keystone-b50a-account-create-update-dlv24" Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.698685 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp44l\" (UniqueName: \"kubernetes.io/projected/d0b4fb8a-de2a-43d2-b233-6ff17febcd58-kube-api-access-vp44l\") pod \"keystone-b50a-account-create-update-dlv24\" (UID: \"d0b4fb8a-de2a-43d2-b233-6ff17febcd58\") " 
pod="openstack/keystone-b50a-account-create-update-dlv24" Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.699881 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0b4fb8a-de2a-43d2-b233-6ff17febcd58-operator-scripts\") pod \"keystone-b50a-account-create-update-dlv24\" (UID: \"d0b4fb8a-de2a-43d2-b233-6ff17febcd58\") " pod="openstack/keystone-b50a-account-create-update-dlv24" Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.700607 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-242n2" Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.723349 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-nmtbk"] Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.726069 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-nmtbk" Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.742844 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp44l\" (UniqueName: \"kubernetes.io/projected/d0b4fb8a-de2a-43d2-b233-6ff17febcd58-kube-api-access-vp44l\") pod \"keystone-b50a-account-create-update-dlv24\" (UID: \"d0b4fb8a-de2a-43d2-b233-6ff17febcd58\") " pod="openstack/keystone-b50a-account-create-update-dlv24" Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.757550 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-nmtbk"] Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.769130 4896 generic.go:334] "Generic (PLEG): container finished" podID="23a5ea00-af46-46fb-a058-05504ad72b95" containerID="8724dcfe44d9456fb09eda6a5c4ef5cfd8f08fcdf21528937126bd9990dacd1d" exitCode=0 Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.775955 4896 generic.go:334] "Generic (PLEG): container finished" podID="49d9681a-9fc0-4e0e-9d65-637d402f4c30" 
containerID="2b2a1864daa92e7d43f92e51a3e9bdfd27d6b26439f5420a9fbb9c6506ce0257" exitCode=0 Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.779840 4896 generic.go:334] "Generic (PLEG): container finished" podID="2a70b903-4311-4bcf-833a-d9fdd2ab5d24" containerID="cba41fd06ce57f906fe4acca16da8d2d6dde54a1f486dd87a9a6a8b54bd1526b" exitCode=0 Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.805044 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67kbp\" (UniqueName: \"kubernetes.io/projected/972e84d7-224b-4b26-b685-9f822ba2d13e-kube-api-access-67kbp\") pod \"placement-db-create-nmtbk\" (UID: \"972e84d7-224b-4b26-b685-9f822ba2d13e\") " pod="openstack/placement-db-create-nmtbk" Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.805119 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/972e84d7-224b-4b26-b685-9f822ba2d13e-operator-scripts\") pod \"placement-db-create-nmtbk\" (UID: \"972e84d7-224b-4b26-b685-9f822ba2d13e\") " pod="openstack/placement-db-create-nmtbk" Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.818541 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3dea-account-create-update-8rw6p" event={"ID":"23a5ea00-af46-46fb-a058-05504ad72b95","Type":"ContainerDied","Data":"8724dcfe44d9456fb09eda6a5c4ef5cfd8f08fcdf21528937126bd9990dacd1d"} Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.818675 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vzv62" event={"ID":"49d9681a-9fc0-4e0e-9d65-637d402f4c30","Type":"ContainerDied","Data":"2b2a1864daa92e7d43f92e51a3e9bdfd27d6b26439f5420a9fbb9c6506ce0257"} Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.818711 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"2a70b903-4311-4bcf-833a-d9fdd2ab5d24","Type":"ContainerDied","Data":"cba41fd06ce57f906fe4acca16da8d2d6dde54a1f486dd87a9a6a8b54bd1526b"} Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.830564 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-27d9-account-create-update-52v2t"] Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.832813 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b50a-account-create-update-dlv24" Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.835641 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-27d9-account-create-update-52v2t" Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.837842 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.841029 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-27d9-account-create-update-52v2t"] Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.907807 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/977332a1-d90d-45ee-b202-c8cbfa2b5ab9-operator-scripts\") pod \"placement-27d9-account-create-update-52v2t\" (UID: \"977332a1-d90d-45ee-b202-c8cbfa2b5ab9\") " pod="openstack/placement-27d9-account-create-update-52v2t" Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.907919 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67kbp\" (UniqueName: \"kubernetes.io/projected/972e84d7-224b-4b26-b685-9f822ba2d13e-kube-api-access-67kbp\") pod \"placement-db-create-nmtbk\" (UID: \"972e84d7-224b-4b26-b685-9f822ba2d13e\") " pod="openstack/placement-db-create-nmtbk" Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.907993 4896 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/972e84d7-224b-4b26-b685-9f822ba2d13e-operator-scripts\") pod \"placement-db-create-nmtbk\" (UID: \"972e84d7-224b-4b26-b685-9f822ba2d13e\") " pod="openstack/placement-db-create-nmtbk" Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.908083 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gmcw\" (UniqueName: \"kubernetes.io/projected/977332a1-d90d-45ee-b202-c8cbfa2b5ab9-kube-api-access-2gmcw\") pod \"placement-27d9-account-create-update-52v2t\" (UID: \"977332a1-d90d-45ee-b202-c8cbfa2b5ab9\") " pod="openstack/placement-27d9-account-create-update-52v2t" Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.909002 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/972e84d7-224b-4b26-b685-9f822ba2d13e-operator-scripts\") pod \"placement-db-create-nmtbk\" (UID: \"972e84d7-224b-4b26-b685-9f822ba2d13e\") " pod="openstack/placement-db-create-nmtbk" Jan 26 15:58:50 crc kubenswrapper[4896]: I0126 15:58:50.944228 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67kbp\" (UniqueName: \"kubernetes.io/projected/972e84d7-224b-4b26-b685-9f822ba2d13e-kube-api-access-67kbp\") pod \"placement-db-create-nmtbk\" (UID: \"972e84d7-224b-4b26-b685-9f822ba2d13e\") " pod="openstack/placement-db-create-nmtbk" Jan 26 15:58:51 crc kubenswrapper[4896]: I0126 15:58:51.010630 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/977332a1-d90d-45ee-b202-c8cbfa2b5ab9-operator-scripts\") pod \"placement-27d9-account-create-update-52v2t\" (UID: \"977332a1-d90d-45ee-b202-c8cbfa2b5ab9\") " pod="openstack/placement-27d9-account-create-update-52v2t" Jan 26 15:58:51 crc kubenswrapper[4896]: I0126 15:58:51.010963 4896 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gmcw\" (UniqueName: \"kubernetes.io/projected/977332a1-d90d-45ee-b202-c8cbfa2b5ab9-kube-api-access-2gmcw\") pod \"placement-27d9-account-create-update-52v2t\" (UID: \"977332a1-d90d-45ee-b202-c8cbfa2b5ab9\") " pod="openstack/placement-27d9-account-create-update-52v2t" Jan 26 15:58:51 crc kubenswrapper[4896]: I0126 15:58:51.011379 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/977332a1-d90d-45ee-b202-c8cbfa2b5ab9-operator-scripts\") pod \"placement-27d9-account-create-update-52v2t\" (UID: \"977332a1-d90d-45ee-b202-c8cbfa2b5ab9\") " pod="openstack/placement-27d9-account-create-update-52v2t" Jan 26 15:58:51 crc kubenswrapper[4896]: I0126 15:58:51.028783 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gmcw\" (UniqueName: \"kubernetes.io/projected/977332a1-d90d-45ee-b202-c8cbfa2b5ab9-kube-api-access-2gmcw\") pod \"placement-27d9-account-create-update-52v2t\" (UID: \"977332a1-d90d-45ee-b202-c8cbfa2b5ab9\") " pod="openstack/placement-27d9-account-create-update-52v2t" Jan 26 15:58:51 crc kubenswrapper[4896]: I0126 15:58:51.125157 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-nmtbk" Jan 26 15:58:51 crc kubenswrapper[4896]: I0126 15:58:51.160153 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-27d9-account-create-update-52v2t" Jan 26 15:58:52 crc kubenswrapper[4896]: I0126 15:58:52.262925 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/56f3d7e7-114a-4790-ac11-1d5d191bdf40-etc-swift\") pod \"swift-storage-0\" (UID: \"56f3d7e7-114a-4790-ac11-1d5d191bdf40\") " pod="openstack/swift-storage-0" Jan 26 15:58:52 crc kubenswrapper[4896]: E0126 15:58:52.263603 4896 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 15:58:52 crc kubenswrapper[4896]: E0126 15:58:52.263640 4896 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 15:58:52 crc kubenswrapper[4896]: E0126 15:58:52.263709 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/56f3d7e7-114a-4790-ac11-1d5d191bdf40-etc-swift podName:56f3d7e7-114a-4790-ac11-1d5d191bdf40 nodeName:}" failed. No retries permitted until 2026-01-26 15:59:00.263685175 +0000 UTC m=+1498.045565578 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/56f3d7e7-114a-4790-ac11-1d5d191bdf40-etc-swift") pod "swift-storage-0" (UID: "56f3d7e7-114a-4790-ac11-1d5d191bdf40") : configmap "swift-ring-files" not found Jan 26 15:58:52 crc kubenswrapper[4896]: I0126 15:58:52.461883 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 26 15:58:52 crc kubenswrapper[4896]: I0126 15:58:52.582899 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3dea-account-create-update-8rw6p" Jan 26 15:58:52 crc kubenswrapper[4896]: I0126 15:58:52.585102 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-vzv62" Jan 26 15:58:52 crc kubenswrapper[4896]: I0126 15:58:52.673110 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49d9681a-9fc0-4e0e-9d65-637d402f4c30-operator-scripts\") pod \"49d9681a-9fc0-4e0e-9d65-637d402f4c30\" (UID: \"49d9681a-9fc0-4e0e-9d65-637d402f4c30\") " Jan 26 15:58:52 crc kubenswrapper[4896]: I0126 15:58:52.673552 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kgpq\" (UniqueName: \"kubernetes.io/projected/49d9681a-9fc0-4e0e-9d65-637d402f4c30-kube-api-access-7kgpq\") pod \"49d9681a-9fc0-4e0e-9d65-637d402f4c30\" (UID: \"49d9681a-9fc0-4e0e-9d65-637d402f4c30\") " Jan 26 15:58:52 crc kubenswrapper[4896]: I0126 15:58:52.673643 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23a5ea00-af46-46fb-a058-05504ad72b95-operator-scripts\") pod \"23a5ea00-af46-46fb-a058-05504ad72b95\" (UID: \"23a5ea00-af46-46fb-a058-05504ad72b95\") " Jan 26 15:58:52 crc kubenswrapper[4896]: I0126 15:58:52.673795 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcsfr\" (UniqueName: \"kubernetes.io/projected/23a5ea00-af46-46fb-a058-05504ad72b95-kube-api-access-hcsfr\") pod \"23a5ea00-af46-46fb-a058-05504ad72b95\" (UID: \"23a5ea00-af46-46fb-a058-05504ad72b95\") " Jan 26 15:58:52 crc kubenswrapper[4896]: I0126 15:58:52.674297 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49d9681a-9fc0-4e0e-9d65-637d402f4c30-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "49d9681a-9fc0-4e0e-9d65-637d402f4c30" (UID: "49d9681a-9fc0-4e0e-9d65-637d402f4c30"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:58:52 crc kubenswrapper[4896]: I0126 15:58:52.674553 4896 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49d9681a-9fc0-4e0e-9d65-637d402f4c30-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:52 crc kubenswrapper[4896]: I0126 15:58:52.675664 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23a5ea00-af46-46fb-a058-05504ad72b95-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "23a5ea00-af46-46fb-a058-05504ad72b95" (UID: "23a5ea00-af46-46fb-a058-05504ad72b95"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:58:52 crc kubenswrapper[4896]: I0126 15:58:52.681405 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23a5ea00-af46-46fb-a058-05504ad72b95-kube-api-access-hcsfr" (OuterVolumeSpecName: "kube-api-access-hcsfr") pod "23a5ea00-af46-46fb-a058-05504ad72b95" (UID: "23a5ea00-af46-46fb-a058-05504ad72b95"). InnerVolumeSpecName "kube-api-access-hcsfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:58:52 crc kubenswrapper[4896]: I0126 15:58:52.682918 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49d9681a-9fc0-4e0e-9d65-637d402f4c30-kube-api-access-7kgpq" (OuterVolumeSpecName: "kube-api-access-7kgpq") pod "49d9681a-9fc0-4e0e-9d65-637d402f4c30" (UID: "49d9681a-9fc0-4e0e-9d65-637d402f4c30"). InnerVolumeSpecName "kube-api-access-7kgpq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:58:52 crc kubenswrapper[4896]: I0126 15:58:52.738085 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 26 15:58:52 crc kubenswrapper[4896]: I0126 15:58:52.777227 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcsfr\" (UniqueName: \"kubernetes.io/projected/23a5ea00-af46-46fb-a058-05504ad72b95-kube-api-access-hcsfr\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:52 crc kubenswrapper[4896]: I0126 15:58:52.777267 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kgpq\" (UniqueName: \"kubernetes.io/projected/49d9681a-9fc0-4e0e-9d65-637d402f4c30-kube-api-access-7kgpq\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:52 crc kubenswrapper[4896]: I0126 15:58:52.777278 4896 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23a5ea00-af46-46fb-a058-05504ad72b95-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:52 crc kubenswrapper[4896]: I0126 15:58:52.831460 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerStarted","Data":"eef508224f0cdcfb0579b0234e72c3c5503ce5cf1713a9bee24c9feccf4983cb"} Jan 26 15:58:52 crc kubenswrapper[4896]: I0126 15:58:52.839554 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3dea-account-create-update-8rw6p" event={"ID":"23a5ea00-af46-46fb-a058-05504ad72b95","Type":"ContainerDied","Data":"130437caa5406c8e049eb70e800db81d0323f014036ad23e0e6e7d8b2c2c6d9a"} Jan 26 15:58:52 crc kubenswrapper[4896]: I0126 15:58:52.839773 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="130437caa5406c8e049eb70e800db81d0323f014036ad23e0e6e7d8b2c2c6d9a" Jan 26 15:58:52 crc kubenswrapper[4896]: I0126 
15:58:52.839619 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3dea-account-create-update-8rw6p" Jan 26 15:58:52 crc kubenswrapper[4896]: I0126 15:58:52.843742 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bbppj" event={"ID":"ff5abeb5-5a6e-48b2-920f-fb1a55c83023","Type":"ContainerStarted","Data":"08a081d9e3703ce0b91fedbeaa7a05b3c66d671954a2c29d0d0f8a7201b59221"} Jan 26 15:58:52 crc kubenswrapper[4896]: I0126 15:58:52.847213 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vzv62" event={"ID":"49d9681a-9fc0-4e0e-9d65-637d402f4c30","Type":"ContainerDied","Data":"d808c60f429a90daa9a15a8e41aa8cc41408284406305e17b636c602dbb35780"} Jan 26 15:58:52 crc kubenswrapper[4896]: I0126 15:58:52.847248 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d808c60f429a90daa9a15a8e41aa8cc41408284406305e17b636c602dbb35780" Jan 26 15:58:52 crc kubenswrapper[4896]: I0126 15:58:52.847383 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-vzv62" Jan 26 15:58:52 crc kubenswrapper[4896]: I0126 15:58:52.867053 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7600dca1-3435-4fcf-aab5-54c683d3ac33","Type":"ContainerStarted","Data":"2bc15c410279ac462f57e8103607208e83fa58f340caa2e9f0f7705003a097e3"} Jan 26 15:58:52 crc kubenswrapper[4896]: I0126 15:58:52.913112 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-bbppj" podStartSLOduration=2.469361591 podStartE2EDuration="8.913085534s" podCreationTimestamp="2026-01-26 15:58:44 +0000 UTC" firstStartedPulling="2026-01-26 15:58:45.402168803 +0000 UTC m=+1483.184049196" lastFinishedPulling="2026-01-26 15:58:51.845892746 +0000 UTC m=+1489.627773139" observedRunningTime="2026-01-26 15:58:52.903467986 +0000 UTC m=+1490.685348379" watchObservedRunningTime="2026-01-26 15:58:52.913085534 +0000 UTC m=+1490.694965937" Jan 26 15:58:53 crc kubenswrapper[4896]: I0126 15:58:53.044715 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-nmtbk"] Jan 26 15:58:53 crc kubenswrapper[4896]: W0126 15:58:53.055307 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod972e84d7_224b_4b26_b685_9f822ba2d13e.slice/crio-5e0c5de3b1a1009b7e6359e029d767099478cc2fce9fc493265b9876d498450d WatchSource:0}: Error finding container 5e0c5de3b1a1009b7e6359e029d767099478cc2fce9fc493265b9876d498450d: Status 404 returned error can't find the container with id 5e0c5de3b1a1009b7e6359e029d767099478cc2fce9fc493265b9876d498450d Jan 26 15:58:53 crc kubenswrapper[4896]: I0126 15:58:53.082485 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-242n2"] Jan 26 15:58:53 crc kubenswrapper[4896]: I0126 15:58:53.110148 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/keystone-b50a-account-create-update-dlv24"] Jan 26 15:58:53 crc kubenswrapper[4896]: I0126 15:58:53.123662 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-sbkmx"] Jan 26 15:58:53 crc kubenswrapper[4896]: I0126 15:58:53.136889 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-27d9-account-create-update-52v2t"] Jan 26 15:58:53 crc kubenswrapper[4896]: W0126 15:58:53.137137 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod706a0a95_3212_477f_98de_61846e43ef58.slice/crio-5fac6e76ed8b8049c4eede8afa87a3c90d871c8c02144f83ef6b7b6329f153cd WatchSource:0}: Error finding container 5fac6e76ed8b8049c4eede8afa87a3c90d871c8c02144f83ef6b7b6329f153cd: Status 404 returned error can't find the container with id 5fac6e76ed8b8049c4eede8afa87a3c90d871c8c02144f83ef6b7b6329f153cd Jan 26 15:58:53 crc kubenswrapper[4896]: I0126 15:58:53.373780 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-hjhkk" Jan 26 15:58:53 crc kubenswrapper[4896]: I0126 15:58:53.380826 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-fsqhd"] Jan 26 15:58:53 crc kubenswrapper[4896]: E0126 15:58:53.382209 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49d9681a-9fc0-4e0e-9d65-637d402f4c30" containerName="mariadb-database-create" Jan 26 15:58:53 crc kubenswrapper[4896]: I0126 15:58:53.382314 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="49d9681a-9fc0-4e0e-9d65-637d402f4c30" containerName="mariadb-database-create" Jan 26 15:58:53 crc kubenswrapper[4896]: E0126 15:58:53.382391 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23a5ea00-af46-46fb-a058-05504ad72b95" containerName="mariadb-account-create-update" Jan 26 15:58:53 crc kubenswrapper[4896]: I0126 
15:58:53.382459 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="23a5ea00-af46-46fb-a058-05504ad72b95" containerName="mariadb-account-create-update" Jan 26 15:58:53 crc kubenswrapper[4896]: I0126 15:58:53.382925 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="23a5ea00-af46-46fb-a058-05504ad72b95" containerName="mariadb-account-create-update" Jan 26 15:58:53 crc kubenswrapper[4896]: I0126 15:58:53.383037 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="49d9681a-9fc0-4e0e-9d65-637d402f4c30" containerName="mariadb-database-create" Jan 26 15:58:53 crc kubenswrapper[4896]: I0126 15:58:53.385668 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-fsqhd" Jan 26 15:58:53 crc kubenswrapper[4896]: I0126 15:58:53.399702 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-fsqhd"] Jan 26 15:58:53 crc kubenswrapper[4896]: I0126 15:58:53.530917 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-9x6sn"] Jan 26 15:58:53 crc kubenswrapper[4896]: I0126 15:58:53.531490 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-9x6sn" podUID="309b8de4-298f-4828-9197-b06d2a7ddcf9" containerName="dnsmasq-dns" containerID="cri-o://4a2d90d69e8945e964d3de1b70deaa03795abea100b435de502eb527f06d0ff1" gracePeriod=10 Jan 26 15:58:53 crc kubenswrapper[4896]: I0126 15:58:53.533655 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fb04cb5-685c-45a2-aa8e-4af430329a31-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-fsqhd\" (UID: \"4fb04cb5-685c-45a2-aa8e-4af430329a31\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-fsqhd" Jan 26 15:58:53 crc kubenswrapper[4896]: I0126 
15:58:53.534078 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xplsc\" (UniqueName: \"kubernetes.io/projected/4fb04cb5-685c-45a2-aa8e-4af430329a31-kube-api-access-xplsc\") pod \"mysqld-exporter-openstack-cell1-db-create-fsqhd\" (UID: \"4fb04cb5-685c-45a2-aa8e-4af430329a31\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-fsqhd" Jan 26 15:58:53 crc kubenswrapper[4896]: I0126 15:58:53.591802 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-954b-account-create-update-wwrsc"] Jan 26 15:58:53 crc kubenswrapper[4896]: I0126 15:58:53.594851 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-954b-account-create-update-wwrsc" Jan 26 15:58:53 crc kubenswrapper[4896]: I0126 15:58:53.597986 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Jan 26 15:58:53 crc kubenswrapper[4896]: I0126 15:58:53.619835 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-954b-account-create-update-wwrsc"] Jan 26 15:58:53 crc kubenswrapper[4896]: I0126 15:58:53.636102 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xplsc\" (UniqueName: \"kubernetes.io/projected/4fb04cb5-685c-45a2-aa8e-4af430329a31-kube-api-access-xplsc\") pod \"mysqld-exporter-openstack-cell1-db-create-fsqhd\" (UID: \"4fb04cb5-685c-45a2-aa8e-4af430329a31\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-fsqhd" Jan 26 15:58:53 crc kubenswrapper[4896]: I0126 15:58:53.636407 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fb04cb5-685c-45a2-aa8e-4af430329a31-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-fsqhd\" (UID: \"4fb04cb5-685c-45a2-aa8e-4af430329a31\") " 
pod="openstack/mysqld-exporter-openstack-cell1-db-create-fsqhd" Jan 26 15:58:53 crc kubenswrapper[4896]: I0126 15:58:53.637162 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fb04cb5-685c-45a2-aa8e-4af430329a31-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-fsqhd\" (UID: \"4fb04cb5-685c-45a2-aa8e-4af430329a31\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-fsqhd" Jan 26 15:58:53 crc kubenswrapper[4896]: I0126 15:58:53.667268 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xplsc\" (UniqueName: \"kubernetes.io/projected/4fb04cb5-685c-45a2-aa8e-4af430329a31-kube-api-access-xplsc\") pod \"mysqld-exporter-openstack-cell1-db-create-fsqhd\" (UID: \"4fb04cb5-685c-45a2-aa8e-4af430329a31\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-fsqhd" Jan 26 15:58:53 crc kubenswrapper[4896]: I0126 15:58:53.749482 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqkgp\" (UniqueName: \"kubernetes.io/projected/cc7fc4b8-f53b-4377-9023-76db2caec959-kube-api-access-mqkgp\") pod \"mysqld-exporter-954b-account-create-update-wwrsc\" (UID: \"cc7fc4b8-f53b-4377-9023-76db2caec959\") " pod="openstack/mysqld-exporter-954b-account-create-update-wwrsc" Jan 26 15:58:53 crc kubenswrapper[4896]: I0126 15:58:53.749659 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc7fc4b8-f53b-4377-9023-76db2caec959-operator-scripts\") pod \"mysqld-exporter-954b-account-create-update-wwrsc\" (UID: \"cc7fc4b8-f53b-4377-9023-76db2caec959\") " pod="openstack/mysqld-exporter-954b-account-create-update-wwrsc" Jan 26 15:58:53 crc kubenswrapper[4896]: I0126 15:58:53.783207 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-fsqhd" Jan 26 15:58:53 crc kubenswrapper[4896]: I0126 15:58:53.851467 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqkgp\" (UniqueName: \"kubernetes.io/projected/cc7fc4b8-f53b-4377-9023-76db2caec959-kube-api-access-mqkgp\") pod \"mysqld-exporter-954b-account-create-update-wwrsc\" (UID: \"cc7fc4b8-f53b-4377-9023-76db2caec959\") " pod="openstack/mysqld-exporter-954b-account-create-update-wwrsc" Jan 26 15:58:53 crc kubenswrapper[4896]: I0126 15:58:53.851653 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc7fc4b8-f53b-4377-9023-76db2caec959-operator-scripts\") pod \"mysqld-exporter-954b-account-create-update-wwrsc\" (UID: \"cc7fc4b8-f53b-4377-9023-76db2caec959\") " pod="openstack/mysqld-exporter-954b-account-create-update-wwrsc" Jan 26 15:58:53 crc kubenswrapper[4896]: I0126 15:58:53.858170 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc7fc4b8-f53b-4377-9023-76db2caec959-operator-scripts\") pod \"mysqld-exporter-954b-account-create-update-wwrsc\" (UID: \"cc7fc4b8-f53b-4377-9023-76db2caec959\") " pod="openstack/mysqld-exporter-954b-account-create-update-wwrsc" Jan 26 15:58:53 crc kubenswrapper[4896]: I0126 15:58:53.906867 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqkgp\" (UniqueName: \"kubernetes.io/projected/cc7fc4b8-f53b-4377-9023-76db2caec959-kube-api-access-mqkgp\") pod \"mysqld-exporter-954b-account-create-update-wwrsc\" (UID: \"cc7fc4b8-f53b-4377-9023-76db2caec959\") " pod="openstack/mysqld-exporter-954b-account-create-update-wwrsc" Jan 26 15:58:53 crc kubenswrapper[4896]: I0126 15:58:53.921291 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nmtbk" 
event={"ID":"972e84d7-224b-4b26-b685-9f822ba2d13e","Type":"ContainerStarted","Data":"852b75e712e81bb34979c97a363b414bf1021386a80d18adb28c121360350bc3"} Jan 26 15:58:53 crc kubenswrapper[4896]: I0126 15:58:53.921350 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nmtbk" event={"ID":"972e84d7-224b-4b26-b685-9f822ba2d13e","Type":"ContainerStarted","Data":"5e0c5de3b1a1009b7e6359e029d767099478cc2fce9fc493265b9876d498450d"} Jan 26 15:58:53 crc kubenswrapper[4896]: I0126 15:58:53.933716 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-242n2" event={"ID":"45b87b14-02cf-4779-b5ad-964c37378a78","Type":"ContainerStarted","Data":"48da29867d18e455f26382ec66a50212d92a4f42b472a927957177e965b1e9b6"} Jan 26 15:58:53 crc kubenswrapper[4896]: I0126 15:58:53.933789 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-242n2" event={"ID":"45b87b14-02cf-4779-b5ad-964c37378a78","Type":"ContainerStarted","Data":"0ede908138d5b3254a9638a2a027c5ca43b2abab8eb82433a21e96d25554fb85"} Jan 26 15:58:53 crc kubenswrapper[4896]: I0126 15:58:53.946788 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sbkmx" event={"ID":"706a0a95-3212-477f-98de-61846e43ef58","Type":"ContainerStarted","Data":"5d9f5da787d2d37ae3c5f239757b1421a756565fff5329c584d58e3ced4308ec"} Jan 26 15:58:53 crc kubenswrapper[4896]: I0126 15:58:53.946850 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sbkmx" event={"ID":"706a0a95-3212-477f-98de-61846e43ef58","Type":"ContainerStarted","Data":"5fac6e76ed8b8049c4eede8afa87a3c90d871c8c02144f83ef6b7b6329f153cd"} Jan 26 15:58:53 crc kubenswrapper[4896]: I0126 15:58:53.973443 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-27d9-account-create-update-52v2t" 
event={"ID":"977332a1-d90d-45ee-b202-c8cbfa2b5ab9","Type":"ContainerStarted","Data":"03be9ffa1da36563734bea6e02840027fc2d5a9be20a1c822a4916150d0ef59a"} Jan 26 15:58:53 crc kubenswrapper[4896]: I0126 15:58:53.973508 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-27d9-account-create-update-52v2t" event={"ID":"977332a1-d90d-45ee-b202-c8cbfa2b5ab9","Type":"ContainerStarted","Data":"deea4cda0a012b2697110b1ed2e49eb7a2e35ee74c8347f0d630c0d4385df890"} Jan 26 15:58:53 crc kubenswrapper[4896]: I0126 15:58:53.978712 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-nmtbk" podStartSLOduration=3.97856918 podStartE2EDuration="3.97856918s" podCreationTimestamp="2026-01-26 15:58:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:58:53.961371766 +0000 UTC m=+1491.743252159" watchObservedRunningTime="2026-01-26 15:58:53.97856918 +0000 UTC m=+1491.760449573" Jan 26 15:58:53 crc kubenswrapper[4896]: I0126 15:58:53.987442 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-954b-account-create-update-wwrsc" Jan 26 15:58:54 crc kubenswrapper[4896]: I0126 15:58:54.005075 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b50a-account-create-update-dlv24" event={"ID":"d0b4fb8a-de2a-43d2-b233-6ff17febcd58","Type":"ContainerStarted","Data":"61001a25fea435b71b3c3679461a1f58e71b3944f65b322c53e125d6d33580c2"} Jan 26 15:58:54 crc kubenswrapper[4896]: I0126 15:58:54.005138 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b50a-account-create-update-dlv24" event={"ID":"d0b4fb8a-de2a-43d2-b233-6ff17febcd58","Type":"ContainerStarted","Data":"38fe8f61068ced34fe179ebada77492d64a13578faa2c5752e4cf288cf686f55"} Jan 26 15:58:54 crc kubenswrapper[4896]: I0126 15:58:54.027080 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-242n2" podStartSLOduration=4.027054478 podStartE2EDuration="4.027054478s" podCreationTimestamp="2026-01-26 15:58:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:58:53.999307742 +0000 UTC m=+1491.781188135" watchObservedRunningTime="2026-01-26 15:58:54.027054478 +0000 UTC m=+1491.808934871" Jan 26 15:58:54 crc kubenswrapper[4896]: I0126 15:58:54.032340 4896 generic.go:334] "Generic (PLEG): container finished" podID="309b8de4-298f-4828-9197-b06d2a7ddcf9" containerID="4a2d90d69e8945e964d3de1b70deaa03795abea100b435de502eb527f06d0ff1" exitCode=0 Jan 26 15:58:54 crc kubenswrapper[4896]: I0126 15:58:54.032404 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-9x6sn" event={"ID":"309b8de4-298f-4828-9197-b06d2a7ddcf9","Type":"ContainerDied","Data":"4a2d90d69e8945e964d3de1b70deaa03795abea100b435de502eb527f06d0ff1"} Jan 26 15:58:54 crc kubenswrapper[4896]: I0126 15:58:54.043820 4896 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/root-account-create-update-sbkmx" podStartSLOduration=7.043794711 podStartE2EDuration="7.043794711s" podCreationTimestamp="2026-01-26 15:58:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:58:54.015563973 +0000 UTC m=+1491.797444376" watchObservedRunningTime="2026-01-26 15:58:54.043794711 +0000 UTC m=+1491.825675104" Jan 26 15:58:54 crc kubenswrapper[4896]: I0126 15:58:54.054638 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-27d9-account-create-update-52v2t" podStartSLOduration=4.054615018 podStartE2EDuration="4.054615018s" podCreationTimestamp="2026-01-26 15:58:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:58:54.039945566 +0000 UTC m=+1491.821825959" watchObservedRunningTime="2026-01-26 15:58:54.054615018 +0000 UTC m=+1491.836495411" Jan 26 15:58:54 crc kubenswrapper[4896]: I0126 15:58:54.070444 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-b50a-account-create-update-dlv24" podStartSLOduration=4.070417549 podStartE2EDuration="4.070417549s" podCreationTimestamp="2026-01-26 15:58:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:58:54.066363699 +0000 UTC m=+1491.848244092" watchObservedRunningTime="2026-01-26 15:58:54.070417549 +0000 UTC m=+1491.852297942" Jan 26 15:58:54 crc kubenswrapper[4896]: I0126 15:58:54.386211 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-9x6sn" Jan 26 15:58:54 crc kubenswrapper[4896]: I0126 15:58:54.472972 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/309b8de4-298f-4828-9197-b06d2a7ddcf9-ovsdbserver-nb\") pod \"309b8de4-298f-4828-9197-b06d2a7ddcf9\" (UID: \"309b8de4-298f-4828-9197-b06d2a7ddcf9\") " Jan 26 15:58:54 crc kubenswrapper[4896]: I0126 15:58:54.473282 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/309b8de4-298f-4828-9197-b06d2a7ddcf9-ovsdbserver-sb\") pod \"309b8de4-298f-4828-9197-b06d2a7ddcf9\" (UID: \"309b8de4-298f-4828-9197-b06d2a7ddcf9\") " Jan 26 15:58:54 crc kubenswrapper[4896]: I0126 15:58:54.473340 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/309b8de4-298f-4828-9197-b06d2a7ddcf9-dns-svc\") pod \"309b8de4-298f-4828-9197-b06d2a7ddcf9\" (UID: \"309b8de4-298f-4828-9197-b06d2a7ddcf9\") " Jan 26 15:58:54 crc kubenswrapper[4896]: I0126 15:58:54.473435 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/309b8de4-298f-4828-9197-b06d2a7ddcf9-config\") pod \"309b8de4-298f-4828-9197-b06d2a7ddcf9\" (UID: \"309b8de4-298f-4828-9197-b06d2a7ddcf9\") " Jan 26 15:58:54 crc kubenswrapper[4896]: I0126 15:58:54.473542 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrj5x\" (UniqueName: \"kubernetes.io/projected/309b8de4-298f-4828-9197-b06d2a7ddcf9-kube-api-access-vrj5x\") pod \"309b8de4-298f-4828-9197-b06d2a7ddcf9\" (UID: \"309b8de4-298f-4828-9197-b06d2a7ddcf9\") " Jan 26 15:58:54 crc kubenswrapper[4896]: I0126 15:58:54.484391 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/309b8de4-298f-4828-9197-b06d2a7ddcf9-kube-api-access-vrj5x" (OuterVolumeSpecName: "kube-api-access-vrj5x") pod "309b8de4-298f-4828-9197-b06d2a7ddcf9" (UID: "309b8de4-298f-4828-9197-b06d2a7ddcf9"). InnerVolumeSpecName "kube-api-access-vrj5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:58:54 crc kubenswrapper[4896]: I0126 15:58:54.529872 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/309b8de4-298f-4828-9197-b06d2a7ddcf9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "309b8de4-298f-4828-9197-b06d2a7ddcf9" (UID: "309b8de4-298f-4828-9197-b06d2a7ddcf9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:58:54 crc kubenswrapper[4896]: I0126 15:58:54.537944 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/309b8de4-298f-4828-9197-b06d2a7ddcf9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "309b8de4-298f-4828-9197-b06d2a7ddcf9" (UID: "309b8de4-298f-4828-9197-b06d2a7ddcf9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:58:54 crc kubenswrapper[4896]: I0126 15:58:54.541504 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/309b8de4-298f-4828-9197-b06d2a7ddcf9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "309b8de4-298f-4828-9197-b06d2a7ddcf9" (UID: "309b8de4-298f-4828-9197-b06d2a7ddcf9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:58:54 crc kubenswrapper[4896]: I0126 15:58:54.544042 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/309b8de4-298f-4828-9197-b06d2a7ddcf9-config" (OuterVolumeSpecName: "config") pod "309b8de4-298f-4828-9197-b06d2a7ddcf9" (UID: "309b8de4-298f-4828-9197-b06d2a7ddcf9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:58:54 crc kubenswrapper[4896]: I0126 15:58:54.583061 4896 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/309b8de4-298f-4828-9197-b06d2a7ddcf9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:54 crc kubenswrapper[4896]: I0126 15:58:54.583387 4896 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/309b8de4-298f-4828-9197-b06d2a7ddcf9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:54 crc kubenswrapper[4896]: I0126 15:58:54.583408 4896 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/309b8de4-298f-4828-9197-b06d2a7ddcf9-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:54 crc kubenswrapper[4896]: I0126 15:58:54.583428 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrj5x\" (UniqueName: \"kubernetes.io/projected/309b8de4-298f-4828-9197-b06d2a7ddcf9-kube-api-access-vrj5x\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:54 crc kubenswrapper[4896]: I0126 15:58:54.583448 4896 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/309b8de4-298f-4828-9197-b06d2a7ddcf9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:54 crc kubenswrapper[4896]: W0126 15:58:54.616057 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4fb04cb5_685c_45a2_aa8e_4af430329a31.slice/crio-22be0438269388bd4d2ed87bcd9aa217f4e0f3a37b76d5cd03c704b17d976659 WatchSource:0}: Error finding container 22be0438269388bd4d2ed87bcd9aa217f4e0f3a37b76d5cd03c704b17d976659: Status 404 returned error can't find the container with id 22be0438269388bd4d2ed87bcd9aa217f4e0f3a37b76d5cd03c704b17d976659 Jan 26 15:58:54 crc kubenswrapper[4896]: I0126 15:58:54.616548 4896 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-fsqhd"] Jan 26 15:58:54 crc kubenswrapper[4896]: I0126 15:58:54.787892 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-954b-account-create-update-wwrsc"] Jan 26 15:58:55 crc kubenswrapper[4896]: I0126 15:58:55.044646 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-fsqhd" event={"ID":"4fb04cb5-685c-45a2-aa8e-4af430329a31","Type":"ContainerStarted","Data":"22be0438269388bd4d2ed87bcd9aa217f4e0f3a37b76d5cd03c704b17d976659"} Jan 26 15:58:55 crc kubenswrapper[4896]: I0126 15:58:55.049408 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b50a-account-create-update-dlv24" event={"ID":"d0b4fb8a-de2a-43d2-b233-6ff17febcd58","Type":"ContainerDied","Data":"61001a25fea435b71b3c3679461a1f58e71b3944f65b322c53e125d6d33580c2"} Jan 26 15:58:55 crc kubenswrapper[4896]: I0126 15:58:55.049697 4896 generic.go:334] "Generic (PLEG): container finished" podID="d0b4fb8a-de2a-43d2-b233-6ff17febcd58" containerID="61001a25fea435b71b3c3679461a1f58e71b3944f65b322c53e125d6d33580c2" exitCode=0 Jan 26 15:58:55 crc kubenswrapper[4896]: I0126 15:58:55.062823 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-9x6sn" Jan 26 15:58:55 crc kubenswrapper[4896]: I0126 15:58:55.062885 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-9x6sn" event={"ID":"309b8de4-298f-4828-9197-b06d2a7ddcf9","Type":"ContainerDied","Data":"bb721a4faacf795fa60d6336fb2d1c711f7ef2be252e002bde2cba316d02376f"} Jan 26 15:58:55 crc kubenswrapper[4896]: I0126 15:58:55.062965 4896 scope.go:117] "RemoveContainer" containerID="4a2d90d69e8945e964d3de1b70deaa03795abea100b435de502eb527f06d0ff1" Jan 26 15:58:55 crc kubenswrapper[4896]: I0126 15:58:55.075338 4896 generic.go:334] "Generic (PLEG): container finished" podID="972e84d7-224b-4b26-b685-9f822ba2d13e" containerID="852b75e712e81bb34979c97a363b414bf1021386a80d18adb28c121360350bc3" exitCode=0 Jan 26 15:58:55 crc kubenswrapper[4896]: I0126 15:58:55.075427 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nmtbk" event={"ID":"972e84d7-224b-4b26-b685-9f822ba2d13e","Type":"ContainerDied","Data":"852b75e712e81bb34979c97a363b414bf1021386a80d18adb28c121360350bc3"} Jan 26 15:58:55 crc kubenswrapper[4896]: I0126 15:58:55.088093 4896 generic.go:334] "Generic (PLEG): container finished" podID="45b87b14-02cf-4779-b5ad-964c37378a78" containerID="48da29867d18e455f26382ec66a50212d92a4f42b472a927957177e965b1e9b6" exitCode=0 Jan 26 15:58:55 crc kubenswrapper[4896]: I0126 15:58:55.088160 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-242n2" event={"ID":"45b87b14-02cf-4779-b5ad-964c37378a78","Type":"ContainerDied","Data":"48da29867d18e455f26382ec66a50212d92a4f42b472a927957177e965b1e9b6"} Jan 26 15:58:55 crc kubenswrapper[4896]: I0126 15:58:55.095145 4896 generic.go:334] "Generic (PLEG): container finished" podID="706a0a95-3212-477f-98de-61846e43ef58" containerID="5d9f5da787d2d37ae3c5f239757b1421a756565fff5329c584d58e3ced4308ec" exitCode=0 Jan 26 15:58:55 crc kubenswrapper[4896]: I0126 
15:58:55.095231 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sbkmx" event={"ID":"706a0a95-3212-477f-98de-61846e43ef58","Type":"ContainerDied","Data":"5d9f5da787d2d37ae3c5f239757b1421a756565fff5329c584d58e3ced4308ec"} Jan 26 15:58:55 crc kubenswrapper[4896]: I0126 15:58:55.098515 4896 generic.go:334] "Generic (PLEG): container finished" podID="977332a1-d90d-45ee-b202-c8cbfa2b5ab9" containerID="03be9ffa1da36563734bea6e02840027fc2d5a9be20a1c822a4916150d0ef59a" exitCode=0 Jan 26 15:58:55 crc kubenswrapper[4896]: I0126 15:58:55.098607 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-27d9-account-create-update-52v2t" event={"ID":"977332a1-d90d-45ee-b202-c8cbfa2b5ab9","Type":"ContainerDied","Data":"03be9ffa1da36563734bea6e02840027fc2d5a9be20a1c822a4916150d0ef59a"} Jan 26 15:58:55 crc kubenswrapper[4896]: I0126 15:58:55.311076 4896 scope.go:117] "RemoveContainer" containerID="7649f005c1e2b69498d72df567b2a649b94b45e4790cafcf34fac285ab6a49cc" Jan 26 15:58:55 crc kubenswrapper[4896]: I0126 15:58:55.369844 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-9x6sn"] Jan 26 15:58:55 crc kubenswrapper[4896]: I0126 15:58:55.384886 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-9x6sn"] Jan 26 15:58:56 crc kubenswrapper[4896]: I0126 15:58:56.127564 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7600dca1-3435-4fcf-aab5-54c683d3ac33","Type":"ContainerStarted","Data":"59004a4c062bbdd588910c7683b01c96a674a94e92c96fbfc3ce6a7ba55a553a"} Jan 26 15:58:56 crc kubenswrapper[4896]: I0126 15:58:56.128002 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7600dca1-3435-4fcf-aab5-54c683d3ac33","Type":"ContainerStarted","Data":"21dbc2a361f3cf905aef7edf7616b587960bebeca9c42807412419debdeccd41"} Jan 26 15:58:56 crc kubenswrapper[4896]: 
I0126 15:58:56.128024 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 26 15:58:56 crc kubenswrapper[4896]: I0126 15:58:56.131461 4896 generic.go:334] "Generic (PLEG): container finished" podID="cc7fc4b8-f53b-4377-9023-76db2caec959" containerID="bde3ae5ca80eef2f5df83101503947e49a2ef6a2a51e4b79dcc366f92be0790e" exitCode=0 Jan 26 15:58:56 crc kubenswrapper[4896]: I0126 15:58:56.131520 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-954b-account-create-update-wwrsc" event={"ID":"cc7fc4b8-f53b-4377-9023-76db2caec959","Type":"ContainerDied","Data":"bde3ae5ca80eef2f5df83101503947e49a2ef6a2a51e4b79dcc366f92be0790e"} Jan 26 15:58:56 crc kubenswrapper[4896]: I0126 15:58:56.131539 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-954b-account-create-update-wwrsc" event={"ID":"cc7fc4b8-f53b-4377-9023-76db2caec959","Type":"ContainerStarted","Data":"dffeb74fb2360bc3497aabb40db651845f6464e63611a520adccc0fb06365f55"} Jan 26 15:58:56 crc kubenswrapper[4896]: I0126 15:58:56.134612 4896 generic.go:334] "Generic (PLEG): container finished" podID="4fb04cb5-685c-45a2-aa8e-4af430329a31" containerID="c56ca6abba1d5d0eeb0b641db384f57386c67ad84f2b8ede39fd92eb954a6c72" exitCode=0 Jan 26 15:58:56 crc kubenswrapper[4896]: I0126 15:58:56.134762 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-fsqhd" event={"ID":"4fb04cb5-685c-45a2-aa8e-4af430329a31","Type":"ContainerDied","Data":"c56ca6abba1d5d0eeb0b641db384f57386c67ad84f2b8ede39fd92eb954a6c72"} Jan 26 15:58:56 crc kubenswrapper[4896]: I0126 15:58:56.163523 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=4.547983855 podStartE2EDuration="7.163492255s" podCreationTimestamp="2026-01-26 15:58:49 +0000 UTC" firstStartedPulling="2026-01-26 15:58:52.497706674 +0000 UTC m=+1490.279587067" 
lastFinishedPulling="2026-01-26 15:58:55.113215074 +0000 UTC m=+1492.895095467" observedRunningTime="2026-01-26 15:58:56.152943024 +0000 UTC m=+1493.934823437" watchObservedRunningTime="2026-01-26 15:58:56.163492255 +0000 UTC m=+1493.945372668" Jan 26 15:58:56 crc kubenswrapper[4896]: I0126 15:58:56.407806 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-c6rlw"] Jan 26 15:58:56 crc kubenswrapper[4896]: E0126 15:58:56.408682 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="309b8de4-298f-4828-9197-b06d2a7ddcf9" containerName="init" Jan 26 15:58:56 crc kubenswrapper[4896]: I0126 15:58:56.408791 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="309b8de4-298f-4828-9197-b06d2a7ddcf9" containerName="init" Jan 26 15:58:56 crc kubenswrapper[4896]: E0126 15:58:56.408891 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="309b8de4-298f-4828-9197-b06d2a7ddcf9" containerName="dnsmasq-dns" Jan 26 15:58:56 crc kubenswrapper[4896]: I0126 15:58:56.408954 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="309b8de4-298f-4828-9197-b06d2a7ddcf9" containerName="dnsmasq-dns" Jan 26 15:58:56 crc kubenswrapper[4896]: I0126 15:58:56.409275 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="309b8de4-298f-4828-9197-b06d2a7ddcf9" containerName="dnsmasq-dns" Jan 26 15:58:56 crc kubenswrapper[4896]: I0126 15:58:56.410395 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-c6rlw" Jan 26 15:58:56 crc kubenswrapper[4896]: I0126 15:58:56.416024 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 26 15:58:56 crc kubenswrapper[4896]: I0126 15:58:56.416327 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-m8j8z" Jan 26 15:58:56 crc kubenswrapper[4896]: I0126 15:58:56.427176 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-c6rlw"] Jan 26 15:58:56 crc kubenswrapper[4896]: I0126 15:58:56.434745 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ec7e263-3178-47c9-934b-7e0f4d72aec7-combined-ca-bundle\") pod \"glance-db-sync-c6rlw\" (UID: \"1ec7e263-3178-47c9-934b-7e0f4d72aec7\") " pod="openstack/glance-db-sync-c6rlw" Jan 26 15:58:56 crc kubenswrapper[4896]: I0126 15:58:56.435054 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ec7e263-3178-47c9-934b-7e0f4d72aec7-config-data\") pod \"glance-db-sync-c6rlw\" (UID: \"1ec7e263-3178-47c9-934b-7e0f4d72aec7\") " pod="openstack/glance-db-sync-c6rlw" Jan 26 15:58:56 crc kubenswrapper[4896]: I0126 15:58:56.435566 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1ec7e263-3178-47c9-934b-7e0f4d72aec7-db-sync-config-data\") pod \"glance-db-sync-c6rlw\" (UID: \"1ec7e263-3178-47c9-934b-7e0f4d72aec7\") " pod="openstack/glance-db-sync-c6rlw" Jan 26 15:58:56 crc kubenswrapper[4896]: I0126 15:58:56.436284 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k52n8\" (UniqueName: 
\"kubernetes.io/projected/1ec7e263-3178-47c9-934b-7e0f4d72aec7-kube-api-access-k52n8\") pod \"glance-db-sync-c6rlw\" (UID: \"1ec7e263-3178-47c9-934b-7e0f4d72aec7\") " pod="openstack/glance-db-sync-c6rlw" Jan 26 15:58:56 crc kubenswrapper[4896]: I0126 15:58:56.538278 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1ec7e263-3178-47c9-934b-7e0f4d72aec7-db-sync-config-data\") pod \"glance-db-sync-c6rlw\" (UID: \"1ec7e263-3178-47c9-934b-7e0f4d72aec7\") " pod="openstack/glance-db-sync-c6rlw" Jan 26 15:58:56 crc kubenswrapper[4896]: I0126 15:58:56.538710 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k52n8\" (UniqueName: \"kubernetes.io/projected/1ec7e263-3178-47c9-934b-7e0f4d72aec7-kube-api-access-k52n8\") pod \"glance-db-sync-c6rlw\" (UID: \"1ec7e263-3178-47c9-934b-7e0f4d72aec7\") " pod="openstack/glance-db-sync-c6rlw" Jan 26 15:58:56 crc kubenswrapper[4896]: I0126 15:58:56.538750 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ec7e263-3178-47c9-934b-7e0f4d72aec7-combined-ca-bundle\") pod \"glance-db-sync-c6rlw\" (UID: \"1ec7e263-3178-47c9-934b-7e0f4d72aec7\") " pod="openstack/glance-db-sync-c6rlw" Jan 26 15:58:56 crc kubenswrapper[4896]: I0126 15:58:56.538899 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ec7e263-3178-47c9-934b-7e0f4d72aec7-config-data\") pod \"glance-db-sync-c6rlw\" (UID: \"1ec7e263-3178-47c9-934b-7e0f4d72aec7\") " pod="openstack/glance-db-sync-c6rlw" Jan 26 15:58:56 crc kubenswrapper[4896]: I0126 15:58:56.546482 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ec7e263-3178-47c9-934b-7e0f4d72aec7-combined-ca-bundle\") pod \"glance-db-sync-c6rlw\" 
(UID: \"1ec7e263-3178-47c9-934b-7e0f4d72aec7\") " pod="openstack/glance-db-sync-c6rlw" Jan 26 15:58:56 crc kubenswrapper[4896]: I0126 15:58:56.554554 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ec7e263-3178-47c9-934b-7e0f4d72aec7-config-data\") pod \"glance-db-sync-c6rlw\" (UID: \"1ec7e263-3178-47c9-934b-7e0f4d72aec7\") " pod="openstack/glance-db-sync-c6rlw" Jan 26 15:58:56 crc kubenswrapper[4896]: I0126 15:58:56.554844 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1ec7e263-3178-47c9-934b-7e0f4d72aec7-db-sync-config-data\") pod \"glance-db-sync-c6rlw\" (UID: \"1ec7e263-3178-47c9-934b-7e0f4d72aec7\") " pod="openstack/glance-db-sync-c6rlw" Jan 26 15:58:56 crc kubenswrapper[4896]: I0126 15:58:56.557902 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k52n8\" (UniqueName: \"kubernetes.io/projected/1ec7e263-3178-47c9-934b-7e0f4d72aec7-kube-api-access-k52n8\") pod \"glance-db-sync-c6rlw\" (UID: \"1ec7e263-3178-47c9-934b-7e0f4d72aec7\") " pod="openstack/glance-db-sync-c6rlw" Jan 26 15:58:56 crc kubenswrapper[4896]: I0126 15:58:56.712414 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b50a-account-create-update-dlv24" Jan 26 15:58:56 crc kubenswrapper[4896]: I0126 15:58:56.736345 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-c6rlw" Jan 26 15:58:56 crc kubenswrapper[4896]: I0126 15:58:56.742659 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0b4fb8a-de2a-43d2-b233-6ff17febcd58-operator-scripts\") pod \"d0b4fb8a-de2a-43d2-b233-6ff17febcd58\" (UID: \"d0b4fb8a-de2a-43d2-b233-6ff17febcd58\") " Jan 26 15:58:56 crc kubenswrapper[4896]: I0126 15:58:56.742989 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vp44l\" (UniqueName: \"kubernetes.io/projected/d0b4fb8a-de2a-43d2-b233-6ff17febcd58-kube-api-access-vp44l\") pod \"d0b4fb8a-de2a-43d2-b233-6ff17febcd58\" (UID: \"d0b4fb8a-de2a-43d2-b233-6ff17febcd58\") " Jan 26 15:58:56 crc kubenswrapper[4896]: I0126 15:58:56.746893 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0b4fb8a-de2a-43d2-b233-6ff17febcd58-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d0b4fb8a-de2a-43d2-b233-6ff17febcd58" (UID: "d0b4fb8a-de2a-43d2-b233-6ff17febcd58"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:58:56 crc kubenswrapper[4896]: I0126 15:58:56.754922 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0b4fb8a-de2a-43d2-b233-6ff17febcd58-kube-api-access-vp44l" (OuterVolumeSpecName: "kube-api-access-vp44l") pod "d0b4fb8a-de2a-43d2-b233-6ff17febcd58" (UID: "d0b4fb8a-de2a-43d2-b233-6ff17febcd58"). InnerVolumeSpecName "kube-api-access-vp44l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:58:56 crc kubenswrapper[4896]: I0126 15:58:56.784113 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="309b8de4-298f-4828-9197-b06d2a7ddcf9" path="/var/lib/kubelet/pods/309b8de4-298f-4828-9197-b06d2a7ddcf9/volumes" Jan 26 15:58:56 crc kubenswrapper[4896]: I0126 15:58:56.845953 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vp44l\" (UniqueName: \"kubernetes.io/projected/d0b4fb8a-de2a-43d2-b233-6ff17febcd58-kube-api-access-vp44l\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:56 crc kubenswrapper[4896]: I0126 15:58:56.846005 4896 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0b4fb8a-de2a-43d2-b233-6ff17febcd58-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:57 crc kubenswrapper[4896]: I0126 15:58:57.164419 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b50a-account-create-update-dlv24" event={"ID":"d0b4fb8a-de2a-43d2-b233-6ff17febcd58","Type":"ContainerDied","Data":"38fe8f61068ced34fe179ebada77492d64a13578faa2c5752e4cf288cf686f55"} Jan 26 15:58:57 crc kubenswrapper[4896]: I0126 15:58:57.165132 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38fe8f61068ced34fe179ebada77492d64a13578faa2c5752e4cf288cf686f55" Jan 26 15:58:57 crc kubenswrapper[4896]: I0126 15:58:57.165222 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-b50a-account-create-update-dlv24" Jan 26 15:58:57 crc kubenswrapper[4896]: I0126 15:58:57.173241 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nmtbk" event={"ID":"972e84d7-224b-4b26-b685-9f822ba2d13e","Type":"ContainerDied","Data":"5e0c5de3b1a1009b7e6359e029d767099478cc2fce9fc493265b9876d498450d"} Jan 26 15:58:57 crc kubenswrapper[4896]: I0126 15:58:57.173296 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e0c5de3b1a1009b7e6359e029d767099478cc2fce9fc493265b9876d498450d" Jan 26 15:58:57 crc kubenswrapper[4896]: I0126 15:58:57.177713 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-242n2" event={"ID":"45b87b14-02cf-4779-b5ad-964c37378a78","Type":"ContainerDied","Data":"0ede908138d5b3254a9638a2a027c5ca43b2abab8eb82433a21e96d25554fb85"} Jan 26 15:58:57 crc kubenswrapper[4896]: I0126 15:58:57.177765 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ede908138d5b3254a9638a2a027c5ca43b2abab8eb82433a21e96d25554fb85" Jan 26 15:58:57 crc kubenswrapper[4896]: I0126 15:58:57.183111 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-sbkmx" Jan 26 15:58:57 crc kubenswrapper[4896]: I0126 15:58:57.190923 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sbkmx" event={"ID":"706a0a95-3212-477f-98de-61846e43ef58","Type":"ContainerDied","Data":"5fac6e76ed8b8049c4eede8afa87a3c90d871c8c02144f83ef6b7b6329f153cd"} Jan 26 15:58:57 crc kubenswrapper[4896]: I0126 15:58:57.190975 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fac6e76ed8b8049c4eede8afa87a3c90d871c8c02144f83ef6b7b6329f153cd" Jan 26 15:58:57 crc kubenswrapper[4896]: I0126 15:58:57.194275 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-27d9-account-create-update-52v2t" event={"ID":"977332a1-d90d-45ee-b202-c8cbfa2b5ab9","Type":"ContainerDied","Data":"deea4cda0a012b2697110b1ed2e49eb7a2e35ee74c8347f0d630c0d4385df890"} Jan 26 15:58:57 crc kubenswrapper[4896]: I0126 15:58:57.194313 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="deea4cda0a012b2697110b1ed2e49eb7a2e35ee74c8347f0d630c0d4385df890" Jan 26 15:58:57 crc kubenswrapper[4896]: I0126 15:58:57.208389 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-27d9-account-create-update-52v2t" Jan 26 15:58:57 crc kubenswrapper[4896]: I0126 15:58:57.218695 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-242n2" Jan 26 15:58:57 crc kubenswrapper[4896]: I0126 15:58:57.248985 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-nmtbk" Jan 26 15:58:57 crc kubenswrapper[4896]: I0126 15:58:57.263424 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/706a0a95-3212-477f-98de-61846e43ef58-operator-scripts\") pod \"706a0a95-3212-477f-98de-61846e43ef58\" (UID: \"706a0a95-3212-477f-98de-61846e43ef58\") " Jan 26 15:58:57 crc kubenswrapper[4896]: I0126 15:58:57.263500 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45b87b14-02cf-4779-b5ad-964c37378a78-operator-scripts\") pod \"45b87b14-02cf-4779-b5ad-964c37378a78\" (UID: \"45b87b14-02cf-4779-b5ad-964c37378a78\") " Jan 26 15:58:57 crc kubenswrapper[4896]: I0126 15:58:57.263561 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb68h\" (UniqueName: \"kubernetes.io/projected/45b87b14-02cf-4779-b5ad-964c37378a78-kube-api-access-hb68h\") pod \"45b87b14-02cf-4779-b5ad-964c37378a78\" (UID: \"45b87b14-02cf-4779-b5ad-964c37378a78\") " Jan 26 15:58:57 crc kubenswrapper[4896]: I0126 15:58:57.263691 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/977332a1-d90d-45ee-b202-c8cbfa2b5ab9-operator-scripts\") pod \"977332a1-d90d-45ee-b202-c8cbfa2b5ab9\" (UID: \"977332a1-d90d-45ee-b202-c8cbfa2b5ab9\") " Jan 26 15:58:57 crc kubenswrapper[4896]: I0126 15:58:57.263766 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg4xx\" (UniqueName: \"kubernetes.io/projected/706a0a95-3212-477f-98de-61846e43ef58-kube-api-access-tg4xx\") pod \"706a0a95-3212-477f-98de-61846e43ef58\" (UID: \"706a0a95-3212-477f-98de-61846e43ef58\") " Jan 26 15:58:57 crc kubenswrapper[4896]: I0126 15:58:57.263803 4896 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-2gmcw\" (UniqueName: \"kubernetes.io/projected/977332a1-d90d-45ee-b202-c8cbfa2b5ab9-kube-api-access-2gmcw\") pod \"977332a1-d90d-45ee-b202-c8cbfa2b5ab9\" (UID: \"977332a1-d90d-45ee-b202-c8cbfa2b5ab9\") " Jan 26 15:58:57 crc kubenswrapper[4896]: I0126 15:58:57.264436 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/977332a1-d90d-45ee-b202-c8cbfa2b5ab9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "977332a1-d90d-45ee-b202-c8cbfa2b5ab9" (UID: "977332a1-d90d-45ee-b202-c8cbfa2b5ab9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:58:57 crc kubenswrapper[4896]: I0126 15:58:57.264458 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45b87b14-02cf-4779-b5ad-964c37378a78-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "45b87b14-02cf-4779-b5ad-964c37378a78" (UID: "45b87b14-02cf-4779-b5ad-964c37378a78"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:58:57 crc kubenswrapper[4896]: I0126 15:58:57.268210 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/706a0a95-3212-477f-98de-61846e43ef58-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "706a0a95-3212-477f-98de-61846e43ef58" (UID: "706a0a95-3212-477f-98de-61846e43ef58"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:58:57 crc kubenswrapper[4896]: I0126 15:58:57.271564 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45b87b14-02cf-4779-b5ad-964c37378a78-kube-api-access-hb68h" (OuterVolumeSpecName: "kube-api-access-hb68h") pod "45b87b14-02cf-4779-b5ad-964c37378a78" (UID: "45b87b14-02cf-4779-b5ad-964c37378a78"). 
InnerVolumeSpecName "kube-api-access-hb68h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:58:57 crc kubenswrapper[4896]: I0126 15:58:57.272937 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/977332a1-d90d-45ee-b202-c8cbfa2b5ab9-kube-api-access-2gmcw" (OuterVolumeSpecName: "kube-api-access-2gmcw") pod "977332a1-d90d-45ee-b202-c8cbfa2b5ab9" (UID: "977332a1-d90d-45ee-b202-c8cbfa2b5ab9"). InnerVolumeSpecName "kube-api-access-2gmcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:58:57 crc kubenswrapper[4896]: I0126 15:58:57.273327 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/706a0a95-3212-477f-98de-61846e43ef58-kube-api-access-tg4xx" (OuterVolumeSpecName: "kube-api-access-tg4xx") pod "706a0a95-3212-477f-98de-61846e43ef58" (UID: "706a0a95-3212-477f-98de-61846e43ef58"). InnerVolumeSpecName "kube-api-access-tg4xx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:58:57 crc kubenswrapper[4896]: I0126 15:58:57.368692 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/972e84d7-224b-4b26-b685-9f822ba2d13e-operator-scripts\") pod \"972e84d7-224b-4b26-b685-9f822ba2d13e\" (UID: \"972e84d7-224b-4b26-b685-9f822ba2d13e\") " Jan 26 15:58:57 crc kubenswrapper[4896]: I0126 15:58:57.369181 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67kbp\" (UniqueName: \"kubernetes.io/projected/972e84d7-224b-4b26-b685-9f822ba2d13e-kube-api-access-67kbp\") pod \"972e84d7-224b-4b26-b685-9f822ba2d13e\" (UID: \"972e84d7-224b-4b26-b685-9f822ba2d13e\") " Jan 26 15:58:57 crc kubenswrapper[4896]: I0126 15:58:57.369762 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/972e84d7-224b-4b26-b685-9f822ba2d13e-operator-scripts" (OuterVolumeSpecName: 
"operator-scripts") pod "972e84d7-224b-4b26-b685-9f822ba2d13e" (UID: "972e84d7-224b-4b26-b685-9f822ba2d13e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:58:57 crc kubenswrapper[4896]: I0126 15:58:57.372431 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tg4xx\" (UniqueName: \"kubernetes.io/projected/706a0a95-3212-477f-98de-61846e43ef58-kube-api-access-tg4xx\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:57 crc kubenswrapper[4896]: I0126 15:58:57.372468 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gmcw\" (UniqueName: \"kubernetes.io/projected/977332a1-d90d-45ee-b202-c8cbfa2b5ab9-kube-api-access-2gmcw\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:57 crc kubenswrapper[4896]: I0126 15:58:57.372484 4896 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/706a0a95-3212-477f-98de-61846e43ef58-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:57 crc kubenswrapper[4896]: I0126 15:58:57.372498 4896 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45b87b14-02cf-4779-b5ad-964c37378a78-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:57 crc kubenswrapper[4896]: I0126 15:58:57.372511 4896 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/972e84d7-224b-4b26-b685-9f822ba2d13e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:57 crc kubenswrapper[4896]: I0126 15:58:57.372523 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb68h\" (UniqueName: \"kubernetes.io/projected/45b87b14-02cf-4779-b5ad-964c37378a78-kube-api-access-hb68h\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:57 crc kubenswrapper[4896]: I0126 15:58:57.372537 4896 reconciler_common.go:293] "Volume detached for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/977332a1-d90d-45ee-b202-c8cbfa2b5ab9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:57 crc kubenswrapper[4896]: I0126 15:58:57.374752 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/972e84d7-224b-4b26-b685-9f822ba2d13e-kube-api-access-67kbp" (OuterVolumeSpecName: "kube-api-access-67kbp") pod "972e84d7-224b-4b26-b685-9f822ba2d13e" (UID: "972e84d7-224b-4b26-b685-9f822ba2d13e"). InnerVolumeSpecName "kube-api-access-67kbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:58:57 crc kubenswrapper[4896]: I0126 15:58:57.474714 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67kbp\" (UniqueName: \"kubernetes.io/projected/972e84d7-224b-4b26-b685-9f822ba2d13e-kube-api-access-67kbp\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:57 crc kubenswrapper[4896]: I0126 15:58:57.635676 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-c6rlw"] Jan 26 15:58:57 crc kubenswrapper[4896]: I0126 15:58:57.975221 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-fsqhd" Jan 26 15:58:57 crc kubenswrapper[4896]: I0126 15:58:57.990907 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-954b-account-create-update-wwrsc" Jan 26 15:58:58 crc kubenswrapper[4896]: I0126 15:58:58.099798 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fb04cb5-685c-45a2-aa8e-4af430329a31-operator-scripts\") pod \"4fb04cb5-685c-45a2-aa8e-4af430329a31\" (UID: \"4fb04cb5-685c-45a2-aa8e-4af430329a31\") " Jan 26 15:58:58 crc kubenswrapper[4896]: I0126 15:58:58.099975 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqkgp\" (UniqueName: \"kubernetes.io/projected/cc7fc4b8-f53b-4377-9023-76db2caec959-kube-api-access-mqkgp\") pod \"cc7fc4b8-f53b-4377-9023-76db2caec959\" (UID: \"cc7fc4b8-f53b-4377-9023-76db2caec959\") " Jan 26 15:58:58 crc kubenswrapper[4896]: I0126 15:58:58.100087 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xplsc\" (UniqueName: \"kubernetes.io/projected/4fb04cb5-685c-45a2-aa8e-4af430329a31-kube-api-access-xplsc\") pod \"4fb04cb5-685c-45a2-aa8e-4af430329a31\" (UID: \"4fb04cb5-685c-45a2-aa8e-4af430329a31\") " Jan 26 15:58:58 crc kubenswrapper[4896]: I0126 15:58:58.100113 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc7fc4b8-f53b-4377-9023-76db2caec959-operator-scripts\") pod \"cc7fc4b8-f53b-4377-9023-76db2caec959\" (UID: \"cc7fc4b8-f53b-4377-9023-76db2caec959\") " Jan 26 15:58:58 crc kubenswrapper[4896]: I0126 15:58:58.102250 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc7fc4b8-f53b-4377-9023-76db2caec959-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cc7fc4b8-f53b-4377-9023-76db2caec959" (UID: "cc7fc4b8-f53b-4377-9023-76db2caec959"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:58:58 crc kubenswrapper[4896]: I0126 15:58:58.116223 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fb04cb5-685c-45a2-aa8e-4af430329a31-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4fb04cb5-685c-45a2-aa8e-4af430329a31" (UID: "4fb04cb5-685c-45a2-aa8e-4af430329a31"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:58:58 crc kubenswrapper[4896]: I0126 15:58:58.133142 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fb04cb5-685c-45a2-aa8e-4af430329a31-kube-api-access-xplsc" (OuterVolumeSpecName: "kube-api-access-xplsc") pod "4fb04cb5-685c-45a2-aa8e-4af430329a31" (UID: "4fb04cb5-685c-45a2-aa8e-4af430329a31"). InnerVolumeSpecName "kube-api-access-xplsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:58:58 crc kubenswrapper[4896]: I0126 15:58:58.136947 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc7fc4b8-f53b-4377-9023-76db2caec959-kube-api-access-mqkgp" (OuterVolumeSpecName: "kube-api-access-mqkgp") pod "cc7fc4b8-f53b-4377-9023-76db2caec959" (UID: "cc7fc4b8-f53b-4377-9023-76db2caec959"). InnerVolumeSpecName "kube-api-access-mqkgp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:58:58 crc kubenswrapper[4896]: I0126 15:58:58.204276 4896 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fb04cb5-685c-45a2-aa8e-4af430329a31-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:58 crc kubenswrapper[4896]: I0126 15:58:58.204320 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqkgp\" (UniqueName: \"kubernetes.io/projected/cc7fc4b8-f53b-4377-9023-76db2caec959-kube-api-access-mqkgp\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:58 crc kubenswrapper[4896]: I0126 15:58:58.204333 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xplsc\" (UniqueName: \"kubernetes.io/projected/4fb04cb5-685c-45a2-aa8e-4af430329a31-kube-api-access-xplsc\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:58 crc kubenswrapper[4896]: I0126 15:58:58.204346 4896 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc7fc4b8-f53b-4377-9023-76db2caec959-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:58:58 crc kubenswrapper[4896]: I0126 15:58:58.217512 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-fsqhd" Jan 26 15:58:58 crc kubenswrapper[4896]: I0126 15:58:58.217513 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-fsqhd" event={"ID":"4fb04cb5-685c-45a2-aa8e-4af430329a31","Type":"ContainerDied","Data":"22be0438269388bd4d2ed87bcd9aa217f4e0f3a37b76d5cd03c704b17d976659"} Jan 26 15:58:58 crc kubenswrapper[4896]: I0126 15:58:58.217982 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22be0438269388bd4d2ed87bcd9aa217f4e0f3a37b76d5cd03c704b17d976659" Jan 26 15:58:58 crc kubenswrapper[4896]: I0126 15:58:58.219182 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-c6rlw" event={"ID":"1ec7e263-3178-47c9-934b-7e0f4d72aec7","Type":"ContainerStarted","Data":"15c364a3a9d0b6f5a90cf40801a4f1ef21d6cffc52d5e961e3514915ee2d8930"} Jan 26 15:58:58 crc kubenswrapper[4896]: I0126 15:58:58.220955 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-27d9-account-create-update-52v2t" Jan 26 15:58:58 crc kubenswrapper[4896]: I0126 15:58:58.221387 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-954b-account-create-update-wwrsc" event={"ID":"cc7fc4b8-f53b-4377-9023-76db2caec959","Type":"ContainerDied","Data":"dffeb74fb2360bc3497aabb40db651845f6464e63611a520adccc0fb06365f55"} Jan 26 15:58:58 crc kubenswrapper[4896]: I0126 15:58:58.221444 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dffeb74fb2360bc3497aabb40db651845f6464e63611a520adccc0fb06365f55" Jan 26 15:58:58 crc kubenswrapper[4896]: I0126 15:58:58.221504 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-954b-account-create-update-wwrsc" Jan 26 15:58:58 crc kubenswrapper[4896]: I0126 15:58:58.221569 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-242n2" Jan 26 15:58:58 crc kubenswrapper[4896]: I0126 15:58:58.231204 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-sbkmx" Jan 26 15:58:58 crc kubenswrapper[4896]: I0126 15:58:58.235907 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-nmtbk" Jan 26 15:58:59 crc kubenswrapper[4896]: I0126 15:58:59.334005 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-sbkmx"] Jan 26 15:58:59 crc kubenswrapper[4896]: I0126 15:58:59.343121 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-sbkmx"] Jan 26 15:59:00 crc kubenswrapper[4896]: I0126 15:59:00.240664 4896 generic.go:334] "Generic (PLEG): container finished" podID="dc8f497b-3dfe-4cfc-aac0-34145dd221ed" containerID="8ee4ce832a158d9875bc4598e8a7f21961800b70d3c724cc7128fbb12e1524fe" exitCode=0 Jan 26 15:59:00 crc kubenswrapper[4896]: I0126 15:59:00.240906 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"dc8f497b-3dfe-4cfc-aac0-34145dd221ed","Type":"ContainerDied","Data":"8ee4ce832a158d9875bc4598e8a7f21961800b70d3c724cc7128fbb12e1524fe"} Jan 26 15:59:00 crc kubenswrapper[4896]: I0126 15:59:00.244065 4896 generic.go:334] "Generic (PLEG): container finished" podID="22577788-39b3-431e-9a18-7a15b8f66045" containerID="866e250b3dc594a32f2d37390a2c3e08821f48734dcc9202ca6c3e16478395fd" exitCode=0 Jan 26 15:59:00 crc kubenswrapper[4896]: I0126 15:59:00.244108 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" 
event={"ID":"22577788-39b3-431e-9a18-7a15b8f66045","Type":"ContainerDied","Data":"866e250b3dc594a32f2d37390a2c3e08821f48734dcc9202ca6c3e16478395fd"} Jan 26 15:59:00 crc kubenswrapper[4896]: I0126 15:59:00.246657 4896 generic.go:334] "Generic (PLEG): container finished" podID="45b5821a-5c82-485e-ade4-f6de2aea62d7" containerID="758ae79e3f6e71ff84c487f54b312867f854bbbb4949ec8d0a1f4ca6a56ee855" exitCode=0 Jan 26 15:59:00 crc kubenswrapper[4896]: I0126 15:59:00.246742 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"45b5821a-5c82-485e-ade4-f6de2aea62d7","Type":"ContainerDied","Data":"758ae79e3f6e71ff84c487f54b312867f854bbbb4949ec8d0a1f4ca6a56ee855"} Jan 26 15:59:00 crc kubenswrapper[4896]: I0126 15:59:00.275361 4896 generic.go:334] "Generic (PLEG): container finished" podID="a13f72f8-afaf-4e0f-b76b-342e5391579c" containerID="5285cbca490498b0756067eefe09f89d22391cf62f12a87dd3c066307f0e869f" exitCode=0 Jan 26 15:59:00 crc kubenswrapper[4896]: I0126 15:59:00.275413 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a13f72f8-afaf-4e0f-b76b-342e5391579c","Type":"ContainerDied","Data":"5285cbca490498b0756067eefe09f89d22391cf62f12a87dd3c066307f0e869f"} Jan 26 15:59:00 crc kubenswrapper[4896]: I0126 15:59:00.363560 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/56f3d7e7-114a-4790-ac11-1d5d191bdf40-etc-swift\") pod \"swift-storage-0\" (UID: \"56f3d7e7-114a-4790-ac11-1d5d191bdf40\") " pod="openstack/swift-storage-0" Jan 26 15:59:00 crc kubenswrapper[4896]: E0126 15:59:00.366224 4896 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 15:59:00 crc kubenswrapper[4896]: E0126 15:59:00.366255 4896 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" 
not found Jan 26 15:59:00 crc kubenswrapper[4896]: E0126 15:59:00.366313 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/56f3d7e7-114a-4790-ac11-1d5d191bdf40-etc-swift podName:56f3d7e7-114a-4790-ac11-1d5d191bdf40 nodeName:}" failed. No retries permitted until 2026-01-26 15:59:16.366293638 +0000 UTC m=+1514.148174031 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/56f3d7e7-114a-4790-ac11-1d5d191bdf40-etc-swift") pod "swift-storage-0" (UID: "56f3d7e7-114a-4790-ac11-1d5d191bdf40") : configmap "swift-ring-files" not found Jan 26 15:59:00 crc kubenswrapper[4896]: I0126 15:59:00.779693 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="706a0a95-3212-477f-98de-61846e43ef58" path="/var/lib/kubelet/pods/706a0a95-3212-477f-98de-61846e43ef58/volumes" Jan 26 15:59:01 crc kubenswrapper[4896]: I0126 15:59:01.318416 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"45b5821a-5c82-485e-ade4-f6de2aea62d7","Type":"ContainerStarted","Data":"73ceff3dcd971d772975c5b6851f460cf95e1dc4c841ea2d7e11d18e8255150a"} Jan 26 15:59:01 crc kubenswrapper[4896]: I0126 15:59:01.320461 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 26 15:59:01 crc kubenswrapper[4896]: I0126 15:59:01.326090 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a13f72f8-afaf-4e0f-b76b-342e5391579c","Type":"ContainerStarted","Data":"88b89da700a3a44ad0f898c495f3baa0bd9e98b34b88753a421b9955801f7582"} Jan 26 15:59:01 crc kubenswrapper[4896]: I0126 15:59:01.326885 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:59:01 crc kubenswrapper[4896]: I0126 15:59:01.329232 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" 
event={"ID":"dc8f497b-3dfe-4cfc-aac0-34145dd221ed","Type":"ContainerStarted","Data":"9009b8e5135e69cd5846269082a241e82ed68550fd9d820c2d5ee3c5dc4197f6"} Jan 26 15:59:01 crc kubenswrapper[4896]: I0126 15:59:01.329507 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Jan 26 15:59:01 crc kubenswrapper[4896]: I0126 15:59:01.333373 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2a70b903-4311-4bcf-833a-d9fdd2ab5d24","Type":"ContainerStarted","Data":"680dda39973abc218473cca4b0209e5c1e737832a0bf4e4da361bdc98226e138"} Jan 26 15:59:01 crc kubenswrapper[4896]: I0126 15:59:01.336382 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"22577788-39b3-431e-9a18-7a15b8f66045","Type":"ContainerStarted","Data":"28c56f09214c1a2ace92b8b5e3a030297230b701bbe3e07a9e099ecf52c7b1a2"} Jan 26 15:59:01 crc kubenswrapper[4896]: I0126 15:59:01.336728 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Jan 26 15:59:01 crc kubenswrapper[4896]: I0126 15:59:01.367085 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=42.221372263 podStartE2EDuration="1m35.367059107s" podCreationTimestamp="2026-01-26 15:57:26 +0000 UTC" firstStartedPulling="2026-01-26 15:57:29.05388101 +0000 UTC m=+1406.835761403" lastFinishedPulling="2026-01-26 15:58:22.199567854 +0000 UTC m=+1459.981448247" observedRunningTime="2026-01-26 15:59:01.344336676 +0000 UTC m=+1499.126217069" watchObservedRunningTime="2026-01-26 15:59:01.367059107 +0000 UTC m=+1499.148939500" Jan 26 15:59:01 crc kubenswrapper[4896]: I0126 15:59:01.393512 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=42.335045281 podStartE2EDuration="1m35.393488939s" podCreationTimestamp="2026-01-26 
15:57:26 +0000 UTC" firstStartedPulling="2026-01-26 15:57:29.140705346 +0000 UTC m=+1406.922585739" lastFinishedPulling="2026-01-26 15:58:22.199149004 +0000 UTC m=+1459.981029397" observedRunningTime="2026-01-26 15:59:01.385248216 +0000 UTC m=+1499.167128619" watchObservedRunningTime="2026-01-26 15:59:01.393488939 +0000 UTC m=+1499.175369332" Jan 26 15:59:01 crc kubenswrapper[4896]: I0126 15:59:01.464190 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=42.362088563 podStartE2EDuration="1m35.464167545s" podCreationTimestamp="2026-01-26 15:57:26 +0000 UTC" firstStartedPulling="2026-01-26 15:57:29.095817442 +0000 UTC m=+1406.877697835" lastFinishedPulling="2026-01-26 15:58:22.197896424 +0000 UTC m=+1459.979776817" observedRunningTime="2026-01-26 15:59:01.434113323 +0000 UTC m=+1499.215993726" watchObservedRunningTime="2026-01-26 15:59:01.464167545 +0000 UTC m=+1499.246047948" Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.268397 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-hlm9m" Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.291132 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-hlm9m" Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.298818 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=48.859452352 podStartE2EDuration="1m36.298789189s" podCreationTimestamp="2026-01-26 15:57:26 +0000 UTC" firstStartedPulling="2026-01-26 15:57:29.088414251 +0000 UTC m=+1406.870294644" lastFinishedPulling="2026-01-26 15:58:16.527751088 +0000 UTC m=+1454.309631481" observedRunningTime="2026-01-26 15:59:01.472300466 +0000 UTC m=+1499.254180879" watchObservedRunningTime="2026-01-26 15:59:02.298789189 +0000 UTC m=+1500.080669602" Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 
15:59:02.350545 4896 generic.go:334] "Generic (PLEG): container finished" podID="ff5abeb5-5a6e-48b2-920f-fb1a55c83023" containerID="08a081d9e3703ce0b91fedbeaa7a05b3c66d671954a2c29d0d0f8a7201b59221" exitCode=0 Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.350624 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bbppj" event={"ID":"ff5abeb5-5a6e-48b2-920f-fb1a55c83023","Type":"ContainerDied","Data":"08a081d9e3703ce0b91fedbeaa7a05b3c66d671954a2c29d0d0f8a7201b59221"} Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.536439 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-c9bzf-config-jghvj"] Jan 26 15:59:02 crc kubenswrapper[4896]: E0126 15:59:02.536989 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0b4fb8a-de2a-43d2-b233-6ff17febcd58" containerName="mariadb-account-create-update" Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.537006 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0b4fb8a-de2a-43d2-b233-6ff17febcd58" containerName="mariadb-account-create-update" Jan 26 15:59:02 crc kubenswrapper[4896]: E0126 15:59:02.537040 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="977332a1-d90d-45ee-b202-c8cbfa2b5ab9" containerName="mariadb-account-create-update" Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.537049 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="977332a1-d90d-45ee-b202-c8cbfa2b5ab9" containerName="mariadb-account-create-update" Jan 26 15:59:02 crc kubenswrapper[4896]: E0126 15:59:02.537066 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="972e84d7-224b-4b26-b685-9f822ba2d13e" containerName="mariadb-database-create" Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.537077 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="972e84d7-224b-4b26-b685-9f822ba2d13e" containerName="mariadb-database-create" Jan 26 15:59:02 crc kubenswrapper[4896]: E0126 
15:59:02.537093 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45b87b14-02cf-4779-b5ad-964c37378a78" containerName="mariadb-database-create" Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.537101 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="45b87b14-02cf-4779-b5ad-964c37378a78" containerName="mariadb-database-create" Jan 26 15:59:02 crc kubenswrapper[4896]: E0126 15:59:02.537126 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc7fc4b8-f53b-4377-9023-76db2caec959" containerName="mariadb-account-create-update" Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.537135 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc7fc4b8-f53b-4377-9023-76db2caec959" containerName="mariadb-account-create-update" Jan 26 15:59:02 crc kubenswrapper[4896]: E0126 15:59:02.537149 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="706a0a95-3212-477f-98de-61846e43ef58" containerName="mariadb-account-create-update" Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.537156 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="706a0a95-3212-477f-98de-61846e43ef58" containerName="mariadb-account-create-update" Jan 26 15:59:02 crc kubenswrapper[4896]: E0126 15:59:02.537172 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fb04cb5-685c-45a2-aa8e-4af430329a31" containerName="mariadb-database-create" Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.537180 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fb04cb5-685c-45a2-aa8e-4af430329a31" containerName="mariadb-database-create" Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.537402 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="706a0a95-3212-477f-98de-61846e43ef58" containerName="mariadb-account-create-update" Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.537417 4896 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="972e84d7-224b-4b26-b685-9f822ba2d13e" containerName="mariadb-database-create" Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.537432 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0b4fb8a-de2a-43d2-b233-6ff17febcd58" containerName="mariadb-account-create-update" Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.537448 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="45b87b14-02cf-4779-b5ad-964c37378a78" containerName="mariadb-database-create" Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.537465 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fb04cb5-685c-45a2-aa8e-4af430329a31" containerName="mariadb-database-create" Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.537486 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc7fc4b8-f53b-4377-9023-76db2caec959" containerName="mariadb-account-create-update" Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.537499 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="977332a1-d90d-45ee-b202-c8cbfa2b5ab9" containerName="mariadb-account-create-update" Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.539239 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-c9bzf-config-jghvj" Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.541632 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.554387 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-c9bzf-config-jghvj"] Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.660749 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b92d4664-6322-4314-999d-ac8a9aea0bca-scripts\") pod \"ovn-controller-c9bzf-config-jghvj\" (UID: \"b92d4664-6322-4314-999d-ac8a9aea0bca\") " pod="openstack/ovn-controller-c9bzf-config-jghvj" Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.660817 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b92d4664-6322-4314-999d-ac8a9aea0bca-additional-scripts\") pod \"ovn-controller-c9bzf-config-jghvj\" (UID: \"b92d4664-6322-4314-999d-ac8a9aea0bca\") " pod="openstack/ovn-controller-c9bzf-config-jghvj" Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.660913 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b92d4664-6322-4314-999d-ac8a9aea0bca-var-run-ovn\") pod \"ovn-controller-c9bzf-config-jghvj\" (UID: \"b92d4664-6322-4314-999d-ac8a9aea0bca\") " pod="openstack/ovn-controller-c9bzf-config-jghvj" Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.666215 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b92d4664-6322-4314-999d-ac8a9aea0bca-var-log-ovn\") pod \"ovn-controller-c9bzf-config-jghvj\" (UID: 
\"b92d4664-6322-4314-999d-ac8a9aea0bca\") " pod="openstack/ovn-controller-c9bzf-config-jghvj" Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.666263 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b92d4664-6322-4314-999d-ac8a9aea0bca-var-run\") pod \"ovn-controller-c9bzf-config-jghvj\" (UID: \"b92d4664-6322-4314-999d-ac8a9aea0bca\") " pod="openstack/ovn-controller-c9bzf-config-jghvj" Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.666315 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pndmc\" (UniqueName: \"kubernetes.io/projected/b92d4664-6322-4314-999d-ac8a9aea0bca-kube-api-access-pndmc\") pod \"ovn-controller-c9bzf-config-jghvj\" (UID: \"b92d4664-6322-4314-999d-ac8a9aea0bca\") " pod="openstack/ovn-controller-c9bzf-config-jghvj" Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.771876 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b92d4664-6322-4314-999d-ac8a9aea0bca-var-log-ovn\") pod \"ovn-controller-c9bzf-config-jghvj\" (UID: \"b92d4664-6322-4314-999d-ac8a9aea0bca\") " pod="openstack/ovn-controller-c9bzf-config-jghvj" Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.771935 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b92d4664-6322-4314-999d-ac8a9aea0bca-var-run\") pod \"ovn-controller-c9bzf-config-jghvj\" (UID: \"b92d4664-6322-4314-999d-ac8a9aea0bca\") " pod="openstack/ovn-controller-c9bzf-config-jghvj" Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.771970 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pndmc\" (UniqueName: \"kubernetes.io/projected/b92d4664-6322-4314-999d-ac8a9aea0bca-kube-api-access-pndmc\") pod 
\"ovn-controller-c9bzf-config-jghvj\" (UID: \"b92d4664-6322-4314-999d-ac8a9aea0bca\") " pod="openstack/ovn-controller-c9bzf-config-jghvj" Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.772066 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b92d4664-6322-4314-999d-ac8a9aea0bca-scripts\") pod \"ovn-controller-c9bzf-config-jghvj\" (UID: \"b92d4664-6322-4314-999d-ac8a9aea0bca\") " pod="openstack/ovn-controller-c9bzf-config-jghvj" Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.772104 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b92d4664-6322-4314-999d-ac8a9aea0bca-additional-scripts\") pod \"ovn-controller-c9bzf-config-jghvj\" (UID: \"b92d4664-6322-4314-999d-ac8a9aea0bca\") " pod="openstack/ovn-controller-c9bzf-config-jghvj" Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.772187 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b92d4664-6322-4314-999d-ac8a9aea0bca-var-run-ovn\") pod \"ovn-controller-c9bzf-config-jghvj\" (UID: \"b92d4664-6322-4314-999d-ac8a9aea0bca\") " pod="openstack/ovn-controller-c9bzf-config-jghvj" Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.772501 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b92d4664-6322-4314-999d-ac8a9aea0bca-var-run-ovn\") pod \"ovn-controller-c9bzf-config-jghvj\" (UID: \"b92d4664-6322-4314-999d-ac8a9aea0bca\") " pod="openstack/ovn-controller-c9bzf-config-jghvj" Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.772706 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b92d4664-6322-4314-999d-ac8a9aea0bca-var-run\") pod \"ovn-controller-c9bzf-config-jghvj\" (UID: 
\"b92d4664-6322-4314-999d-ac8a9aea0bca\") " pod="openstack/ovn-controller-c9bzf-config-jghvj" Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.774773 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b92d4664-6322-4314-999d-ac8a9aea0bca-scripts\") pod \"ovn-controller-c9bzf-config-jghvj\" (UID: \"b92d4664-6322-4314-999d-ac8a9aea0bca\") " pod="openstack/ovn-controller-c9bzf-config-jghvj" Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.774867 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.776808 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b92d4664-6322-4314-999d-ac8a9aea0bca-var-log-ovn\") pod \"ovn-controller-c9bzf-config-jghvj\" (UID: \"b92d4664-6322-4314-999d-ac8a9aea0bca\") " pod="openstack/ovn-controller-c9bzf-config-jghvj" Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.783831 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b92d4664-6322-4314-999d-ac8a9aea0bca-additional-scripts\") pod \"ovn-controller-c9bzf-config-jghvj\" (UID: \"b92d4664-6322-4314-999d-ac8a9aea0bca\") " pod="openstack/ovn-controller-c9bzf-config-jghvj" Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.815482 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pndmc\" (UniqueName: \"kubernetes.io/projected/b92d4664-6322-4314-999d-ac8a9aea0bca-kube-api-access-pndmc\") pod \"ovn-controller-c9bzf-config-jghvj\" (UID: \"b92d4664-6322-4314-999d-ac8a9aea0bca\") " pod="openstack/ovn-controller-c9bzf-config-jghvj" Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.884457 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-c9bzf-config-jghvj" Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.969406 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-7zwpk"] Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.971889 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-7zwpk" Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.974681 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.975903 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lcqp\" (UniqueName: \"kubernetes.io/projected/67ea1132-2cfb-4d7d-a5ec-708ad47e8178-kube-api-access-7lcqp\") pod \"root-account-create-update-7zwpk\" (UID: \"67ea1132-2cfb-4d7d-a5ec-708ad47e8178\") " pod="openstack/root-account-create-update-7zwpk" Jan 26 15:59:02 crc kubenswrapper[4896]: I0126 15:59:02.976165 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67ea1132-2cfb-4d7d-a5ec-708ad47e8178-operator-scripts\") pod \"root-account-create-update-7zwpk\" (UID: \"67ea1132-2cfb-4d7d-a5ec-708ad47e8178\") " pod="openstack/root-account-create-update-7zwpk" Jan 26 15:59:03 crc kubenswrapper[4896]: I0126 15:59:03.001921 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-7zwpk"] Jan 26 15:59:03 crc kubenswrapper[4896]: I0126 15:59:03.085797 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67ea1132-2cfb-4d7d-a5ec-708ad47e8178-operator-scripts\") pod \"root-account-create-update-7zwpk\" (UID: \"67ea1132-2cfb-4d7d-a5ec-708ad47e8178\") " 
pod="openstack/root-account-create-update-7zwpk" Jan 26 15:59:03 crc kubenswrapper[4896]: I0126 15:59:03.086098 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lcqp\" (UniqueName: \"kubernetes.io/projected/67ea1132-2cfb-4d7d-a5ec-708ad47e8178-kube-api-access-7lcqp\") pod \"root-account-create-update-7zwpk\" (UID: \"67ea1132-2cfb-4d7d-a5ec-708ad47e8178\") " pod="openstack/root-account-create-update-7zwpk" Jan 26 15:59:03 crc kubenswrapper[4896]: I0126 15:59:03.087941 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67ea1132-2cfb-4d7d-a5ec-708ad47e8178-operator-scripts\") pod \"root-account-create-update-7zwpk\" (UID: \"67ea1132-2cfb-4d7d-a5ec-708ad47e8178\") " pod="openstack/root-account-create-update-7zwpk" Jan 26 15:59:03 crc kubenswrapper[4896]: I0126 15:59:03.132549 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lcqp\" (UniqueName: \"kubernetes.io/projected/67ea1132-2cfb-4d7d-a5ec-708ad47e8178-kube-api-access-7lcqp\") pod \"root-account-create-update-7zwpk\" (UID: \"67ea1132-2cfb-4d7d-a5ec-708ad47e8178\") " pod="openstack/root-account-create-update-7zwpk" Jan 26 15:59:03 crc kubenswrapper[4896]: I0126 15:59:03.321067 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-7zwpk" Jan 26 15:59:03 crc kubenswrapper[4896]: I0126 15:59:03.594500 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-c9bzf-config-jghvj"] Jan 26 15:59:03 crc kubenswrapper[4896]: I0126 15:59:03.809678 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Jan 26 15:59:03 crc kubenswrapper[4896]: I0126 15:59:03.812524 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 26 15:59:03 crc kubenswrapper[4896]: I0126 15:59:03.815783 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Jan 26 15:59:03 crc kubenswrapper[4896]: I0126 15:59:03.851664 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 26 15:59:03 crc kubenswrapper[4896]: I0126 15:59:03.937940 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f4800c-3439-4de1-a142-90b4d84d4c03-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"e8f4800c-3439-4de1-a142-90b4d84d4c03\") " pod="openstack/mysqld-exporter-0" Jan 26 15:59:03 crc kubenswrapper[4896]: I0126 15:59:03.938009 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8f4800c-3439-4de1-a142-90b4d84d4c03-config-data\") pod \"mysqld-exporter-0\" (UID: \"e8f4800c-3439-4de1-a142-90b4d84d4c03\") " pod="openstack/mysqld-exporter-0" Jan 26 15:59:03 crc kubenswrapper[4896]: I0126 15:59:03.938043 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdjlw\" (UniqueName: \"kubernetes.io/projected/e8f4800c-3439-4de1-a142-90b4d84d4c03-kube-api-access-pdjlw\") pod \"mysqld-exporter-0\" (UID: \"e8f4800c-3439-4de1-a142-90b4d84d4c03\") " pod="openstack/mysqld-exporter-0" Jan 26 15:59:04 crc kubenswrapper[4896]: I0126 15:59:04.042603 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f4800c-3439-4de1-a142-90b4d84d4c03-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"e8f4800c-3439-4de1-a142-90b4d84d4c03\") " pod="openstack/mysqld-exporter-0" Jan 26 15:59:04 crc kubenswrapper[4896]: I0126 15:59:04.042716 4896 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8f4800c-3439-4de1-a142-90b4d84d4c03-config-data\") pod \"mysqld-exporter-0\" (UID: \"e8f4800c-3439-4de1-a142-90b4d84d4c03\") " pod="openstack/mysqld-exporter-0" Jan 26 15:59:04 crc kubenswrapper[4896]: I0126 15:59:04.042776 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdjlw\" (UniqueName: \"kubernetes.io/projected/e8f4800c-3439-4de1-a142-90b4d84d4c03-kube-api-access-pdjlw\") pod \"mysqld-exporter-0\" (UID: \"e8f4800c-3439-4de1-a142-90b4d84d4c03\") " pod="openstack/mysqld-exporter-0" Jan 26 15:59:04 crc kubenswrapper[4896]: I0126 15:59:04.053856 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8f4800c-3439-4de1-a142-90b4d84d4c03-config-data\") pod \"mysqld-exporter-0\" (UID: \"e8f4800c-3439-4de1-a142-90b4d84d4c03\") " pod="openstack/mysqld-exporter-0" Jan 26 15:59:04 crc kubenswrapper[4896]: I0126 15:59:04.068876 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f4800c-3439-4de1-a142-90b4d84d4c03-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"e8f4800c-3439-4de1-a142-90b4d84d4c03\") " pod="openstack/mysqld-exporter-0" Jan 26 15:59:04 crc kubenswrapper[4896]: I0126 15:59:04.276905 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdjlw\" (UniqueName: \"kubernetes.io/projected/e8f4800c-3439-4de1-a142-90b4d84d4c03-kube-api-access-pdjlw\") pod \"mysqld-exporter-0\" (UID: \"e8f4800c-3439-4de1-a142-90b4d84d4c03\") " pod="openstack/mysqld-exporter-0" Jan 26 15:59:04 crc kubenswrapper[4896]: I0126 15:59:04.460105 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 26 15:59:04 crc kubenswrapper[4896]: W0126 15:59:04.847337 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb92d4664_6322_4314_999d_ac8a9aea0bca.slice/crio-6b3731cd569f9c5845e1af0da48490d78d8c11926b8fa2eed5c645ed07cd50e8 WatchSource:0}: Error finding container 6b3731cd569f9c5845e1af0da48490d78d8c11926b8fa2eed5c645ed07cd50e8: Status 404 returned error can't find the container with id 6b3731cd569f9c5845e1af0da48490d78d8c11926b8fa2eed5c645ed07cd50e8 Jan 26 15:59:04 crc kubenswrapper[4896]: I0126 15:59:04.935759 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-bbppj" Jan 26 15:59:04 crc kubenswrapper[4896]: I0126 15:59:04.978571 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ff5abeb5-5a6e-48b2-920f-fb1a55c83023-etc-swift\") pod \"ff5abeb5-5a6e-48b2-920f-fb1a55c83023\" (UID: \"ff5abeb5-5a6e-48b2-920f-fb1a55c83023\") " Jan 26 15:59:04 crc kubenswrapper[4896]: I0126 15:59:04.978673 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff5abeb5-5a6e-48b2-920f-fb1a55c83023-combined-ca-bundle\") pod \"ff5abeb5-5a6e-48b2-920f-fb1a55c83023\" (UID: \"ff5abeb5-5a6e-48b2-920f-fb1a55c83023\") " Jan 26 15:59:04 crc kubenswrapper[4896]: I0126 15:59:04.978843 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ff5abeb5-5a6e-48b2-920f-fb1a55c83023-dispersionconf\") pod \"ff5abeb5-5a6e-48b2-920f-fb1a55c83023\" (UID: \"ff5abeb5-5a6e-48b2-920f-fb1a55c83023\") " Jan 26 15:59:04 crc kubenswrapper[4896]: I0126 15:59:04.978963 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"swiftconf\" (UniqueName: \"kubernetes.io/secret/ff5abeb5-5a6e-48b2-920f-fb1a55c83023-swiftconf\") pod \"ff5abeb5-5a6e-48b2-920f-fb1a55c83023\" (UID: \"ff5abeb5-5a6e-48b2-920f-fb1a55c83023\") " Jan 26 15:59:04 crc kubenswrapper[4896]: I0126 15:59:04.978997 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ff5abeb5-5a6e-48b2-920f-fb1a55c83023-ring-data-devices\") pod \"ff5abeb5-5a6e-48b2-920f-fb1a55c83023\" (UID: \"ff5abeb5-5a6e-48b2-920f-fb1a55c83023\") " Jan 26 15:59:04 crc kubenswrapper[4896]: I0126 15:59:04.979058 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mrm5\" (UniqueName: \"kubernetes.io/projected/ff5abeb5-5a6e-48b2-920f-fb1a55c83023-kube-api-access-4mrm5\") pod \"ff5abeb5-5a6e-48b2-920f-fb1a55c83023\" (UID: \"ff5abeb5-5a6e-48b2-920f-fb1a55c83023\") " Jan 26 15:59:04 crc kubenswrapper[4896]: I0126 15:59:04.979104 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ff5abeb5-5a6e-48b2-920f-fb1a55c83023-scripts\") pod \"ff5abeb5-5a6e-48b2-920f-fb1a55c83023\" (UID: \"ff5abeb5-5a6e-48b2-920f-fb1a55c83023\") " Jan 26 15:59:04 crc kubenswrapper[4896]: I0126 15:59:04.979568 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff5abeb5-5a6e-48b2-920f-fb1a55c83023-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "ff5abeb5-5a6e-48b2-920f-fb1a55c83023" (UID: "ff5abeb5-5a6e-48b2-920f-fb1a55c83023"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:59:04 crc kubenswrapper[4896]: I0126 15:59:04.980545 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff5abeb5-5a6e-48b2-920f-fb1a55c83023-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "ff5abeb5-5a6e-48b2-920f-fb1a55c83023" (UID: "ff5abeb5-5a6e-48b2-920f-fb1a55c83023"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:59:04 crc kubenswrapper[4896]: I0126 15:59:04.990333 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff5abeb5-5a6e-48b2-920f-fb1a55c83023-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "ff5abeb5-5a6e-48b2-920f-fb1a55c83023" (UID: "ff5abeb5-5a6e-48b2-920f-fb1a55c83023"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:59:04 crc kubenswrapper[4896]: I0126 15:59:04.990869 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff5abeb5-5a6e-48b2-920f-fb1a55c83023-kube-api-access-4mrm5" (OuterVolumeSpecName: "kube-api-access-4mrm5") pod "ff5abeb5-5a6e-48b2-920f-fb1a55c83023" (UID: "ff5abeb5-5a6e-48b2-920f-fb1a55c83023"). InnerVolumeSpecName "kube-api-access-4mrm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:59:05 crc kubenswrapper[4896]: I0126 15:59:05.030218 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff5abeb5-5a6e-48b2-920f-fb1a55c83023-scripts" (OuterVolumeSpecName: "scripts") pod "ff5abeb5-5a6e-48b2-920f-fb1a55c83023" (UID: "ff5abeb5-5a6e-48b2-920f-fb1a55c83023"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:59:05 crc kubenswrapper[4896]: I0126 15:59:05.036521 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff5abeb5-5a6e-48b2-920f-fb1a55c83023-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff5abeb5-5a6e-48b2-920f-fb1a55c83023" (UID: "ff5abeb5-5a6e-48b2-920f-fb1a55c83023"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:59:05 crc kubenswrapper[4896]: I0126 15:59:05.067157 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff5abeb5-5a6e-48b2-920f-fb1a55c83023-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "ff5abeb5-5a6e-48b2-920f-fb1a55c83023" (UID: "ff5abeb5-5a6e-48b2-920f-fb1a55c83023"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:59:05 crc kubenswrapper[4896]: I0126 15:59:05.081079 4896 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ff5abeb5-5a6e-48b2-920f-fb1a55c83023-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:05 crc kubenswrapper[4896]: I0126 15:59:05.081114 4896 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ff5abeb5-5a6e-48b2-920f-fb1a55c83023-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:05 crc kubenswrapper[4896]: I0126 15:59:05.081125 4896 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ff5abeb5-5a6e-48b2-920f-fb1a55c83023-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:05 crc kubenswrapper[4896]: I0126 15:59:05.081136 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mrm5\" (UniqueName: \"kubernetes.io/projected/ff5abeb5-5a6e-48b2-920f-fb1a55c83023-kube-api-access-4mrm5\") on node \"crc\" DevicePath 
\"\"" Jan 26 15:59:05 crc kubenswrapper[4896]: I0126 15:59:05.081185 4896 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ff5abeb5-5a6e-48b2-920f-fb1a55c83023-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:05 crc kubenswrapper[4896]: I0126 15:59:05.081194 4896 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ff5abeb5-5a6e-48b2-920f-fb1a55c83023-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:05 crc kubenswrapper[4896]: I0126 15:59:05.081201 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff5abeb5-5a6e-48b2-920f-fb1a55c83023-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:05 crc kubenswrapper[4896]: I0126 15:59:05.458208 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-c9bzf-config-jghvj" event={"ID":"b92d4664-6322-4314-999d-ac8a9aea0bca","Type":"ContainerStarted","Data":"6b3731cd569f9c5845e1af0da48490d78d8c11926b8fa2eed5c645ed07cd50e8"} Jan 26 15:59:05 crc kubenswrapper[4896]: I0126 15:59:05.461391 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2a70b903-4311-4bcf-833a-d9fdd2ab5d24","Type":"ContainerStarted","Data":"a9a918770416c56498d3b8e76b09711dd0ce6557786f7a028c2f04b1e732a161"} Jan 26 15:59:05 crc kubenswrapper[4896]: I0126 15:59:05.463572 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bbppj" event={"ID":"ff5abeb5-5a6e-48b2-920f-fb1a55c83023","Type":"ContainerDied","Data":"9f158b9568376d915f4a8de983d7d0075bbd6bffff182c8a2b9f7f3bc9c352d9"} Jan 26 15:59:05 crc kubenswrapper[4896]: I0126 15:59:05.463694 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f158b9568376d915f4a8de983d7d0075bbd6bffff182c8a2b9f7f3bc9c352d9" Jan 26 15:59:05 crc kubenswrapper[4896]: I0126 
15:59:05.463798 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-bbppj" Jan 26 15:59:05 crc kubenswrapper[4896]: W0126 15:59:05.631701 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8f4800c_3439_4de1_a142_90b4d84d4c03.slice/crio-5485815550eea6a6e9c366fd067d7192755dae5be87cc6af1854508be424e270 WatchSource:0}: Error finding container 5485815550eea6a6e9c366fd067d7192755dae5be87cc6af1854508be424e270: Status 404 returned error can't find the container with id 5485815550eea6a6e9c366fd067d7192755dae5be87cc6af1854508be424e270 Jan 26 15:59:05 crc kubenswrapper[4896]: I0126 15:59:05.660734 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 26 15:59:05 crc kubenswrapper[4896]: I0126 15:59:05.801392 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-7zwpk"] Jan 26 15:59:05 crc kubenswrapper[4896]: W0126 15:59:05.810247 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67ea1132_2cfb_4d7d_a5ec_708ad47e8178.slice/crio-308f6c624ab26427f6bb17ef4f3c65e167643b5d0a856629b77d62d7e941097f WatchSource:0}: Error finding container 308f6c624ab26427f6bb17ef4f3c65e167643b5d0a856629b77d62d7e941097f: Status 404 returned error can't find the container with id 308f6c624ab26427f6bb17ef4f3c65e167643b5d0a856629b77d62d7e941097f Jan 26 15:59:06 crc kubenswrapper[4896]: I0126 15:59:06.485978 4896 generic.go:334] "Generic (PLEG): container finished" podID="67ea1132-2cfb-4d7d-a5ec-708ad47e8178" containerID="6109b1ca1be84dd1a7ed064477ed1c7e0979801626481bc6d588547a167765ae" exitCode=0 Jan 26 15:59:06 crc kubenswrapper[4896]: I0126 15:59:06.486112 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7zwpk" 
event={"ID":"67ea1132-2cfb-4d7d-a5ec-708ad47e8178","Type":"ContainerDied","Data":"6109b1ca1be84dd1a7ed064477ed1c7e0979801626481bc6d588547a167765ae"} Jan 26 15:59:06 crc kubenswrapper[4896]: I0126 15:59:06.486372 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7zwpk" event={"ID":"67ea1132-2cfb-4d7d-a5ec-708ad47e8178","Type":"ContainerStarted","Data":"308f6c624ab26427f6bb17ef4f3c65e167643b5d0a856629b77d62d7e941097f"} Jan 26 15:59:06 crc kubenswrapper[4896]: I0126 15:59:06.500266 4896 generic.go:334] "Generic (PLEG): container finished" podID="b92d4664-6322-4314-999d-ac8a9aea0bca" containerID="92ab08f9f5e29b99ff4cd3c9fc48f23917b0e8ac72b8024ce91682624c9ae42a" exitCode=0 Jan 26 15:59:06 crc kubenswrapper[4896]: I0126 15:59:06.500383 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-c9bzf-config-jghvj" event={"ID":"b92d4664-6322-4314-999d-ac8a9aea0bca","Type":"ContainerDied","Data":"92ab08f9f5e29b99ff4cd3c9fc48f23917b0e8ac72b8024ce91682624c9ae42a"} Jan 26 15:59:06 crc kubenswrapper[4896]: I0126 15:59:06.504863 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"e8f4800c-3439-4de1-a142-90b4d84d4c03","Type":"ContainerStarted","Data":"5485815550eea6a6e9c366fd067d7192755dae5be87cc6af1854508be424e270"} Jan 26 15:59:07 crc kubenswrapper[4896]: I0126 15:59:07.249569 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-c9bzf" Jan 26 15:59:08 crc kubenswrapper[4896]: I0126 15:59:08.115114 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-c9bzf-config-jghvj" Jan 26 15:59:08 crc kubenswrapper[4896]: I0126 15:59:08.125960 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-7zwpk" Jan 26 15:59:08 crc kubenswrapper[4896]: I0126 15:59:08.200941 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pndmc\" (UniqueName: \"kubernetes.io/projected/b92d4664-6322-4314-999d-ac8a9aea0bca-kube-api-access-pndmc\") pod \"b92d4664-6322-4314-999d-ac8a9aea0bca\" (UID: \"b92d4664-6322-4314-999d-ac8a9aea0bca\") " Jan 26 15:59:08 crc kubenswrapper[4896]: I0126 15:59:08.201174 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lcqp\" (UniqueName: \"kubernetes.io/projected/67ea1132-2cfb-4d7d-a5ec-708ad47e8178-kube-api-access-7lcqp\") pod \"67ea1132-2cfb-4d7d-a5ec-708ad47e8178\" (UID: \"67ea1132-2cfb-4d7d-a5ec-708ad47e8178\") " Jan 26 15:59:08 crc kubenswrapper[4896]: I0126 15:59:08.201365 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b92d4664-6322-4314-999d-ac8a9aea0bca-var-log-ovn\") pod \"b92d4664-6322-4314-999d-ac8a9aea0bca\" (UID: \"b92d4664-6322-4314-999d-ac8a9aea0bca\") " Jan 26 15:59:08 crc kubenswrapper[4896]: I0126 15:59:08.201448 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b92d4664-6322-4314-999d-ac8a9aea0bca-additional-scripts\") pod \"b92d4664-6322-4314-999d-ac8a9aea0bca\" (UID: \"b92d4664-6322-4314-999d-ac8a9aea0bca\") " Jan 26 15:59:08 crc kubenswrapper[4896]: I0126 15:59:08.201479 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b92d4664-6322-4314-999d-ac8a9aea0bca-var-run\") pod \"b92d4664-6322-4314-999d-ac8a9aea0bca\" (UID: \"b92d4664-6322-4314-999d-ac8a9aea0bca\") " Jan 26 15:59:08 crc kubenswrapper[4896]: I0126 15:59:08.201498 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b92d4664-6322-4314-999d-ac8a9aea0bca-var-run-ovn\") pod \"b92d4664-6322-4314-999d-ac8a9aea0bca\" (UID: \"b92d4664-6322-4314-999d-ac8a9aea0bca\") " Jan 26 15:59:08 crc kubenswrapper[4896]: I0126 15:59:08.201591 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b92d4664-6322-4314-999d-ac8a9aea0bca-scripts\") pod \"b92d4664-6322-4314-999d-ac8a9aea0bca\" (UID: \"b92d4664-6322-4314-999d-ac8a9aea0bca\") " Jan 26 15:59:08 crc kubenswrapper[4896]: I0126 15:59:08.201618 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67ea1132-2cfb-4d7d-a5ec-708ad47e8178-operator-scripts\") pod \"67ea1132-2cfb-4d7d-a5ec-708ad47e8178\" (UID: \"67ea1132-2cfb-4d7d-a5ec-708ad47e8178\") " Jan 26 15:59:08 crc kubenswrapper[4896]: I0126 15:59:08.202797 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b92d4664-6322-4314-999d-ac8a9aea0bca-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "b92d4664-6322-4314-999d-ac8a9aea0bca" (UID: "b92d4664-6322-4314-999d-ac8a9aea0bca"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:59:08 crc kubenswrapper[4896]: I0126 15:59:08.202908 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b92d4664-6322-4314-999d-ac8a9aea0bca-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "b92d4664-6322-4314-999d-ac8a9aea0bca" (UID: "b92d4664-6322-4314-999d-ac8a9aea0bca"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:59:08 crc kubenswrapper[4896]: I0126 15:59:08.202819 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b92d4664-6322-4314-999d-ac8a9aea0bca-var-run" (OuterVolumeSpecName: "var-run") pod "b92d4664-6322-4314-999d-ac8a9aea0bca" (UID: "b92d4664-6322-4314-999d-ac8a9aea0bca"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:59:08 crc kubenswrapper[4896]: I0126 15:59:08.203678 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b92d4664-6322-4314-999d-ac8a9aea0bca-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "b92d4664-6322-4314-999d-ac8a9aea0bca" (UID: "b92d4664-6322-4314-999d-ac8a9aea0bca"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:59:08 crc kubenswrapper[4896]: I0126 15:59:08.203902 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b92d4664-6322-4314-999d-ac8a9aea0bca-scripts" (OuterVolumeSpecName: "scripts") pod "b92d4664-6322-4314-999d-ac8a9aea0bca" (UID: "b92d4664-6322-4314-999d-ac8a9aea0bca"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:59:08 crc kubenswrapper[4896]: I0126 15:59:08.203964 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67ea1132-2cfb-4d7d-a5ec-708ad47e8178-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "67ea1132-2cfb-4d7d-a5ec-708ad47e8178" (UID: "67ea1132-2cfb-4d7d-a5ec-708ad47e8178"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:59:08 crc kubenswrapper[4896]: I0126 15:59:08.204148 4896 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b92d4664-6322-4314-999d-ac8a9aea0bca-var-run\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:08 crc kubenswrapper[4896]: I0126 15:59:08.204201 4896 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b92d4664-6322-4314-999d-ac8a9aea0bca-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:08 crc kubenswrapper[4896]: I0126 15:59:08.204227 4896 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b92d4664-6322-4314-999d-ac8a9aea0bca-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:08 crc kubenswrapper[4896]: I0126 15:59:08.208915 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b92d4664-6322-4314-999d-ac8a9aea0bca-kube-api-access-pndmc" (OuterVolumeSpecName: "kube-api-access-pndmc") pod "b92d4664-6322-4314-999d-ac8a9aea0bca" (UID: "b92d4664-6322-4314-999d-ac8a9aea0bca"). InnerVolumeSpecName "kube-api-access-pndmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:59:08 crc kubenswrapper[4896]: I0126 15:59:08.209438 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67ea1132-2cfb-4d7d-a5ec-708ad47e8178-kube-api-access-7lcqp" (OuterVolumeSpecName: "kube-api-access-7lcqp") pod "67ea1132-2cfb-4d7d-a5ec-708ad47e8178" (UID: "67ea1132-2cfb-4d7d-a5ec-708ad47e8178"). InnerVolumeSpecName "kube-api-access-7lcqp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:59:08 crc kubenswrapper[4896]: I0126 15:59:08.306393 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pndmc\" (UniqueName: \"kubernetes.io/projected/b92d4664-6322-4314-999d-ac8a9aea0bca-kube-api-access-pndmc\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:08 crc kubenswrapper[4896]: I0126 15:59:08.306438 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lcqp\" (UniqueName: \"kubernetes.io/projected/67ea1132-2cfb-4d7d-a5ec-708ad47e8178-kube-api-access-7lcqp\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:08 crc kubenswrapper[4896]: I0126 15:59:08.306448 4896 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b92d4664-6322-4314-999d-ac8a9aea0bca-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:08 crc kubenswrapper[4896]: I0126 15:59:08.306457 4896 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b92d4664-6322-4314-999d-ac8a9aea0bca-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:08 crc kubenswrapper[4896]: I0126 15:59:08.306466 4896 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67ea1132-2cfb-4d7d-a5ec-708ad47e8178-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:08 crc kubenswrapper[4896]: I0126 15:59:08.536262 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"e8f4800c-3439-4de1-a142-90b4d84d4c03","Type":"ContainerStarted","Data":"7a59041a3d1ac57fddb5ad09ac75ab5363b5c968e5b1732f18f346000e26e57d"} Jan 26 15:59:08 crc kubenswrapper[4896]: I0126 15:59:08.538873 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-7zwpk" Jan 26 15:59:08 crc kubenswrapper[4896]: I0126 15:59:08.538880 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7zwpk" event={"ID":"67ea1132-2cfb-4d7d-a5ec-708ad47e8178","Type":"ContainerDied","Data":"308f6c624ab26427f6bb17ef4f3c65e167643b5d0a856629b77d62d7e941097f"} Jan 26 15:59:08 crc kubenswrapper[4896]: I0126 15:59:08.538921 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="308f6c624ab26427f6bb17ef4f3c65e167643b5d0a856629b77d62d7e941097f" Jan 26 15:59:08 crc kubenswrapper[4896]: I0126 15:59:08.556323 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-c9bzf-config-jghvj" event={"ID":"b92d4664-6322-4314-999d-ac8a9aea0bca","Type":"ContainerDied","Data":"6b3731cd569f9c5845e1af0da48490d78d8c11926b8fa2eed5c645ed07cd50e8"} Jan 26 15:59:08 crc kubenswrapper[4896]: I0126 15:59:08.556368 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b3731cd569f9c5845e1af0da48490d78d8c11926b8fa2eed5c645ed07cd50e8" Jan 26 15:59:08 crc kubenswrapper[4896]: I0126 15:59:08.556404 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-c9bzf-config-jghvj" Jan 26 15:59:08 crc kubenswrapper[4896]: I0126 15:59:08.574810 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=3.94519958 podStartE2EDuration="5.574781898s" podCreationTimestamp="2026-01-26 15:59:03 +0000 UTC" firstStartedPulling="2026-01-26 15:59:05.635592294 +0000 UTC m=+1503.417472687" lastFinishedPulling="2026-01-26 15:59:07.265174612 +0000 UTC m=+1505.047055005" observedRunningTime="2026-01-26 15:59:08.56436587 +0000 UTC m=+1506.346246273" watchObservedRunningTime="2026-01-26 15:59:08.574781898 +0000 UTC m=+1506.356662291" Jan 26 15:59:09 crc kubenswrapper[4896]: I0126 15:59:09.265195 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-c9bzf-config-jghvj"] Jan 26 15:59:09 crc kubenswrapper[4896]: I0126 15:59:09.282346 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-c9bzf-config-jghvj"] Jan 26 15:59:09 crc kubenswrapper[4896]: I0126 15:59:09.363430 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-c9bzf-config-tmh7l"] Jan 26 15:59:09 crc kubenswrapper[4896]: E0126 15:59:09.363943 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff5abeb5-5a6e-48b2-920f-fb1a55c83023" containerName="swift-ring-rebalance" Jan 26 15:59:09 crc kubenswrapper[4896]: I0126 15:59:09.363963 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff5abeb5-5a6e-48b2-920f-fb1a55c83023" containerName="swift-ring-rebalance" Jan 26 15:59:09 crc kubenswrapper[4896]: E0126 15:59:09.363975 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b92d4664-6322-4314-999d-ac8a9aea0bca" containerName="ovn-config" Jan 26 15:59:09 crc kubenswrapper[4896]: I0126 15:59:09.363981 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="b92d4664-6322-4314-999d-ac8a9aea0bca" containerName="ovn-config" Jan 26 
15:59:09 crc kubenswrapper[4896]: E0126 15:59:09.364010 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67ea1132-2cfb-4d7d-a5ec-708ad47e8178" containerName="mariadb-account-create-update" Jan 26 15:59:09 crc kubenswrapper[4896]: I0126 15:59:09.364018 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="67ea1132-2cfb-4d7d-a5ec-708ad47e8178" containerName="mariadb-account-create-update" Jan 26 15:59:09 crc kubenswrapper[4896]: I0126 15:59:09.364238 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff5abeb5-5a6e-48b2-920f-fb1a55c83023" containerName="swift-ring-rebalance" Jan 26 15:59:09 crc kubenswrapper[4896]: I0126 15:59:09.364264 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="67ea1132-2cfb-4d7d-a5ec-708ad47e8178" containerName="mariadb-account-create-update" Jan 26 15:59:09 crc kubenswrapper[4896]: I0126 15:59:09.364276 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="b92d4664-6322-4314-999d-ac8a9aea0bca" containerName="ovn-config" Jan 26 15:59:09 crc kubenswrapper[4896]: I0126 15:59:09.365115 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-c9bzf-config-tmh7l" Jan 26 15:59:09 crc kubenswrapper[4896]: I0126 15:59:09.367741 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 26 15:59:09 crc kubenswrapper[4896]: I0126 15:59:09.388815 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-7zwpk"] Jan 26 15:59:09 crc kubenswrapper[4896]: I0126 15:59:09.397129 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-c9bzf-config-tmh7l"] Jan 26 15:59:09 crc kubenswrapper[4896]: I0126 15:59:09.414649 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-7zwpk"] Jan 26 15:59:09 crc kubenswrapper[4896]: I0126 15:59:09.437900 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2d14ce12-054b-40b3-a86f-d811ac5878c3-var-log-ovn\") pod \"ovn-controller-c9bzf-config-tmh7l\" (UID: \"2d14ce12-054b-40b3-a86f-d811ac5878c3\") " pod="openstack/ovn-controller-c9bzf-config-tmh7l" Jan 26 15:59:09 crc kubenswrapper[4896]: I0126 15:59:09.438199 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhnvt\" (UniqueName: \"kubernetes.io/projected/2d14ce12-054b-40b3-a86f-d811ac5878c3-kube-api-access-vhnvt\") pod \"ovn-controller-c9bzf-config-tmh7l\" (UID: \"2d14ce12-054b-40b3-a86f-d811ac5878c3\") " pod="openstack/ovn-controller-c9bzf-config-tmh7l" Jan 26 15:59:09 crc kubenswrapper[4896]: I0126 15:59:09.438348 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2d14ce12-054b-40b3-a86f-d811ac5878c3-var-run\") pod \"ovn-controller-c9bzf-config-tmh7l\" (UID: \"2d14ce12-054b-40b3-a86f-d811ac5878c3\") " 
pod="openstack/ovn-controller-c9bzf-config-tmh7l" Jan 26 15:59:09 crc kubenswrapper[4896]: I0126 15:59:09.438561 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2d14ce12-054b-40b3-a86f-d811ac5878c3-var-run-ovn\") pod \"ovn-controller-c9bzf-config-tmh7l\" (UID: \"2d14ce12-054b-40b3-a86f-d811ac5878c3\") " pod="openstack/ovn-controller-c9bzf-config-tmh7l" Jan 26 15:59:09 crc kubenswrapper[4896]: I0126 15:59:09.438790 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d14ce12-054b-40b3-a86f-d811ac5878c3-scripts\") pod \"ovn-controller-c9bzf-config-tmh7l\" (UID: \"2d14ce12-054b-40b3-a86f-d811ac5878c3\") " pod="openstack/ovn-controller-c9bzf-config-tmh7l" Jan 26 15:59:09 crc kubenswrapper[4896]: I0126 15:59:09.438945 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2d14ce12-054b-40b3-a86f-d811ac5878c3-additional-scripts\") pod \"ovn-controller-c9bzf-config-tmh7l\" (UID: \"2d14ce12-054b-40b3-a86f-d811ac5878c3\") " pod="openstack/ovn-controller-c9bzf-config-tmh7l" Jan 26 15:59:09 crc kubenswrapper[4896]: I0126 15:59:09.540940 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2d14ce12-054b-40b3-a86f-d811ac5878c3-var-run-ovn\") pod \"ovn-controller-c9bzf-config-tmh7l\" (UID: \"2d14ce12-054b-40b3-a86f-d811ac5878c3\") " pod="openstack/ovn-controller-c9bzf-config-tmh7l" Jan 26 15:59:09 crc kubenswrapper[4896]: I0126 15:59:09.541353 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d14ce12-054b-40b3-a86f-d811ac5878c3-scripts\") pod \"ovn-controller-c9bzf-config-tmh7l\" (UID: 
\"2d14ce12-054b-40b3-a86f-d811ac5878c3\") " pod="openstack/ovn-controller-c9bzf-config-tmh7l" Jan 26 15:59:09 crc kubenswrapper[4896]: I0126 15:59:09.541397 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2d14ce12-054b-40b3-a86f-d811ac5878c3-var-run-ovn\") pod \"ovn-controller-c9bzf-config-tmh7l\" (UID: \"2d14ce12-054b-40b3-a86f-d811ac5878c3\") " pod="openstack/ovn-controller-c9bzf-config-tmh7l" Jan 26 15:59:09 crc kubenswrapper[4896]: I0126 15:59:09.541523 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2d14ce12-054b-40b3-a86f-d811ac5878c3-additional-scripts\") pod \"ovn-controller-c9bzf-config-tmh7l\" (UID: \"2d14ce12-054b-40b3-a86f-d811ac5878c3\") " pod="openstack/ovn-controller-c9bzf-config-tmh7l" Jan 26 15:59:09 crc kubenswrapper[4896]: I0126 15:59:09.541769 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2d14ce12-054b-40b3-a86f-d811ac5878c3-var-log-ovn\") pod \"ovn-controller-c9bzf-config-tmh7l\" (UID: \"2d14ce12-054b-40b3-a86f-d811ac5878c3\") " pod="openstack/ovn-controller-c9bzf-config-tmh7l" Jan 26 15:59:09 crc kubenswrapper[4896]: I0126 15:59:09.541900 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2d14ce12-054b-40b3-a86f-d811ac5878c3-var-log-ovn\") pod \"ovn-controller-c9bzf-config-tmh7l\" (UID: \"2d14ce12-054b-40b3-a86f-d811ac5878c3\") " pod="openstack/ovn-controller-c9bzf-config-tmh7l" Jan 26 15:59:09 crc kubenswrapper[4896]: I0126 15:59:09.542014 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhnvt\" (UniqueName: \"kubernetes.io/projected/2d14ce12-054b-40b3-a86f-d811ac5878c3-kube-api-access-vhnvt\") pod \"ovn-controller-c9bzf-config-tmh7l\" (UID: 
\"2d14ce12-054b-40b3-a86f-d811ac5878c3\") " pod="openstack/ovn-controller-c9bzf-config-tmh7l" Jan 26 15:59:09 crc kubenswrapper[4896]: I0126 15:59:09.542125 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2d14ce12-054b-40b3-a86f-d811ac5878c3-var-run\") pod \"ovn-controller-c9bzf-config-tmh7l\" (UID: \"2d14ce12-054b-40b3-a86f-d811ac5878c3\") " pod="openstack/ovn-controller-c9bzf-config-tmh7l" Jan 26 15:59:09 crc kubenswrapper[4896]: I0126 15:59:09.542218 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2d14ce12-054b-40b3-a86f-d811ac5878c3-var-run\") pod \"ovn-controller-c9bzf-config-tmh7l\" (UID: \"2d14ce12-054b-40b3-a86f-d811ac5878c3\") " pod="openstack/ovn-controller-c9bzf-config-tmh7l" Jan 26 15:59:09 crc kubenswrapper[4896]: I0126 15:59:09.542174 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2d14ce12-054b-40b3-a86f-d811ac5878c3-additional-scripts\") pod \"ovn-controller-c9bzf-config-tmh7l\" (UID: \"2d14ce12-054b-40b3-a86f-d811ac5878c3\") " pod="openstack/ovn-controller-c9bzf-config-tmh7l" Jan 26 15:59:09 crc kubenswrapper[4896]: I0126 15:59:09.548426 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d14ce12-054b-40b3-a86f-d811ac5878c3-scripts\") pod \"ovn-controller-c9bzf-config-tmh7l\" (UID: \"2d14ce12-054b-40b3-a86f-d811ac5878c3\") " pod="openstack/ovn-controller-c9bzf-config-tmh7l" Jan 26 15:59:09 crc kubenswrapper[4896]: I0126 15:59:09.595881 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhnvt\" (UniqueName: \"kubernetes.io/projected/2d14ce12-054b-40b3-a86f-d811ac5878c3-kube-api-access-vhnvt\") pod \"ovn-controller-c9bzf-config-tmh7l\" (UID: \"2d14ce12-054b-40b3-a86f-d811ac5878c3\") " 
pod="openstack/ovn-controller-c9bzf-config-tmh7l" Jan 26 15:59:09 crc kubenswrapper[4896]: I0126 15:59:09.689804 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-c9bzf-config-tmh7l" Jan 26 15:59:10 crc kubenswrapper[4896]: I0126 15:59:10.335423 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 26 15:59:10 crc kubenswrapper[4896]: I0126 15:59:10.782859 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67ea1132-2cfb-4d7d-a5ec-708ad47e8178" path="/var/lib/kubelet/pods/67ea1132-2cfb-4d7d-a5ec-708ad47e8178/volumes" Jan 26 15:59:10 crc kubenswrapper[4896]: I0126 15:59:10.783867 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b92d4664-6322-4314-999d-ac8a9aea0bca" path="/var/lib/kubelet/pods/b92d4664-6322-4314-999d-ac8a9aea0bca/volumes" Jan 26 15:59:12 crc kubenswrapper[4896]: I0126 15:59:12.993475 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-95bfw"] Jan 26 15:59:12 crc kubenswrapper[4896]: I0126 15:59:12.995328 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-95bfw" Jan 26 15:59:12 crc kubenswrapper[4896]: I0126 15:59:12.998511 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 26 15:59:13 crc kubenswrapper[4896]: I0126 15:59:13.015624 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-95bfw"] Jan 26 15:59:13 crc kubenswrapper[4896]: I0126 15:59:13.031452 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1cf10826-3fea-4169-8be2-e497422f2c85-operator-scripts\") pod \"root-account-create-update-95bfw\" (UID: \"1cf10826-3fea-4169-8be2-e497422f2c85\") " pod="openstack/root-account-create-update-95bfw" Jan 26 15:59:13 crc kubenswrapper[4896]: I0126 15:59:13.032062 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml2nh\" (UniqueName: \"kubernetes.io/projected/1cf10826-3fea-4169-8be2-e497422f2c85-kube-api-access-ml2nh\") pod \"root-account-create-update-95bfw\" (UID: \"1cf10826-3fea-4169-8be2-e497422f2c85\") " pod="openstack/root-account-create-update-95bfw" Jan 26 15:59:13 crc kubenswrapper[4896]: I0126 15:59:13.135069 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml2nh\" (UniqueName: \"kubernetes.io/projected/1cf10826-3fea-4169-8be2-e497422f2c85-kube-api-access-ml2nh\") pod \"root-account-create-update-95bfw\" (UID: \"1cf10826-3fea-4169-8be2-e497422f2c85\") " pod="openstack/root-account-create-update-95bfw" Jan 26 15:59:13 crc kubenswrapper[4896]: I0126 15:59:13.136269 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1cf10826-3fea-4169-8be2-e497422f2c85-operator-scripts\") pod \"root-account-create-update-95bfw\" (UID: 
\"1cf10826-3fea-4169-8be2-e497422f2c85\") " pod="openstack/root-account-create-update-95bfw" Jan 26 15:59:13 crc kubenswrapper[4896]: I0126 15:59:13.137160 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1cf10826-3fea-4169-8be2-e497422f2c85-operator-scripts\") pod \"root-account-create-update-95bfw\" (UID: \"1cf10826-3fea-4169-8be2-e497422f2c85\") " pod="openstack/root-account-create-update-95bfw" Jan 26 15:59:13 crc kubenswrapper[4896]: I0126 15:59:13.162296 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml2nh\" (UniqueName: \"kubernetes.io/projected/1cf10826-3fea-4169-8be2-e497422f2c85-kube-api-access-ml2nh\") pod \"root-account-create-update-95bfw\" (UID: \"1cf10826-3fea-4169-8be2-e497422f2c85\") " pod="openstack/root-account-create-update-95bfw" Jan 26 15:59:13 crc kubenswrapper[4896]: I0126 15:59:13.329116 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-95bfw" Jan 26 15:59:16 crc kubenswrapper[4896]: I0126 15:59:16.410089 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/56f3d7e7-114a-4790-ac11-1d5d191bdf40-etc-swift\") pod \"swift-storage-0\" (UID: \"56f3d7e7-114a-4790-ac11-1d5d191bdf40\") " pod="openstack/swift-storage-0" Jan 26 15:59:16 crc kubenswrapper[4896]: I0126 15:59:16.418314 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/56f3d7e7-114a-4790-ac11-1d5d191bdf40-etc-swift\") pod \"swift-storage-0\" (UID: \"56f3d7e7-114a-4790-ac11-1d5d191bdf40\") " pod="openstack/swift-storage-0" Jan 26 15:59:16 crc kubenswrapper[4896]: I0126 15:59:16.654493 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 26 15:59:17 crc kubenswrapper[4896]: I0126 15:59:17.909613 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="45b5821a-5c82-485e-ade4-f6de2aea62d7" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.128:5671: connect: connection refused" Jan 26 15:59:17 crc kubenswrapper[4896]: I0126 15:59:17.971304 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="dc8f497b-3dfe-4cfc-aac0-34145dd221ed" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Jan 26 15:59:17 crc kubenswrapper[4896]: I0126 15:59:17.984590 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="22577788-39b3-431e-9a18-7a15b8f66045" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Jan 26 15:59:18 crc kubenswrapper[4896]: I0126 15:59:18.106788 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:59:19 crc kubenswrapper[4896]: E0126 15:59:19.859046 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Jan 26 15:59:19 crc kubenswrapper[4896]: E0126 15:59:19.859240 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k52n8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-c6rlw_openstack(1ec7e263-3178-47c9-934b-7e0f4d72aec7): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Jan 26 15:59:19 crc kubenswrapper[4896]: E0126 15:59:19.861236 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-c6rlw" podUID="1ec7e263-3178-47c9-934b-7e0f4d72aec7" Jan 26 15:59:20 crc kubenswrapper[4896]: I0126 15:59:20.601111 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 26 15:59:20 crc kubenswrapper[4896]: W0126 15:59:20.609923 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56f3d7e7_114a_4790_ac11_1d5d191bdf40.slice/crio-d05375852d3bb24433efa1f803b3e31c2f7915114eacea8dec80004153c3be2e WatchSource:0}: Error finding container d05375852d3bb24433efa1f803b3e31c2f7915114eacea8dec80004153c3be2e: Status 404 returned error can't find the container with id d05375852d3bb24433efa1f803b3e31c2f7915114eacea8dec80004153c3be2e Jan 26 15:59:20 crc kubenswrapper[4896]: I0126 15:59:20.660857 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-95bfw"] Jan 26 15:59:20 crc kubenswrapper[4896]: I0126 15:59:20.672472 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-c9bzf-config-tmh7l"] Jan 26 15:59:20 crc kubenswrapper[4896]: I0126 15:59:20.699893 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-95bfw" event={"ID":"1cf10826-3fea-4169-8be2-e497422f2c85","Type":"ContainerStarted","Data":"d1b675a62ab438a2b772c9a9fc17c2aab219b5b6d8cb11762834e762e1cdfc94"} Jan 26 15:59:20 crc kubenswrapper[4896]: I0126 15:59:20.708986 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"2a70b903-4311-4bcf-833a-d9fdd2ab5d24","Type":"ContainerStarted","Data":"f09f5505c91c9d9d896b62cf5b1e9084a18205622f6e34c6acc7edb70056d0f2"} Jan 26 15:59:20 crc kubenswrapper[4896]: I0126 15:59:20.710840 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"56f3d7e7-114a-4790-ac11-1d5d191bdf40","Type":"ContainerStarted","Data":"d05375852d3bb24433efa1f803b3e31c2f7915114eacea8dec80004153c3be2e"} Jan 26 15:59:20 crc kubenswrapper[4896]: I0126 15:59:20.714707 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-c9bzf-config-tmh7l" event={"ID":"2d14ce12-054b-40b3-a86f-d811ac5878c3","Type":"ContainerStarted","Data":"115a0724cf1f12221c5942b1d6ab05da5ba4124c276ccc0e8137bdb3cf4542d3"} Jan 26 15:59:20 crc kubenswrapper[4896]: E0126 15:59:20.726514 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-c6rlw" podUID="1ec7e263-3178-47c9-934b-7e0f4d72aec7" Jan 26 15:59:20 crc kubenswrapper[4896]: I0126 15:59:20.742241 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=10.390430613 podStartE2EDuration="1m48.742215586s" podCreationTimestamp="2026-01-26 15:57:32 +0000 UTC" firstStartedPulling="2026-01-26 15:57:41.583479643 +0000 UTC m=+1419.365360036" lastFinishedPulling="2026-01-26 15:59:19.935264616 +0000 UTC m=+1517.717145009" observedRunningTime="2026-01-26 15:59:20.740911694 +0000 UTC m=+1518.522792087" watchObservedRunningTime="2026-01-26 15:59:20.742215586 +0000 UTC m=+1518.524095979" Jan 26 15:59:21 crc kubenswrapper[4896]: I0126 15:59:21.725980 4896 generic.go:334] "Generic (PLEG): container finished" podID="2d14ce12-054b-40b3-a86f-d811ac5878c3" 
containerID="29313532ea10dde256af2bec146c3f27f0789492e5d43c296daef00aab9a35b4" exitCode=0 Jan 26 15:59:21 crc kubenswrapper[4896]: I0126 15:59:21.726150 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-c9bzf-config-tmh7l" event={"ID":"2d14ce12-054b-40b3-a86f-d811ac5878c3","Type":"ContainerDied","Data":"29313532ea10dde256af2bec146c3f27f0789492e5d43c296daef00aab9a35b4"} Jan 26 15:59:21 crc kubenswrapper[4896]: I0126 15:59:21.730720 4896 generic.go:334] "Generic (PLEG): container finished" podID="1cf10826-3fea-4169-8be2-e497422f2c85" containerID="0b32630aaf2a4014472660407750908211d58c8b460a9e07dc6fea5a137366d0" exitCode=0 Jan 26 15:59:21 crc kubenswrapper[4896]: I0126 15:59:21.731932 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-95bfw" event={"ID":"1cf10826-3fea-4169-8be2-e497422f2c85","Type":"ContainerDied","Data":"0b32630aaf2a4014472660407750908211d58c8b460a9e07dc6fea5a137366d0"} Jan 26 15:59:22 crc kubenswrapper[4896]: I0126 15:59:22.742741 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"56f3d7e7-114a-4790-ac11-1d5d191bdf40","Type":"ContainerStarted","Data":"07e379c1fe2a5e17009904837639b600c54dec2340234649ac16df4d8a0d1f78"} Jan 26 15:59:22 crc kubenswrapper[4896]: I0126 15:59:22.743171 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"56f3d7e7-114a-4790-ac11-1d5d191bdf40","Type":"ContainerStarted","Data":"863a4f99b956a5f294cf4672cfcf50b43fc2ce3d4c4dcdf145a7c251d0904483"} Jan 26 15:59:22 crc kubenswrapper[4896]: I0126 15:59:22.743188 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"56f3d7e7-114a-4790-ac11-1d5d191bdf40","Type":"ContainerStarted","Data":"54a70724b3f4845475d922de5f569e2b1cfc2aa4ded78a7783e74c8b78df9fee"} Jan 26 15:59:22 crc kubenswrapper[4896]: I0126 15:59:22.743201 4896 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/swift-storage-0" event={"ID":"56f3d7e7-114a-4790-ac11-1d5d191bdf40","Type":"ContainerStarted","Data":"c494261b1b99a11793dcdd6eb0aad25dee51b631fea81d825d20b27b1f94bd4f"} Jan 26 15:59:23 crc kubenswrapper[4896]: I0126 15:59:23.270347 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-c9bzf-config-tmh7l" Jan 26 15:59:23 crc kubenswrapper[4896]: I0126 15:59:23.278392 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-95bfw" Jan 26 15:59:23 crc kubenswrapper[4896]: I0126 15:59:23.377036 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhnvt\" (UniqueName: \"kubernetes.io/projected/2d14ce12-054b-40b3-a86f-d811ac5878c3-kube-api-access-vhnvt\") pod \"2d14ce12-054b-40b3-a86f-d811ac5878c3\" (UID: \"2d14ce12-054b-40b3-a86f-d811ac5878c3\") " Jan 26 15:59:23 crc kubenswrapper[4896]: I0126 15:59:23.377197 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ml2nh\" (UniqueName: \"kubernetes.io/projected/1cf10826-3fea-4169-8be2-e497422f2c85-kube-api-access-ml2nh\") pod \"1cf10826-3fea-4169-8be2-e497422f2c85\" (UID: \"1cf10826-3fea-4169-8be2-e497422f2c85\") " Jan 26 15:59:23 crc kubenswrapper[4896]: I0126 15:59:23.377299 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2d14ce12-054b-40b3-a86f-d811ac5878c3-var-run-ovn\") pod \"2d14ce12-054b-40b3-a86f-d811ac5878c3\" (UID: \"2d14ce12-054b-40b3-a86f-d811ac5878c3\") " Jan 26 15:59:23 crc kubenswrapper[4896]: I0126 15:59:23.377402 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2d14ce12-054b-40b3-a86f-d811ac5878c3-additional-scripts\") pod \"2d14ce12-054b-40b3-a86f-d811ac5878c3\" (UID: 
\"2d14ce12-054b-40b3-a86f-d811ac5878c3\") " Jan 26 15:59:23 crc kubenswrapper[4896]: I0126 15:59:23.377484 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d14ce12-054b-40b3-a86f-d811ac5878c3-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "2d14ce12-054b-40b3-a86f-d811ac5878c3" (UID: "2d14ce12-054b-40b3-a86f-d811ac5878c3"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:59:23 crc kubenswrapper[4896]: I0126 15:59:23.377509 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2d14ce12-054b-40b3-a86f-d811ac5878c3-var-run\") pod \"2d14ce12-054b-40b3-a86f-d811ac5878c3\" (UID: \"2d14ce12-054b-40b3-a86f-d811ac5878c3\") " Jan 26 15:59:23 crc kubenswrapper[4896]: I0126 15:59:23.377566 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d14ce12-054b-40b3-a86f-d811ac5878c3-var-run" (OuterVolumeSpecName: "var-run") pod "2d14ce12-054b-40b3-a86f-d811ac5878c3" (UID: "2d14ce12-054b-40b3-a86f-d811ac5878c3"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:59:23 crc kubenswrapper[4896]: I0126 15:59:23.377706 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1cf10826-3fea-4169-8be2-e497422f2c85-operator-scripts\") pod \"1cf10826-3fea-4169-8be2-e497422f2c85\" (UID: \"1cf10826-3fea-4169-8be2-e497422f2c85\") " Jan 26 15:59:23 crc kubenswrapper[4896]: I0126 15:59:23.377798 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2d14ce12-054b-40b3-a86f-d811ac5878c3-var-log-ovn\") pod \"2d14ce12-054b-40b3-a86f-d811ac5878c3\" (UID: \"2d14ce12-054b-40b3-a86f-d811ac5878c3\") " Jan 26 15:59:23 crc kubenswrapper[4896]: I0126 15:59:23.377940 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d14ce12-054b-40b3-a86f-d811ac5878c3-scripts\") pod \"2d14ce12-054b-40b3-a86f-d811ac5878c3\" (UID: \"2d14ce12-054b-40b3-a86f-d811ac5878c3\") " Jan 26 15:59:23 crc kubenswrapper[4896]: I0126 15:59:23.377956 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d14ce12-054b-40b3-a86f-d811ac5878c3-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "2d14ce12-054b-40b3-a86f-d811ac5878c3" (UID: "2d14ce12-054b-40b3-a86f-d811ac5878c3"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:59:23 crc kubenswrapper[4896]: I0126 15:59:23.378318 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d14ce12-054b-40b3-a86f-d811ac5878c3-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "2d14ce12-054b-40b3-a86f-d811ac5878c3" (UID: "2d14ce12-054b-40b3-a86f-d811ac5878c3"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:59:23 crc kubenswrapper[4896]: I0126 15:59:23.378358 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cf10826-3fea-4169-8be2-e497422f2c85-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1cf10826-3fea-4169-8be2-e497422f2c85" (UID: "1cf10826-3fea-4169-8be2-e497422f2c85"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:59:23 crc kubenswrapper[4896]: I0126 15:59:23.378981 4896 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2d14ce12-054b-40b3-a86f-d811ac5878c3-var-run\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:23 crc kubenswrapper[4896]: I0126 15:59:23.379013 4896 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1cf10826-3fea-4169-8be2-e497422f2c85-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:23 crc kubenswrapper[4896]: I0126 15:59:23.379030 4896 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2d14ce12-054b-40b3-a86f-d811ac5878c3-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:23 crc kubenswrapper[4896]: I0126 15:59:23.379043 4896 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2d14ce12-054b-40b3-a86f-d811ac5878c3-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:23 crc kubenswrapper[4896]: I0126 15:59:23.379054 4896 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2d14ce12-054b-40b3-a86f-d811ac5878c3-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:23 crc kubenswrapper[4896]: I0126 15:59:23.379378 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/2d14ce12-054b-40b3-a86f-d811ac5878c3-scripts" (OuterVolumeSpecName: "scripts") pod "2d14ce12-054b-40b3-a86f-d811ac5878c3" (UID: "2d14ce12-054b-40b3-a86f-d811ac5878c3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:59:23 crc kubenswrapper[4896]: I0126 15:59:23.389972 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d14ce12-054b-40b3-a86f-d811ac5878c3-kube-api-access-vhnvt" (OuterVolumeSpecName: "kube-api-access-vhnvt") pod "2d14ce12-054b-40b3-a86f-d811ac5878c3" (UID: "2d14ce12-054b-40b3-a86f-d811ac5878c3"). InnerVolumeSpecName "kube-api-access-vhnvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:59:23 crc kubenswrapper[4896]: I0126 15:59:23.394016 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cf10826-3fea-4169-8be2-e497422f2c85-kube-api-access-ml2nh" (OuterVolumeSpecName: "kube-api-access-ml2nh") pod "1cf10826-3fea-4169-8be2-e497422f2c85" (UID: "1cf10826-3fea-4169-8be2-e497422f2c85"). InnerVolumeSpecName "kube-api-access-ml2nh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:59:23 crc kubenswrapper[4896]: I0126 15:59:23.481377 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhnvt\" (UniqueName: \"kubernetes.io/projected/2d14ce12-054b-40b3-a86f-d811ac5878c3-kube-api-access-vhnvt\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:23 crc kubenswrapper[4896]: I0126 15:59:23.481426 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ml2nh\" (UniqueName: \"kubernetes.io/projected/1cf10826-3fea-4169-8be2-e497422f2c85-kube-api-access-ml2nh\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:23 crc kubenswrapper[4896]: I0126 15:59:23.481442 4896 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d14ce12-054b-40b3-a86f-d811ac5878c3-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:23 crc kubenswrapper[4896]: I0126 15:59:23.768120 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-95bfw" event={"ID":"1cf10826-3fea-4169-8be2-e497422f2c85","Type":"ContainerDied","Data":"d1b675a62ab438a2b772c9a9fc17c2aab219b5b6d8cb11762834e762e1cdfc94"} Jan 26 15:59:23 crc kubenswrapper[4896]: I0126 15:59:23.769437 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1b675a62ab438a2b772c9a9fc17c2aab219b5b6d8cb11762834e762e1cdfc94" Jan 26 15:59:23 crc kubenswrapper[4896]: I0126 15:59:23.768159 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-95bfw" Jan 26 15:59:23 crc kubenswrapper[4896]: I0126 15:59:23.770959 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-c9bzf-config-tmh7l" event={"ID":"2d14ce12-054b-40b3-a86f-d811ac5878c3","Type":"ContainerDied","Data":"115a0724cf1f12221c5942b1d6ab05da5ba4124c276ccc0e8137bdb3cf4542d3"} Jan 26 15:59:23 crc kubenswrapper[4896]: I0126 15:59:23.770989 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="115a0724cf1f12221c5942b1d6ab05da5ba4124c276ccc0e8137bdb3cf4542d3" Jan 26 15:59:23 crc kubenswrapper[4896]: I0126 15:59:23.771025 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-c9bzf-config-tmh7l" Jan 26 15:59:24 crc kubenswrapper[4896]: I0126 15:59:24.366000 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-c9bzf-config-tmh7l"] Jan 26 15:59:24 crc kubenswrapper[4896]: I0126 15:59:24.378250 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-c9bzf-config-tmh7l"] Jan 26 15:59:24 crc kubenswrapper[4896]: I0126 15:59:24.391601 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-95bfw"] Jan 26 15:59:24 crc kubenswrapper[4896]: I0126 15:59:24.402900 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-95bfw"] Jan 26 15:59:24 crc kubenswrapper[4896]: I0126 15:59:24.499039 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:24 crc kubenswrapper[4896]: I0126 15:59:24.782751 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cf10826-3fea-4169-8be2-e497422f2c85" path="/var/lib/kubelet/pods/1cf10826-3fea-4169-8be2-e497422f2c85/volumes" Jan 26 15:59:24 crc kubenswrapper[4896]: I0126 15:59:24.786752 4896 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="2d14ce12-054b-40b3-a86f-d811ac5878c3" path="/var/lib/kubelet/pods/2d14ce12-054b-40b3-a86f-d811ac5878c3/volumes" Jan 26 15:59:24 crc kubenswrapper[4896]: I0126 15:59:24.800906 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"56f3d7e7-114a-4790-ac11-1d5d191bdf40","Type":"ContainerStarted","Data":"8e16c6c5f9d28101521506b74db8a64c4541741022180a43dafb36dd0e583cd8"} Jan 26 15:59:24 crc kubenswrapper[4896]: I0126 15:59:24.800953 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"56f3d7e7-114a-4790-ac11-1d5d191bdf40","Type":"ContainerStarted","Data":"a75a35132fe2d5f6e53b8f2ae53339a1d3ade3f6dbf63952c3337e86af205879"} Jan 26 15:59:24 crc kubenswrapper[4896]: I0126 15:59:24.800963 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"56f3d7e7-114a-4790-ac11-1d5d191bdf40","Type":"ContainerStarted","Data":"6104ae6cb5d7531e41b5f108e15f79ca2cdd3c5132999aa3335e5910b0fda24e"} Jan 26 15:59:24 crc kubenswrapper[4896]: I0126 15:59:24.800975 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"56f3d7e7-114a-4790-ac11-1d5d191bdf40","Type":"ContainerStarted","Data":"3339c261f42390073acffd69313e3575dd8b81bb42ad0cf4a95837970df3c2a9"} Jan 26 15:59:25 crc kubenswrapper[4896]: I0126 15:59:25.290453 4896 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod309b8de4-298f-4828-9197-b06d2a7ddcf9"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod309b8de4-298f-4828-9197-b06d2a7ddcf9] : Timed out while waiting for systemd to remove kubepods-besteffort-pod309b8de4_298f_4828_9197_b06d2a7ddcf9.slice" Jan 26 15:59:26 crc kubenswrapper[4896]: I0126 15:59:26.834370 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"56f3d7e7-114a-4790-ac11-1d5d191bdf40","Type":"ContainerStarted","Data":"ffa8961e34e511e08b2da35376d69e41c97c2d488fe2be6ca3281f36ed2069c5"} Jan 26 15:59:26 crc kubenswrapper[4896]: I0126 15:59:26.834945 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"56f3d7e7-114a-4790-ac11-1d5d191bdf40","Type":"ContainerStarted","Data":"186dfb0c66ba66dad77c1daf04aa6b5ec0f5130e3188f9277ecb0e94c6e927c8"} Jan 26 15:59:26 crc kubenswrapper[4896]: I0126 15:59:26.834957 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"56f3d7e7-114a-4790-ac11-1d5d191bdf40","Type":"ContainerStarted","Data":"5c17d408a276829c27fddd7096d36c0a7d355d51c88be4cf8e2887c36c6d4ad4"} Jan 26 15:59:27 crc kubenswrapper[4896]: I0126 15:59:27.841931 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"56f3d7e7-114a-4790-ac11-1d5d191bdf40","Type":"ContainerStarted","Data":"b5b146ff45e6bcb55bc7d2b881127a9d9d88abfdbe02050665b147ce56504893"} Jan 26 15:59:27 crc kubenswrapper[4896]: I0126 15:59:27.842289 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"56f3d7e7-114a-4790-ac11-1d5d191bdf40","Type":"ContainerStarted","Data":"058e98ace9e9152cf9bc228d267102c3c442304d716f09c7fa0c896203064f6b"} Jan 26 15:59:27 crc kubenswrapper[4896]: I0126 15:59:27.911126 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 26 15:59:27 crc kubenswrapper[4896]: I0126 15:59:27.974513 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Jan 26 15:59:27 crc kubenswrapper[4896]: I0126 15:59:27.986286 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.031131 4896 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/root-account-create-update-ck257"]
Jan 26 15:59:28 crc kubenswrapper[4896]: E0126 15:59:28.031695 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cf10826-3fea-4169-8be2-e497422f2c85" containerName="mariadb-account-create-update"
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.031716 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cf10826-3fea-4169-8be2-e497422f2c85" containerName="mariadb-account-create-update"
Jan 26 15:59:28 crc kubenswrapper[4896]: E0126 15:59:28.031773 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d14ce12-054b-40b3-a86f-d811ac5878c3" containerName="ovn-config"
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.031782 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d14ce12-054b-40b3-a86f-d811ac5878c3" containerName="ovn-config"
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.032023 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cf10826-3fea-4169-8be2-e497422f2c85" containerName="mariadb-account-create-update"
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.032061 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d14ce12-054b-40b3-a86f-d811ac5878c3" containerName="ovn-config"
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.032971 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ck257"
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.035057 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.074567 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-ck257"]
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.094401 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1be6fec9-0b86-44e6-b6fa-38cda329656a-operator-scripts\") pod \"root-account-create-update-ck257\" (UID: \"1be6fec9-0b86-44e6-b6fa-38cda329656a\") " pod="openstack/root-account-create-update-ck257"
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.094463 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbjvj\" (UniqueName: \"kubernetes.io/projected/1be6fec9-0b86-44e6-b6fa-38cda329656a-kube-api-access-jbjvj\") pod \"root-account-create-update-ck257\" (UID: \"1be6fec9-0b86-44e6-b6fa-38cda329656a\") " pod="openstack/root-account-create-update-ck257"
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.204347 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1be6fec9-0b86-44e6-b6fa-38cda329656a-operator-scripts\") pod \"root-account-create-update-ck257\" (UID: \"1be6fec9-0b86-44e6-b6fa-38cda329656a\") " pod="openstack/root-account-create-update-ck257"
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.204417 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbjvj\" (UniqueName: \"kubernetes.io/projected/1be6fec9-0b86-44e6-b6fa-38cda329656a-kube-api-access-jbjvj\") pod \"root-account-create-update-ck257\" (UID: \"1be6fec9-0b86-44e6-b6fa-38cda329656a\") " pod="openstack/root-account-create-update-ck257"
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.214904 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1be6fec9-0b86-44e6-b6fa-38cda329656a-operator-scripts\") pod \"root-account-create-update-ck257\" (UID: \"1be6fec9-0b86-44e6-b6fa-38cda329656a\") " pod="openstack/root-account-create-update-ck257"
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.261300 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbjvj\" (UniqueName: \"kubernetes.io/projected/1be6fec9-0b86-44e6-b6fa-38cda329656a-kube-api-access-jbjvj\") pod \"root-account-create-update-ck257\" (UID: \"1be6fec9-0b86-44e6-b6fa-38cda329656a\") " pod="openstack/root-account-create-update-ck257"
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.368453 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ck257"
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.503168 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-ztrfr"]
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.504750 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-ztrfr"
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.531086 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-ztrfr"]
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.583029 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-61e0-account-create-update-5kcft"]
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.584664 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-61e0-account-create-update-5kcft"
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.596089 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.620956 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbglk\" (UniqueName: \"kubernetes.io/projected/db906256-70e6-4b0b-b691-9e8958a9ae3b-kube-api-access-pbglk\") pod \"barbican-db-create-ztrfr\" (UID: \"db906256-70e6-4b0b-b691-9e8958a9ae3b\") " pod="openstack/barbican-db-create-ztrfr"
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.621092 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7a69683-ae94-4df8-bb58-83dd08d62052-operator-scripts\") pod \"barbican-61e0-account-create-update-5kcft\" (UID: \"e7a69683-ae94-4df8-bb58-83dd08d62052\") " pod="openstack/barbican-61e0-account-create-update-5kcft"
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.621237 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phs46\" (UniqueName: \"kubernetes.io/projected/e7a69683-ae94-4df8-bb58-83dd08d62052-kube-api-access-phs46\") pod \"barbican-61e0-account-create-update-5kcft\" (UID: \"e7a69683-ae94-4df8-bb58-83dd08d62052\") " pod="openstack/barbican-61e0-account-create-update-5kcft"
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.621273 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db906256-70e6-4b0b-b691-9e8958a9ae3b-operator-scripts\") pod \"barbican-db-create-ztrfr\" (UID: \"db906256-70e6-4b0b-b691-9e8958a9ae3b\") " pod="openstack/barbican-db-create-ztrfr"
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.688947 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-61e0-account-create-update-5kcft"]
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.716747 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-ksf8v"]
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.722912 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-ksf8v"
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.723142 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phs46\" (UniqueName: \"kubernetes.io/projected/e7a69683-ae94-4df8-bb58-83dd08d62052-kube-api-access-phs46\") pod \"barbican-61e0-account-create-update-5kcft\" (UID: \"e7a69683-ae94-4df8-bb58-83dd08d62052\") " pod="openstack/barbican-61e0-account-create-update-5kcft"
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.723190 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db906256-70e6-4b0b-b691-9e8958a9ae3b-operator-scripts\") pod \"barbican-db-create-ztrfr\" (UID: \"db906256-70e6-4b0b-b691-9e8958a9ae3b\") " pod="openstack/barbican-db-create-ztrfr"
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.723276 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbglk\" (UniqueName: \"kubernetes.io/projected/db906256-70e6-4b0b-b691-9e8958a9ae3b-kube-api-access-pbglk\") pod \"barbican-db-create-ztrfr\" (UID: \"db906256-70e6-4b0b-b691-9e8958a9ae3b\") " pod="openstack/barbican-db-create-ztrfr"
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.723383 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7a69683-ae94-4df8-bb58-83dd08d62052-operator-scripts\") pod \"barbican-61e0-account-create-update-5kcft\" (UID: \"e7a69683-ae94-4df8-bb58-83dd08d62052\") " pod="openstack/barbican-61e0-account-create-update-5kcft"
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.724762 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db906256-70e6-4b0b-b691-9e8958a9ae3b-operator-scripts\") pod \"barbican-db-create-ztrfr\" (UID: \"db906256-70e6-4b0b-b691-9e8958a9ae3b\") " pod="openstack/barbican-db-create-ztrfr"
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.735531 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7a69683-ae94-4df8-bb58-83dd08d62052-operator-scripts\") pod \"barbican-61e0-account-create-update-5kcft\" (UID: \"e7a69683-ae94-4df8-bb58-83dd08d62052\") " pod="openstack/barbican-61e0-account-create-update-5kcft"
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.751797 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-ksf8v"]
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.753137 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbglk\" (UniqueName: \"kubernetes.io/projected/db906256-70e6-4b0b-b691-9e8958a9ae3b-kube-api-access-pbglk\") pod \"barbican-db-create-ztrfr\" (UID: \"db906256-70e6-4b0b-b691-9e8958a9ae3b\") " pod="openstack/barbican-db-create-ztrfr"
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.759246 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phs46\" (UniqueName: \"kubernetes.io/projected/e7a69683-ae94-4df8-bb58-83dd08d62052-kube-api-access-phs46\") pod \"barbican-61e0-account-create-update-5kcft\" (UID: \"e7a69683-ae94-4df8-bb58-83dd08d62052\") " pod="openstack/barbican-61e0-account-create-update-5kcft"
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.827914 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l78qx\" (UniqueName: \"kubernetes.io/projected/faa2e119-517e-47d8-b26e-424f2de5df1f-kube-api-access-l78qx\") pod \"heat-db-create-ksf8v\" (UID: \"faa2e119-517e-47d8-b26e-424f2de5df1f\") " pod="openstack/heat-db-create-ksf8v"
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.828061 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/faa2e119-517e-47d8-b26e-424f2de5df1f-operator-scripts\") pod \"heat-db-create-ksf8v\" (UID: \"faa2e119-517e-47d8-b26e-424f2de5df1f\") " pod="openstack/heat-db-create-ksf8v"
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.839536 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-ztrfr"
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.844231 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-f638-account-create-update-7nblt"]
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.845822 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f638-account-create-update-7nblt"
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.853738 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.873379 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-f638-account-create-update-7nblt"]
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.932069 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jx47\" (UniqueName: \"kubernetes.io/projected/f16b1b94-6103-4f26-b71f-070ea624c017-kube-api-access-4jx47\") pod \"cinder-f638-account-create-update-7nblt\" (UID: \"f16b1b94-6103-4f26-b71f-070ea624c017\") " pod="openstack/cinder-f638-account-create-update-7nblt"
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.932115 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l78qx\" (UniqueName: \"kubernetes.io/projected/faa2e119-517e-47d8-b26e-424f2de5df1f-kube-api-access-l78qx\") pod \"heat-db-create-ksf8v\" (UID: \"faa2e119-517e-47d8-b26e-424f2de5df1f\") " pod="openstack/heat-db-create-ksf8v"
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.932333 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f16b1b94-6103-4f26-b71f-070ea624c017-operator-scripts\") pod \"cinder-f638-account-create-update-7nblt\" (UID: \"f16b1b94-6103-4f26-b71f-070ea624c017\") " pod="openstack/cinder-f638-account-create-update-7nblt"
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.932421 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/faa2e119-517e-47d8-b26e-424f2de5df1f-operator-scripts\") pod \"heat-db-create-ksf8v\" (UID: \"faa2e119-517e-47d8-b26e-424f2de5df1f\") " pod="openstack/heat-db-create-ksf8v"
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.933213 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/faa2e119-517e-47d8-b26e-424f2de5df1f-operator-scripts\") pod \"heat-db-create-ksf8v\" (UID: \"faa2e119-517e-47d8-b26e-424f2de5df1f\") " pod="openstack/heat-db-create-ksf8v"
Jan 26 15:59:28 crc kubenswrapper[4896]: I0126 15:59:28.935002 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-61e0-account-create-update-5kcft"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:28.983683 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l78qx\" (UniqueName: \"kubernetes.io/projected/faa2e119-517e-47d8-b26e-424f2de5df1f-kube-api-access-l78qx\") pod \"heat-db-create-ksf8v\" (UID: \"faa2e119-517e-47d8-b26e-424f2de5df1f\") " pod="openstack/heat-db-create-ksf8v"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:28.992841 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"56f3d7e7-114a-4790-ac11-1d5d191bdf40","Type":"ContainerStarted","Data":"8f5bd715cb68ae859c544a5c28ba7df65c40c53c0781739a1eafbec80417c4e4"}
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:28.992894 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"56f3d7e7-114a-4790-ac11-1d5d191bdf40","Type":"ContainerStarted","Data":"c6618c01936e0bc39ecb6f2ca890432d1922a3f257a91d2163836c55f2f292ea"}
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:28.999848 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-jnwlw"]
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.002684 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-jnwlw"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.046252 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jx47\" (UniqueName: \"kubernetes.io/projected/f16b1b94-6103-4f26-b71f-070ea624c017-kube-api-access-4jx47\") pod \"cinder-f638-account-create-update-7nblt\" (UID: \"f16b1b94-6103-4f26-b71f-070ea624c017\") " pod="openstack/cinder-f638-account-create-update-7nblt"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.046422 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f16b1b94-6103-4f26-b71f-070ea624c017-operator-scripts\") pod \"cinder-f638-account-create-update-7nblt\" (UID: \"f16b1b94-6103-4f26-b71f-070ea624c017\") " pod="openstack/cinder-f638-account-create-update-7nblt"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.049324 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f16b1b94-6103-4f26-b71f-070ea624c017-operator-scripts\") pod \"cinder-f638-account-create-update-7nblt\" (UID: \"f16b1b94-6103-4f26-b71f-070ea624c017\") " pod="openstack/cinder-f638-account-create-update-7nblt"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.095685 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-6czxg"]
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.105698 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-6czxg"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.123594 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.124182 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jx47\" (UniqueName: \"kubernetes.io/projected/f16b1b94-6103-4f26-b71f-070ea624c017-kube-api-access-4jx47\") pod \"cinder-f638-account-create-update-7nblt\" (UID: \"f16b1b94-6103-4f26-b71f-070ea624c017\") " pod="openstack/cinder-f638-account-create-update-7nblt"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.133298 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.141214 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-ksf8v"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.146160 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.151143 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abe41022-a923-4089-9328-25839bc6bc7e-operator-scripts\") pod \"cinder-db-create-jnwlw\" (UID: \"abe41022-a923-4089-9328-25839bc6bc7e\") " pod="openstack/cinder-db-create-jnwlw"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.151249 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7gvp\" (UniqueName: \"kubernetes.io/projected/abe41022-a923-4089-9328-25839bc6bc7e-kube-api-access-p7gvp\") pod \"cinder-db-create-jnwlw\" (UID: \"abe41022-a923-4089-9328-25839bc6bc7e\") " pod="openstack/cinder-db-create-jnwlw"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.156785 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-jnwlw"]
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.180415 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5fkw6"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.198041 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f638-account-create-update-7nblt"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.201915 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-6czxg"]
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.253161 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abe41022-a923-4089-9328-25839bc6bc7e-operator-scripts\") pod \"cinder-db-create-jnwlw\" (UID: \"abe41022-a923-4089-9328-25839bc6bc7e\") " pod="openstack/cinder-db-create-jnwlw"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.253276 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a9be81-a6ab-4b04-9796-2f923678d8a9-combined-ca-bundle\") pod \"keystone-db-sync-6czxg\" (UID: \"38a9be81-a6ab-4b04-9796-2f923678d8a9\") " pod="openstack/keystone-db-sync-6czxg"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.253319 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7gvp\" (UniqueName: \"kubernetes.io/projected/abe41022-a923-4089-9328-25839bc6bc7e-kube-api-access-p7gvp\") pod \"cinder-db-create-jnwlw\" (UID: \"abe41022-a923-4089-9328-25839bc6bc7e\") " pod="openstack/cinder-db-create-jnwlw"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.253360 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38a9be81-a6ab-4b04-9796-2f923678d8a9-config-data\") pod \"keystone-db-sync-6czxg\" (UID: \"38a9be81-a6ab-4b04-9796-2f923678d8a9\") " pod="openstack/keystone-db-sync-6czxg"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.253400 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx9mr\" (UniqueName: \"kubernetes.io/projected/38a9be81-a6ab-4b04-9796-2f923678d8a9-kube-api-access-cx9mr\") pod \"keystone-db-sync-6czxg\" (UID: \"38a9be81-a6ab-4b04-9796-2f923678d8a9\") " pod="openstack/keystone-db-sync-6czxg"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.254563 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abe41022-a923-4089-9328-25839bc6bc7e-operator-scripts\") pod \"cinder-db-create-jnwlw\" (UID: \"abe41022-a923-4089-9328-25839bc6bc7e\") " pod="openstack/cinder-db-create-jnwlw"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.282719 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-jbq9k"]
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.284495 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jbq9k"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.337633 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-6f58-account-create-update-w9947"]
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.339171 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-6f58-account-create-update-w9947"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.352624 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7gvp\" (UniqueName: \"kubernetes.io/projected/abe41022-a923-4089-9328-25839bc6bc7e-kube-api-access-p7gvp\") pod \"cinder-db-create-jnwlw\" (UID: \"abe41022-a923-4089-9328-25839bc6bc7e\") " pod="openstack/cinder-db-create-jnwlw"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.356618 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.357883 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a9be81-a6ab-4b04-9796-2f923678d8a9-combined-ca-bundle\") pod \"keystone-db-sync-6czxg\" (UID: \"38a9be81-a6ab-4b04-9796-2f923678d8a9\") " pod="openstack/keystone-db-sync-6czxg"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.357924 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38a9be81-a6ab-4b04-9796-2f923678d8a9-config-data\") pod \"keystone-db-sync-6czxg\" (UID: \"38a9be81-a6ab-4b04-9796-2f923678d8a9\") " pod="openstack/keystone-db-sync-6czxg"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.357951 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx9mr\" (UniqueName: \"kubernetes.io/projected/38a9be81-a6ab-4b04-9796-2f923678d8a9-kube-api-access-cx9mr\") pod \"keystone-db-sync-6czxg\" (UID: \"38a9be81-a6ab-4b04-9796-2f923678d8a9\") " pod="openstack/keystone-db-sync-6czxg"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.363299 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a9be81-a6ab-4b04-9796-2f923678d8a9-combined-ca-bundle\") pod \"keystone-db-sync-6czxg\" (UID: \"38a9be81-a6ab-4b04-9796-2f923678d8a9\") " pod="openstack/keystone-db-sync-6czxg"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.369264 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38a9be81-a6ab-4b04-9796-2f923678d8a9-config-data\") pod \"keystone-db-sync-6czxg\" (UID: \"38a9be81-a6ab-4b04-9796-2f923678d8a9\") " pod="openstack/keystone-db-sync-6czxg"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.400406 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-jbq9k"]
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.401650 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-jnwlw"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.452218 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-6f58-account-create-update-w9947"]
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.460260 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vblg2\" (UniqueName: \"kubernetes.io/projected/cfa68ec3-9d3b-4584-a25c-e7682bfda2f9-kube-api-access-vblg2\") pod \"neutron-db-create-jbq9k\" (UID: \"cfa68ec3-9d3b-4584-a25c-e7682bfda2f9\") " pod="openstack/neutron-db-create-jbq9k"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.460377 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9bf55ae-99a9-403d-951d-51085bd87019-operator-scripts\") pod \"heat-6f58-account-create-update-w9947\" (UID: \"c9bf55ae-99a9-403d-951d-51085bd87019\") " pod="openstack/heat-6f58-account-create-update-w9947"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.460417 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qznx\" (UniqueName: \"kubernetes.io/projected/c9bf55ae-99a9-403d-951d-51085bd87019-kube-api-access-2qznx\") pod \"heat-6f58-account-create-update-w9947\" (UID: \"c9bf55ae-99a9-403d-951d-51085bd87019\") " pod="openstack/heat-6f58-account-create-update-w9947"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.460478 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfa68ec3-9d3b-4584-a25c-e7682bfda2f9-operator-scripts\") pod \"neutron-db-create-jbq9k\" (UID: \"cfa68ec3-9d3b-4584-a25c-e7682bfda2f9\") " pod="openstack/neutron-db-create-jbq9k"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.480536 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx9mr\" (UniqueName: \"kubernetes.io/projected/38a9be81-a6ab-4b04-9796-2f923678d8a9-kube-api-access-cx9mr\") pod \"keystone-db-sync-6czxg\" (UID: \"38a9be81-a6ab-4b04-9796-2f923678d8a9\") " pod="openstack/keystone-db-sync-6czxg"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.496261 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=42.219798408 podStartE2EDuration="47.496238468s" podCreationTimestamp="2026-01-26 15:58:42 +0000 UTC" firstStartedPulling="2026-01-26 15:59:20.613079427 +0000 UTC m=+1518.394959820" lastFinishedPulling="2026-01-26 15:59:25.889519487 +0000 UTC m=+1523.671399880" observedRunningTime="2026-01-26 15:59:29.070316829 +0000 UTC m=+1526.852197222" watchObservedRunningTime="2026-01-26 15:59:29.496238468 +0000 UTC m=+1527.278118861"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.572516 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vblg2\" (UniqueName: \"kubernetes.io/projected/cfa68ec3-9d3b-4584-a25c-e7682bfda2f9-kube-api-access-vblg2\") pod \"neutron-db-create-jbq9k\" (UID: \"cfa68ec3-9d3b-4584-a25c-e7682bfda2f9\") " pod="openstack/neutron-db-create-jbq9k"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.572804 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9bf55ae-99a9-403d-951d-51085bd87019-operator-scripts\") pod \"heat-6f58-account-create-update-w9947\" (UID: \"c9bf55ae-99a9-403d-951d-51085bd87019\") " pod="openstack/heat-6f58-account-create-update-w9947"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.572923 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qznx\" (UniqueName: \"kubernetes.io/projected/c9bf55ae-99a9-403d-951d-51085bd87019-kube-api-access-2qznx\") pod \"heat-6f58-account-create-update-w9947\" (UID: \"c9bf55ae-99a9-403d-951d-51085bd87019\") " pod="openstack/heat-6f58-account-create-update-w9947"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.573140 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfa68ec3-9d3b-4584-a25c-e7682bfda2f9-operator-scripts\") pod \"neutron-db-create-jbq9k\" (UID: \"cfa68ec3-9d3b-4584-a25c-e7682bfda2f9\") " pod="openstack/neutron-db-create-jbq9k"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.584357 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfa68ec3-9d3b-4584-a25c-e7682bfda2f9-operator-scripts\") pod \"neutron-db-create-jbq9k\" (UID: \"cfa68ec3-9d3b-4584-a25c-e7682bfda2f9\") " pod="openstack/neutron-db-create-jbq9k"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.585057 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9bf55ae-99a9-403d-951d-51085bd87019-operator-scripts\") pod \"heat-6f58-account-create-update-w9947\" (UID: \"c9bf55ae-99a9-403d-951d-51085bd87019\") " pod="openstack/heat-6f58-account-create-update-w9947"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.667183 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vblg2\" (UniqueName: \"kubernetes.io/projected/cfa68ec3-9d3b-4584-a25c-e7682bfda2f9-kube-api-access-vblg2\") pod \"neutron-db-create-jbq9k\" (UID: \"cfa68ec3-9d3b-4584-a25c-e7682bfda2f9\") " pod="openstack/neutron-db-create-jbq9k"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.678256 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-6czxg"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.702047 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-ck257"]
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.740687 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jbq9k"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.776562 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qznx\" (UniqueName: \"kubernetes.io/projected/c9bf55ae-99a9-403d-951d-51085bd87019-kube-api-access-2qznx\") pod \"heat-6f58-account-create-update-w9947\" (UID: \"c9bf55ae-99a9-403d-951d-51085bd87019\") " pod="openstack/heat-6f58-account-create-update-w9947"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.862819 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-fbf0-account-create-update-2mczz"]
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.864829 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-fbf0-account-create-update-2mczz"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.867807 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.888863 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/627735ca-7a3a-436c-a3fe-5634fc742384-operator-scripts\") pod \"neutron-fbf0-account-create-update-2mczz\" (UID: \"627735ca-7a3a-436c-a3fe-5634fc742384\") " pod="openstack/neutron-fbf0-account-create-update-2mczz"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.888954 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqzrz\" (UniqueName: \"kubernetes.io/projected/627735ca-7a3a-436c-a3fe-5634fc742384-kube-api-access-gqzrz\") pod \"neutron-fbf0-account-create-update-2mczz\" (UID: \"627735ca-7a3a-436c-a3fe-5634fc742384\") " pod="openstack/neutron-fbf0-account-create-update-2mczz"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.981668 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-fbf0-account-create-update-2mczz"]
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.993940 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/627735ca-7a3a-436c-a3fe-5634fc742384-operator-scripts\") pod \"neutron-fbf0-account-create-update-2mczz\" (UID: \"627735ca-7a3a-436c-a3fe-5634fc742384\") " pod="openstack/neutron-fbf0-account-create-update-2mczz"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.994045 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqzrz\" (UniqueName: \"kubernetes.io/projected/627735ca-7a3a-436c-a3fe-5634fc742384-kube-api-access-gqzrz\") pod \"neutron-fbf0-account-create-update-2mczz\" (UID: \"627735ca-7a3a-436c-a3fe-5634fc742384\") " pod="openstack/neutron-fbf0-account-create-update-2mczz"
Jan 26 15:59:29 crc kubenswrapper[4896]: I0126 15:59:29.995184 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/627735ca-7a3a-436c-a3fe-5634fc742384-operator-scripts\") pod \"neutron-fbf0-account-create-update-2mczz\" (UID: \"627735ca-7a3a-436c-a3fe-5634fc742384\") " pod="openstack/neutron-fbf0-account-create-update-2mczz"
Jan 26 15:59:30 crc kubenswrapper[4896]: I0126 15:59:30.022704 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-ztrfr"]
Jan 26 15:59:30 crc kubenswrapper[4896]: I0126 15:59:30.061312 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-6f58-account-create-update-w9947"
Jan 26 15:59:30 crc kubenswrapper[4896]: I0126 15:59:30.075422 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqzrz\" (UniqueName: \"kubernetes.io/projected/627735ca-7a3a-436c-a3fe-5634fc742384-kube-api-access-gqzrz\") pod \"neutron-fbf0-account-create-update-2mczz\" (UID: \"627735ca-7a3a-436c-a3fe-5634fc742384\") " pod="openstack/neutron-fbf0-account-create-update-2mczz"
Jan 26 15:59:30 crc kubenswrapper[4896]: I0126 15:59:30.092317 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ck257" event={"ID":"1be6fec9-0b86-44e6-b6fa-38cda329656a","Type":"ContainerStarted","Data":"4cd37daf9a5a1965357d245701cf624e9134ff8893e0a6b7002eec15c9a6b4d3"}
Jan 26 15:59:30 crc kubenswrapper[4896]: I0126 15:59:30.092369 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ck257" event={"ID":"1be6fec9-0b86-44e6-b6fa-38cda329656a","Type":"ContainerStarted","Data":"0b1ee6b500a1a21e97313ac4822244cc0e7c676c2e5898c4a0592cbd562b87a9"}
Jan 26 15:59:30 crc kubenswrapper[4896]: I0126 15:59:30.116544 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-mjx6f"]
Jan 26 15:59:30 crc kubenswrapper[4896]: I0126 15:59:30.135807 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-ztrfr" event={"ID":"db906256-70e6-4b0b-b691-9e8958a9ae3b","Type":"ContainerStarted","Data":"45bcaed10d322ae36647db5b68c588678c520733e1b6633cb1a1fd4d0d33c3fe"}
Jan 26 15:59:30 crc kubenswrapper[4896]: I0126 15:59:30.136003 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-mjx6f"
Jan 26 15:59:30 crc kubenswrapper[4896]: I0126 15:59:30.141770 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Jan 26 15:59:30 crc kubenswrapper[4896]: I0126 15:59:30.197755 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-mjx6f"]
Jan 26 15:59:30 crc kubenswrapper[4896]: I0126 15:59:30.198665 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-ck257" podStartSLOduration=3.198650127 podStartE2EDuration="3.198650127s" podCreationTimestamp="2026-01-26 15:59:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:59:30.135224301 +0000 UTC m=+1527.917104704" watchObservedRunningTime="2026-01-26 15:59:30.198650127 +0000 UTC m=+1527.980530520"
Jan 26 15:59:30 crc kubenswrapper[4896]: I0126 15:59:30.233633 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-fbf0-account-create-update-2mczz"
Jan 26 15:59:30 crc kubenswrapper[4896]: I0126 15:59:30.309142 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-61e0-account-create-update-5kcft"]
Jan 26 15:59:30 crc kubenswrapper[4896]: I0126 15:59:30.314751 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2286da81-a82e-4026-a39f-1c712480bc2d-config\") pod \"dnsmasq-dns-764c5664d7-mjx6f\" (UID: \"2286da81-a82e-4026-a39f-1c712480bc2d\") " pod="openstack/dnsmasq-dns-764c5664d7-mjx6f"
Jan 26 15:59:30 crc kubenswrapper[4896]: I0126 15:59:30.314803 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2286da81-a82e-4026-a39f-1c712480bc2d-dns-svc\") pod \"dnsmasq-dns-764c5664d7-mjx6f\" (UID: \"2286da81-a82e-4026-a39f-1c712480bc2d\") " pod="openstack/dnsmasq-dns-764c5664d7-mjx6f"
Jan 26 15:59:30 crc kubenswrapper[4896]: I0126 15:59:30.314850 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl4fw\" (UniqueName: \"kubernetes.io/projected/2286da81-a82e-4026-a39f-1c712480bc2d-kube-api-access-gl4fw\") pod \"dnsmasq-dns-764c5664d7-mjx6f\" (UID: \"2286da81-a82e-4026-a39f-1c712480bc2d\") " pod="openstack/dnsmasq-dns-764c5664d7-mjx6f"
Jan 26 15:59:30 crc kubenswrapper[4896]: I0126 15:59:30.314922 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2286da81-a82e-4026-a39f-1c712480bc2d-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-mjx6f\" (UID: \"2286da81-a82e-4026-a39f-1c712480bc2d\") " pod="openstack/dnsmasq-dns-764c5664d7-mjx6f"
Jan 26 15:59:30 crc kubenswrapper[4896]: I0126 15:59:30.315076 4896 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2286da81-a82e-4026-a39f-1c712480bc2d-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-mjx6f\" (UID: \"2286da81-a82e-4026-a39f-1c712480bc2d\") " pod="openstack/dnsmasq-dns-764c5664d7-mjx6f" Jan 26 15:59:30 crc kubenswrapper[4896]: I0126 15:59:30.315400 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2286da81-a82e-4026-a39f-1c712480bc2d-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-mjx6f\" (UID: \"2286da81-a82e-4026-a39f-1c712480bc2d\") " pod="openstack/dnsmasq-dns-764c5664d7-mjx6f" Jan 26 15:59:30 crc kubenswrapper[4896]: I0126 15:59:30.417278 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2286da81-a82e-4026-a39f-1c712480bc2d-config\") pod \"dnsmasq-dns-764c5664d7-mjx6f\" (UID: \"2286da81-a82e-4026-a39f-1c712480bc2d\") " pod="openstack/dnsmasq-dns-764c5664d7-mjx6f" Jan 26 15:59:30 crc kubenswrapper[4896]: I0126 15:59:30.417342 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2286da81-a82e-4026-a39f-1c712480bc2d-dns-svc\") pod \"dnsmasq-dns-764c5664d7-mjx6f\" (UID: \"2286da81-a82e-4026-a39f-1c712480bc2d\") " pod="openstack/dnsmasq-dns-764c5664d7-mjx6f" Jan 26 15:59:30 crc kubenswrapper[4896]: I0126 15:59:30.417374 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl4fw\" (UniqueName: \"kubernetes.io/projected/2286da81-a82e-4026-a39f-1c712480bc2d-kube-api-access-gl4fw\") pod \"dnsmasq-dns-764c5664d7-mjx6f\" (UID: \"2286da81-a82e-4026-a39f-1c712480bc2d\") " pod="openstack/dnsmasq-dns-764c5664d7-mjx6f" Jan 26 15:59:30 crc kubenswrapper[4896]: I0126 15:59:30.417428 4896 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2286da81-a82e-4026-a39f-1c712480bc2d-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-mjx6f\" (UID: \"2286da81-a82e-4026-a39f-1c712480bc2d\") " pod="openstack/dnsmasq-dns-764c5664d7-mjx6f" Jan 26 15:59:30 crc kubenswrapper[4896]: I0126 15:59:30.417451 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2286da81-a82e-4026-a39f-1c712480bc2d-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-mjx6f\" (UID: \"2286da81-a82e-4026-a39f-1c712480bc2d\") " pod="openstack/dnsmasq-dns-764c5664d7-mjx6f" Jan 26 15:59:30 crc kubenswrapper[4896]: I0126 15:59:30.417540 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2286da81-a82e-4026-a39f-1c712480bc2d-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-mjx6f\" (UID: \"2286da81-a82e-4026-a39f-1c712480bc2d\") " pod="openstack/dnsmasq-dns-764c5664d7-mjx6f" Jan 26 15:59:30 crc kubenswrapper[4896]: I0126 15:59:30.422532 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2286da81-a82e-4026-a39f-1c712480bc2d-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-mjx6f\" (UID: \"2286da81-a82e-4026-a39f-1c712480bc2d\") " pod="openstack/dnsmasq-dns-764c5664d7-mjx6f" Jan 26 15:59:30 crc kubenswrapper[4896]: I0126 15:59:30.422967 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2286da81-a82e-4026-a39f-1c712480bc2d-dns-svc\") pod \"dnsmasq-dns-764c5664d7-mjx6f\" (UID: \"2286da81-a82e-4026-a39f-1c712480bc2d\") " pod="openstack/dnsmasq-dns-764c5664d7-mjx6f" Jan 26 15:59:30 crc kubenswrapper[4896]: I0126 15:59:30.423132 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/2286da81-a82e-4026-a39f-1c712480bc2d-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-mjx6f\" (UID: \"2286da81-a82e-4026-a39f-1c712480bc2d\") " pod="openstack/dnsmasq-dns-764c5664d7-mjx6f" Jan 26 15:59:30 crc kubenswrapper[4896]: I0126 15:59:30.424238 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2286da81-a82e-4026-a39f-1c712480bc2d-config\") pod \"dnsmasq-dns-764c5664d7-mjx6f\" (UID: \"2286da81-a82e-4026-a39f-1c712480bc2d\") " pod="openstack/dnsmasq-dns-764c5664d7-mjx6f" Jan 26 15:59:30 crc kubenswrapper[4896]: I0126 15:59:30.424247 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2286da81-a82e-4026-a39f-1c712480bc2d-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-mjx6f\" (UID: \"2286da81-a82e-4026-a39f-1c712480bc2d\") " pod="openstack/dnsmasq-dns-764c5664d7-mjx6f" Jan 26 15:59:30 crc kubenswrapper[4896]: I0126 15:59:30.458939 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl4fw\" (UniqueName: \"kubernetes.io/projected/2286da81-a82e-4026-a39f-1c712480bc2d-kube-api-access-gl4fw\") pod \"dnsmasq-dns-764c5664d7-mjx6f\" (UID: \"2286da81-a82e-4026-a39f-1c712480bc2d\") " pod="openstack/dnsmasq-dns-764c5664d7-mjx6f" Jan 26 15:59:30 crc kubenswrapper[4896]: I0126 15:59:30.520960 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-ksf8v"] Jan 26 15:59:30 crc kubenswrapper[4896]: I0126 15:59:30.540207 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-jnwlw"] Jan 26 15:59:30 crc kubenswrapper[4896]: W0126 15:59:30.684437 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabe41022_a923_4089_9328_25839bc6bc7e.slice/crio-0b02345beb09567ad54efb0f5dab6240e115c25ba2c68c342229aa68bf802745 
WatchSource:0}: Error finding container 0b02345beb09567ad54efb0f5dab6240e115c25ba2c68c342229aa68bf802745: Status 404 returned error can't find the container with id 0b02345beb09567ad54efb0f5dab6240e115c25ba2c68c342229aa68bf802745 Jan 26 15:59:30 crc kubenswrapper[4896]: I0126 15:59:30.741978 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-f638-account-create-update-7nblt"] Jan 26 15:59:30 crc kubenswrapper[4896]: I0126 15:59:30.764343 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-mjx6f" Jan 26 15:59:30 crc kubenswrapper[4896]: W0126 15:59:30.771984 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf16b1b94_6103_4f26_b71f_070ea624c017.slice/crio-70af83b1579958407ddca2f68d33701494b0d9814d74303c7974436d1fdaaa9d WatchSource:0}: Error finding container 70af83b1579958407ddca2f68d33701494b0d9814d74303c7974436d1fdaaa9d: Status 404 returned error can't find the container with id 70af83b1579958407ddca2f68d33701494b0d9814d74303c7974436d1fdaaa9d Jan 26 15:59:30 crc kubenswrapper[4896]: I0126 15:59:30.894336 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-6czxg"] Jan 26 15:59:30 crc kubenswrapper[4896]: I0126 15:59:30.915193 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-jbq9k"] Jan 26 15:59:31 crc kubenswrapper[4896]: I0126 15:59:31.169667 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-jnwlw" event={"ID":"abe41022-a923-4089-9328-25839bc6bc7e","Type":"ContainerStarted","Data":"0b02345beb09567ad54efb0f5dab6240e115c25ba2c68c342229aa68bf802745"} Jan 26 15:59:31 crc kubenswrapper[4896]: I0126 15:59:31.180774 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jbq9k" 
event={"ID":"cfa68ec3-9d3b-4584-a25c-e7682bfda2f9","Type":"ContainerStarted","Data":"fd0bdfda6a9a4de59b8c002f3ba51fd864021b8fe877bc63a07a01c8de5b14dd"} Jan 26 15:59:31 crc kubenswrapper[4896]: I0126 15:59:31.189284 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-6f58-account-create-update-w9947"] Jan 26 15:59:31 crc kubenswrapper[4896]: I0126 15:59:31.197222 4896 generic.go:334] "Generic (PLEG): container finished" podID="1be6fec9-0b86-44e6-b6fa-38cda329656a" containerID="4cd37daf9a5a1965357d245701cf624e9134ff8893e0a6b7002eec15c9a6b4d3" exitCode=0 Jan 26 15:59:31 crc kubenswrapper[4896]: I0126 15:59:31.197693 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ck257" event={"ID":"1be6fec9-0b86-44e6-b6fa-38cda329656a","Type":"ContainerDied","Data":"4cd37daf9a5a1965357d245701cf624e9134ff8893e0a6b7002eec15c9a6b4d3"} Jan 26 15:59:31 crc kubenswrapper[4896]: I0126 15:59:31.230056 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-61e0-account-create-update-5kcft" event={"ID":"e7a69683-ae94-4df8-bb58-83dd08d62052","Type":"ContainerStarted","Data":"4a528bd9055ebac26e422c4a86e30be4c686cd01aeb5836d4073582ead139cd2"} Jan 26 15:59:31 crc kubenswrapper[4896]: I0126 15:59:31.230791 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-61e0-account-create-update-5kcft" event={"ID":"e7a69683-ae94-4df8-bb58-83dd08d62052","Type":"ContainerStarted","Data":"93376b9152cddbdbd7c0f92b717d4c646e844c8fb867928811dbaf765f9af1b0"} Jan 26 15:59:31 crc kubenswrapper[4896]: I0126 15:59:31.244456 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f638-account-create-update-7nblt" event={"ID":"f16b1b94-6103-4f26-b71f-070ea624c017","Type":"ContainerStarted","Data":"70af83b1579958407ddca2f68d33701494b0d9814d74303c7974436d1fdaaa9d"} Jan 26 15:59:31 crc kubenswrapper[4896]: I0126 15:59:31.265223 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-db-create-ztrfr" event={"ID":"db906256-70e6-4b0b-b691-9e8958a9ae3b","Type":"ContainerStarted","Data":"2972d6ce907361b279f028ba3a878b8accb0cc285b8b2c611e86d88dc9e62f4b"} Jan 26 15:59:31 crc kubenswrapper[4896]: I0126 15:59:31.272766 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6czxg" event={"ID":"38a9be81-a6ab-4b04-9796-2f923678d8a9","Type":"ContainerStarted","Data":"1770bf4fc2eeee2c5a45b66ffd9146c7832e8d45aaa462f3d2ac9a676d4edc71"} Jan 26 15:59:31 crc kubenswrapper[4896]: I0126 15:59:31.297519 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-ksf8v" event={"ID":"faa2e119-517e-47d8-b26e-424f2de5df1f","Type":"ContainerStarted","Data":"7790521603ff29c1ffa05e3b69545f7bd759e09d41b17d2142ec9ed3f326aaa1"} Jan 26 15:59:31 crc kubenswrapper[4896]: I0126 15:59:31.317743 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-61e0-account-create-update-5kcft" podStartSLOduration=3.317718016 podStartE2EDuration="3.317718016s" podCreationTimestamp="2026-01-26 15:59:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:59:31.288191387 +0000 UTC m=+1529.070071780" watchObservedRunningTime="2026-01-26 15:59:31.317718016 +0000 UTC m=+1529.099598409" Jan 26 15:59:31 crc kubenswrapper[4896]: I0126 15:59:31.346836 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-ztrfr" podStartSLOduration=3.346806245 podStartE2EDuration="3.346806245s" podCreationTimestamp="2026-01-26 15:59:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:59:31.315467551 +0000 UTC m=+1529.097347944" watchObservedRunningTime="2026-01-26 15:59:31.346806245 +0000 UTC m=+1529.128686638" Jan 26 15:59:31 crc kubenswrapper[4896]: 
I0126 15:59:31.372718 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-fbf0-account-create-update-2mczz"] Jan 26 15:59:31 crc kubenswrapper[4896]: I0126 15:59:31.375287 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-ksf8v" podStartSLOduration=3.375259427 podStartE2EDuration="3.375259427s" podCreationTimestamp="2026-01-26 15:59:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:59:31.340100579 +0000 UTC m=+1529.121980982" watchObservedRunningTime="2026-01-26 15:59:31.375259427 +0000 UTC m=+1529.157139820" Jan 26 15:59:31 crc kubenswrapper[4896]: W0126 15:59:31.415032 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod627735ca_7a3a_436c_a3fe_5634fc742384.slice/crio-89fd0d1ad51198d282d5d578f8b068c18654f8913b675cacb61f495359dcfbe8 WatchSource:0}: Error finding container 89fd0d1ad51198d282d5d578f8b068c18654f8913b675cacb61f495359dcfbe8: Status 404 returned error can't find the container with id 89fd0d1ad51198d282d5d578f8b068c18654f8913b675cacb61f495359dcfbe8 Jan 26 15:59:31 crc kubenswrapper[4896]: I0126 15:59:31.616306 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-mjx6f"] Jan 26 15:59:32 crc kubenswrapper[4896]: I0126 15:59:32.310695 4896 generic.go:334] "Generic (PLEG): container finished" podID="627735ca-7a3a-436c-a3fe-5634fc742384" containerID="431899ec495623069ca44f5644f7f47296835832fee090b890fb0e63f1501d4b" exitCode=0 Jan 26 15:59:32 crc kubenswrapper[4896]: I0126 15:59:32.310807 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-fbf0-account-create-update-2mczz" event={"ID":"627735ca-7a3a-436c-a3fe-5634fc742384","Type":"ContainerDied","Data":"431899ec495623069ca44f5644f7f47296835832fee090b890fb0e63f1501d4b"} Jan 26 15:59:32 
crc kubenswrapper[4896]: I0126 15:59:32.311222 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-fbf0-account-create-update-2mczz" event={"ID":"627735ca-7a3a-436c-a3fe-5634fc742384","Type":"ContainerStarted","Data":"89fd0d1ad51198d282d5d578f8b068c18654f8913b675cacb61f495359dcfbe8"} Jan 26 15:59:32 crc kubenswrapper[4896]: I0126 15:59:32.315292 4896 generic.go:334] "Generic (PLEG): container finished" podID="cfa68ec3-9d3b-4584-a25c-e7682bfda2f9" containerID="3fe2ff3e6b0a09ebfc8a38214bfe86e38a06b884bec01a19e545752c281c9bca" exitCode=0 Jan 26 15:59:32 crc kubenswrapper[4896]: I0126 15:59:32.315380 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jbq9k" event={"ID":"cfa68ec3-9d3b-4584-a25c-e7682bfda2f9","Type":"ContainerDied","Data":"3fe2ff3e6b0a09ebfc8a38214bfe86e38a06b884bec01a19e545752c281c9bca"} Jan 26 15:59:32 crc kubenswrapper[4896]: I0126 15:59:32.326268 4896 generic.go:334] "Generic (PLEG): container finished" podID="2286da81-a82e-4026-a39f-1c712480bc2d" containerID="a056e8d3200f05a106e01eba09d053cb58238e8614fae8ebc24b94d3d2d3ea5f" exitCode=0 Jan 26 15:59:32 crc kubenswrapper[4896]: I0126 15:59:32.326500 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-mjx6f" event={"ID":"2286da81-a82e-4026-a39f-1c712480bc2d","Type":"ContainerDied","Data":"a056e8d3200f05a106e01eba09d053cb58238e8614fae8ebc24b94d3d2d3ea5f"} Jan 26 15:59:32 crc kubenswrapper[4896]: I0126 15:59:32.326614 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-mjx6f" event={"ID":"2286da81-a82e-4026-a39f-1c712480bc2d","Type":"ContainerStarted","Data":"ec544b94e368561de4cf20ec21f4fbf7f4193149cf376d24676afccfe00e6930"} Jan 26 15:59:32 crc kubenswrapper[4896]: I0126 15:59:32.336619 4896 generic.go:334] "Generic (PLEG): container finished" podID="e7a69683-ae94-4df8-bb58-83dd08d62052" 
containerID="4a528bd9055ebac26e422c4a86e30be4c686cd01aeb5836d4073582ead139cd2" exitCode=0 Jan 26 15:59:32 crc kubenswrapper[4896]: I0126 15:59:32.336724 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-61e0-account-create-update-5kcft" event={"ID":"e7a69683-ae94-4df8-bb58-83dd08d62052","Type":"ContainerDied","Data":"4a528bd9055ebac26e422c4a86e30be4c686cd01aeb5836d4073582ead139cd2"} Jan 26 15:59:32 crc kubenswrapper[4896]: I0126 15:59:32.339432 4896 generic.go:334] "Generic (PLEG): container finished" podID="db906256-70e6-4b0b-b691-9e8958a9ae3b" containerID="2972d6ce907361b279f028ba3a878b8accb0cc285b8b2c611e86d88dc9e62f4b" exitCode=0 Jan 26 15:59:32 crc kubenswrapper[4896]: I0126 15:59:32.339540 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-ztrfr" event={"ID":"db906256-70e6-4b0b-b691-9e8958a9ae3b","Type":"ContainerDied","Data":"2972d6ce907361b279f028ba3a878b8accb0cc285b8b2c611e86d88dc9e62f4b"} Jan 26 15:59:32 crc kubenswrapper[4896]: I0126 15:59:32.341521 4896 generic.go:334] "Generic (PLEG): container finished" podID="abe41022-a923-4089-9328-25839bc6bc7e" containerID="a5cc9de5d83c164b47a5ceef5ba83a2d89c1f2967304f9aa3d3d82617f1ec216" exitCode=0 Jan 26 15:59:32 crc kubenswrapper[4896]: I0126 15:59:32.341723 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-jnwlw" event={"ID":"abe41022-a923-4089-9328-25839bc6bc7e","Type":"ContainerDied","Data":"a5cc9de5d83c164b47a5ceef5ba83a2d89c1f2967304f9aa3d3d82617f1ec216"} Jan 26 15:59:32 crc kubenswrapper[4896]: I0126 15:59:32.345557 4896 generic.go:334] "Generic (PLEG): container finished" podID="faa2e119-517e-47d8-b26e-424f2de5df1f" containerID="9f90d6b559368109402b7718334fdea7cd556bdd1c16efdd3300877f2013ef0f" exitCode=0 Jan 26 15:59:32 crc kubenswrapper[4896]: I0126 15:59:32.345786 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-ksf8v" 
event={"ID":"faa2e119-517e-47d8-b26e-424f2de5df1f","Type":"ContainerDied","Data":"9f90d6b559368109402b7718334fdea7cd556bdd1c16efdd3300877f2013ef0f"} Jan 26 15:59:32 crc kubenswrapper[4896]: I0126 15:59:32.349787 4896 generic.go:334] "Generic (PLEG): container finished" podID="c9bf55ae-99a9-403d-951d-51085bd87019" containerID="42df16647bf0d946d84ec5c636b589b9b01e7d2e18399b94d2dbcf604e1123f6" exitCode=0 Jan 26 15:59:32 crc kubenswrapper[4896]: I0126 15:59:32.349882 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-6f58-account-create-update-w9947" event={"ID":"c9bf55ae-99a9-403d-951d-51085bd87019","Type":"ContainerDied","Data":"42df16647bf0d946d84ec5c636b589b9b01e7d2e18399b94d2dbcf604e1123f6"} Jan 26 15:59:32 crc kubenswrapper[4896]: I0126 15:59:32.349915 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-6f58-account-create-update-w9947" event={"ID":"c9bf55ae-99a9-403d-951d-51085bd87019","Type":"ContainerStarted","Data":"b6b14bcb4df01f67ff91d25c4d58ece1164e8009968a570b947990a7e71cdd27"} Jan 26 15:59:32 crc kubenswrapper[4896]: I0126 15:59:32.355369 4896 generic.go:334] "Generic (PLEG): container finished" podID="f16b1b94-6103-4f26-b71f-070ea624c017" containerID="84df3adbbd9a03cc9493b1b302f0ba68c90f5a46e3976b640396850cac6157f9" exitCode=0 Jan 26 15:59:32 crc kubenswrapper[4896]: I0126 15:59:32.355902 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f638-account-create-update-7nblt" event={"ID":"f16b1b94-6103-4f26-b71f-070ea624c017","Type":"ContainerDied","Data":"84df3adbbd9a03cc9493b1b302f0ba68c90f5a46e3976b640396850cac6157f9"} Jan 26 15:59:32 crc kubenswrapper[4896]: I0126 15:59:32.817330 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-ck257" Jan 26 15:59:32 crc kubenswrapper[4896]: I0126 15:59:32.923524 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbjvj\" (UniqueName: \"kubernetes.io/projected/1be6fec9-0b86-44e6-b6fa-38cda329656a-kube-api-access-jbjvj\") pod \"1be6fec9-0b86-44e6-b6fa-38cda329656a\" (UID: \"1be6fec9-0b86-44e6-b6fa-38cda329656a\") " Jan 26 15:59:32 crc kubenswrapper[4896]: I0126 15:59:32.923691 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1be6fec9-0b86-44e6-b6fa-38cda329656a-operator-scripts\") pod \"1be6fec9-0b86-44e6-b6fa-38cda329656a\" (UID: \"1be6fec9-0b86-44e6-b6fa-38cda329656a\") " Jan 26 15:59:32 crc kubenswrapper[4896]: I0126 15:59:32.924628 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1be6fec9-0b86-44e6-b6fa-38cda329656a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1be6fec9-0b86-44e6-b6fa-38cda329656a" (UID: "1be6fec9-0b86-44e6-b6fa-38cda329656a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:59:32 crc kubenswrapper[4896]: I0126 15:59:32.928354 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1be6fec9-0b86-44e6-b6fa-38cda329656a-kube-api-access-jbjvj" (OuterVolumeSpecName: "kube-api-access-jbjvj") pod "1be6fec9-0b86-44e6-b6fa-38cda329656a" (UID: "1be6fec9-0b86-44e6-b6fa-38cda329656a"). InnerVolumeSpecName "kube-api-access-jbjvj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:59:33 crc kubenswrapper[4896]: I0126 15:59:33.026836 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbjvj\" (UniqueName: \"kubernetes.io/projected/1be6fec9-0b86-44e6-b6fa-38cda329656a-kube-api-access-jbjvj\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:33 crc kubenswrapper[4896]: I0126 15:59:33.026878 4896 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1be6fec9-0b86-44e6-b6fa-38cda329656a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:33 crc kubenswrapper[4896]: I0126 15:59:33.368684 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-mjx6f" event={"ID":"2286da81-a82e-4026-a39f-1c712480bc2d","Type":"ContainerStarted","Data":"e1f2c96705d92742d880dc7b7414f189b4bcec253ef20f538c8637e49b143af0"} Jan 26 15:59:33 crc kubenswrapper[4896]: I0126 15:59:33.368834 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-mjx6f" Jan 26 15:59:33 crc kubenswrapper[4896]: I0126 15:59:33.370354 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ck257" event={"ID":"1be6fec9-0b86-44e6-b6fa-38cda329656a","Type":"ContainerDied","Data":"0b1ee6b500a1a21e97313ac4822244cc0e7c676c2e5898c4a0592cbd562b87a9"} Jan 26 15:59:33 crc kubenswrapper[4896]: I0126 15:59:33.370382 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b1ee6b500a1a21e97313ac4822244cc0e7c676c2e5898c4a0592cbd562b87a9" Jan 26 15:59:33 crc kubenswrapper[4896]: I0126 15:59:33.370498 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-ck257" Jan 26 15:59:33 crc kubenswrapper[4896]: I0126 15:59:33.391210 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-mjx6f" podStartSLOduration=3.391179658 podStartE2EDuration="3.391179658s" podCreationTimestamp="2026-01-26 15:59:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:59:33.388133513 +0000 UTC m=+1531.170013906" watchObservedRunningTime="2026-01-26 15:59:33.391179658 +0000 UTC m=+1531.173060051" Jan 26 15:59:34 crc kubenswrapper[4896]: I0126 15:59:34.499833 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:34 crc kubenswrapper[4896]: I0126 15:59:34.504935 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:34 crc kubenswrapper[4896]: I0126 15:59:34.547630 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-ck257"] Jan 26 15:59:34 crc kubenswrapper[4896]: I0126 15:59:34.566216 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-ck257"] Jan 26 15:59:34 crc kubenswrapper[4896]: I0126 15:59:34.829220 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1be6fec9-0b86-44e6-b6fa-38cda329656a" path="/var/lib/kubelet/pods/1be6fec9-0b86-44e6-b6fa-38cda329656a/volumes" Jan 26 15:59:35 crc kubenswrapper[4896]: I0126 15:59:35.395065 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.067045 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-61e0-account-create-update-5kcft" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.077624 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-jnwlw" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.088888 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-fbf0-account-create-update-2mczz" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.109893 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-ksf8v" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.133957 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f638-account-create-update-7nblt" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.148385 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jbq9k" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.157788 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-ztrfr" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.178718 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-6f58-account-create-update-w9947" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.246812 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abe41022-a923-4089-9328-25839bc6bc7e-operator-scripts\") pod \"abe41022-a923-4089-9328-25839bc6bc7e\" (UID: \"abe41022-a923-4089-9328-25839bc6bc7e\") " Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.246966 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jx47\" (UniqueName: \"kubernetes.io/projected/f16b1b94-6103-4f26-b71f-070ea624c017-kube-api-access-4jx47\") pod \"f16b1b94-6103-4f26-b71f-070ea624c017\" (UID: \"f16b1b94-6103-4f26-b71f-070ea624c017\") " Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.247113 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phs46\" (UniqueName: \"kubernetes.io/projected/e7a69683-ae94-4df8-bb58-83dd08d62052-kube-api-access-phs46\") pod \"e7a69683-ae94-4df8-bb58-83dd08d62052\" (UID: \"e7a69683-ae94-4df8-bb58-83dd08d62052\") " Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.247356 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l78qx\" (UniqueName: \"kubernetes.io/projected/faa2e119-517e-47d8-b26e-424f2de5df1f-kube-api-access-l78qx\") pod \"faa2e119-517e-47d8-b26e-424f2de5df1f\" (UID: \"faa2e119-517e-47d8-b26e-424f2de5df1f\") " Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.247446 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f16b1b94-6103-4f26-b71f-070ea624c017-operator-scripts\") pod \"f16b1b94-6103-4f26-b71f-070ea624c017\" (UID: \"f16b1b94-6103-4f26-b71f-070ea624c017\") " Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.247532 4896 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-gqzrz\" (UniqueName: \"kubernetes.io/projected/627735ca-7a3a-436c-a3fe-5634fc742384-kube-api-access-gqzrz\") pod \"627735ca-7a3a-436c-a3fe-5634fc742384\" (UID: \"627735ca-7a3a-436c-a3fe-5634fc742384\") " Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.247623 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vblg2\" (UniqueName: \"kubernetes.io/projected/cfa68ec3-9d3b-4584-a25c-e7682bfda2f9-kube-api-access-vblg2\") pod \"cfa68ec3-9d3b-4584-a25c-e7682bfda2f9\" (UID: \"cfa68ec3-9d3b-4584-a25c-e7682bfda2f9\") " Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.247714 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7a69683-ae94-4df8-bb58-83dd08d62052-operator-scripts\") pod \"e7a69683-ae94-4df8-bb58-83dd08d62052\" (UID: \"e7a69683-ae94-4df8-bb58-83dd08d62052\") " Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.247801 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfa68ec3-9d3b-4584-a25c-e7682bfda2f9-operator-scripts\") pod \"cfa68ec3-9d3b-4584-a25c-e7682bfda2f9\" (UID: \"cfa68ec3-9d3b-4584-a25c-e7682bfda2f9\") " Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.247974 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/627735ca-7a3a-436c-a3fe-5634fc742384-operator-scripts\") pod \"627735ca-7a3a-436c-a3fe-5634fc742384\" (UID: \"627735ca-7a3a-436c-a3fe-5634fc742384\") " Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.248082 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7gvp\" (UniqueName: \"kubernetes.io/projected/abe41022-a923-4089-9328-25839bc6bc7e-kube-api-access-p7gvp\") pod 
\"abe41022-a923-4089-9328-25839bc6bc7e\" (UID: \"abe41022-a923-4089-9328-25839bc6bc7e\") " Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.248205 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/faa2e119-517e-47d8-b26e-424f2de5df1f-operator-scripts\") pod \"faa2e119-517e-47d8-b26e-424f2de5df1f\" (UID: \"faa2e119-517e-47d8-b26e-424f2de5df1f\") " Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.250060 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/faa2e119-517e-47d8-b26e-424f2de5df1f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "faa2e119-517e-47d8-b26e-424f2de5df1f" (UID: "faa2e119-517e-47d8-b26e-424f2de5df1f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.250565 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7a69683-ae94-4df8-bb58-83dd08d62052-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e7a69683-ae94-4df8-bb58-83dd08d62052" (UID: "e7a69683-ae94-4df8-bb58-83dd08d62052"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.250731 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abe41022-a923-4089-9328-25839bc6bc7e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "abe41022-a923-4089-9328-25839bc6bc7e" (UID: "abe41022-a923-4089-9328-25839bc6bc7e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.251075 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfa68ec3-9d3b-4584-a25c-e7682bfda2f9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cfa68ec3-9d3b-4584-a25c-e7682bfda2f9" (UID: "cfa68ec3-9d3b-4584-a25c-e7682bfda2f9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.251201 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/627735ca-7a3a-436c-a3fe-5634fc742384-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "627735ca-7a3a-436c-a3fe-5634fc742384" (UID: "627735ca-7a3a-436c-a3fe-5634fc742384"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.251720 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f16b1b94-6103-4f26-b71f-070ea624c017-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f16b1b94-6103-4f26-b71f-070ea624c017" (UID: "f16b1b94-6103-4f26-b71f-070ea624c017"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.254087 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f16b1b94-6103-4f26-b71f-070ea624c017-kube-api-access-4jx47" (OuterVolumeSpecName: "kube-api-access-4jx47") pod "f16b1b94-6103-4f26-b71f-070ea624c017" (UID: "f16b1b94-6103-4f26-b71f-070ea624c017"). InnerVolumeSpecName "kube-api-access-4jx47". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.255184 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfa68ec3-9d3b-4584-a25c-e7682bfda2f9-kube-api-access-vblg2" (OuterVolumeSpecName: "kube-api-access-vblg2") pod "cfa68ec3-9d3b-4584-a25c-e7682bfda2f9" (UID: "cfa68ec3-9d3b-4584-a25c-e7682bfda2f9"). InnerVolumeSpecName "kube-api-access-vblg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.266839 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faa2e119-517e-47d8-b26e-424f2de5df1f-kube-api-access-l78qx" (OuterVolumeSpecName: "kube-api-access-l78qx") pod "faa2e119-517e-47d8-b26e-424f2de5df1f" (UID: "faa2e119-517e-47d8-b26e-424f2de5df1f"). InnerVolumeSpecName "kube-api-access-l78qx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.281918 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abe41022-a923-4089-9328-25839bc6bc7e-kube-api-access-p7gvp" (OuterVolumeSpecName: "kube-api-access-p7gvp") pod "abe41022-a923-4089-9328-25839bc6bc7e" (UID: "abe41022-a923-4089-9328-25839bc6bc7e"). InnerVolumeSpecName "kube-api-access-p7gvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.282840 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/627735ca-7a3a-436c-a3fe-5634fc742384-kube-api-access-gqzrz" (OuterVolumeSpecName: "kube-api-access-gqzrz") pod "627735ca-7a3a-436c-a3fe-5634fc742384" (UID: "627735ca-7a3a-436c-a3fe-5634fc742384"). InnerVolumeSpecName "kube-api-access-gqzrz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.306990 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7a69683-ae94-4df8-bb58-83dd08d62052-kube-api-access-phs46" (OuterVolumeSpecName: "kube-api-access-phs46") pod "e7a69683-ae94-4df8-bb58-83dd08d62052" (UID: "e7a69683-ae94-4df8-bb58-83dd08d62052"). InnerVolumeSpecName "kube-api-access-phs46". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.353010 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db906256-70e6-4b0b-b691-9e8958a9ae3b-operator-scripts\") pod \"db906256-70e6-4b0b-b691-9e8958a9ae3b\" (UID: \"db906256-70e6-4b0b-b691-9e8958a9ae3b\") " Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.353086 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9bf55ae-99a9-403d-951d-51085bd87019-operator-scripts\") pod \"c9bf55ae-99a9-403d-951d-51085bd87019\" (UID: \"c9bf55ae-99a9-403d-951d-51085bd87019\") " Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.353179 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbglk\" (UniqueName: \"kubernetes.io/projected/db906256-70e6-4b0b-b691-9e8958a9ae3b-kube-api-access-pbglk\") pod \"db906256-70e6-4b0b-b691-9e8958a9ae3b\" (UID: \"db906256-70e6-4b0b-b691-9e8958a9ae3b\") " Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.353228 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qznx\" (UniqueName: \"kubernetes.io/projected/c9bf55ae-99a9-403d-951d-51085bd87019-kube-api-access-2qznx\") pod \"c9bf55ae-99a9-403d-951d-51085bd87019\" (UID: \"c9bf55ae-99a9-403d-951d-51085bd87019\") " Jan 26 15:59:37 crc 
kubenswrapper[4896]: I0126 15:59:37.353929 4896 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f16b1b94-6103-4f26-b71f-070ea624c017-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.353950 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqzrz\" (UniqueName: \"kubernetes.io/projected/627735ca-7a3a-436c-a3fe-5634fc742384-kube-api-access-gqzrz\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.353963 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vblg2\" (UniqueName: \"kubernetes.io/projected/cfa68ec3-9d3b-4584-a25c-e7682bfda2f9-kube-api-access-vblg2\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.353975 4896 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7a69683-ae94-4df8-bb58-83dd08d62052-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.353986 4896 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfa68ec3-9d3b-4584-a25c-e7682bfda2f9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.353997 4896 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/627735ca-7a3a-436c-a3fe-5634fc742384-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.354009 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7gvp\" (UniqueName: \"kubernetes.io/projected/abe41022-a923-4089-9328-25839bc6bc7e-kube-api-access-p7gvp\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.354020 4896 
reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/faa2e119-517e-47d8-b26e-424f2de5df1f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.354033 4896 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abe41022-a923-4089-9328-25839bc6bc7e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.354047 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jx47\" (UniqueName: \"kubernetes.io/projected/f16b1b94-6103-4f26-b71f-070ea624c017-kube-api-access-4jx47\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.354059 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phs46\" (UniqueName: \"kubernetes.io/projected/e7a69683-ae94-4df8-bb58-83dd08d62052-kube-api-access-phs46\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.354070 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l78qx\" (UniqueName: \"kubernetes.io/projected/faa2e119-517e-47d8-b26e-424f2de5df1f-kube-api-access-l78qx\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.358923 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db906256-70e6-4b0b-b691-9e8958a9ae3b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "db906256-70e6-4b0b-b691-9e8958a9ae3b" (UID: "db906256-70e6-4b0b-b691-9e8958a9ae3b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.359843 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9bf55ae-99a9-403d-951d-51085bd87019-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c9bf55ae-99a9-403d-951d-51085bd87019" (UID: "c9bf55ae-99a9-403d-951d-51085bd87019"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.363158 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9bf55ae-99a9-403d-951d-51085bd87019-kube-api-access-2qznx" (OuterVolumeSpecName: "kube-api-access-2qznx") pod "c9bf55ae-99a9-403d-951d-51085bd87019" (UID: "c9bf55ae-99a9-403d-951d-51085bd87019"). InnerVolumeSpecName "kube-api-access-2qznx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.365861 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db906256-70e6-4b0b-b691-9e8958a9ae3b-kube-api-access-pbglk" (OuterVolumeSpecName: "kube-api-access-pbglk") pod "db906256-70e6-4b0b-b691-9e8958a9ae3b" (UID: "db906256-70e6-4b0b-b691-9e8958a9ae3b"). InnerVolumeSpecName "kube-api-access-pbglk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.455532 4896 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db906256-70e6-4b0b-b691-9e8958a9ae3b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.455821 4896 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9bf55ae-99a9-403d-951d-51085bd87019-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.455832 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbglk\" (UniqueName: \"kubernetes.io/projected/db906256-70e6-4b0b-b691-9e8958a9ae3b-kube-api-access-pbglk\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.455842 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qznx\" (UniqueName: \"kubernetes.io/projected/c9bf55ae-99a9-403d-951d-51085bd87019-kube-api-access-2qznx\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.486065 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-ksf8v" event={"ID":"faa2e119-517e-47d8-b26e-424f2de5df1f","Type":"ContainerDied","Data":"7790521603ff29c1ffa05e3b69545f7bd759e09d41b17d2142ec9ed3f326aaa1"} Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.486117 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7790521603ff29c1ffa05e3b69545f7bd759e09d41b17d2142ec9ed3f326aaa1" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.486191 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-ksf8v" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.515873 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-6f58-account-create-update-w9947" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.515871 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-6f58-account-create-update-w9947" event={"ID":"c9bf55ae-99a9-403d-951d-51085bd87019","Type":"ContainerDied","Data":"b6b14bcb4df01f67ff91d25c4d58ece1164e8009968a570b947990a7e71cdd27"} Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.516634 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6b14bcb4df01f67ff91d25c4d58ece1164e8009968a570b947990a7e71cdd27" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.529345 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-fbf0-account-create-update-2mczz" event={"ID":"627735ca-7a3a-436c-a3fe-5634fc742384","Type":"ContainerDied","Data":"89fd0d1ad51198d282d5d578f8b068c18654f8913b675cacb61f495359dcfbe8"} Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.529415 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89fd0d1ad51198d282d5d578f8b068c18654f8913b675cacb61f495359dcfbe8" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.529541 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-fbf0-account-create-update-2mczz" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.549977 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jbq9k" event={"ID":"cfa68ec3-9d3b-4584-a25c-e7682bfda2f9","Type":"ContainerDied","Data":"fd0bdfda6a9a4de59b8c002f3ba51fd864021b8fe877bc63a07a01c8de5b14dd"} Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.550030 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd0bdfda6a9a4de59b8c002f3ba51fd864021b8fe877bc63a07a01c8de5b14dd" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.550117 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jbq9k" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.555371 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-61e0-account-create-update-5kcft" event={"ID":"e7a69683-ae94-4df8-bb58-83dd08d62052","Type":"ContainerDied","Data":"93376b9152cddbdbd7c0f92b717d4c646e844c8fb867928811dbaf765f9af1b0"} Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.555404 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93376b9152cddbdbd7c0f92b717d4c646e844c8fb867928811dbaf765f9af1b0" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.555476 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-61e0-account-create-update-5kcft" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.569085 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-f638-account-create-update-7nblt" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.569191 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f638-account-create-update-7nblt" event={"ID":"f16b1b94-6103-4f26-b71f-070ea624c017","Type":"ContainerDied","Data":"70af83b1579958407ddca2f68d33701494b0d9814d74303c7974436d1fdaaa9d"} Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.569962 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70af83b1579958407ddca2f68d33701494b0d9814d74303c7974436d1fdaaa9d" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.575713 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6czxg" event={"ID":"38a9be81-a6ab-4b04-9796-2f923678d8a9","Type":"ContainerStarted","Data":"9ea15688e5247aa9e527dc474f396b579bce378beb95f2293789e4b7285b35de"} Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.586690 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-jnwlw" event={"ID":"abe41022-a923-4089-9328-25839bc6bc7e","Type":"ContainerDied","Data":"0b02345beb09567ad54efb0f5dab6240e115c25ba2c68c342229aa68bf802745"} Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.586744 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b02345beb09567ad54efb0f5dab6240e115c25ba2c68c342229aa68bf802745" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.586839 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-jnwlw" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.594851 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-ztrfr" event={"ID":"db906256-70e6-4b0b-b691-9e8958a9ae3b","Type":"ContainerDied","Data":"45bcaed10d322ae36647db5b68c588678c520733e1b6633cb1a1fd4d0d33c3fe"} Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.595118 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45bcaed10d322ae36647db5b68c588678c520733e1b6633cb1a1fd4d0d33c3fe" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.594924 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-ztrfr" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.645088 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-6czxg" podStartSLOduration=3.747081482 podStartE2EDuration="9.645064603s" podCreationTimestamp="2026-01-26 15:59:28 +0000 UTC" firstStartedPulling="2026-01-26 15:59:30.940871459 +0000 UTC m=+1528.722751852" lastFinishedPulling="2026-01-26 15:59:36.83885458 +0000 UTC m=+1534.620734973" observedRunningTime="2026-01-26 15:59:37.617919153 +0000 UTC m=+1535.399799546" watchObservedRunningTime="2026-01-26 15:59:37.645064603 +0000 UTC m=+1535.426944996" Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.669339 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.671471 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="2a70b903-4311-4bcf-833a-d9fdd2ab5d24" containerName="prometheus" containerID="cri-o://680dda39973abc218473cca4b0209e5c1e737832a0bf4e4da361bdc98226e138" gracePeriod=600 Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.672110 4896 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="2a70b903-4311-4bcf-833a-d9fdd2ab5d24" containerName="thanos-sidecar" containerID="cri-o://f09f5505c91c9d9d896b62cf5b1e9084a18205622f6e34c6acc7edb70056d0f2" gracePeriod=600 Jan 26 15:59:37 crc kubenswrapper[4896]: I0126 15:59:37.672181 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="2a70b903-4311-4bcf-833a-d9fdd2ab5d24" containerName="config-reloader" containerID="cri-o://a9a918770416c56498d3b8e76b09711dd0ce6557786f7a028c2f04b1e732a161" gracePeriod=600 Jan 26 15:59:38 crc kubenswrapper[4896]: I0126 15:59:38.063228 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-hmlwb"] Jan 26 15:59:38 crc kubenswrapper[4896]: E0126 15:59:38.063964 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfa68ec3-9d3b-4584-a25c-e7682bfda2f9" containerName="mariadb-database-create" Jan 26 15:59:38 crc kubenswrapper[4896]: I0126 15:59:38.063980 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfa68ec3-9d3b-4584-a25c-e7682bfda2f9" containerName="mariadb-database-create" Jan 26 15:59:38 crc kubenswrapper[4896]: E0126 15:59:38.063995 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9bf55ae-99a9-403d-951d-51085bd87019" containerName="mariadb-account-create-update" Jan 26 15:59:38 crc kubenswrapper[4896]: I0126 15:59:38.064001 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9bf55ae-99a9-403d-951d-51085bd87019" containerName="mariadb-account-create-update" Jan 26 15:59:38 crc kubenswrapper[4896]: E0126 15:59:38.064014 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db906256-70e6-4b0b-b691-9e8958a9ae3b" containerName="mariadb-database-create" Jan 26 15:59:38 crc kubenswrapper[4896]: I0126 15:59:38.064019 4896 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="db906256-70e6-4b0b-b691-9e8958a9ae3b" containerName="mariadb-database-create" Jan 26 15:59:38 crc kubenswrapper[4896]: E0126 15:59:38.064038 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="627735ca-7a3a-436c-a3fe-5634fc742384" containerName="mariadb-account-create-update" Jan 26 15:59:38 crc kubenswrapper[4896]: I0126 15:59:38.064044 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="627735ca-7a3a-436c-a3fe-5634fc742384" containerName="mariadb-account-create-update" Jan 26 15:59:38 crc kubenswrapper[4896]: E0126 15:59:38.064064 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abe41022-a923-4089-9328-25839bc6bc7e" containerName="mariadb-database-create" Jan 26 15:59:38 crc kubenswrapper[4896]: I0126 15:59:38.064070 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="abe41022-a923-4089-9328-25839bc6bc7e" containerName="mariadb-database-create" Jan 26 15:59:38 crc kubenswrapper[4896]: E0126 15:59:38.064079 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa2e119-517e-47d8-b26e-424f2de5df1f" containerName="mariadb-database-create" Jan 26 15:59:38 crc kubenswrapper[4896]: I0126 15:59:38.064085 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa2e119-517e-47d8-b26e-424f2de5df1f" containerName="mariadb-database-create" Jan 26 15:59:38 crc kubenswrapper[4896]: E0126 15:59:38.064098 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7a69683-ae94-4df8-bb58-83dd08d62052" containerName="mariadb-account-create-update" Jan 26 15:59:38 crc kubenswrapper[4896]: I0126 15:59:38.064103 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7a69683-ae94-4df8-bb58-83dd08d62052" containerName="mariadb-account-create-update" Jan 26 15:59:38 crc kubenswrapper[4896]: E0126 15:59:38.064111 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f16b1b94-6103-4f26-b71f-070ea624c017" containerName="mariadb-account-create-update" Jan 26 15:59:38 
crc kubenswrapper[4896]: I0126 15:59:38.064116 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="f16b1b94-6103-4f26-b71f-070ea624c017" containerName="mariadb-account-create-update" Jan 26 15:59:38 crc kubenswrapper[4896]: E0126 15:59:38.064127 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1be6fec9-0b86-44e6-b6fa-38cda329656a" containerName="mariadb-account-create-update" Jan 26 15:59:38 crc kubenswrapper[4896]: I0126 15:59:38.064132 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="1be6fec9-0b86-44e6-b6fa-38cda329656a" containerName="mariadb-account-create-update" Jan 26 15:59:38 crc kubenswrapper[4896]: I0126 15:59:38.064324 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfa68ec3-9d3b-4584-a25c-e7682bfda2f9" containerName="mariadb-database-create" Jan 26 15:59:38 crc kubenswrapper[4896]: I0126 15:59:38.064334 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="f16b1b94-6103-4f26-b71f-070ea624c017" containerName="mariadb-account-create-update" Jan 26 15:59:38 crc kubenswrapper[4896]: I0126 15:59:38.064348 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="faa2e119-517e-47d8-b26e-424f2de5df1f" containerName="mariadb-database-create" Jan 26 15:59:38 crc kubenswrapper[4896]: I0126 15:59:38.064361 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="db906256-70e6-4b0b-b691-9e8958a9ae3b" containerName="mariadb-database-create" Jan 26 15:59:38 crc kubenswrapper[4896]: I0126 15:59:38.064367 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9bf55ae-99a9-403d-951d-51085bd87019" containerName="mariadb-account-create-update" Jan 26 15:59:38 crc kubenswrapper[4896]: I0126 15:59:38.064375 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="abe41022-a923-4089-9328-25839bc6bc7e" containerName="mariadb-database-create" Jan 26 15:59:38 crc kubenswrapper[4896]: I0126 15:59:38.064382 4896 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="e7a69683-ae94-4df8-bb58-83dd08d62052" containerName="mariadb-account-create-update" Jan 26 15:59:38 crc kubenswrapper[4896]: I0126 15:59:38.064399 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="627735ca-7a3a-436c-a3fe-5634fc742384" containerName="mariadb-account-create-update" Jan 26 15:59:38 crc kubenswrapper[4896]: I0126 15:59:38.064407 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="1be6fec9-0b86-44e6-b6fa-38cda329656a" containerName="mariadb-account-create-update" Jan 26 15:59:38 crc kubenswrapper[4896]: I0126 15:59:38.065473 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hmlwb" Jan 26 15:59:38 crc kubenswrapper[4896]: I0126 15:59:38.071012 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 26 15:59:38 crc kubenswrapper[4896]: I0126 15:59:38.076209 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hmlwb"] Jan 26 15:59:38 crc kubenswrapper[4896]: I0126 15:59:38.183425 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtx89\" (UniqueName: \"kubernetes.io/projected/085b4b18-0a01-4e7b-9959-fd2dec1b9da5-kube-api-access-xtx89\") pod \"root-account-create-update-hmlwb\" (UID: \"085b4b18-0a01-4e7b-9959-fd2dec1b9da5\") " pod="openstack/root-account-create-update-hmlwb" Jan 26 15:59:38 crc kubenswrapper[4896]: I0126 15:59:38.183559 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/085b4b18-0a01-4e7b-9959-fd2dec1b9da5-operator-scripts\") pod \"root-account-create-update-hmlwb\" (UID: \"085b4b18-0a01-4e7b-9959-fd2dec1b9da5\") " pod="openstack/root-account-create-update-hmlwb" Jan 26 15:59:38 crc kubenswrapper[4896]: I0126 
15:59:38.285049 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/085b4b18-0a01-4e7b-9959-fd2dec1b9da5-operator-scripts\") pod \"root-account-create-update-hmlwb\" (UID: \"085b4b18-0a01-4e7b-9959-fd2dec1b9da5\") " pod="openstack/root-account-create-update-hmlwb" Jan 26 15:59:38 crc kubenswrapper[4896]: I0126 15:59:38.285234 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtx89\" (UniqueName: \"kubernetes.io/projected/085b4b18-0a01-4e7b-9959-fd2dec1b9da5-kube-api-access-xtx89\") pod \"root-account-create-update-hmlwb\" (UID: \"085b4b18-0a01-4e7b-9959-fd2dec1b9da5\") " pod="openstack/root-account-create-update-hmlwb" Jan 26 15:59:38 crc kubenswrapper[4896]: I0126 15:59:38.286113 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/085b4b18-0a01-4e7b-9959-fd2dec1b9da5-operator-scripts\") pod \"root-account-create-update-hmlwb\" (UID: \"085b4b18-0a01-4e7b-9959-fd2dec1b9da5\") " pod="openstack/root-account-create-update-hmlwb" Jan 26 15:59:38 crc kubenswrapper[4896]: I0126 15:59:38.314365 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtx89\" (UniqueName: \"kubernetes.io/projected/085b4b18-0a01-4e7b-9959-fd2dec1b9da5-kube-api-access-xtx89\") pod \"root-account-create-update-hmlwb\" (UID: \"085b4b18-0a01-4e7b-9959-fd2dec1b9da5\") " pod="openstack/root-account-create-update-hmlwb" Jan 26 15:59:38 crc kubenswrapper[4896]: I0126 15:59:38.399560 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hmlwb" Jan 26 15:59:38 crc kubenswrapper[4896]: I0126 15:59:38.650345 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-c6rlw" event={"ID":"1ec7e263-3178-47c9-934b-7e0f4d72aec7","Type":"ContainerStarted","Data":"623c709324027da09940de6723172759564f011b2d492f913cc3c8db905c5918"} Jan 26 15:59:38 crc kubenswrapper[4896]: I0126 15:59:38.683113 4896 generic.go:334] "Generic (PLEG): container finished" podID="2a70b903-4311-4bcf-833a-d9fdd2ab5d24" containerID="f09f5505c91c9d9d896b62cf5b1e9084a18205622f6e34c6acc7edb70056d0f2" exitCode=0 Jan 26 15:59:38 crc kubenswrapper[4896]: I0126 15:59:38.683161 4896 generic.go:334] "Generic (PLEG): container finished" podID="2a70b903-4311-4bcf-833a-d9fdd2ab5d24" containerID="a9a918770416c56498d3b8e76b09711dd0ce6557786f7a028c2f04b1e732a161" exitCode=0 Jan 26 15:59:38 crc kubenswrapper[4896]: I0126 15:59:38.683171 4896 generic.go:334] "Generic (PLEG): container finished" podID="2a70b903-4311-4bcf-833a-d9fdd2ab5d24" containerID="680dda39973abc218473cca4b0209e5c1e737832a0bf4e4da361bdc98226e138" exitCode=0 Jan 26 15:59:38 crc kubenswrapper[4896]: I0126 15:59:38.687255 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2a70b903-4311-4bcf-833a-d9fdd2ab5d24","Type":"ContainerDied","Data":"f09f5505c91c9d9d896b62cf5b1e9084a18205622f6e34c6acc7edb70056d0f2"} Jan 26 15:59:38 crc kubenswrapper[4896]: I0126 15:59:38.687368 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2a70b903-4311-4bcf-833a-d9fdd2ab5d24","Type":"ContainerDied","Data":"a9a918770416c56498d3b8e76b09711dd0ce6557786f7a028c2f04b1e732a161"} Jan 26 15:59:38 crc kubenswrapper[4896]: I0126 15:59:38.687380 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"2a70b903-4311-4bcf-833a-d9fdd2ab5d24","Type":"ContainerDied","Data":"680dda39973abc218473cca4b0209e5c1e737832a0bf4e4da361bdc98226e138"} Jan 26 15:59:38 crc kubenswrapper[4896]: I0126 15:59:38.692041 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-c6rlw" podStartSLOduration=3.533836267 podStartE2EDuration="42.692014021s" podCreationTimestamp="2026-01-26 15:58:56 +0000 UTC" firstStartedPulling="2026-01-26 15:58:57.682782669 +0000 UTC m=+1495.464663062" lastFinishedPulling="2026-01-26 15:59:36.840960423 +0000 UTC m=+1534.622840816" observedRunningTime="2026-01-26 15:59:38.687209573 +0000 UTC m=+1536.469089956" watchObservedRunningTime="2026-01-26 15:59:38.692014021 +0000 UTC m=+1536.473894404" Jan 26 15:59:38 crc kubenswrapper[4896]: I0126 15:59:38.933443 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hmlwb"] Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.356408 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.513774 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sq79t\" (UniqueName: \"kubernetes.io/projected/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-kube-api-access-sq79t\") pod \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\" (UID: \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\") " Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.513861 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-config\") pod \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\" (UID: \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\") " Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.514002 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-prometheus-metric-storage-rulefiles-2\") pod \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\" (UID: \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\") " Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.514129 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-prometheus-metric-storage-rulefiles-1\") pod \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\" (UID: \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\") " Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.514229 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-tls-assets\") pod \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\" (UID: \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\") " Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.514271 4896 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-config-out\") pod \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\" (UID: \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\") " Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.514479 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2fe3af4f-2d2a-4343-9def-98c2f120b83e\") pod \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\" (UID: \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\") " Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.514539 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-prometheus-metric-storage-rulefiles-0\") pod \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\" (UID: \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\") " Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.514562 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "2a70b903-4311-4bcf-833a-d9fdd2ab5d24" (UID: "2a70b903-4311-4bcf-833a-d9fdd2ab5d24"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.514711 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-web-config\") pod \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\" (UID: \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\") " Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.514755 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-thanos-prometheus-http-client-file\") pod \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\" (UID: \"2a70b903-4311-4bcf-833a-d9fdd2ab5d24\") " Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.515467 4896 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.516659 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "2a70b903-4311-4bcf-833a-d9fdd2ab5d24" (UID: "2a70b903-4311-4bcf-833a-d9fdd2ab5d24"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.522800 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "2a70b903-4311-4bcf-833a-d9fdd2ab5d24" (UID: "2a70b903-4311-4bcf-833a-d9fdd2ab5d24"). InnerVolumeSpecName "tls-assets". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.523722 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-config-out" (OuterVolumeSpecName: "config-out") pod "2a70b903-4311-4bcf-833a-d9fdd2ab5d24" (UID: "2a70b903-4311-4bcf-833a-d9fdd2ab5d24"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.524118 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-config" (OuterVolumeSpecName: "config") pod "2a70b903-4311-4bcf-833a-d9fdd2ab5d24" (UID: "2a70b903-4311-4bcf-833a-d9fdd2ab5d24"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.525536 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-kube-api-access-sq79t" (OuterVolumeSpecName: "kube-api-access-sq79t") pod "2a70b903-4311-4bcf-833a-d9fdd2ab5d24" (UID: "2a70b903-4311-4bcf-833a-d9fdd2ab5d24"). InnerVolumeSpecName "kube-api-access-sq79t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.530342 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "2a70b903-4311-4bcf-833a-d9fdd2ab5d24" (UID: "2a70b903-4311-4bcf-833a-d9fdd2ab5d24"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.533211 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "2a70b903-4311-4bcf-833a-d9fdd2ab5d24" (UID: "2a70b903-4311-4bcf-833a-d9fdd2ab5d24"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.546633 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2fe3af4f-2d2a-4343-9def-98c2f120b83e" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "2a70b903-4311-4bcf-833a-d9fdd2ab5d24" (UID: "2a70b903-4311-4bcf-833a-d9fdd2ab5d24"). InnerVolumeSpecName "pvc-2fe3af4f-2d2a-4343-9def-98c2f120b83e". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.567979 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-web-config" (OuterVolumeSpecName: "web-config") pod "2a70b903-4311-4bcf-833a-d9fdd2ab5d24" (UID: "2a70b903-4311-4bcf-833a-d9fdd2ab5d24"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.618743 4896 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.618783 4896 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.618792 4896 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-config-out\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.618826 4896 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2fe3af4f-2d2a-4343-9def-98c2f120b83e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2fe3af4f-2d2a-4343-9def-98c2f120b83e\") on node \"crc\" " Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.618837 4896 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.618847 4896 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-web-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.618858 4896 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.618868 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sq79t\" (UniqueName: \"kubernetes.io/projected/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-kube-api-access-sq79t\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.618876 4896 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2a70b903-4311-4bcf-833a-d9fdd2ab5d24-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.664954 4896 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.665148 4896 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2fe3af4f-2d2a-4343-9def-98c2f120b83e" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2fe3af4f-2d2a-4343-9def-98c2f120b83e") on node "crc" Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.702020 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2a70b903-4311-4bcf-833a-d9fdd2ab5d24","Type":"ContainerDied","Data":"5ad692ee5d11f75d37aba10777586e86b14b770c84deb492c3f52463a0d6bccc"} Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.702075 4896 scope.go:117] "RemoveContainer" containerID="f09f5505c91c9d9d896b62cf5b1e9084a18205622f6e34c6acc7edb70056d0f2" Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.702227 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.708849 4896 generic.go:334] "Generic (PLEG): container finished" podID="085b4b18-0a01-4e7b-9959-fd2dec1b9da5" containerID="38d6290f1f3c67badf8db1ac0706222e68bff33ff81d99ce0da725a82647b9ff" exitCode=0 Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.708898 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hmlwb" event={"ID":"085b4b18-0a01-4e7b-9959-fd2dec1b9da5","Type":"ContainerDied","Data":"38d6290f1f3c67badf8db1ac0706222e68bff33ff81d99ce0da725a82647b9ff"} Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.708929 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hmlwb" event={"ID":"085b4b18-0a01-4e7b-9959-fd2dec1b9da5","Type":"ContainerStarted","Data":"26abacb95cc662bd78e934fc926ae4c07fd2e0c6f1a540e7b47d6dbf25d304d3"} Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.720950 4896 reconciler_common.go:293] "Volume detached for volume \"pvc-2fe3af4f-2d2a-4343-9def-98c2f120b83e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2fe3af4f-2d2a-4343-9def-98c2f120b83e\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.731943 4896 scope.go:117] "RemoveContainer" containerID="a9a918770416c56498d3b8e76b09711dd0ce6557786f7a028c2f04b1e732a161" Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.854804 4896 scope.go:117] "RemoveContainer" containerID="680dda39973abc218473cca4b0209e5c1e737832a0bf4e4da361bdc98226e138" Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.870881 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.884090 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 15:59:39 crc kubenswrapper[4896]: 
I0126 15:59:39.903821 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 15:59:39 crc kubenswrapper[4896]: E0126 15:59:39.904433 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a70b903-4311-4bcf-833a-d9fdd2ab5d24" containerName="init-config-reloader" Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.904464 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a70b903-4311-4bcf-833a-d9fdd2ab5d24" containerName="init-config-reloader" Jan 26 15:59:39 crc kubenswrapper[4896]: E0126 15:59:39.904495 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a70b903-4311-4bcf-833a-d9fdd2ab5d24" containerName="thanos-sidecar" Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.904507 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a70b903-4311-4bcf-833a-d9fdd2ab5d24" containerName="thanos-sidecar" Jan 26 15:59:39 crc kubenswrapper[4896]: E0126 15:59:39.904526 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a70b903-4311-4bcf-833a-d9fdd2ab5d24" containerName="config-reloader" Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.904537 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a70b903-4311-4bcf-833a-d9fdd2ab5d24" containerName="config-reloader" Jan 26 15:59:39 crc kubenswrapper[4896]: E0126 15:59:39.904561 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a70b903-4311-4bcf-833a-d9fdd2ab5d24" containerName="prometheus" Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.904569 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a70b903-4311-4bcf-833a-d9fdd2ab5d24" containerName="prometheus" Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.904872 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a70b903-4311-4bcf-833a-d9fdd2ab5d24" containerName="prometheus" Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.904900 4896 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="2a70b903-4311-4bcf-833a-d9fdd2ab5d24" containerName="thanos-sidecar" Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.904929 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a70b903-4311-4bcf-833a-d9fdd2ab5d24" containerName="config-reloader" Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.907497 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.907823 4896 scope.go:117] "RemoveContainer" containerID="cba41fd06ce57f906fe4acca16da8d2d6dde54a1f486dd87a9a6a8b54bd1526b" Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.911513 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.911516 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.913361 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.913543 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.914396 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.914615 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.916195 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-65lrl" Jan 26 15:59:39 crc 
kubenswrapper[4896]: I0126 15:59:39.916212 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.922056 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 26 15:59:39 crc kubenswrapper[4896]: I0126 15:59:39.923075 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.056792 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/15b95f90-b75a-43ab-9c54-acd4c3e658ab-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"15b95f90-b75a-43ab-9c54-acd4c3e658ab\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.056846 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/15b95f90-b75a-43ab-9c54-acd4c3e658ab-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"15b95f90-b75a-43ab-9c54-acd4c3e658ab\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.056902 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/15b95f90-b75a-43ab-9c54-acd4c3e658ab-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"15b95f90-b75a-43ab-9c54-acd4c3e658ab\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.056927 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/15b95f90-b75a-43ab-9c54-acd4c3e658ab-config\") pod \"prometheus-metric-storage-0\" (UID: \"15b95f90-b75a-43ab-9c54-acd4c3e658ab\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.056986 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2fe3af4f-2d2a-4343-9def-98c2f120b83e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2fe3af4f-2d2a-4343-9def-98c2f120b83e\") pod \"prometheus-metric-storage-0\" (UID: \"15b95f90-b75a-43ab-9c54-acd4c3e658ab\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.057029 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mvt7\" (UniqueName: \"kubernetes.io/projected/15b95f90-b75a-43ab-9c54-acd4c3e658ab-kube-api-access-7mvt7\") pod \"prometheus-metric-storage-0\" (UID: \"15b95f90-b75a-43ab-9c54-acd4c3e658ab\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.057068 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15b95f90-b75a-43ab-9c54-acd4c3e658ab-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"15b95f90-b75a-43ab-9c54-acd4c3e658ab\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.057093 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/15b95f90-b75a-43ab-9c54-acd4c3e658ab-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"15b95f90-b75a-43ab-9c54-acd4c3e658ab\") " pod="openstack/prometheus-metric-storage-0" Jan 
26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.057114 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/15b95f90-b75a-43ab-9c54-acd4c3e658ab-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"15b95f90-b75a-43ab-9c54-acd4c3e658ab\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.057134 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/15b95f90-b75a-43ab-9c54-acd4c3e658ab-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"15b95f90-b75a-43ab-9c54-acd4c3e658ab\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.057164 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/15b95f90-b75a-43ab-9c54-acd4c3e658ab-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"15b95f90-b75a-43ab-9c54-acd4c3e658ab\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.057187 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/15b95f90-b75a-43ab-9c54-acd4c3e658ab-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"15b95f90-b75a-43ab-9c54-acd4c3e658ab\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.057203 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/15b95f90-b75a-43ab-9c54-acd4c3e658ab-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"15b95f90-b75a-43ab-9c54-acd4c3e658ab\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.159591 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/15b95f90-b75a-43ab-9c54-acd4c3e658ab-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"15b95f90-b75a-43ab-9c54-acd4c3e658ab\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.159644 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/15b95f90-b75a-43ab-9c54-acd4c3e658ab-config\") pod \"prometheus-metric-storage-0\" (UID: \"15b95f90-b75a-43ab-9c54-acd4c3e658ab\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.159718 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2fe3af4f-2d2a-4343-9def-98c2f120b83e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2fe3af4f-2d2a-4343-9def-98c2f120b83e\") pod \"prometheus-metric-storage-0\" (UID: \"15b95f90-b75a-43ab-9c54-acd4c3e658ab\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.159747 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mvt7\" (UniqueName: \"kubernetes.io/projected/15b95f90-b75a-43ab-9c54-acd4c3e658ab-kube-api-access-7mvt7\") pod \"prometheus-metric-storage-0\" (UID: \"15b95f90-b75a-43ab-9c54-acd4c3e658ab\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.159792 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15b95f90-b75a-43ab-9c54-acd4c3e658ab-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"15b95f90-b75a-43ab-9c54-acd4c3e658ab\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.159817 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/15b95f90-b75a-43ab-9c54-acd4c3e658ab-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"15b95f90-b75a-43ab-9c54-acd4c3e658ab\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.159838 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/15b95f90-b75a-43ab-9c54-acd4c3e658ab-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"15b95f90-b75a-43ab-9c54-acd4c3e658ab\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.159857 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/15b95f90-b75a-43ab-9c54-acd4c3e658ab-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"15b95f90-b75a-43ab-9c54-acd4c3e658ab\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.159898 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/15b95f90-b75a-43ab-9c54-acd4c3e658ab-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"15b95f90-b75a-43ab-9c54-acd4c3e658ab\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:40 crc 
kubenswrapper[4896]: I0126 15:59:40.159921 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/15b95f90-b75a-43ab-9c54-acd4c3e658ab-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"15b95f90-b75a-43ab-9c54-acd4c3e658ab\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.159939 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/15b95f90-b75a-43ab-9c54-acd4c3e658ab-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"15b95f90-b75a-43ab-9c54-acd4c3e658ab\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.160029 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/15b95f90-b75a-43ab-9c54-acd4c3e658ab-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"15b95f90-b75a-43ab-9c54-acd4c3e658ab\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.160053 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/15b95f90-b75a-43ab-9c54-acd4c3e658ab-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"15b95f90-b75a-43ab-9c54-acd4c3e658ab\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.161461 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/15b95f90-b75a-43ab-9c54-acd4c3e658ab-prometheus-metric-storage-rulefiles-2\") 
pod \"prometheus-metric-storage-0\" (UID: \"15b95f90-b75a-43ab-9c54-acd4c3e658ab\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.161647 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/15b95f90-b75a-43ab-9c54-acd4c3e658ab-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"15b95f90-b75a-43ab-9c54-acd4c3e658ab\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.162896 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/15b95f90-b75a-43ab-9c54-acd4c3e658ab-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"15b95f90-b75a-43ab-9c54-acd4c3e658ab\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.164858 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/15b95f90-b75a-43ab-9c54-acd4c3e658ab-config\") pod \"prometheus-metric-storage-0\" (UID: \"15b95f90-b75a-43ab-9c54-acd4c3e658ab\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.164924 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/15b95f90-b75a-43ab-9c54-acd4c3e658ab-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"15b95f90-b75a-43ab-9c54-acd4c3e658ab\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.165238 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/15b95f90-b75a-43ab-9c54-acd4c3e658ab-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"15b95f90-b75a-43ab-9c54-acd4c3e658ab\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.165264 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/15b95f90-b75a-43ab-9c54-acd4c3e658ab-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"15b95f90-b75a-43ab-9c54-acd4c3e658ab\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.165795 4896 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.165844 4896 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2fe3af4f-2d2a-4343-9def-98c2f120b83e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2fe3af4f-2d2a-4343-9def-98c2f120b83e\") pod \"prometheus-metric-storage-0\" (UID: \"15b95f90-b75a-43ab-9c54-acd4c3e658ab\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d76c98445a8aeb600bd17b48c80d5d093689356561d52f83a0ea51fc24e48e6c/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.167692 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/15b95f90-b75a-43ab-9c54-acd4c3e658ab-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"15b95f90-b75a-43ab-9c54-acd4c3e658ab\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.170177 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/15b95f90-b75a-43ab-9c54-acd4c3e658ab-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"15b95f90-b75a-43ab-9c54-acd4c3e658ab\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.170601 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/15b95f90-b75a-43ab-9c54-acd4c3e658ab-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"15b95f90-b75a-43ab-9c54-acd4c3e658ab\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.177866 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/15b95f90-b75a-43ab-9c54-acd4c3e658ab-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"15b95f90-b75a-43ab-9c54-acd4c3e658ab\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.183765 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mvt7\" (UniqueName: \"kubernetes.io/projected/15b95f90-b75a-43ab-9c54-acd4c3e658ab-kube-api-access-7mvt7\") pod \"prometheus-metric-storage-0\" (UID: \"15b95f90-b75a-43ab-9c54-acd4c3e658ab\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.224474 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2fe3af4f-2d2a-4343-9def-98c2f120b83e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2fe3af4f-2d2a-4343-9def-98c2f120b83e\") pod \"prometheus-metric-storage-0\" (UID: \"15b95f90-b75a-43ab-9c54-acd4c3e658ab\") " pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.283371 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.787842 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a70b903-4311-4bcf-833a-d9fdd2ab5d24" path="/var/lib/kubelet/pods/2a70b903-4311-4bcf-833a-d9fdd2ab5d24/volumes" Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.789498 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.789590 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-764c5664d7-mjx6f" Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.870998 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-hjhkk"] Jan 26 15:59:40 crc kubenswrapper[4896]: I0126 15:59:40.871231 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-hjhkk" podUID="bf2859fd-5b7b-45fa-ae36-18244c995e05" containerName="dnsmasq-dns" containerID="cri-o://0587f3f932f5e6790fa44a5fe0230b12541270579c288c66dcb316cf0a43504e" gracePeriod=10 Jan 26 15:59:41 crc kubenswrapper[4896]: I0126 15:59:41.113919 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hmlwb" Jan 26 15:59:41 crc kubenswrapper[4896]: I0126 15:59:41.190153 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/085b4b18-0a01-4e7b-9959-fd2dec1b9da5-operator-scripts\") pod \"085b4b18-0a01-4e7b-9959-fd2dec1b9da5\" (UID: \"085b4b18-0a01-4e7b-9959-fd2dec1b9da5\") " Jan 26 15:59:41 crc kubenswrapper[4896]: I0126 15:59:41.190388 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtx89\" (UniqueName: \"kubernetes.io/projected/085b4b18-0a01-4e7b-9959-fd2dec1b9da5-kube-api-access-xtx89\") pod \"085b4b18-0a01-4e7b-9959-fd2dec1b9da5\" (UID: \"085b4b18-0a01-4e7b-9959-fd2dec1b9da5\") " Jan 26 15:59:41 crc kubenswrapper[4896]: I0126 15:59:41.190675 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/085b4b18-0a01-4e7b-9959-fd2dec1b9da5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "085b4b18-0a01-4e7b-9959-fd2dec1b9da5" (UID: "085b4b18-0a01-4e7b-9959-fd2dec1b9da5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:59:41 crc kubenswrapper[4896]: I0126 15:59:41.191291 4896 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/085b4b18-0a01-4e7b-9959-fd2dec1b9da5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:41 crc kubenswrapper[4896]: I0126 15:59:41.196920 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/085b4b18-0a01-4e7b-9959-fd2dec1b9da5-kube-api-access-xtx89" (OuterVolumeSpecName: "kube-api-access-xtx89") pod "085b4b18-0a01-4e7b-9959-fd2dec1b9da5" (UID: "085b4b18-0a01-4e7b-9959-fd2dec1b9da5"). InnerVolumeSpecName "kube-api-access-xtx89". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:59:41 crc kubenswrapper[4896]: I0126 15:59:41.295132 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtx89\" (UniqueName: \"kubernetes.io/projected/085b4b18-0a01-4e7b-9959-fd2dec1b9da5-kube-api-access-xtx89\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:41 crc kubenswrapper[4896]: I0126 15:59:41.364707 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-hjhkk" Jan 26 15:59:41 crc kubenswrapper[4896]: I0126 15:59:41.499402 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf2859fd-5b7b-45fa-ae36-18244c995e05-dns-svc\") pod \"bf2859fd-5b7b-45fa-ae36-18244c995e05\" (UID: \"bf2859fd-5b7b-45fa-ae36-18244c995e05\") " Jan 26 15:59:41 crc kubenswrapper[4896]: I0126 15:59:41.499714 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf2859fd-5b7b-45fa-ae36-18244c995e05-config\") pod \"bf2859fd-5b7b-45fa-ae36-18244c995e05\" (UID: \"bf2859fd-5b7b-45fa-ae36-18244c995e05\") " Jan 26 15:59:41 crc kubenswrapper[4896]: I0126 15:59:41.499852 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf2859fd-5b7b-45fa-ae36-18244c995e05-ovsdbserver-nb\") pod \"bf2859fd-5b7b-45fa-ae36-18244c995e05\" (UID: \"bf2859fd-5b7b-45fa-ae36-18244c995e05\") " Jan 26 15:59:41 crc kubenswrapper[4896]: I0126 15:59:41.500034 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sc6n\" (UniqueName: \"kubernetes.io/projected/bf2859fd-5b7b-45fa-ae36-18244c995e05-kube-api-access-4sc6n\") pod \"bf2859fd-5b7b-45fa-ae36-18244c995e05\" (UID: \"bf2859fd-5b7b-45fa-ae36-18244c995e05\") " Jan 26 15:59:41 crc kubenswrapper[4896]: I0126 15:59:41.500258 4896 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf2859fd-5b7b-45fa-ae36-18244c995e05-ovsdbserver-sb\") pod \"bf2859fd-5b7b-45fa-ae36-18244c995e05\" (UID: \"bf2859fd-5b7b-45fa-ae36-18244c995e05\") " Jan 26 15:59:41 crc kubenswrapper[4896]: I0126 15:59:41.503715 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf2859fd-5b7b-45fa-ae36-18244c995e05-kube-api-access-4sc6n" (OuterVolumeSpecName: "kube-api-access-4sc6n") pod "bf2859fd-5b7b-45fa-ae36-18244c995e05" (UID: "bf2859fd-5b7b-45fa-ae36-18244c995e05"). InnerVolumeSpecName "kube-api-access-4sc6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:59:41 crc kubenswrapper[4896]: I0126 15:59:41.569307 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf2859fd-5b7b-45fa-ae36-18244c995e05-config" (OuterVolumeSpecName: "config") pod "bf2859fd-5b7b-45fa-ae36-18244c995e05" (UID: "bf2859fd-5b7b-45fa-ae36-18244c995e05"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:59:41 crc kubenswrapper[4896]: I0126 15:59:41.574213 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf2859fd-5b7b-45fa-ae36-18244c995e05-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bf2859fd-5b7b-45fa-ae36-18244c995e05" (UID: "bf2859fd-5b7b-45fa-ae36-18244c995e05"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:59:41 crc kubenswrapper[4896]: I0126 15:59:41.574568 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf2859fd-5b7b-45fa-ae36-18244c995e05-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bf2859fd-5b7b-45fa-ae36-18244c995e05" (UID: "bf2859fd-5b7b-45fa-ae36-18244c995e05"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:59:41 crc kubenswrapper[4896]: I0126 15:59:41.577803 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf2859fd-5b7b-45fa-ae36-18244c995e05-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bf2859fd-5b7b-45fa-ae36-18244c995e05" (UID: "bf2859fd-5b7b-45fa-ae36-18244c995e05"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:59:41 crc kubenswrapper[4896]: I0126 15:59:41.603994 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sc6n\" (UniqueName: \"kubernetes.io/projected/bf2859fd-5b7b-45fa-ae36-18244c995e05-kube-api-access-4sc6n\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:41 crc kubenswrapper[4896]: I0126 15:59:41.604045 4896 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf2859fd-5b7b-45fa-ae36-18244c995e05-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:41 crc kubenswrapper[4896]: I0126 15:59:41.604059 4896 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf2859fd-5b7b-45fa-ae36-18244c995e05-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:41 crc kubenswrapper[4896]: I0126 15:59:41.604070 4896 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf2859fd-5b7b-45fa-ae36-18244c995e05-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:41 crc kubenswrapper[4896]: I0126 15:59:41.604086 4896 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf2859fd-5b7b-45fa-ae36-18244c995e05-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:41 crc kubenswrapper[4896]: I0126 15:59:41.733224 4896 generic.go:334] "Generic (PLEG): container finished" podID="bf2859fd-5b7b-45fa-ae36-18244c995e05" 
containerID="0587f3f932f5e6790fa44a5fe0230b12541270579c288c66dcb316cf0a43504e" exitCode=0 Jan 26 15:59:41 crc kubenswrapper[4896]: I0126 15:59:41.733301 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-hjhkk" event={"ID":"bf2859fd-5b7b-45fa-ae36-18244c995e05","Type":"ContainerDied","Data":"0587f3f932f5e6790fa44a5fe0230b12541270579c288c66dcb316cf0a43504e"} Jan 26 15:59:41 crc kubenswrapper[4896]: I0126 15:59:41.733330 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-hjhkk" event={"ID":"bf2859fd-5b7b-45fa-ae36-18244c995e05","Type":"ContainerDied","Data":"fc191178ee55b9959a0d8ce035dd31e6e14afdd8fd4e4f4f95158825be5fe051"} Jan 26 15:59:41 crc kubenswrapper[4896]: I0126 15:59:41.733354 4896 scope.go:117] "RemoveContainer" containerID="0587f3f932f5e6790fa44a5fe0230b12541270579c288c66dcb316cf0a43504e" Jan 26 15:59:41 crc kubenswrapper[4896]: I0126 15:59:41.733633 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-hjhkk" Jan 26 15:59:41 crc kubenswrapper[4896]: I0126 15:59:41.735117 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hmlwb" event={"ID":"085b4b18-0a01-4e7b-9959-fd2dec1b9da5","Type":"ContainerDied","Data":"26abacb95cc662bd78e934fc926ae4c07fd2e0c6f1a540e7b47d6dbf25d304d3"} Jan 26 15:59:41 crc kubenswrapper[4896]: I0126 15:59:41.735145 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26abacb95cc662bd78e934fc926ae4c07fd2e0c6f1a540e7b47d6dbf25d304d3" Jan 26 15:59:41 crc kubenswrapper[4896]: I0126 15:59:41.735198 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hmlwb" Jan 26 15:59:41 crc kubenswrapper[4896]: I0126 15:59:41.742664 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"15b95f90-b75a-43ab-9c54-acd4c3e658ab","Type":"ContainerStarted","Data":"71be2fc4cff4beeb1cb921d5e07e4ebac16d78c302dba8f30c34394fd7493167"} Jan 26 15:59:41 crc kubenswrapper[4896]: I0126 15:59:41.795879 4896 scope.go:117] "RemoveContainer" containerID="26df70834d4bdb4bd42720d5b561f1b2b3417cc8738d5fbdc22d2c29204f2e5a" Jan 26 15:59:41 crc kubenswrapper[4896]: I0126 15:59:41.813381 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-hjhkk"] Jan 26 15:59:41 crc kubenswrapper[4896]: I0126 15:59:41.832808 4896 scope.go:117] "RemoveContainer" containerID="0587f3f932f5e6790fa44a5fe0230b12541270579c288c66dcb316cf0a43504e" Jan 26 15:59:41 crc kubenswrapper[4896]: E0126 15:59:41.833403 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0587f3f932f5e6790fa44a5fe0230b12541270579c288c66dcb316cf0a43504e\": container with ID starting with 0587f3f932f5e6790fa44a5fe0230b12541270579c288c66dcb316cf0a43504e not found: ID does not exist" containerID="0587f3f932f5e6790fa44a5fe0230b12541270579c288c66dcb316cf0a43504e" Jan 26 15:59:41 crc kubenswrapper[4896]: I0126 15:59:41.833455 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0587f3f932f5e6790fa44a5fe0230b12541270579c288c66dcb316cf0a43504e"} err="failed to get container status \"0587f3f932f5e6790fa44a5fe0230b12541270579c288c66dcb316cf0a43504e\": rpc error: code = NotFound desc = could not find container \"0587f3f932f5e6790fa44a5fe0230b12541270579c288c66dcb316cf0a43504e\": container with ID starting with 0587f3f932f5e6790fa44a5fe0230b12541270579c288c66dcb316cf0a43504e not found: ID does not exist" Jan 26 15:59:41 crc 
kubenswrapper[4896]: I0126 15:59:41.833484 4896 scope.go:117] "RemoveContainer" containerID="26df70834d4bdb4bd42720d5b561f1b2b3417cc8738d5fbdc22d2c29204f2e5a" Jan 26 15:59:41 crc kubenswrapper[4896]: E0126 15:59:41.833732 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26df70834d4bdb4bd42720d5b561f1b2b3417cc8738d5fbdc22d2c29204f2e5a\": container with ID starting with 26df70834d4bdb4bd42720d5b561f1b2b3417cc8738d5fbdc22d2c29204f2e5a not found: ID does not exist" containerID="26df70834d4bdb4bd42720d5b561f1b2b3417cc8738d5fbdc22d2c29204f2e5a" Jan 26 15:59:41 crc kubenswrapper[4896]: I0126 15:59:41.833757 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26df70834d4bdb4bd42720d5b561f1b2b3417cc8738d5fbdc22d2c29204f2e5a"} err="failed to get container status \"26df70834d4bdb4bd42720d5b561f1b2b3417cc8738d5fbdc22d2c29204f2e5a\": rpc error: code = NotFound desc = could not find container \"26df70834d4bdb4bd42720d5b561f1b2b3417cc8738d5fbdc22d2c29204f2e5a\": container with ID starting with 26df70834d4bdb4bd42720d5b561f1b2b3417cc8738d5fbdc22d2c29204f2e5a not found: ID does not exist" Jan 26 15:59:41 crc kubenswrapper[4896]: I0126 15:59:41.833808 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-hjhkk"] Jan 26 15:59:42 crc kubenswrapper[4896]: I0126 15:59:42.787518 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf2859fd-5b7b-45fa-ae36-18244c995e05" path="/var/lib/kubelet/pods/bf2859fd-5b7b-45fa-ae36-18244c995e05/volumes" Jan 26 15:59:43 crc kubenswrapper[4896]: I0126 15:59:43.769491 4896 generic.go:334] "Generic (PLEG): container finished" podID="38a9be81-a6ab-4b04-9796-2f923678d8a9" containerID="9ea15688e5247aa9e527dc474f396b579bce378beb95f2293789e4b7285b35de" exitCode=0 Jan 26 15:59:43 crc kubenswrapper[4896]: I0126 15:59:43.769535 4896 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/keystone-db-sync-6czxg" event={"ID":"38a9be81-a6ab-4b04-9796-2f923678d8a9","Type":"ContainerDied","Data":"9ea15688e5247aa9e527dc474f396b579bce378beb95f2293789e4b7285b35de"} Jan 26 15:59:44 crc kubenswrapper[4896]: I0126 15:59:44.566206 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-hmlwb"] Jan 26 15:59:44 crc kubenswrapper[4896]: I0126 15:59:44.575863 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-hmlwb"] Jan 26 15:59:44 crc kubenswrapper[4896]: I0126 15:59:44.771405 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="085b4b18-0a01-4e7b-9959-fd2dec1b9da5" path="/var/lib/kubelet/pods/085b4b18-0a01-4e7b-9959-fd2dec1b9da5/volumes" Jan 26 15:59:45 crc kubenswrapper[4896]: I0126 15:59:45.321940 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-6czxg" Jan 26 15:59:45 crc kubenswrapper[4896]: I0126 15:59:45.446287 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a9be81-a6ab-4b04-9796-2f923678d8a9-combined-ca-bundle\") pod \"38a9be81-a6ab-4b04-9796-2f923678d8a9\" (UID: \"38a9be81-a6ab-4b04-9796-2f923678d8a9\") " Jan 26 15:59:45 crc kubenswrapper[4896]: I0126 15:59:45.446403 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cx9mr\" (UniqueName: \"kubernetes.io/projected/38a9be81-a6ab-4b04-9796-2f923678d8a9-kube-api-access-cx9mr\") pod \"38a9be81-a6ab-4b04-9796-2f923678d8a9\" (UID: \"38a9be81-a6ab-4b04-9796-2f923678d8a9\") " Jan 26 15:59:45 crc kubenswrapper[4896]: I0126 15:59:45.446537 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38a9be81-a6ab-4b04-9796-2f923678d8a9-config-data\") pod \"38a9be81-a6ab-4b04-9796-2f923678d8a9\" (UID: 
\"38a9be81-a6ab-4b04-9796-2f923678d8a9\") " Jan 26 15:59:45 crc kubenswrapper[4896]: I0126 15:59:45.453264 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38a9be81-a6ab-4b04-9796-2f923678d8a9-kube-api-access-cx9mr" (OuterVolumeSpecName: "kube-api-access-cx9mr") pod "38a9be81-a6ab-4b04-9796-2f923678d8a9" (UID: "38a9be81-a6ab-4b04-9796-2f923678d8a9"). InnerVolumeSpecName "kube-api-access-cx9mr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:59:45 crc kubenswrapper[4896]: I0126 15:59:45.485539 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38a9be81-a6ab-4b04-9796-2f923678d8a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38a9be81-a6ab-4b04-9796-2f923678d8a9" (UID: "38a9be81-a6ab-4b04-9796-2f923678d8a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:59:45 crc kubenswrapper[4896]: I0126 15:59:45.502113 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38a9be81-a6ab-4b04-9796-2f923678d8a9-config-data" (OuterVolumeSpecName: "config-data") pod "38a9be81-a6ab-4b04-9796-2f923678d8a9" (UID: "38a9be81-a6ab-4b04-9796-2f923678d8a9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:59:45 crc kubenswrapper[4896]: I0126 15:59:45.548980 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a9be81-a6ab-4b04-9796-2f923678d8a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:45 crc kubenswrapper[4896]: I0126 15:59:45.549207 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cx9mr\" (UniqueName: \"kubernetes.io/projected/38a9be81-a6ab-4b04-9796-2f923678d8a9-kube-api-access-cx9mr\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:45 crc kubenswrapper[4896]: I0126 15:59:45.549273 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38a9be81-a6ab-4b04-9796-2f923678d8a9-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:45 crc kubenswrapper[4896]: I0126 15:59:45.796291 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6czxg" event={"ID":"38a9be81-a6ab-4b04-9796-2f923678d8a9","Type":"ContainerDied","Data":"1770bf4fc2eeee2c5a45b66ffd9146c7832e8d45aaa462f3d2ac9a676d4edc71"} Jan 26 15:59:45 crc kubenswrapper[4896]: I0126 15:59:45.796766 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1770bf4fc2eeee2c5a45b66ffd9146c7832e8d45aaa462f3d2ac9a676d4edc71" Jan 26 15:59:45 crc kubenswrapper[4896]: I0126 15:59:45.796519 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-6czxg" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.248227 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-pp9hz"] Jan 26 15:59:46 crc kubenswrapper[4896]: E0126 15:59:46.249005 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="085b4b18-0a01-4e7b-9959-fd2dec1b9da5" containerName="mariadb-account-create-update" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.249124 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="085b4b18-0a01-4e7b-9959-fd2dec1b9da5" containerName="mariadb-account-create-update" Jan 26 15:59:46 crc kubenswrapper[4896]: E0126 15:59:46.260700 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf2859fd-5b7b-45fa-ae36-18244c995e05" containerName="dnsmasq-dns" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.260951 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf2859fd-5b7b-45fa-ae36-18244c995e05" containerName="dnsmasq-dns" Jan 26 15:59:46 crc kubenswrapper[4896]: E0126 15:59:46.261057 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38a9be81-a6ab-4b04-9796-2f923678d8a9" containerName="keystone-db-sync" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.261151 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="38a9be81-a6ab-4b04-9796-2f923678d8a9" containerName="keystone-db-sync" Jan 26 15:59:46 crc kubenswrapper[4896]: E0126 15:59:46.261288 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf2859fd-5b7b-45fa-ae36-18244c995e05" containerName="init" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.261400 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf2859fd-5b7b-45fa-ae36-18244c995e05" containerName="init" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.262034 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="38a9be81-a6ab-4b04-9796-2f923678d8a9" 
containerName="keystone-db-sync" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.262144 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="085b4b18-0a01-4e7b-9959-fd2dec1b9da5" containerName="mariadb-account-create-update" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.262239 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf2859fd-5b7b-45fa-ae36-18244c995e05" containerName="dnsmasq-dns" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.263959 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-pp9hz" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.270290 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-tlrkc"] Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.272184 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tlrkc" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.290895 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.291228 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5fkw6" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.291313 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.291539 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.292935 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.294465 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-pp9hz"] Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 
15:59:46.323855 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-tlrkc"] Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.338510 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5aa66e6-719d-4031-b116-f0bbddf2f66d-config-data\") pod \"keystone-bootstrap-tlrkc\" (UID: \"e5aa66e6-719d-4031-b116-f0bbddf2f66d\") " pod="openstack/keystone-bootstrap-tlrkc" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.338634 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e5aa66e6-719d-4031-b116-f0bbddf2f66d-fernet-keys\") pod \"keystone-bootstrap-tlrkc\" (UID: \"e5aa66e6-719d-4031-b116-f0bbddf2f66d\") " pod="openstack/keystone-bootstrap-tlrkc" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.338701 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06675205-e32d-4494-b3a2-79e3f111b266-dns-svc\") pod \"dnsmasq-dns-5959f8865f-pp9hz\" (UID: \"06675205-e32d-4494-b3a2-79e3f111b266\") " pod="openstack/dnsmasq-dns-5959f8865f-pp9hz" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.338722 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5aa66e6-719d-4031-b116-f0bbddf2f66d-combined-ca-bundle\") pod \"keystone-bootstrap-tlrkc\" (UID: \"e5aa66e6-719d-4031-b116-f0bbddf2f66d\") " pod="openstack/keystone-bootstrap-tlrkc" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.338749 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/06675205-e32d-4494-b3a2-79e3f111b266-ovsdbserver-sb\") pod 
\"dnsmasq-dns-5959f8865f-pp9hz\" (UID: \"06675205-e32d-4494-b3a2-79e3f111b266\") " pod="openstack/dnsmasq-dns-5959f8865f-pp9hz" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.338777 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e5aa66e6-719d-4031-b116-f0bbddf2f66d-credential-keys\") pod \"keystone-bootstrap-tlrkc\" (UID: \"e5aa66e6-719d-4031-b116-f0bbddf2f66d\") " pod="openstack/keystone-bootstrap-tlrkc" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.338814 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/06675205-e32d-4494-b3a2-79e3f111b266-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-pp9hz\" (UID: \"06675205-e32d-4494-b3a2-79e3f111b266\") " pod="openstack/dnsmasq-dns-5959f8865f-pp9hz" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.338901 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czvqc\" (UniqueName: \"kubernetes.io/projected/06675205-e32d-4494-b3a2-79e3f111b266-kube-api-access-czvqc\") pod \"dnsmasq-dns-5959f8865f-pp9hz\" (UID: \"06675205-e32d-4494-b3a2-79e3f111b266\") " pod="openstack/dnsmasq-dns-5959f8865f-pp9hz" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.338962 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06675205-e32d-4494-b3a2-79e3f111b266-config\") pod \"dnsmasq-dns-5959f8865f-pp9hz\" (UID: \"06675205-e32d-4494-b3a2-79e3f111b266\") " pod="openstack/dnsmasq-dns-5959f8865f-pp9hz" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.339005 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/06675205-e32d-4494-b3a2-79e3f111b266-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-pp9hz\" (UID: \"06675205-e32d-4494-b3a2-79e3f111b266\") " pod="openstack/dnsmasq-dns-5959f8865f-pp9hz" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.339031 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5aa66e6-719d-4031-b116-f0bbddf2f66d-scripts\") pod \"keystone-bootstrap-tlrkc\" (UID: \"e5aa66e6-719d-4031-b116-f0bbddf2f66d\") " pod="openstack/keystone-bootstrap-tlrkc" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.339098 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cns8g\" (UniqueName: \"kubernetes.io/projected/e5aa66e6-719d-4031-b116-f0bbddf2f66d-kube-api-access-cns8g\") pod \"keystone-bootstrap-tlrkc\" (UID: \"e5aa66e6-719d-4031-b116-f0bbddf2f66d\") " pod="openstack/keystone-bootstrap-tlrkc" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.349019 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-7pf4j"] Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.375139 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-7pf4j" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.384744 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-m6pjw" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.404463 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.442652 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8f4e140-bab4-479c-a97b-4a5aa49a47d3-combined-ca-bundle\") pod \"heat-db-sync-7pf4j\" (UID: \"c8f4e140-bab4-479c-a97b-4a5aa49a47d3\") " pod="openstack/heat-db-sync-7pf4j" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.442744 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr6x8\" (UniqueName: \"kubernetes.io/projected/c8f4e140-bab4-479c-a97b-4a5aa49a47d3-kube-api-access-dr6x8\") pod \"heat-db-sync-7pf4j\" (UID: \"c8f4e140-bab4-479c-a97b-4a5aa49a47d3\") " pod="openstack/heat-db-sync-7pf4j" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.443209 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e5aa66e6-719d-4031-b116-f0bbddf2f66d-fernet-keys\") pod \"keystone-bootstrap-tlrkc\" (UID: \"e5aa66e6-719d-4031-b116-f0bbddf2f66d\") " pod="openstack/keystone-bootstrap-tlrkc" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.443235 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8f4e140-bab4-479c-a97b-4a5aa49a47d3-config-data\") pod \"heat-db-sync-7pf4j\" (UID: \"c8f4e140-bab4-479c-a97b-4a5aa49a47d3\") " pod="openstack/heat-db-sync-7pf4j" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.443298 
4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06675205-e32d-4494-b3a2-79e3f111b266-dns-svc\") pod \"dnsmasq-dns-5959f8865f-pp9hz\" (UID: \"06675205-e32d-4494-b3a2-79e3f111b266\") " pod="openstack/dnsmasq-dns-5959f8865f-pp9hz" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.443323 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5aa66e6-719d-4031-b116-f0bbddf2f66d-combined-ca-bundle\") pod \"keystone-bootstrap-tlrkc\" (UID: \"e5aa66e6-719d-4031-b116-f0bbddf2f66d\") " pod="openstack/keystone-bootstrap-tlrkc" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.443353 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/06675205-e32d-4494-b3a2-79e3f111b266-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-pp9hz\" (UID: \"06675205-e32d-4494-b3a2-79e3f111b266\") " pod="openstack/dnsmasq-dns-5959f8865f-pp9hz" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.443380 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e5aa66e6-719d-4031-b116-f0bbddf2f66d-credential-keys\") pod \"keystone-bootstrap-tlrkc\" (UID: \"e5aa66e6-719d-4031-b116-f0bbddf2f66d\") " pod="openstack/keystone-bootstrap-tlrkc" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.443424 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/06675205-e32d-4494-b3a2-79e3f111b266-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-pp9hz\" (UID: \"06675205-e32d-4494-b3a2-79e3f111b266\") " pod="openstack/dnsmasq-dns-5959f8865f-pp9hz" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.443479 4896 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-czvqc\" (UniqueName: \"kubernetes.io/projected/06675205-e32d-4494-b3a2-79e3f111b266-kube-api-access-czvqc\") pod \"dnsmasq-dns-5959f8865f-pp9hz\" (UID: \"06675205-e32d-4494-b3a2-79e3f111b266\") " pod="openstack/dnsmasq-dns-5959f8865f-pp9hz" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.443546 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06675205-e32d-4494-b3a2-79e3f111b266-config\") pod \"dnsmasq-dns-5959f8865f-pp9hz\" (UID: \"06675205-e32d-4494-b3a2-79e3f111b266\") " pod="openstack/dnsmasq-dns-5959f8865f-pp9hz" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.444920 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06675205-e32d-4494-b3a2-79e3f111b266-dns-svc\") pod \"dnsmasq-dns-5959f8865f-pp9hz\" (UID: \"06675205-e32d-4494-b3a2-79e3f111b266\") " pod="openstack/dnsmasq-dns-5959f8865f-pp9hz" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.445202 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/06675205-e32d-4494-b3a2-79e3f111b266-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-pp9hz\" (UID: \"06675205-e32d-4494-b3a2-79e3f111b266\") " pod="openstack/dnsmasq-dns-5959f8865f-pp9hz" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.443797 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06675205-e32d-4494-b3a2-79e3f111b266-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-pp9hz\" (UID: \"06675205-e32d-4494-b3a2-79e3f111b266\") " pod="openstack/dnsmasq-dns-5959f8865f-pp9hz" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.448311 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/06675205-e32d-4494-b3a2-79e3f111b266-config\") pod \"dnsmasq-dns-5959f8865f-pp9hz\" (UID: \"06675205-e32d-4494-b3a2-79e3f111b266\") " pod="openstack/dnsmasq-dns-5959f8865f-pp9hz" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.448460 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06675205-e32d-4494-b3a2-79e3f111b266-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-pp9hz\" (UID: \"06675205-e32d-4494-b3a2-79e3f111b266\") " pod="openstack/dnsmasq-dns-5959f8865f-pp9hz" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.449042 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/06675205-e32d-4494-b3a2-79e3f111b266-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-pp9hz\" (UID: \"06675205-e32d-4494-b3a2-79e3f111b266\") " pod="openstack/dnsmasq-dns-5959f8865f-pp9hz" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.449550 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5aa66e6-719d-4031-b116-f0bbddf2f66d-scripts\") pod \"keystone-bootstrap-tlrkc\" (UID: \"e5aa66e6-719d-4031-b116-f0bbddf2f66d\") " pod="openstack/keystone-bootstrap-tlrkc" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.452994 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cns8g\" (UniqueName: \"kubernetes.io/projected/e5aa66e6-719d-4031-b116-f0bbddf2f66d-kube-api-access-cns8g\") pod \"keystone-bootstrap-tlrkc\" (UID: \"e5aa66e6-719d-4031-b116-f0bbddf2f66d\") " pod="openstack/keystone-bootstrap-tlrkc" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.453137 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5aa66e6-719d-4031-b116-f0bbddf2f66d-config-data\") pod 
\"keystone-bootstrap-tlrkc\" (UID: \"e5aa66e6-719d-4031-b116-f0bbddf2f66d\") " pod="openstack/keystone-bootstrap-tlrkc" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.457319 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5aa66e6-719d-4031-b116-f0bbddf2f66d-config-data\") pod \"keystone-bootstrap-tlrkc\" (UID: \"e5aa66e6-719d-4031-b116-f0bbddf2f66d\") " pod="openstack/keystone-bootstrap-tlrkc" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.470713 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5aa66e6-719d-4031-b116-f0bbddf2f66d-scripts\") pod \"keystone-bootstrap-tlrkc\" (UID: \"e5aa66e6-719d-4031-b116-f0bbddf2f66d\") " pod="openstack/keystone-bootstrap-tlrkc" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.470842 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e5aa66e6-719d-4031-b116-f0bbddf2f66d-fernet-keys\") pod \"keystone-bootstrap-tlrkc\" (UID: \"e5aa66e6-719d-4031-b116-f0bbddf2f66d\") " pod="openstack/keystone-bootstrap-tlrkc" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.478678 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5aa66e6-719d-4031-b116-f0bbddf2f66d-combined-ca-bundle\") pod \"keystone-bootstrap-tlrkc\" (UID: \"e5aa66e6-719d-4031-b116-f0bbddf2f66d\") " pod="openstack/keystone-bootstrap-tlrkc" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.479988 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e5aa66e6-719d-4031-b116-f0bbddf2f66d-credential-keys\") pod \"keystone-bootstrap-tlrkc\" (UID: \"e5aa66e6-719d-4031-b116-f0bbddf2f66d\") " pod="openstack/keystone-bootstrap-tlrkc" Jan 26 15:59:46 crc kubenswrapper[4896]: 
I0126 15:59:46.483009 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-7pf4j"] Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.501383 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cns8g\" (UniqueName: \"kubernetes.io/projected/e5aa66e6-719d-4031-b116-f0bbddf2f66d-kube-api-access-cns8g\") pod \"keystone-bootstrap-tlrkc\" (UID: \"e5aa66e6-719d-4031-b116-f0bbddf2f66d\") " pod="openstack/keystone-bootstrap-tlrkc" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.518220 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czvqc\" (UniqueName: \"kubernetes.io/projected/06675205-e32d-4494-b3a2-79e3f111b266-kube-api-access-czvqc\") pod \"dnsmasq-dns-5959f8865f-pp9hz\" (UID: \"06675205-e32d-4494-b3a2-79e3f111b266\") " pod="openstack/dnsmasq-dns-5959f8865f-pp9hz" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.555845 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr6x8\" (UniqueName: \"kubernetes.io/projected/c8f4e140-bab4-479c-a97b-4a5aa49a47d3-kube-api-access-dr6x8\") pod \"heat-db-sync-7pf4j\" (UID: \"c8f4e140-bab4-479c-a97b-4a5aa49a47d3\") " pod="openstack/heat-db-sync-7pf4j" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.555927 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8f4e140-bab4-479c-a97b-4a5aa49a47d3-config-data\") pod \"heat-db-sync-7pf4j\" (UID: \"c8f4e140-bab4-479c-a97b-4a5aa49a47d3\") " pod="openstack/heat-db-sync-7pf4j" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.556177 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8f4e140-bab4-479c-a97b-4a5aa49a47d3-combined-ca-bundle\") pod \"heat-db-sync-7pf4j\" (UID: \"c8f4e140-bab4-479c-a97b-4a5aa49a47d3\") " 
pod="openstack/heat-db-sync-7pf4j" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.561853 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-l5784"] Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.564836 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-l5784" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.567815 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8f4e140-bab4-479c-a97b-4a5aa49a47d3-config-data\") pod \"heat-db-sync-7pf4j\" (UID: \"c8f4e140-bab4-479c-a97b-4a5aa49a47d3\") " pod="openstack/heat-db-sync-7pf4j" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.568863 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.569327 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-6tnp9" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.572500 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8f4e140-bab4-479c-a97b-4a5aa49a47d3-combined-ca-bundle\") pod \"heat-db-sync-7pf4j\" (UID: \"c8f4e140-bab4-479c-a97b-4a5aa49a47d3\") " pod="openstack/heat-db-sync-7pf4j" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.590169 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.610239 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-pp9hz" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.613836 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr6x8\" (UniqueName: \"kubernetes.io/projected/c8f4e140-bab4-479c-a97b-4a5aa49a47d3-kube-api-access-dr6x8\") pod \"heat-db-sync-7pf4j\" (UID: \"c8f4e140-bab4-479c-a97b-4a5aa49a47d3\") " pod="openstack/heat-db-sync-7pf4j" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.629685 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-94m9x"] Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.630518 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tlrkc" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.633528 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-94m9x" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.639992 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.640176 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.640368 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-dvx2m" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.662358 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcb36972-ce84-471b-92b5-7be45e7e2d1a-combined-ca-bundle\") pod \"neutron-db-sync-94m9x\" (UID: \"bcb36972-ce84-471b-92b5-7be45e7e2d1a\") " pod="openstack/neutron-db-sync-94m9x" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.662436 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bcb36972-ce84-471b-92b5-7be45e7e2d1a-config\") pod \"neutron-db-sync-94m9x\" (UID: \"bcb36972-ce84-471b-92b5-7be45e7e2d1a\") " pod="openstack/neutron-db-sync-94m9x" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.662513 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/590e8b81-a793-4143-9b0e-f2afb348dd91-scripts\") pod \"cinder-db-sync-l5784\" (UID: \"590e8b81-a793-4143-9b0e-f2afb348dd91\") " pod="openstack/cinder-db-sync-l5784" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.662574 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/590e8b81-a793-4143-9b0e-f2afb348dd91-etc-machine-id\") pod \"cinder-db-sync-l5784\" (UID: \"590e8b81-a793-4143-9b0e-f2afb348dd91\") " pod="openstack/cinder-db-sync-l5784" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.662627 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/590e8b81-a793-4143-9b0e-f2afb348dd91-config-data\") pod \"cinder-db-sync-l5784\" (UID: \"590e8b81-a793-4143-9b0e-f2afb348dd91\") " pod="openstack/cinder-db-sync-l5784" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.662665 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bbl5\" (UniqueName: \"kubernetes.io/projected/590e8b81-a793-4143-9b0e-f2afb348dd91-kube-api-access-9bbl5\") pod \"cinder-db-sync-l5784\" (UID: \"590e8b81-a793-4143-9b0e-f2afb348dd91\") " pod="openstack/cinder-db-sync-l5784" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.662743 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/590e8b81-a793-4143-9b0e-f2afb348dd91-db-sync-config-data\") pod \"cinder-db-sync-l5784\" (UID: \"590e8b81-a793-4143-9b0e-f2afb348dd91\") " pod="openstack/cinder-db-sync-l5784" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.662826 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/590e8b81-a793-4143-9b0e-f2afb348dd91-combined-ca-bundle\") pod \"cinder-db-sync-l5784\" (UID: \"590e8b81-a793-4143-9b0e-f2afb348dd91\") " pod="openstack/cinder-db-sync-l5784" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.662865 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rv9j\" (UniqueName: \"kubernetes.io/projected/bcb36972-ce84-471b-92b5-7be45e7e2d1a-kube-api-access-4rv9j\") pod \"neutron-db-sync-94m9x\" (UID: \"bcb36972-ce84-471b-92b5-7be45e7e2d1a\") " pod="openstack/neutron-db-sync-94m9x" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.714732 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-l5784"] Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.715929 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-7pf4j" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.877122 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/590e8b81-a793-4143-9b0e-f2afb348dd91-etc-machine-id\") pod \"cinder-db-sync-l5784\" (UID: \"590e8b81-a793-4143-9b0e-f2afb348dd91\") " pod="openstack/cinder-db-sync-l5784" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.877180 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/590e8b81-a793-4143-9b0e-f2afb348dd91-config-data\") pod \"cinder-db-sync-l5784\" (UID: \"590e8b81-a793-4143-9b0e-f2afb348dd91\") " pod="openstack/cinder-db-sync-l5784" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.877222 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bbl5\" (UniqueName: \"kubernetes.io/projected/590e8b81-a793-4143-9b0e-f2afb348dd91-kube-api-access-9bbl5\") pod \"cinder-db-sync-l5784\" (UID: \"590e8b81-a793-4143-9b0e-f2afb348dd91\") " pod="openstack/cinder-db-sync-l5784" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.877297 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/590e8b81-a793-4143-9b0e-f2afb348dd91-db-sync-config-data\") pod \"cinder-db-sync-l5784\" (UID: \"590e8b81-a793-4143-9b0e-f2afb348dd91\") " pod="openstack/cinder-db-sync-l5784" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.877381 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/590e8b81-a793-4143-9b0e-f2afb348dd91-combined-ca-bundle\") pod \"cinder-db-sync-l5784\" (UID: \"590e8b81-a793-4143-9b0e-f2afb348dd91\") " pod="openstack/cinder-db-sync-l5784" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 
15:59:46.877422 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rv9j\" (UniqueName: \"kubernetes.io/projected/bcb36972-ce84-471b-92b5-7be45e7e2d1a-kube-api-access-4rv9j\") pod \"neutron-db-sync-94m9x\" (UID: \"bcb36972-ce84-471b-92b5-7be45e7e2d1a\") " pod="openstack/neutron-db-sync-94m9x" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.877554 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcb36972-ce84-471b-92b5-7be45e7e2d1a-combined-ca-bundle\") pod \"neutron-db-sync-94m9x\" (UID: \"bcb36972-ce84-471b-92b5-7be45e7e2d1a\") " pod="openstack/neutron-db-sync-94m9x" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.877624 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bcb36972-ce84-471b-92b5-7be45e7e2d1a-config\") pod \"neutron-db-sync-94m9x\" (UID: \"bcb36972-ce84-471b-92b5-7be45e7e2d1a\") " pod="openstack/neutron-db-sync-94m9x" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.877716 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/590e8b81-a793-4143-9b0e-f2afb348dd91-scripts\") pod \"cinder-db-sync-l5784\" (UID: \"590e8b81-a793-4143-9b0e-f2afb348dd91\") " pod="openstack/cinder-db-sync-l5784" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.878062 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/590e8b81-a793-4143-9b0e-f2afb348dd91-etc-machine-id\") pod \"cinder-db-sync-l5784\" (UID: \"590e8b81-a793-4143-9b0e-f2afb348dd91\") " pod="openstack/cinder-db-sync-l5784" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.891002 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-94m9x"] Jan 26 15:59:46 crc kubenswrapper[4896]: 
I0126 15:59:46.897647 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/590e8b81-a793-4143-9b0e-f2afb348dd91-db-sync-config-data\") pod \"cinder-db-sync-l5784\" (UID: \"590e8b81-a793-4143-9b0e-f2afb348dd91\") " pod="openstack/cinder-db-sync-l5784" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.898451 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/590e8b81-a793-4143-9b0e-f2afb348dd91-config-data\") pod \"cinder-db-sync-l5784\" (UID: \"590e8b81-a793-4143-9b0e-f2afb348dd91\") " pod="openstack/cinder-db-sync-l5784" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.898650 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcb36972-ce84-471b-92b5-7be45e7e2d1a-combined-ca-bundle\") pod \"neutron-db-sync-94m9x\" (UID: \"bcb36972-ce84-471b-92b5-7be45e7e2d1a\") " pod="openstack/neutron-db-sync-94m9x" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.899383 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/590e8b81-a793-4143-9b0e-f2afb348dd91-combined-ca-bundle\") pod \"cinder-db-sync-l5784\" (UID: \"590e8b81-a793-4143-9b0e-f2afb348dd91\") " pod="openstack/cinder-db-sync-l5784" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.901474 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/590e8b81-a793-4143-9b0e-f2afb348dd91-scripts\") pod \"cinder-db-sync-l5784\" (UID: \"590e8b81-a793-4143-9b0e-f2afb348dd91\") " pod="openstack/cinder-db-sync-l5784" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.911342 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/bcb36972-ce84-471b-92b5-7be45e7e2d1a-config\") pod \"neutron-db-sync-94m9x\" (UID: \"bcb36972-ce84-471b-92b5-7be45e7e2d1a\") " pod="openstack/neutron-db-sync-94m9x" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.916743 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-pp9hz"] Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.942665 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-rd44b"] Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.944199 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-rd44b" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.949988 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.950905 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-r8lzz" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.950930 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.951893 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-rd44b"] Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.953853 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bbl5\" (UniqueName: \"kubernetes.io/projected/590e8b81-a793-4143-9b0e-f2afb348dd91-kube-api-access-9bbl5\") pod \"cinder-db-sync-l5784\" (UID: \"590e8b81-a793-4143-9b0e-f2afb348dd91\") " pod="openstack/cinder-db-sync-l5784" Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.973503 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-z5rts"] Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.975426 4896 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-z5rts"] Jan 26 15:59:46 crc kubenswrapper[4896]: I0126 15:59:46.975520 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-z5rts" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.012707 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rv9j\" (UniqueName: \"kubernetes.io/projected/bcb36972-ce84-471b-92b5-7be45e7e2d1a-kube-api-access-4rv9j\") pod \"neutron-db-sync-94m9x\" (UID: \"bcb36972-ce84-471b-92b5-7be45e7e2d1a\") " pod="openstack/neutron-db-sync-94m9x" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.017877 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-rv5xj"] Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.026478 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-rv5xj" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.031383 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-rv5xj"] Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.036451 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-5z65h" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.039009 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.114512 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8f8cfa23-5804-4f61-815b-287e23958ff9-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-z5rts\" (UID: \"8f8cfa23-5804-4f61-815b-287e23958ff9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-z5rts" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.114940 4896 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d0eef199-8f69-4f92-9435-ff0fd74dd854-db-sync-config-data\") pod \"barbican-db-sync-rv5xj\" (UID: \"d0eef199-8f69-4f92-9435-ff0fd74dd854\") " pod="openstack/barbican-db-sync-rv5xj" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.115015 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f8cfa23-5804-4f61-815b-287e23958ff9-config\") pod \"dnsmasq-dns-58dd9ff6bc-z5rts\" (UID: \"8f8cfa23-5804-4f61-815b-287e23958ff9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-z5rts" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.138676 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8f8cfa23-5804-4f61-815b-287e23958ff9-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-z5rts\" (UID: \"8f8cfa23-5804-4f61-815b-287e23958ff9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-z5rts" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.138726 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b4a2eac-2950-4747-bc43-f287adafb4e2-scripts\") pod \"placement-db-sync-rd44b\" (UID: \"9b4a2eac-2950-4747-bc43-f287adafb4e2\") " pod="openstack/placement-db-sync-rd44b" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.138768 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx7x9\" (UniqueName: \"kubernetes.io/projected/8f8cfa23-5804-4f61-815b-287e23958ff9-kube-api-access-nx7x9\") pod \"dnsmasq-dns-58dd9ff6bc-z5rts\" (UID: \"8f8cfa23-5804-4f61-815b-287e23958ff9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-z5rts" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.138788 4896 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b4a2eac-2950-4747-bc43-f287adafb4e2-combined-ca-bundle\") pod \"placement-db-sync-rd44b\" (UID: \"9b4a2eac-2950-4747-bc43-f287adafb4e2\") " pod="openstack/placement-db-sync-rd44b" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.138809 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn62w\" (UniqueName: \"kubernetes.io/projected/9b4a2eac-2950-4747-bc43-f287adafb4e2-kube-api-access-dn62w\") pod \"placement-db-sync-rd44b\" (UID: \"9b4a2eac-2950-4747-bc43-f287adafb4e2\") " pod="openstack/placement-db-sync-rd44b" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.138838 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0eef199-8f69-4f92-9435-ff0fd74dd854-combined-ca-bundle\") pod \"barbican-db-sync-rv5xj\" (UID: \"d0eef199-8f69-4f92-9435-ff0fd74dd854\") " pod="openstack/barbican-db-sync-rv5xj" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.138887 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f8cfa23-5804-4f61-815b-287e23958ff9-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-z5rts\" (UID: \"8f8cfa23-5804-4f61-815b-287e23958ff9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-z5rts" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.138973 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8f8cfa23-5804-4f61-815b-287e23958ff9-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-z5rts\" (UID: \"8f8cfa23-5804-4f61-815b-287e23958ff9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-z5rts" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 
15:59:47.139011 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5khhf\" (UniqueName: \"kubernetes.io/projected/d0eef199-8f69-4f92-9435-ff0fd74dd854-kube-api-access-5khhf\") pod \"barbican-db-sync-rv5xj\" (UID: \"d0eef199-8f69-4f92-9435-ff0fd74dd854\") " pod="openstack/barbican-db-sync-rv5xj" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.139065 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b4a2eac-2950-4747-bc43-f287adafb4e2-logs\") pod \"placement-db-sync-rd44b\" (UID: \"9b4a2eac-2950-4747-bc43-f287adafb4e2\") " pod="openstack/placement-db-sync-rd44b" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.139099 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b4a2eac-2950-4747-bc43-f287adafb4e2-config-data\") pod \"placement-db-sync-rd44b\" (UID: \"9b4a2eac-2950-4747-bc43-f287adafb4e2\") " pod="openstack/placement-db-sync-rd44b" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.227948 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-l5784" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.243122 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d0eef199-8f69-4f92-9435-ff0fd74dd854-db-sync-config-data\") pod \"barbican-db-sync-rv5xj\" (UID: \"d0eef199-8f69-4f92-9435-ff0fd74dd854\") " pod="openstack/barbican-db-sync-rv5xj" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.243267 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f8cfa23-5804-4f61-815b-287e23958ff9-config\") pod \"dnsmasq-dns-58dd9ff6bc-z5rts\" (UID: \"8f8cfa23-5804-4f61-815b-287e23958ff9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-z5rts" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.244662 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8f8cfa23-5804-4f61-815b-287e23958ff9-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-z5rts\" (UID: \"8f8cfa23-5804-4f61-815b-287e23958ff9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-z5rts" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.244716 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b4a2eac-2950-4747-bc43-f287adafb4e2-scripts\") pod \"placement-db-sync-rd44b\" (UID: \"9b4a2eac-2950-4747-bc43-f287adafb4e2\") " pod="openstack/placement-db-sync-rd44b" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.244795 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b4a2eac-2950-4747-bc43-f287adafb4e2-combined-ca-bundle\") pod \"placement-db-sync-rd44b\" (UID: \"9b4a2eac-2950-4747-bc43-f287adafb4e2\") " pod="openstack/placement-db-sync-rd44b" Jan 26 15:59:47 crc kubenswrapper[4896]: 
I0126 15:59:47.244822 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx7x9\" (UniqueName: \"kubernetes.io/projected/8f8cfa23-5804-4f61-815b-287e23958ff9-kube-api-access-nx7x9\") pod \"dnsmasq-dns-58dd9ff6bc-z5rts\" (UID: \"8f8cfa23-5804-4f61-815b-287e23958ff9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-z5rts" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.244853 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn62w\" (UniqueName: \"kubernetes.io/projected/9b4a2eac-2950-4747-bc43-f287adafb4e2-kube-api-access-dn62w\") pod \"placement-db-sync-rd44b\" (UID: \"9b4a2eac-2950-4747-bc43-f287adafb4e2\") " pod="openstack/placement-db-sync-rd44b" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.244892 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0eef199-8f69-4f92-9435-ff0fd74dd854-combined-ca-bundle\") pod \"barbican-db-sync-rv5xj\" (UID: \"d0eef199-8f69-4f92-9435-ff0fd74dd854\") " pod="openstack/barbican-db-sync-rv5xj" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.244947 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f8cfa23-5804-4f61-815b-287e23958ff9-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-z5rts\" (UID: \"8f8cfa23-5804-4f61-815b-287e23958ff9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-z5rts" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.245090 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8f8cfa23-5804-4f61-815b-287e23958ff9-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-z5rts\" (UID: \"8f8cfa23-5804-4f61-815b-287e23958ff9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-z5rts" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.245156 4896 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-5khhf\" (UniqueName: \"kubernetes.io/projected/d0eef199-8f69-4f92-9435-ff0fd74dd854-kube-api-access-5khhf\") pod \"barbican-db-sync-rv5xj\" (UID: \"d0eef199-8f69-4f92-9435-ff0fd74dd854\") " pod="openstack/barbican-db-sync-rv5xj" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.245205 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b4a2eac-2950-4747-bc43-f287adafb4e2-logs\") pod \"placement-db-sync-rd44b\" (UID: \"9b4a2eac-2950-4747-bc43-f287adafb4e2\") " pod="openstack/placement-db-sync-rd44b" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.245318 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b4a2eac-2950-4747-bc43-f287adafb4e2-config-data\") pod \"placement-db-sync-rd44b\" (UID: \"9b4a2eac-2950-4747-bc43-f287adafb4e2\") " pod="openstack/placement-db-sync-rd44b" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.245423 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8f8cfa23-5804-4f61-815b-287e23958ff9-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-z5rts\" (UID: \"8f8cfa23-5804-4f61-815b-287e23958ff9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-z5rts" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.248835 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f8cfa23-5804-4f61-815b-287e23958ff9-config\") pod \"dnsmasq-dns-58dd9ff6bc-z5rts\" (UID: \"8f8cfa23-5804-4f61-815b-287e23958ff9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-z5rts" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.250336 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/9b4a2eac-2950-4747-bc43-f287adafb4e2-logs\") pod \"placement-db-sync-rd44b\" (UID: \"9b4a2eac-2950-4747-bc43-f287adafb4e2\") " pod="openstack/placement-db-sync-rd44b" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.251191 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f8cfa23-5804-4f61-815b-287e23958ff9-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-z5rts\" (UID: \"8f8cfa23-5804-4f61-815b-287e23958ff9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-z5rts" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.251602 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8f8cfa23-5804-4f61-815b-287e23958ff9-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-z5rts\" (UID: \"8f8cfa23-5804-4f61-815b-287e23958ff9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-z5rts" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.255967 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8f8cfa23-5804-4f61-815b-287e23958ff9-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-z5rts\" (UID: \"8f8cfa23-5804-4f61-815b-287e23958ff9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-z5rts" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.261130 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8f8cfa23-5804-4f61-815b-287e23958ff9-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-z5rts\" (UID: \"8f8cfa23-5804-4f61-815b-287e23958ff9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-z5rts" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.263694 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b4a2eac-2950-4747-bc43-f287adafb4e2-scripts\") pod \"placement-db-sync-rd44b\" (UID: 
\"9b4a2eac-2950-4747-bc43-f287adafb4e2\") " pod="openstack/placement-db-sync-rd44b" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.263900 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d0eef199-8f69-4f92-9435-ff0fd74dd854-db-sync-config-data\") pod \"barbican-db-sync-rv5xj\" (UID: \"d0eef199-8f69-4f92-9435-ff0fd74dd854\") " pod="openstack/barbican-db-sync-rv5xj" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.268987 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b4a2eac-2950-4747-bc43-f287adafb4e2-config-data\") pod \"placement-db-sync-rd44b\" (UID: \"9b4a2eac-2950-4747-bc43-f287adafb4e2\") " pod="openstack/placement-db-sync-rd44b" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.272516 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5khhf\" (UniqueName: \"kubernetes.io/projected/d0eef199-8f69-4f92-9435-ff0fd74dd854-kube-api-access-5khhf\") pod \"barbican-db-sync-rv5xj\" (UID: \"d0eef199-8f69-4f92-9435-ff0fd74dd854\") " pod="openstack/barbican-db-sync-rv5xj" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.275225 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-94m9x" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.275885 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn62w\" (UniqueName: \"kubernetes.io/projected/9b4a2eac-2950-4747-bc43-f287adafb4e2-kube-api-access-dn62w\") pod \"placement-db-sync-rd44b\" (UID: \"9b4a2eac-2950-4747-bc43-f287adafb4e2\") " pod="openstack/placement-db-sync-rd44b" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.281365 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b4a2eac-2950-4747-bc43-f287adafb4e2-combined-ca-bundle\") pod \"placement-db-sync-rd44b\" (UID: \"9b4a2eac-2950-4747-bc43-f287adafb4e2\") " pod="openstack/placement-db-sync-rd44b" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.287959 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx7x9\" (UniqueName: \"kubernetes.io/projected/8f8cfa23-5804-4f61-815b-287e23958ff9-kube-api-access-nx7x9\") pod \"dnsmasq-dns-58dd9ff6bc-z5rts\" (UID: \"8f8cfa23-5804-4f61-815b-287e23958ff9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-z5rts" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.292804 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0eef199-8f69-4f92-9435-ff0fd74dd854-combined-ca-bundle\") pod \"barbican-db-sync-rv5xj\" (UID: \"d0eef199-8f69-4f92-9435-ff0fd74dd854\") " pod="openstack/barbican-db-sync-rv5xj" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.311814 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-rd44b" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.336880 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-z5rts" Jan 26 15:59:47 crc kubenswrapper[4896]: I0126 15:59:47.361866 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-rv5xj" Jan 26 15:59:48 crc kubenswrapper[4896]: I0126 15:59:48.017105 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-pp9hz"] Jan 26 15:59:48 crc kubenswrapper[4896]: W0126 15:59:48.037646 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06675205_e32d_4494_b3a2_79e3f111b266.slice/crio-796efd2735041d78a1167d1ed0a656b03a05a8d5262186483dfcf4ded046b00b WatchSource:0}: Error finding container 796efd2735041d78a1167d1ed0a656b03a05a8d5262186483dfcf4ded046b00b: Status 404 returned error can't find the container with id 796efd2735041d78a1167d1ed0a656b03a05a8d5262186483dfcf4ded046b00b Jan 26 15:59:48 crc kubenswrapper[4896]: I0126 15:59:48.043194 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-7pf4j"] Jan 26 15:59:48 crc kubenswrapper[4896]: I0126 15:59:48.227779 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:59:48 crc kubenswrapper[4896]: I0126 15:59:48.291924 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:59:48 crc kubenswrapper[4896]: I0126 15:59:48.292080 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:59:48 crc kubenswrapper[4896]: I0126 15:59:48.305021 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 15:59:48 crc kubenswrapper[4896]: I0126 15:59:48.305334 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 15:59:48 crc kubenswrapper[4896]: I0126 15:59:48.369978 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-tlrkc"] Jan 26 15:59:48 crc kubenswrapper[4896]: I0126 15:59:48.393859 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41bebe25-46fb-4c06-9977-e39a32407c42-scripts\") pod \"ceilometer-0\" (UID: \"41bebe25-46fb-4c06-9977-e39a32407c42\") " pod="openstack/ceilometer-0" Jan 26 15:59:48 crc kubenswrapper[4896]: I0126 15:59:48.393927 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c8w9\" (UniqueName: \"kubernetes.io/projected/41bebe25-46fb-4c06-9977-e39a32407c42-kube-api-access-5c8w9\") pod \"ceilometer-0\" (UID: \"41bebe25-46fb-4c06-9977-e39a32407c42\") " pod="openstack/ceilometer-0" Jan 26 15:59:48 crc kubenswrapper[4896]: I0126 15:59:48.393965 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41bebe25-46fb-4c06-9977-e39a32407c42-log-httpd\") pod \"ceilometer-0\" (UID: \"41bebe25-46fb-4c06-9977-e39a32407c42\") " pod="openstack/ceilometer-0" Jan 26 15:59:48 crc kubenswrapper[4896]: I0126 15:59:48.394039 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41bebe25-46fb-4c06-9977-e39a32407c42-run-httpd\") pod \"ceilometer-0\" (UID: \"41bebe25-46fb-4c06-9977-e39a32407c42\") " 
pod="openstack/ceilometer-0" Jan 26 15:59:48 crc kubenswrapper[4896]: I0126 15:59:48.394085 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41bebe25-46fb-4c06-9977-e39a32407c42-config-data\") pod \"ceilometer-0\" (UID: \"41bebe25-46fb-4c06-9977-e39a32407c42\") " pod="openstack/ceilometer-0" Jan 26 15:59:48 crc kubenswrapper[4896]: I0126 15:59:48.394102 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41bebe25-46fb-4c06-9977-e39a32407c42-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"41bebe25-46fb-4c06-9977-e39a32407c42\") " pod="openstack/ceilometer-0" Jan 26 15:59:48 crc kubenswrapper[4896]: I0126 15:59:48.394140 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/41bebe25-46fb-4c06-9977-e39a32407c42-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"41bebe25-46fb-4c06-9977-e39a32407c42\") " pod="openstack/ceilometer-0" Jan 26 15:59:48 crc kubenswrapper[4896]: I0126 15:59:48.494973 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c8w9\" (UniqueName: \"kubernetes.io/projected/41bebe25-46fb-4c06-9977-e39a32407c42-kube-api-access-5c8w9\") pod \"ceilometer-0\" (UID: \"41bebe25-46fb-4c06-9977-e39a32407c42\") " pod="openstack/ceilometer-0" Jan 26 15:59:48 crc kubenswrapper[4896]: I0126 15:59:48.495051 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41bebe25-46fb-4c06-9977-e39a32407c42-log-httpd\") pod \"ceilometer-0\" (UID: \"41bebe25-46fb-4c06-9977-e39a32407c42\") " pod="openstack/ceilometer-0" Jan 26 15:59:48 crc kubenswrapper[4896]: I0126 15:59:48.495118 4896 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41bebe25-46fb-4c06-9977-e39a32407c42-run-httpd\") pod \"ceilometer-0\" (UID: \"41bebe25-46fb-4c06-9977-e39a32407c42\") " pod="openstack/ceilometer-0" Jan 26 15:59:48 crc kubenswrapper[4896]: I0126 15:59:48.495172 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41bebe25-46fb-4c06-9977-e39a32407c42-config-data\") pod \"ceilometer-0\" (UID: \"41bebe25-46fb-4c06-9977-e39a32407c42\") " pod="openstack/ceilometer-0" Jan 26 15:59:48 crc kubenswrapper[4896]: I0126 15:59:48.495195 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41bebe25-46fb-4c06-9977-e39a32407c42-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"41bebe25-46fb-4c06-9977-e39a32407c42\") " pod="openstack/ceilometer-0" Jan 26 15:59:48 crc kubenswrapper[4896]: I0126 15:59:48.495238 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/41bebe25-46fb-4c06-9977-e39a32407c42-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"41bebe25-46fb-4c06-9977-e39a32407c42\") " pod="openstack/ceilometer-0" Jan 26 15:59:48 crc kubenswrapper[4896]: I0126 15:59:48.495317 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41bebe25-46fb-4c06-9977-e39a32407c42-scripts\") pod \"ceilometer-0\" (UID: \"41bebe25-46fb-4c06-9977-e39a32407c42\") " pod="openstack/ceilometer-0" Jan 26 15:59:48 crc kubenswrapper[4896]: I0126 15:59:48.508149 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41bebe25-46fb-4c06-9977-e39a32407c42-run-httpd\") pod \"ceilometer-0\" (UID: \"41bebe25-46fb-4c06-9977-e39a32407c42\") " pod="openstack/ceilometer-0" Jan 26 15:59:48 crc 
kubenswrapper[4896]: I0126 15:59:48.516567 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41bebe25-46fb-4c06-9977-e39a32407c42-log-httpd\") pod \"ceilometer-0\" (UID: \"41bebe25-46fb-4c06-9977-e39a32407c42\") " pod="openstack/ceilometer-0" Jan 26 15:59:48 crc kubenswrapper[4896]: I0126 15:59:48.652012 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41bebe25-46fb-4c06-9977-e39a32407c42-scripts\") pod \"ceilometer-0\" (UID: \"41bebe25-46fb-4c06-9977-e39a32407c42\") " pod="openstack/ceilometer-0" Jan 26 15:59:48 crc kubenswrapper[4896]: I0126 15:59:48.656041 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c8w9\" (UniqueName: \"kubernetes.io/projected/41bebe25-46fb-4c06-9977-e39a32407c42-kube-api-access-5c8w9\") pod \"ceilometer-0\" (UID: \"41bebe25-46fb-4c06-9977-e39a32407c42\") " pod="openstack/ceilometer-0" Jan 26 15:59:48 crc kubenswrapper[4896]: I0126 15:59:48.666251 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/41bebe25-46fb-4c06-9977-e39a32407c42-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"41bebe25-46fb-4c06-9977-e39a32407c42\") " pod="openstack/ceilometer-0" Jan 26 15:59:48 crc kubenswrapper[4896]: I0126 15:59:48.666569 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41bebe25-46fb-4c06-9977-e39a32407c42-config-data\") pod \"ceilometer-0\" (UID: \"41bebe25-46fb-4c06-9977-e39a32407c42\") " pod="openstack/ceilometer-0" Jan 26 15:59:48 crc kubenswrapper[4896]: I0126 15:59:48.675331 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41bebe25-46fb-4c06-9977-e39a32407c42-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"41bebe25-46fb-4c06-9977-e39a32407c42\") " pod="openstack/ceilometer-0" Jan 26 15:59:48 crc kubenswrapper[4896]: I0126 15:59:48.735833 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-l5784"] Jan 26 15:59:48 crc kubenswrapper[4896]: I0126 15:59:48.750425 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-94m9x"] Jan 26 15:59:48 crc kubenswrapper[4896]: I0126 15:59:48.944462 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-7pf4j" event={"ID":"c8f4e140-bab4-479c-a97b-4a5aa49a47d3","Type":"ContainerStarted","Data":"96ab72afd67da2a10352899cd87127ecbbad094f95fc2a543f5d9326cb40a2f4"} Jan 26 15:59:48 crc kubenswrapper[4896]: I0126 15:59:48.953531 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-94m9x" event={"ID":"bcb36972-ce84-471b-92b5-7be45e7e2d1a","Type":"ContainerStarted","Data":"3dcb6d886bdb78f4908c9771d13353e8efb2a79b7a738597ec4254c88146c128"} Jan 26 15:59:48 crc kubenswrapper[4896]: I0126 15:59:48.960728 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tlrkc" event={"ID":"e5aa66e6-719d-4031-b116-f0bbddf2f66d","Type":"ContainerStarted","Data":"1f293fa1c59782566037a286f0926225706454904b5711b8a336711ffaf6fe91"} Jan 26 15:59:48 crc kubenswrapper[4896]: I0126 15:59:48.975406 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-pp9hz" event={"ID":"06675205-e32d-4494-b3a2-79e3f111b266","Type":"ContainerStarted","Data":"796efd2735041d78a1167d1ed0a656b03a05a8d5262186483dfcf4ded046b00b"} Jan 26 15:59:48 crc kubenswrapper[4896]: I0126 15:59:48.975844 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:59:49 crc kubenswrapper[4896]: I0126 15:59:49.361469 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-rv5xj"] Jan 26 15:59:49 crc kubenswrapper[4896]: I0126 15:59:49.381507 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-rd44b"] Jan 26 15:59:49 crc kubenswrapper[4896]: I0126 15:59:49.567018 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-z5rts"] Jan 26 15:59:49 crc kubenswrapper[4896]: I0126 15:59:49.693084 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-vbx2q"] Jan 26 15:59:49 crc kubenswrapper[4896]: I0126 15:59:49.695381 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vbx2q" Jan 26 15:59:49 crc kubenswrapper[4896]: I0126 15:59:49.713353 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 26 15:59:49 crc kubenswrapper[4896]: I0126 15:59:49.774918 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vbx2q"] Jan 26 15:59:49 crc kubenswrapper[4896]: I0126 15:59:49.867010 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aab8ea78-3869-4a6c-a4e7-42d593d53756-operator-scripts\") pod \"root-account-create-update-vbx2q\" (UID: \"aab8ea78-3869-4a6c-a4e7-42d593d53756\") " pod="openstack/root-account-create-update-vbx2q" Jan 26 15:59:49 crc kubenswrapper[4896]: I0126 15:59:49.867068 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccnhg\" (UniqueName: \"kubernetes.io/projected/aab8ea78-3869-4a6c-a4e7-42d593d53756-kube-api-access-ccnhg\") pod \"root-account-create-update-vbx2q\" (UID: 
\"aab8ea78-3869-4a6c-a4e7-42d593d53756\") " pod="openstack/root-account-create-update-vbx2q" Jan 26 15:59:49 crc kubenswrapper[4896]: I0126 15:59:49.911177 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:59:49 crc kubenswrapper[4896]: I0126 15:59:49.969989 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aab8ea78-3869-4a6c-a4e7-42d593d53756-operator-scripts\") pod \"root-account-create-update-vbx2q\" (UID: \"aab8ea78-3869-4a6c-a4e7-42d593d53756\") " pod="openstack/root-account-create-update-vbx2q" Jan 26 15:59:49 crc kubenswrapper[4896]: I0126 15:59:49.970052 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccnhg\" (UniqueName: \"kubernetes.io/projected/aab8ea78-3869-4a6c-a4e7-42d593d53756-kube-api-access-ccnhg\") pod \"root-account-create-update-vbx2q\" (UID: \"aab8ea78-3869-4a6c-a4e7-42d593d53756\") " pod="openstack/root-account-create-update-vbx2q" Jan 26 15:59:49 crc kubenswrapper[4896]: I0126 15:59:49.971559 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aab8ea78-3869-4a6c-a4e7-42d593d53756-operator-scripts\") pod \"root-account-create-update-vbx2q\" (UID: \"aab8ea78-3869-4a6c-a4e7-42d593d53756\") " pod="openstack/root-account-create-update-vbx2q" Jan 26 15:59:50 crc kubenswrapper[4896]: I0126 15:59:50.026898 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccnhg\" (UniqueName: \"kubernetes.io/projected/aab8ea78-3869-4a6c-a4e7-42d593d53756-kube-api-access-ccnhg\") pod \"root-account-create-update-vbx2q\" (UID: \"aab8ea78-3869-4a6c-a4e7-42d593d53756\") " pod="openstack/root-account-create-update-vbx2q" Jan 26 15:59:50 crc kubenswrapper[4896]: I0126 15:59:50.072061 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-db-sync-l5784" event={"ID":"590e8b81-a793-4143-9b0e-f2afb348dd91","Type":"ContainerStarted","Data":"848c998855445957774fba16231abd3b3d98dfad9dcf3ae4e475db0fa24c6db9"} Jan 26 15:59:50 crc kubenswrapper[4896]: I0126 15:59:50.079896 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tlrkc" event={"ID":"e5aa66e6-719d-4031-b116-f0bbddf2f66d","Type":"ContainerStarted","Data":"457fd0f2ef23512434f704c800ba49efe9480908c29d2de0a9af1d9178f01f2d"} Jan 26 15:59:50 crc kubenswrapper[4896]: I0126 15:59:50.089778 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rv5xj" event={"ID":"d0eef199-8f69-4f92-9435-ff0fd74dd854","Type":"ContainerStarted","Data":"83e91d164b07f80f9ebbffb67a2acea7eb76df60793d1b8e2638c2747f7e6366"} Jan 26 15:59:50 crc kubenswrapper[4896]: I0126 15:59:50.101221 4896 generic.go:334] "Generic (PLEG): container finished" podID="06675205-e32d-4494-b3a2-79e3f111b266" containerID="27fa5547853ddc08135126e1348b7cfe31de7d4fac23625fac918886683eb885" exitCode=0 Jan 26 15:59:50 crc kubenswrapper[4896]: I0126 15:59:50.101332 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-pp9hz" event={"ID":"06675205-e32d-4494-b3a2-79e3f111b266","Type":"ContainerDied","Data":"27fa5547853ddc08135126e1348b7cfe31de7d4fac23625fac918886683eb885"} Jan 26 15:59:50 crc kubenswrapper[4896]: I0126 15:59:50.110191 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-tlrkc" podStartSLOduration=4.110169774 podStartE2EDuration="4.110169774s" podCreationTimestamp="2026-01-26 15:59:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:59:50.107970369 +0000 UTC m=+1547.889850782" watchObservedRunningTime="2026-01-26 15:59:50.110169774 +0000 UTC m=+1547.892050167" Jan 26 15:59:50 crc kubenswrapper[4896]: I0126 
15:59:50.121781 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rd44b" event={"ID":"9b4a2eac-2950-4747-bc43-f287adafb4e2","Type":"ContainerStarted","Data":"1f19e71ee7a05bf84cda1c325e7400495cfe2c7d9411e1a2b172b1eaea11423e"} Jan 26 15:59:50 crc kubenswrapper[4896]: I0126 15:59:50.124612 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-z5rts" event={"ID":"8f8cfa23-5804-4f61-815b-287e23958ff9","Type":"ContainerStarted","Data":"49b298b6a8226c6ee810934454585deee549f098966a2347c7b6e1367e2baa6e"} Jan 26 15:59:50 crc kubenswrapper[4896]: I0126 15:59:50.379668 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vbx2q" Jan 26 15:59:51 crc kubenswrapper[4896]: I0126 15:59:51.304528 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:59:51 crc kubenswrapper[4896]: I0126 15:59:51.418371 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-94m9x" event={"ID":"bcb36972-ce84-471b-92b5-7be45e7e2d1a","Type":"ContainerStarted","Data":"2ff3fd8248bfa0e904da442fc579ca7cc674ffca199f97639e89b0121ecb2715"} Jan 26 15:59:51 crc kubenswrapper[4896]: I0126 15:59:51.429036 4896 generic.go:334] "Generic (PLEG): container finished" podID="8f8cfa23-5804-4f61-815b-287e23958ff9" containerID="c0b0a61f86a45917521f6171b4209be6939dd5cefa4287de07b35fbbd2fe8686" exitCode=0 Jan 26 15:59:51 crc kubenswrapper[4896]: I0126 15:59:51.429153 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-z5rts" event={"ID":"8f8cfa23-5804-4f61-815b-287e23958ff9","Type":"ContainerDied","Data":"c0b0a61f86a45917521f6171b4209be6939dd5cefa4287de07b35fbbd2fe8686"} Jan 26 15:59:51 crc kubenswrapper[4896]: I0126 15:59:51.474650 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"41bebe25-46fb-4c06-9977-e39a32407c42","Type":"ContainerStarted","Data":"4a8da1e23df670bf4e038b9aaeae350163374a3f1b32fd7e6f4f50d347118855"} Jan 26 15:59:51 crc kubenswrapper[4896]: I0126 15:59:51.523564 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-94m9x" podStartSLOduration=5.523534002 podStartE2EDuration="5.523534002s" podCreationTimestamp="2026-01-26 15:59:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:59:51.449484674 +0000 UTC m=+1549.231365067" watchObservedRunningTime="2026-01-26 15:59:51.523534002 +0000 UTC m=+1549.305414395" Jan 26 15:59:51 crc kubenswrapper[4896]: I0126 15:59:51.721175 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-pp9hz" Jan 26 15:59:51 crc kubenswrapper[4896]: I0126 15:59:51.749610 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vbx2q"] Jan 26 15:59:52 crc kubenswrapper[4896]: I0126 15:59:52.147401 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06675205-e32d-4494-b3a2-79e3f111b266-ovsdbserver-nb\") pod \"06675205-e32d-4494-b3a2-79e3f111b266\" (UID: \"06675205-e32d-4494-b3a2-79e3f111b266\") " Jan 26 15:59:52 crc kubenswrapper[4896]: I0126 15:59:52.147560 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czvqc\" (UniqueName: \"kubernetes.io/projected/06675205-e32d-4494-b3a2-79e3f111b266-kube-api-access-czvqc\") pod \"06675205-e32d-4494-b3a2-79e3f111b266\" (UID: \"06675205-e32d-4494-b3a2-79e3f111b266\") " Jan 26 15:59:52 crc kubenswrapper[4896]: I0126 15:59:52.147637 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/06675205-e32d-4494-b3a2-79e3f111b266-config\") pod \"06675205-e32d-4494-b3a2-79e3f111b266\" (UID: \"06675205-e32d-4494-b3a2-79e3f111b266\") " Jan 26 15:59:52 crc kubenswrapper[4896]: I0126 15:59:52.147701 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/06675205-e32d-4494-b3a2-79e3f111b266-ovsdbserver-sb\") pod \"06675205-e32d-4494-b3a2-79e3f111b266\" (UID: \"06675205-e32d-4494-b3a2-79e3f111b266\") " Jan 26 15:59:52 crc kubenswrapper[4896]: I0126 15:59:52.147727 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06675205-e32d-4494-b3a2-79e3f111b266-dns-svc\") pod \"06675205-e32d-4494-b3a2-79e3f111b266\" (UID: \"06675205-e32d-4494-b3a2-79e3f111b266\") " Jan 26 15:59:52 crc kubenswrapper[4896]: I0126 15:59:52.147837 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/06675205-e32d-4494-b3a2-79e3f111b266-dns-swift-storage-0\") pod \"06675205-e32d-4494-b3a2-79e3f111b266\" (UID: \"06675205-e32d-4494-b3a2-79e3f111b266\") " Jan 26 15:59:52 crc kubenswrapper[4896]: I0126 15:59:52.176919 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06675205-e32d-4494-b3a2-79e3f111b266-kube-api-access-czvqc" (OuterVolumeSpecName: "kube-api-access-czvqc") pod "06675205-e32d-4494-b3a2-79e3f111b266" (UID: "06675205-e32d-4494-b3a2-79e3f111b266"). InnerVolumeSpecName "kube-api-access-czvqc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:59:52 crc kubenswrapper[4896]: I0126 15:59:52.253816 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czvqc\" (UniqueName: \"kubernetes.io/projected/06675205-e32d-4494-b3a2-79e3f111b266-kube-api-access-czvqc\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:52 crc kubenswrapper[4896]: I0126 15:59:52.277787 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06675205-e32d-4494-b3a2-79e3f111b266-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "06675205-e32d-4494-b3a2-79e3f111b266" (UID: "06675205-e32d-4494-b3a2-79e3f111b266"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:59:52 crc kubenswrapper[4896]: I0126 15:59:52.307234 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06675205-e32d-4494-b3a2-79e3f111b266-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "06675205-e32d-4494-b3a2-79e3f111b266" (UID: "06675205-e32d-4494-b3a2-79e3f111b266"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:59:52 crc kubenswrapper[4896]: I0126 15:59:52.318902 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06675205-e32d-4494-b3a2-79e3f111b266-config" (OuterVolumeSpecName: "config") pod "06675205-e32d-4494-b3a2-79e3f111b266" (UID: "06675205-e32d-4494-b3a2-79e3f111b266"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:59:52 crc kubenswrapper[4896]: I0126 15:59:52.345033 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06675205-e32d-4494-b3a2-79e3f111b266-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "06675205-e32d-4494-b3a2-79e3f111b266" (UID: "06675205-e32d-4494-b3a2-79e3f111b266"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:59:52 crc kubenswrapper[4896]: I0126 15:59:52.345130 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06675205-e32d-4494-b3a2-79e3f111b266-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "06675205-e32d-4494-b3a2-79e3f111b266" (UID: "06675205-e32d-4494-b3a2-79e3f111b266"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:59:52 crc kubenswrapper[4896]: I0126 15:59:52.357165 4896 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06675205-e32d-4494-b3a2-79e3f111b266-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:52 crc kubenswrapper[4896]: I0126 15:59:52.357192 4896 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/06675205-e32d-4494-b3a2-79e3f111b266-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:52 crc kubenswrapper[4896]: I0126 15:59:52.357204 4896 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06675205-e32d-4494-b3a2-79e3f111b266-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:52 crc kubenswrapper[4896]: I0126 15:59:52.357212 4896 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/06675205-e32d-4494-b3a2-79e3f111b266-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:52 crc kubenswrapper[4896]: I0126 15:59:52.357222 4896 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06675205-e32d-4494-b3a2-79e3f111b266-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:52 crc kubenswrapper[4896]: I0126 15:59:52.516407 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5959f8865f-pp9hz" event={"ID":"06675205-e32d-4494-b3a2-79e3f111b266","Type":"ContainerDied","Data":"796efd2735041d78a1167d1ed0a656b03a05a8d5262186483dfcf4ded046b00b"} Jan 26 15:59:52 crc kubenswrapper[4896]: I0126 15:59:52.516734 4896 scope.go:117] "RemoveContainer" containerID="27fa5547853ddc08135126e1348b7cfe31de7d4fac23625fac918886683eb885" Jan 26 15:59:52 crc kubenswrapper[4896]: I0126 15:59:52.516900 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-pp9hz" Jan 26 15:59:52 crc kubenswrapper[4896]: I0126 15:59:52.534993 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vbx2q" event={"ID":"aab8ea78-3869-4a6c-a4e7-42d593d53756","Type":"ContainerStarted","Data":"6bfe5bee528a8e1cc658c6bc94dd4a309d58160ae07a0f968ef2b52ecc20b51d"} Jan 26 15:59:53 crc kubenswrapper[4896]: I0126 15:59:53.015296 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-pp9hz"] Jan 26 15:59:53 crc kubenswrapper[4896]: I0126 15:59:53.034061 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-pp9hz"] Jan 26 15:59:53 crc kubenswrapper[4896]: I0126 15:59:53.744015 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vbx2q" event={"ID":"aab8ea78-3869-4a6c-a4e7-42d593d53756","Type":"ContainerStarted","Data":"6e2ad292f379b86c689ec6d09168d47a103be93f5eb891fcf01c47fd95994d1a"} Jan 26 15:59:53 crc kubenswrapper[4896]: I0126 15:59:53.763164 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"15b95f90-b75a-43ab-9c54-acd4c3e658ab","Type":"ContainerStarted","Data":"53b715befc2ba979051d1e68afe6109d3a654efdb51b9845cb429a412105266c"} Jan 26 15:59:53 crc kubenswrapper[4896]: I0126 15:59:53.780182 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/root-account-create-update-vbx2q" podStartSLOduration=4.780157187 podStartE2EDuration="4.780157187s" podCreationTimestamp="2026-01-26 15:59:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:59:53.770455668 +0000 UTC m=+1551.552336071" watchObservedRunningTime="2026-01-26 15:59:53.780157187 +0000 UTC m=+1551.562037580" Jan 26 15:59:53 crc kubenswrapper[4896]: I0126 15:59:53.808114 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-z5rts" event={"ID":"8f8cfa23-5804-4f61-815b-287e23958ff9","Type":"ContainerStarted","Data":"6229ba8dd1754a451776ad34befa5547df990edc017daacb5d87369e1b1f31fa"} Jan 26 15:59:53 crc kubenswrapper[4896]: I0126 15:59:53.809330 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-z5rts" Jan 26 15:59:53 crc kubenswrapper[4896]: I0126 15:59:53.852117 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58dd9ff6bc-z5rts" podStartSLOduration=7.852072983 podStartE2EDuration="7.852072983s" podCreationTimestamp="2026-01-26 15:59:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:59:53.850437773 +0000 UTC m=+1551.632318176" watchObservedRunningTime="2026-01-26 15:59:53.852072983 +0000 UTC m=+1551.633953376" Jan 26 15:59:54 crc kubenswrapper[4896]: I0126 15:59:54.801794 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06675205-e32d-4494-b3a2-79e3f111b266" path="/var/lib/kubelet/pods/06675205-e32d-4494-b3a2-79e3f111b266/volumes" Jan 26 15:59:54 crc kubenswrapper[4896]: I0126 15:59:54.859558 4896 generic.go:334] "Generic (PLEG): container finished" podID="aab8ea78-3869-4a6c-a4e7-42d593d53756" containerID="6e2ad292f379b86c689ec6d09168d47a103be93f5eb891fcf01c47fd95994d1a" 
exitCode=0 Jan 26 15:59:54 crc kubenswrapper[4896]: I0126 15:59:54.859881 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vbx2q" event={"ID":"aab8ea78-3869-4a6c-a4e7-42d593d53756","Type":"ContainerDied","Data":"6e2ad292f379b86c689ec6d09168d47a103be93f5eb891fcf01c47fd95994d1a"} Jan 26 15:59:57 crc kubenswrapper[4896]: I0126 15:59:57.072488 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vbx2q" Jan 26 15:59:57 crc kubenswrapper[4896]: I0126 15:59:57.196018 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccnhg\" (UniqueName: \"kubernetes.io/projected/aab8ea78-3869-4a6c-a4e7-42d593d53756-kube-api-access-ccnhg\") pod \"aab8ea78-3869-4a6c-a4e7-42d593d53756\" (UID: \"aab8ea78-3869-4a6c-a4e7-42d593d53756\") " Jan 26 15:59:57 crc kubenswrapper[4896]: I0126 15:59:57.196196 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aab8ea78-3869-4a6c-a4e7-42d593d53756-operator-scripts\") pod \"aab8ea78-3869-4a6c-a4e7-42d593d53756\" (UID: \"aab8ea78-3869-4a6c-a4e7-42d593d53756\") " Jan 26 15:59:57 crc kubenswrapper[4896]: I0126 15:59:57.197692 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aab8ea78-3869-4a6c-a4e7-42d593d53756-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aab8ea78-3869-4a6c-a4e7-42d593d53756" (UID: "aab8ea78-3869-4a6c-a4e7-42d593d53756"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:59:57 crc kubenswrapper[4896]: I0126 15:59:57.213268 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aab8ea78-3869-4a6c-a4e7-42d593d53756-kube-api-access-ccnhg" (OuterVolumeSpecName: "kube-api-access-ccnhg") pod "aab8ea78-3869-4a6c-a4e7-42d593d53756" (UID: "aab8ea78-3869-4a6c-a4e7-42d593d53756"). InnerVolumeSpecName "kube-api-access-ccnhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:59:57 crc kubenswrapper[4896]: I0126 15:59:57.301632 4896 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aab8ea78-3869-4a6c-a4e7-42d593d53756-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:57 crc kubenswrapper[4896]: I0126 15:59:57.302009 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccnhg\" (UniqueName: \"kubernetes.io/projected/aab8ea78-3869-4a6c-a4e7-42d593d53756-kube-api-access-ccnhg\") on node \"crc\" DevicePath \"\"" Jan 26 15:59:57 crc kubenswrapper[4896]: I0126 15:59:57.982321 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vbx2q" event={"ID":"aab8ea78-3869-4a6c-a4e7-42d593d53756","Type":"ContainerDied","Data":"6bfe5bee528a8e1cc658c6bc94dd4a309d58160ae07a0f968ef2b52ecc20b51d"} Jan 26 15:59:57 crc kubenswrapper[4896]: I0126 15:59:57.982375 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bfe5bee528a8e1cc658c6bc94dd4a309d58160ae07a0f968ef2b52ecc20b51d" Jan 26 15:59:57 crc kubenswrapper[4896]: I0126 15:59:57.982465 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vbx2q" Jan 26 16:00:00 crc kubenswrapper[4896]: I0126 16:00:00.149672 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490720-2dvdl"] Jan 26 16:00:00 crc kubenswrapper[4896]: E0126 16:00:00.150944 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06675205-e32d-4494-b3a2-79e3f111b266" containerName="init" Jan 26 16:00:00 crc kubenswrapper[4896]: I0126 16:00:00.150964 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="06675205-e32d-4494-b3a2-79e3f111b266" containerName="init" Jan 26 16:00:00 crc kubenswrapper[4896]: E0126 16:00:00.150992 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aab8ea78-3869-4a6c-a4e7-42d593d53756" containerName="mariadb-account-create-update" Jan 26 16:00:00 crc kubenswrapper[4896]: I0126 16:00:00.151000 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="aab8ea78-3869-4a6c-a4e7-42d593d53756" containerName="mariadb-account-create-update" Jan 26 16:00:00 crc kubenswrapper[4896]: I0126 16:00:00.151306 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="06675205-e32d-4494-b3a2-79e3f111b266" containerName="init" Jan 26 16:00:00 crc kubenswrapper[4896]: I0126 16:00:00.151332 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="aab8ea78-3869-4a6c-a4e7-42d593d53756" containerName="mariadb-account-create-update" Jan 26 16:00:00 crc kubenswrapper[4896]: I0126 16:00:00.152502 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-2dvdl" Jan 26 16:00:00 crc kubenswrapper[4896]: I0126 16:00:00.155207 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 16:00:00 crc kubenswrapper[4896]: I0126 16:00:00.155345 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 16:00:00 crc kubenswrapper[4896]: I0126 16:00:00.164167 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490720-2dvdl"] Jan 26 16:00:00 crc kubenswrapper[4896]: I0126 16:00:00.307192 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4zr2\" (UniqueName: \"kubernetes.io/projected/8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb-kube-api-access-f4zr2\") pod \"collect-profiles-29490720-2dvdl\" (UID: \"8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-2dvdl" Jan 26 16:00:00 crc kubenswrapper[4896]: I0126 16:00:00.307312 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb-secret-volume\") pod \"collect-profiles-29490720-2dvdl\" (UID: \"8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-2dvdl" Jan 26 16:00:00 crc kubenswrapper[4896]: I0126 16:00:00.309217 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb-config-volume\") pod \"collect-profiles-29490720-2dvdl\" (UID: \"8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-2dvdl" Jan 26 16:00:00 crc kubenswrapper[4896]: I0126 16:00:00.739252 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb-secret-volume\") pod \"collect-profiles-29490720-2dvdl\" (UID: \"8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-2dvdl" Jan 26 16:00:00 crc kubenswrapper[4896]: I0126 16:00:00.739331 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb-config-volume\") pod \"collect-profiles-29490720-2dvdl\" (UID: \"8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-2dvdl" Jan 26 16:00:00 crc kubenswrapper[4896]: I0126 16:00:00.739591 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4zr2\" (UniqueName: \"kubernetes.io/projected/8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb-kube-api-access-f4zr2\") pod \"collect-profiles-29490720-2dvdl\" (UID: \"8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-2dvdl" Jan 26 16:00:00 crc kubenswrapper[4896]: I0126 16:00:00.740912 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb-config-volume\") pod \"collect-profiles-29490720-2dvdl\" (UID: \"8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-2dvdl" Jan 26 16:00:00 crc kubenswrapper[4896]: I0126 16:00:00.745936 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb-secret-volume\") pod \"collect-profiles-29490720-2dvdl\" (UID: \"8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-2dvdl" Jan 26 16:00:00 crc kubenswrapper[4896]: I0126 16:00:00.783921 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4zr2\" (UniqueName: \"kubernetes.io/projected/8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb-kube-api-access-f4zr2\") pod \"collect-profiles-29490720-2dvdl\" (UID: \"8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-2dvdl" Jan 26 16:00:00 crc kubenswrapper[4896]: I0126 16:00:00.813842 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-2dvdl" Jan 26 16:00:01 crc kubenswrapper[4896]: I0126 16:00:01.070527 4896 generic.go:334] "Generic (PLEG): container finished" podID="e5aa66e6-719d-4031-b116-f0bbddf2f66d" containerID="457fd0f2ef23512434f704c800ba49efe9480908c29d2de0a9af1d9178f01f2d" exitCode=0 Jan 26 16:00:01 crc kubenswrapper[4896]: I0126 16:00:01.070623 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tlrkc" event={"ID":"e5aa66e6-719d-4031-b116-f0bbddf2f66d","Type":"ContainerDied","Data":"457fd0f2ef23512434f704c800ba49efe9480908c29d2de0a9af1d9178f01f2d"} Jan 26 16:00:02 crc kubenswrapper[4896]: I0126 16:00:02.339852 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58dd9ff6bc-z5rts" Jan 26 16:00:02 crc kubenswrapper[4896]: I0126 16:00:02.483553 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-mjx6f"] Jan 26 16:00:02 crc kubenswrapper[4896]: I0126 16:00:02.483900 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-mjx6f" 
podUID="2286da81-a82e-4026-a39f-1c712480bc2d" containerName="dnsmasq-dns" containerID="cri-o://e1f2c96705d92742d880dc7b7414f189b4bcec253ef20f538c8637e49b143af0" gracePeriod=10 Jan 26 16:00:03 crc kubenswrapper[4896]: I0126 16:00:03.106692 4896 generic.go:334] "Generic (PLEG): container finished" podID="2286da81-a82e-4026-a39f-1c712480bc2d" containerID="e1f2c96705d92742d880dc7b7414f189b4bcec253ef20f538c8637e49b143af0" exitCode=0 Jan 26 16:00:03 crc kubenswrapper[4896]: I0126 16:00:03.106766 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-mjx6f" event={"ID":"2286da81-a82e-4026-a39f-1c712480bc2d","Type":"ContainerDied","Data":"e1f2c96705d92742d880dc7b7414f189b4bcec253ef20f538c8637e49b143af0"} Jan 26 16:00:05 crc kubenswrapper[4896]: I0126 16:00:05.768052 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-mjx6f" podUID="2286da81-a82e-4026-a39f-1c712480bc2d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.177:5353: connect: connection refused" Jan 26 16:00:09 crc kubenswrapper[4896]: I0126 16:00:09.206953 4896 generic.go:334] "Generic (PLEG): container finished" podID="1ec7e263-3178-47c9-934b-7e0f4d72aec7" containerID="623c709324027da09940de6723172759564f011b2d492f913cc3c8db905c5918" exitCode=0 Jan 26 16:00:09 crc kubenswrapper[4896]: I0126 16:00:09.207466 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-c6rlw" event={"ID":"1ec7e263-3178-47c9-934b-7e0f4d72aec7","Type":"ContainerDied","Data":"623c709324027da09940de6723172759564f011b2d492f913cc3c8db905c5918"} Jan 26 16:00:10 crc kubenswrapper[4896]: I0126 16:00:10.765806 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-mjx6f" podUID="2286da81-a82e-4026-a39f-1c712480bc2d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.177:5353: connect: connection refused" Jan 26 16:00:15 crc kubenswrapper[4896]: 
I0126 16:00:15.478524 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tlrkc" event={"ID":"e5aa66e6-719d-4031-b116-f0bbddf2f66d","Type":"ContainerDied","Data":"1f293fa1c59782566037a286f0926225706454904b5711b8a336711ffaf6fe91"} Jan 26 16:00:15 crc kubenswrapper[4896]: I0126 16:00:15.479039 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f293fa1c59782566037a286f0926225706454904b5711b8a336711ffaf6fe91" Jan 26 16:00:15 crc kubenswrapper[4896]: I0126 16:00:15.548351 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tlrkc" Jan 26 16:00:15 crc kubenswrapper[4896]: I0126 16:00:15.746860 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5aa66e6-719d-4031-b116-f0bbddf2f66d-combined-ca-bundle\") pod \"e5aa66e6-719d-4031-b116-f0bbddf2f66d\" (UID: \"e5aa66e6-719d-4031-b116-f0bbddf2f66d\") " Jan 26 16:00:15 crc kubenswrapper[4896]: I0126 16:00:15.746920 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cns8g\" (UniqueName: \"kubernetes.io/projected/e5aa66e6-719d-4031-b116-f0bbddf2f66d-kube-api-access-cns8g\") pod \"e5aa66e6-719d-4031-b116-f0bbddf2f66d\" (UID: \"e5aa66e6-719d-4031-b116-f0bbddf2f66d\") " Jan 26 16:00:15 crc kubenswrapper[4896]: I0126 16:00:15.746971 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e5aa66e6-719d-4031-b116-f0bbddf2f66d-credential-keys\") pod \"e5aa66e6-719d-4031-b116-f0bbddf2f66d\" (UID: \"e5aa66e6-719d-4031-b116-f0bbddf2f66d\") " Jan 26 16:00:15 crc kubenswrapper[4896]: I0126 16:00:15.747084 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e5aa66e6-719d-4031-b116-f0bbddf2f66d-config-data\") pod \"e5aa66e6-719d-4031-b116-f0bbddf2f66d\" (UID: \"e5aa66e6-719d-4031-b116-f0bbddf2f66d\") " Jan 26 16:00:15 crc kubenswrapper[4896]: I0126 16:00:15.747143 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e5aa66e6-719d-4031-b116-f0bbddf2f66d-fernet-keys\") pod \"e5aa66e6-719d-4031-b116-f0bbddf2f66d\" (UID: \"e5aa66e6-719d-4031-b116-f0bbddf2f66d\") " Jan 26 16:00:15 crc kubenswrapper[4896]: I0126 16:00:15.747213 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5aa66e6-719d-4031-b116-f0bbddf2f66d-scripts\") pod \"e5aa66e6-719d-4031-b116-f0bbddf2f66d\" (UID: \"e5aa66e6-719d-4031-b116-f0bbddf2f66d\") " Jan 26 16:00:15 crc kubenswrapper[4896]: I0126 16:00:15.756293 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5aa66e6-719d-4031-b116-f0bbddf2f66d-kube-api-access-cns8g" (OuterVolumeSpecName: "kube-api-access-cns8g") pod "e5aa66e6-719d-4031-b116-f0bbddf2f66d" (UID: "e5aa66e6-719d-4031-b116-f0bbddf2f66d"). InnerVolumeSpecName "kube-api-access-cns8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:00:15 crc kubenswrapper[4896]: I0126 16:00:15.766889 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5aa66e6-719d-4031-b116-f0bbddf2f66d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e5aa66e6-719d-4031-b116-f0bbddf2f66d" (UID: "e5aa66e6-719d-4031-b116-f0bbddf2f66d"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:15 crc kubenswrapper[4896]: I0126 16:00:15.766905 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5aa66e6-719d-4031-b116-f0bbddf2f66d-scripts" (OuterVolumeSpecName: "scripts") pod "e5aa66e6-719d-4031-b116-f0bbddf2f66d" (UID: "e5aa66e6-719d-4031-b116-f0bbddf2f66d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:15 crc kubenswrapper[4896]: I0126 16:00:15.766943 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5aa66e6-719d-4031-b116-f0bbddf2f66d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "e5aa66e6-719d-4031-b116-f0bbddf2f66d" (UID: "e5aa66e6-719d-4031-b116-f0bbddf2f66d"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:15 crc kubenswrapper[4896]: I0126 16:00:15.797348 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5aa66e6-719d-4031-b116-f0bbddf2f66d-config-data" (OuterVolumeSpecName: "config-data") pod "e5aa66e6-719d-4031-b116-f0bbddf2f66d" (UID: "e5aa66e6-719d-4031-b116-f0bbddf2f66d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:15 crc kubenswrapper[4896]: I0126 16:00:15.846970 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5aa66e6-719d-4031-b116-f0bbddf2f66d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5aa66e6-719d-4031-b116-f0bbddf2f66d" (UID: "e5aa66e6-719d-4031-b116-f0bbddf2f66d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:15 crc kubenswrapper[4896]: I0126 16:00:15.850662 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5aa66e6-719d-4031-b116-f0bbddf2f66d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:15 crc kubenswrapper[4896]: I0126 16:00:15.850713 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cns8g\" (UniqueName: \"kubernetes.io/projected/e5aa66e6-719d-4031-b116-f0bbddf2f66d-kube-api-access-cns8g\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:15 crc kubenswrapper[4896]: I0126 16:00:15.850733 4896 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e5aa66e6-719d-4031-b116-f0bbddf2f66d-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:15 crc kubenswrapper[4896]: I0126 16:00:15.850745 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5aa66e6-719d-4031-b116-f0bbddf2f66d-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:15 crc kubenswrapper[4896]: I0126 16:00:15.850756 4896 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e5aa66e6-719d-4031-b116-f0bbddf2f66d-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:15 crc kubenswrapper[4896]: I0126 16:00:15.850766 4896 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5aa66e6-719d-4031-b116-f0bbddf2f66d-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:16 crc kubenswrapper[4896]: I0126 16:00:16.489566 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-tlrkc" Jan 26 16:00:16 crc kubenswrapper[4896]: I0126 16:00:16.697357 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-tlrkc"] Jan 26 16:00:16 crc kubenswrapper[4896]: I0126 16:00:16.709281 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-tlrkc"] Jan 26 16:00:16 crc kubenswrapper[4896]: I0126 16:00:16.776165 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5aa66e6-719d-4031-b116-f0bbddf2f66d" path="/var/lib/kubelet/pods/e5aa66e6-719d-4031-b116-f0bbddf2f66d/volumes" Jan 26 16:00:16 crc kubenswrapper[4896]: I0126 16:00:16.777405 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-bhpbc"] Jan 26 16:00:16 crc kubenswrapper[4896]: E0126 16:00:16.777891 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5aa66e6-719d-4031-b116-f0bbddf2f66d" containerName="keystone-bootstrap" Jan 26 16:00:16 crc kubenswrapper[4896]: I0126 16:00:16.777916 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5aa66e6-719d-4031-b116-f0bbddf2f66d" containerName="keystone-bootstrap" Jan 26 16:00:16 crc kubenswrapper[4896]: I0126 16:00:16.778143 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5aa66e6-719d-4031-b116-f0bbddf2f66d" containerName="keystone-bootstrap" Jan 26 16:00:16 crc kubenswrapper[4896]: I0126 16:00:16.780386 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-bhpbc" Jan 26 16:00:16 crc kubenswrapper[4896]: I0126 16:00:16.783847 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 16:00:16 crc kubenswrapper[4896]: I0126 16:00:16.783928 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 16:00:16 crc kubenswrapper[4896]: I0126 16:00:16.784024 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5fkw6" Jan 26 16:00:16 crc kubenswrapper[4896]: I0126 16:00:16.784105 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 26 16:00:16 crc kubenswrapper[4896]: I0126 16:00:16.784317 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 16:00:16 crc kubenswrapper[4896]: I0126 16:00:16.784927 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d8e811a-f565-4dbb-846a-1a80d4832d44-config-data\") pod \"keystone-bootstrap-bhpbc\" (UID: \"0d8e811a-f565-4dbb-846a-1a80d4832d44\") " pod="openstack/keystone-bootstrap-bhpbc" Jan 26 16:00:16 crc kubenswrapper[4896]: I0126 16:00:16.785099 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbr9f\" (UniqueName: \"kubernetes.io/projected/0d8e811a-f565-4dbb-846a-1a80d4832d44-kube-api-access-xbr9f\") pod \"keystone-bootstrap-bhpbc\" (UID: \"0d8e811a-f565-4dbb-846a-1a80d4832d44\") " pod="openstack/keystone-bootstrap-bhpbc" Jan 26 16:00:16 crc kubenswrapper[4896]: I0126 16:00:16.785289 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d8e811a-f565-4dbb-846a-1a80d4832d44-combined-ca-bundle\") pod \"keystone-bootstrap-bhpbc\" 
(UID: \"0d8e811a-f565-4dbb-846a-1a80d4832d44\") " pod="openstack/keystone-bootstrap-bhpbc" Jan 26 16:00:16 crc kubenswrapper[4896]: I0126 16:00:16.785475 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0d8e811a-f565-4dbb-846a-1a80d4832d44-fernet-keys\") pod \"keystone-bootstrap-bhpbc\" (UID: \"0d8e811a-f565-4dbb-846a-1a80d4832d44\") " pod="openstack/keystone-bootstrap-bhpbc" Jan 26 16:00:16 crc kubenswrapper[4896]: I0126 16:00:16.786064 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0d8e811a-f565-4dbb-846a-1a80d4832d44-credential-keys\") pod \"keystone-bootstrap-bhpbc\" (UID: \"0d8e811a-f565-4dbb-846a-1a80d4832d44\") " pod="openstack/keystone-bootstrap-bhpbc" Jan 26 16:00:16 crc kubenswrapper[4896]: I0126 16:00:16.786306 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d8e811a-f565-4dbb-846a-1a80d4832d44-scripts\") pod \"keystone-bootstrap-bhpbc\" (UID: \"0d8e811a-f565-4dbb-846a-1a80d4832d44\") " pod="openstack/keystone-bootstrap-bhpbc" Jan 26 16:00:16 crc kubenswrapper[4896]: I0126 16:00:16.790015 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-bhpbc"] Jan 26 16:00:16 crc kubenswrapper[4896]: I0126 16:00:16.888722 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0d8e811a-f565-4dbb-846a-1a80d4832d44-fernet-keys\") pod \"keystone-bootstrap-bhpbc\" (UID: \"0d8e811a-f565-4dbb-846a-1a80d4832d44\") " pod="openstack/keystone-bootstrap-bhpbc" Jan 26 16:00:16 crc kubenswrapper[4896]: I0126 16:00:16.888815 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/0d8e811a-f565-4dbb-846a-1a80d4832d44-credential-keys\") pod \"keystone-bootstrap-bhpbc\" (UID: \"0d8e811a-f565-4dbb-846a-1a80d4832d44\") " pod="openstack/keystone-bootstrap-bhpbc" Jan 26 16:00:16 crc kubenswrapper[4896]: I0126 16:00:16.888885 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d8e811a-f565-4dbb-846a-1a80d4832d44-scripts\") pod \"keystone-bootstrap-bhpbc\" (UID: \"0d8e811a-f565-4dbb-846a-1a80d4832d44\") " pod="openstack/keystone-bootstrap-bhpbc" Jan 26 16:00:16 crc kubenswrapper[4896]: I0126 16:00:16.889022 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d8e811a-f565-4dbb-846a-1a80d4832d44-config-data\") pod \"keystone-bootstrap-bhpbc\" (UID: \"0d8e811a-f565-4dbb-846a-1a80d4832d44\") " pod="openstack/keystone-bootstrap-bhpbc" Jan 26 16:00:16 crc kubenswrapper[4896]: I0126 16:00:16.889064 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbr9f\" (UniqueName: \"kubernetes.io/projected/0d8e811a-f565-4dbb-846a-1a80d4832d44-kube-api-access-xbr9f\") pod \"keystone-bootstrap-bhpbc\" (UID: \"0d8e811a-f565-4dbb-846a-1a80d4832d44\") " pod="openstack/keystone-bootstrap-bhpbc" Jan 26 16:00:16 crc kubenswrapper[4896]: I0126 16:00:16.889136 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d8e811a-f565-4dbb-846a-1a80d4832d44-combined-ca-bundle\") pod \"keystone-bootstrap-bhpbc\" (UID: \"0d8e811a-f565-4dbb-846a-1a80d4832d44\") " pod="openstack/keystone-bootstrap-bhpbc" Jan 26 16:00:16 crc kubenswrapper[4896]: I0126 16:00:16.894286 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0d8e811a-f565-4dbb-846a-1a80d4832d44-fernet-keys\") pod \"keystone-bootstrap-bhpbc\" (UID: 
\"0d8e811a-f565-4dbb-846a-1a80d4832d44\") " pod="openstack/keystone-bootstrap-bhpbc" Jan 26 16:00:16 crc kubenswrapper[4896]: I0126 16:00:16.894348 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d8e811a-f565-4dbb-846a-1a80d4832d44-scripts\") pod \"keystone-bootstrap-bhpbc\" (UID: \"0d8e811a-f565-4dbb-846a-1a80d4832d44\") " pod="openstack/keystone-bootstrap-bhpbc" Jan 26 16:00:16 crc kubenswrapper[4896]: I0126 16:00:16.894775 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0d8e811a-f565-4dbb-846a-1a80d4832d44-credential-keys\") pod \"keystone-bootstrap-bhpbc\" (UID: \"0d8e811a-f565-4dbb-846a-1a80d4832d44\") " pod="openstack/keystone-bootstrap-bhpbc" Jan 26 16:00:16 crc kubenswrapper[4896]: I0126 16:00:16.898938 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d8e811a-f565-4dbb-846a-1a80d4832d44-combined-ca-bundle\") pod \"keystone-bootstrap-bhpbc\" (UID: \"0d8e811a-f565-4dbb-846a-1a80d4832d44\") " pod="openstack/keystone-bootstrap-bhpbc" Jan 26 16:00:16 crc kubenswrapper[4896]: I0126 16:00:16.900320 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d8e811a-f565-4dbb-846a-1a80d4832d44-config-data\") pod \"keystone-bootstrap-bhpbc\" (UID: \"0d8e811a-f565-4dbb-846a-1a80d4832d44\") " pod="openstack/keystone-bootstrap-bhpbc" Jan 26 16:00:16 crc kubenswrapper[4896]: I0126 16:00:16.914268 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbr9f\" (UniqueName: \"kubernetes.io/projected/0d8e811a-f565-4dbb-846a-1a80d4832d44-kube-api-access-xbr9f\") pod \"keystone-bootstrap-bhpbc\" (UID: \"0d8e811a-f565-4dbb-846a-1a80d4832d44\") " pod="openstack/keystone-bootstrap-bhpbc" Jan 26 16:00:17 crc kubenswrapper[4896]: I0126 
16:00:17.107258 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-bhpbc" Jan 26 16:00:18 crc kubenswrapper[4896]: E0126 16:00:18.515666 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 26 16:00:18 crc kubenswrapper[4896]: E0126 16:00:18.516323 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n665h6ch558h655hbbh684h56dh547hfdh7dh5b8h547h55fhfch699h74h5c5h9fh5b6h7fh594h56bh569h78h58h56ch587h689h8fh5d5h64dh55dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5c8w9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe
:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(41bebe25-46fb-4c06-9977-e39a32407c42): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 16:00:20 crc kubenswrapper[4896]: I0126 16:00:20.765497 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-mjx6f" podUID="2286da81-a82e-4026-a39f-1c712480bc2d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.177:5353: i/o timeout" Jan 26 16:00:20 crc kubenswrapper[4896]: I0126 16:00:20.783659 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-mjx6f" Jan 26 16:00:25 crc kubenswrapper[4896]: I0126 16:00:25.766369 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-mjx6f" podUID="2286da81-a82e-4026-a39f-1c712480bc2d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.177:5353: i/o timeout" Jan 26 16:00:28 crc kubenswrapper[4896]: I0126 16:00:28.684940 4896 generic.go:334] "Generic (PLEG): container finished" 
podID="bcb36972-ce84-471b-92b5-7be45e7e2d1a" containerID="2ff3fd8248bfa0e904da442fc579ca7cc674ffca199f97639e89b0121ecb2715" exitCode=0 Jan 26 16:00:28 crc kubenswrapper[4896]: I0126 16:00:28.685460 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-94m9x" event={"ID":"bcb36972-ce84-471b-92b5-7be45e7e2d1a","Type":"ContainerDied","Data":"2ff3fd8248bfa0e904da442fc579ca7cc674ffca199f97639e89b0121ecb2715"} Jan 26 16:00:29 crc kubenswrapper[4896]: E0126 16:00:29.562792 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Jan 26 16:00:29 crc kubenswrapper[4896]: E0126 16:00:29.563334 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dr6x8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-7pf4j_openstack(c8f4e140-bab4-479c-a97b-4a5aa49a47d3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 
26 16:00:29 crc kubenswrapper[4896]: E0126 16:00:29.565875 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-7pf4j" podUID="c8f4e140-bab4-479c-a97b-4a5aa49a47d3" Jan 26 16:00:29 crc kubenswrapper[4896]: I0126 16:00:29.686812 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-c6rlw" Jan 26 16:00:29 crc kubenswrapper[4896]: I0126 16:00:29.695498 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-mjx6f" Jan 26 16:00:29 crc kubenswrapper[4896]: I0126 16:00:29.701600 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-mjx6f" event={"ID":"2286da81-a82e-4026-a39f-1c712480bc2d","Type":"ContainerDied","Data":"ec544b94e368561de4cf20ec21f4fbf7f4193149cf376d24676afccfe00e6930"} Jan 26 16:00:29 crc kubenswrapper[4896]: I0126 16:00:29.701691 4896 scope.go:117] "RemoveContainer" containerID="e1f2c96705d92742d880dc7b7414f189b4bcec253ef20f538c8637e49b143af0" Jan 26 16:00:29 crc kubenswrapper[4896]: I0126 16:00:29.701848 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-mjx6f" Jan 26 16:00:29 crc kubenswrapper[4896]: I0126 16:00:29.707104 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-c6rlw" Jan 26 16:00:29 crc kubenswrapper[4896]: I0126 16:00:29.707368 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-c6rlw" event={"ID":"1ec7e263-3178-47c9-934b-7e0f4d72aec7","Type":"ContainerDied","Data":"15c364a3a9d0b6f5a90cf40801a4f1ef21d6cffc52d5e961e3514915ee2d8930"} Jan 26 16:00:29 crc kubenswrapper[4896]: I0126 16:00:29.707413 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15c364a3a9d0b6f5a90cf40801a4f1ef21d6cffc52d5e961e3514915ee2d8930" Jan 26 16:00:29 crc kubenswrapper[4896]: E0126 16:00:29.714296 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-7pf4j" podUID="c8f4e140-bab4-479c-a97b-4a5aa49a47d3" Jan 26 16:00:29 crc kubenswrapper[4896]: I0126 16:00:29.816120 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k52n8\" (UniqueName: \"kubernetes.io/projected/1ec7e263-3178-47c9-934b-7e0f4d72aec7-kube-api-access-k52n8\") pod \"1ec7e263-3178-47c9-934b-7e0f4d72aec7\" (UID: \"1ec7e263-3178-47c9-934b-7e0f4d72aec7\") " Jan 26 16:00:29 crc kubenswrapper[4896]: I0126 16:00:29.816423 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ec7e263-3178-47c9-934b-7e0f4d72aec7-config-data\") pod \"1ec7e263-3178-47c9-934b-7e0f4d72aec7\" (UID: \"1ec7e263-3178-47c9-934b-7e0f4d72aec7\") " Jan 26 16:00:29 crc kubenswrapper[4896]: I0126 16:00:29.816512 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1ec7e263-3178-47c9-934b-7e0f4d72aec7-db-sync-config-data\") pod \"1ec7e263-3178-47c9-934b-7e0f4d72aec7\" 
(UID: \"1ec7e263-3178-47c9-934b-7e0f4d72aec7\") " Jan 26 16:00:29 crc kubenswrapper[4896]: I0126 16:00:29.816541 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2286da81-a82e-4026-a39f-1c712480bc2d-config\") pod \"2286da81-a82e-4026-a39f-1c712480bc2d\" (UID: \"2286da81-a82e-4026-a39f-1c712480bc2d\") " Jan 26 16:00:29 crc kubenswrapper[4896]: I0126 16:00:29.816588 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2286da81-a82e-4026-a39f-1c712480bc2d-dns-svc\") pod \"2286da81-a82e-4026-a39f-1c712480bc2d\" (UID: \"2286da81-a82e-4026-a39f-1c712480bc2d\") " Jan 26 16:00:29 crc kubenswrapper[4896]: I0126 16:00:29.816608 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2286da81-a82e-4026-a39f-1c712480bc2d-dns-swift-storage-0\") pod \"2286da81-a82e-4026-a39f-1c712480bc2d\" (UID: \"2286da81-a82e-4026-a39f-1c712480bc2d\") " Jan 26 16:00:29 crc kubenswrapper[4896]: I0126 16:00:29.816643 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2286da81-a82e-4026-a39f-1c712480bc2d-ovsdbserver-nb\") pod \"2286da81-a82e-4026-a39f-1c712480bc2d\" (UID: \"2286da81-a82e-4026-a39f-1c712480bc2d\") " Jan 26 16:00:29 crc kubenswrapper[4896]: I0126 16:00:29.816669 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ec7e263-3178-47c9-934b-7e0f4d72aec7-combined-ca-bundle\") pod \"1ec7e263-3178-47c9-934b-7e0f4d72aec7\" (UID: \"1ec7e263-3178-47c9-934b-7e0f4d72aec7\") " Jan 26 16:00:29 crc kubenswrapper[4896]: I0126 16:00:29.816704 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gl4fw\" (UniqueName: 
\"kubernetes.io/projected/2286da81-a82e-4026-a39f-1c712480bc2d-kube-api-access-gl4fw\") pod \"2286da81-a82e-4026-a39f-1c712480bc2d\" (UID: \"2286da81-a82e-4026-a39f-1c712480bc2d\") " Jan 26 16:00:29 crc kubenswrapper[4896]: I0126 16:00:29.816748 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2286da81-a82e-4026-a39f-1c712480bc2d-ovsdbserver-sb\") pod \"2286da81-a82e-4026-a39f-1c712480bc2d\" (UID: \"2286da81-a82e-4026-a39f-1c712480bc2d\") " Jan 26 16:00:29 crc kubenswrapper[4896]: I0126 16:00:29.849230 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ec7e263-3178-47c9-934b-7e0f4d72aec7-kube-api-access-k52n8" (OuterVolumeSpecName: "kube-api-access-k52n8") pod "1ec7e263-3178-47c9-934b-7e0f4d72aec7" (UID: "1ec7e263-3178-47c9-934b-7e0f4d72aec7"). InnerVolumeSpecName "kube-api-access-k52n8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:00:29 crc kubenswrapper[4896]: I0126 16:00:29.849295 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2286da81-a82e-4026-a39f-1c712480bc2d-kube-api-access-gl4fw" (OuterVolumeSpecName: "kube-api-access-gl4fw") pod "2286da81-a82e-4026-a39f-1c712480bc2d" (UID: "2286da81-a82e-4026-a39f-1c712480bc2d"). InnerVolumeSpecName "kube-api-access-gl4fw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:00:29 crc kubenswrapper[4896]: I0126 16:00:29.851410 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ec7e263-3178-47c9-934b-7e0f4d72aec7-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1ec7e263-3178-47c9-934b-7e0f4d72aec7" (UID: "1ec7e263-3178-47c9-934b-7e0f4d72aec7"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:29 crc kubenswrapper[4896]: I0126 16:00:29.917659 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2286da81-a82e-4026-a39f-1c712480bc2d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2286da81-a82e-4026-a39f-1c712480bc2d" (UID: "2286da81-a82e-4026-a39f-1c712480bc2d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:00:29 crc kubenswrapper[4896]: I0126 16:00:29.919521 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k52n8\" (UniqueName: \"kubernetes.io/projected/1ec7e263-3178-47c9-934b-7e0f4d72aec7-kube-api-access-k52n8\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:29 crc kubenswrapper[4896]: I0126 16:00:29.919540 4896 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1ec7e263-3178-47c9-934b-7e0f4d72aec7-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:29 crc kubenswrapper[4896]: I0126 16:00:29.919549 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gl4fw\" (UniqueName: \"kubernetes.io/projected/2286da81-a82e-4026-a39f-1c712480bc2d-kube-api-access-gl4fw\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:29 crc kubenswrapper[4896]: I0126 16:00:29.919558 4896 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2286da81-a82e-4026-a39f-1c712480bc2d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:29 crc kubenswrapper[4896]: I0126 16:00:29.931052 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2286da81-a82e-4026-a39f-1c712480bc2d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2286da81-a82e-4026-a39f-1c712480bc2d" (UID: "2286da81-a82e-4026-a39f-1c712480bc2d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:00:29 crc kubenswrapper[4896]: I0126 16:00:29.931217 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ec7e263-3178-47c9-934b-7e0f4d72aec7-config-data" (OuterVolumeSpecName: "config-data") pod "1ec7e263-3178-47c9-934b-7e0f4d72aec7" (UID: "1ec7e263-3178-47c9-934b-7e0f4d72aec7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:29 crc kubenswrapper[4896]: I0126 16:00:29.936450 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2286da81-a82e-4026-a39f-1c712480bc2d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2286da81-a82e-4026-a39f-1c712480bc2d" (UID: "2286da81-a82e-4026-a39f-1c712480bc2d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:00:29 crc kubenswrapper[4896]: I0126 16:00:29.945915 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2286da81-a82e-4026-a39f-1c712480bc2d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2286da81-a82e-4026-a39f-1c712480bc2d" (UID: "2286da81-a82e-4026-a39f-1c712480bc2d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:00:29 crc kubenswrapper[4896]: I0126 16:00:29.965530 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ec7e263-3178-47c9-934b-7e0f4d72aec7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1ec7e263-3178-47c9-934b-7e0f4d72aec7" (UID: "1ec7e263-3178-47c9-934b-7e0f4d72aec7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:29 crc kubenswrapper[4896]: I0126 16:00:29.993311 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2286da81-a82e-4026-a39f-1c712480bc2d-config" (OuterVolumeSpecName: "config") pod "2286da81-a82e-4026-a39f-1c712480bc2d" (UID: "2286da81-a82e-4026-a39f-1c712480bc2d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:00:30 crc kubenswrapper[4896]: I0126 16:00:30.021419 4896 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2286da81-a82e-4026-a39f-1c712480bc2d-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:30 crc kubenswrapper[4896]: I0126 16:00:30.021459 4896 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2286da81-a82e-4026-a39f-1c712480bc2d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:30 crc kubenswrapper[4896]: I0126 16:00:30.021474 4896 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2286da81-a82e-4026-a39f-1c712480bc2d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:30 crc kubenswrapper[4896]: I0126 16:00:30.021489 4896 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2286da81-a82e-4026-a39f-1c712480bc2d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:30 crc kubenswrapper[4896]: I0126 16:00:30.021500 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ec7e263-3178-47c9-934b-7e0f4d72aec7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:30 crc kubenswrapper[4896]: I0126 16:00:30.021513 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1ec7e263-3178-47c9-934b-7e0f4d72aec7-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:30 crc kubenswrapper[4896]: I0126 16:00:30.080727 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-mjx6f"] Jan 26 16:00:30 crc kubenswrapper[4896]: I0126 16:00:30.103795 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-mjx6f"] Jan 26 16:00:30 crc kubenswrapper[4896]: I0126 16:00:30.769075 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-mjx6f" podUID="2286da81-a82e-4026-a39f-1c712480bc2d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.177:5353: i/o timeout" Jan 26 16:00:30 crc kubenswrapper[4896]: I0126 16:00:30.777055 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2286da81-a82e-4026-a39f-1c712480bc2d" path="/var/lib/kubelet/pods/2286da81-a82e-4026-a39f-1c712480bc2d/volumes" Jan 26 16:00:31 crc kubenswrapper[4896]: I0126 16:00:31.100545 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-g7k7t"] Jan 26 16:00:31 crc kubenswrapper[4896]: E0126 16:00:31.101044 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ec7e263-3178-47c9-934b-7e0f4d72aec7" containerName="glance-db-sync" Jan 26 16:00:31 crc kubenswrapper[4896]: I0126 16:00:31.101055 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ec7e263-3178-47c9-934b-7e0f4d72aec7" containerName="glance-db-sync" Jan 26 16:00:31 crc kubenswrapper[4896]: E0126 16:00:31.101073 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2286da81-a82e-4026-a39f-1c712480bc2d" containerName="init" Jan 26 16:00:31 crc kubenswrapper[4896]: I0126 16:00:31.101079 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="2286da81-a82e-4026-a39f-1c712480bc2d" containerName="init" Jan 26 16:00:31 crc kubenswrapper[4896]: E0126 16:00:31.101099 4896 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2286da81-a82e-4026-a39f-1c712480bc2d" containerName="dnsmasq-dns" Jan 26 16:00:31 crc kubenswrapper[4896]: I0126 16:00:31.101106 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="2286da81-a82e-4026-a39f-1c712480bc2d" containerName="dnsmasq-dns" Jan 26 16:00:31 crc kubenswrapper[4896]: I0126 16:00:31.101345 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="2286da81-a82e-4026-a39f-1c712480bc2d" containerName="dnsmasq-dns" Jan 26 16:00:31 crc kubenswrapper[4896]: I0126 16:00:31.101356 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ec7e263-3178-47c9-934b-7e0f4d72aec7" containerName="glance-db-sync" Jan 26 16:00:31 crc kubenswrapper[4896]: I0126 16:00:31.102432 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-g7k7t" Jan 26 16:00:31 crc kubenswrapper[4896]: I0126 16:00:31.115797 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-g7k7t"] Jan 26 16:00:31 crc kubenswrapper[4896]: I0126 16:00:31.254430 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a5ad52ae-3537-4139-9492-ad6a9251608b-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-g7k7t\" (UID: \"a5ad52ae-3537-4139-9492-ad6a9251608b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-g7k7t" Jan 26 16:00:31 crc kubenswrapper[4896]: I0126 16:00:31.254881 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5ad52ae-3537-4139-9492-ad6a9251608b-config\") pod \"dnsmasq-dns-785d8bcb8c-g7k7t\" (UID: \"a5ad52ae-3537-4139-9492-ad6a9251608b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-g7k7t" Jan 26 16:00:31 crc kubenswrapper[4896]: I0126 16:00:31.254920 4896 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a5ad52ae-3537-4139-9492-ad6a9251608b-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-g7k7t\" (UID: \"a5ad52ae-3537-4139-9492-ad6a9251608b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-g7k7t" Jan 26 16:00:31 crc kubenswrapper[4896]: I0126 16:00:31.254949 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a5ad52ae-3537-4139-9492-ad6a9251608b-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-g7k7t\" (UID: \"a5ad52ae-3537-4139-9492-ad6a9251608b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-g7k7t" Jan 26 16:00:31 crc kubenswrapper[4896]: I0126 16:00:31.255031 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-795d7\" (UniqueName: \"kubernetes.io/projected/a5ad52ae-3537-4139-9492-ad6a9251608b-kube-api-access-795d7\") pod \"dnsmasq-dns-785d8bcb8c-g7k7t\" (UID: \"a5ad52ae-3537-4139-9492-ad6a9251608b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-g7k7t" Jan 26 16:00:31 crc kubenswrapper[4896]: I0126 16:00:31.255134 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5ad52ae-3537-4139-9492-ad6a9251608b-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-g7k7t\" (UID: \"a5ad52ae-3537-4139-9492-ad6a9251608b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-g7k7t" Jan 26 16:00:31 crc kubenswrapper[4896]: I0126 16:00:31.356665 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5ad52ae-3537-4139-9492-ad6a9251608b-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-g7k7t\" (UID: \"a5ad52ae-3537-4139-9492-ad6a9251608b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-g7k7t" Jan 26 16:00:31 crc kubenswrapper[4896]: I0126 16:00:31.356763 4896 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a5ad52ae-3537-4139-9492-ad6a9251608b-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-g7k7t\" (UID: \"a5ad52ae-3537-4139-9492-ad6a9251608b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-g7k7t" Jan 26 16:00:31 crc kubenswrapper[4896]: I0126 16:00:31.356862 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5ad52ae-3537-4139-9492-ad6a9251608b-config\") pod \"dnsmasq-dns-785d8bcb8c-g7k7t\" (UID: \"a5ad52ae-3537-4139-9492-ad6a9251608b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-g7k7t" Jan 26 16:00:31 crc kubenswrapper[4896]: I0126 16:00:31.356888 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a5ad52ae-3537-4139-9492-ad6a9251608b-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-g7k7t\" (UID: \"a5ad52ae-3537-4139-9492-ad6a9251608b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-g7k7t" Jan 26 16:00:31 crc kubenswrapper[4896]: I0126 16:00:31.356917 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a5ad52ae-3537-4139-9492-ad6a9251608b-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-g7k7t\" (UID: \"a5ad52ae-3537-4139-9492-ad6a9251608b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-g7k7t" Jan 26 16:00:31 crc kubenswrapper[4896]: I0126 16:00:31.356994 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-795d7\" (UniqueName: \"kubernetes.io/projected/a5ad52ae-3537-4139-9492-ad6a9251608b-kube-api-access-795d7\") pod \"dnsmasq-dns-785d8bcb8c-g7k7t\" (UID: \"a5ad52ae-3537-4139-9492-ad6a9251608b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-g7k7t" Jan 26 16:00:31 crc kubenswrapper[4896]: I0126 16:00:31.358304 4896 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a5ad52ae-3537-4139-9492-ad6a9251608b-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-g7k7t\" (UID: \"a5ad52ae-3537-4139-9492-ad6a9251608b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-g7k7t" Jan 26 16:00:31 crc kubenswrapper[4896]: I0126 16:00:31.358327 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5ad52ae-3537-4139-9492-ad6a9251608b-config\") pod \"dnsmasq-dns-785d8bcb8c-g7k7t\" (UID: \"a5ad52ae-3537-4139-9492-ad6a9251608b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-g7k7t" Jan 26 16:00:31 crc kubenswrapper[4896]: I0126 16:00:31.358409 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5ad52ae-3537-4139-9492-ad6a9251608b-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-g7k7t\" (UID: \"a5ad52ae-3537-4139-9492-ad6a9251608b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-g7k7t" Jan 26 16:00:31 crc kubenswrapper[4896]: I0126 16:00:31.358434 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a5ad52ae-3537-4139-9492-ad6a9251608b-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-g7k7t\" (UID: \"a5ad52ae-3537-4139-9492-ad6a9251608b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-g7k7t" Jan 26 16:00:31 crc kubenswrapper[4896]: I0126 16:00:31.359101 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a5ad52ae-3537-4139-9492-ad6a9251608b-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-g7k7t\" (UID: \"a5ad52ae-3537-4139-9492-ad6a9251608b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-g7k7t" Jan 26 16:00:31 crc kubenswrapper[4896]: I0126 16:00:31.398273 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-795d7\" (UniqueName: 
\"kubernetes.io/projected/a5ad52ae-3537-4139-9492-ad6a9251608b-kube-api-access-795d7\") pod \"dnsmasq-dns-785d8bcb8c-g7k7t\" (UID: \"a5ad52ae-3537-4139-9492-ad6a9251608b\") " pod="openstack/dnsmasq-dns-785d8bcb8c-g7k7t" Jan 26 16:00:31 crc kubenswrapper[4896]: I0126 16:00:31.440841 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-g7k7t" Jan 26 16:00:31 crc kubenswrapper[4896]: I0126 16:00:31.729307 4896 generic.go:334] "Generic (PLEG): container finished" podID="15b95f90-b75a-43ab-9c54-acd4c3e658ab" containerID="53b715befc2ba979051d1e68afe6109d3a654efdb51b9845cb429a412105266c" exitCode=0 Jan 26 16:00:31 crc kubenswrapper[4896]: I0126 16:00:31.729374 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"15b95f90-b75a-43ab-9c54-acd4c3e658ab","Type":"ContainerDied","Data":"53b715befc2ba979051d1e68afe6109d3a654efdb51b9845cb429a412105266c"} Jan 26 16:00:32 crc kubenswrapper[4896]: E0126 16:00:32.121666 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 26 16:00:32 crc kubenswrapper[4896]: E0126 16:00:32.122075 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9bbl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-l5784_openstack(590e8b81-a793-4143-9b0e-f2afb348dd91): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 16:00:32 crc kubenswrapper[4896]: E0126 16:00:32.123851 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-l5784" podUID="590e8b81-a793-4143-9b0e-f2afb348dd91" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.166387 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.171466 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.176843 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-m8j8z" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.177147 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.177338 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.192641 4896 scope.go:117] "RemoveContainer" containerID="a056e8d3200f05a106e01eba09d053cb58238e8614fae8ebc24b94d3d2d3ea5f" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.202315 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.297302 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e79e28c8-45cf-4fc6-a99e-59b024131415-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e79e28c8-45cf-4fc6-a99e-59b024131415\") " pod="openstack/glance-default-external-api-0" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.297397 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-032691af-a20f-4ded-a276-f85258d081f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-032691af-a20f-4ded-a276-f85258d081f4\") pod \"glance-default-external-api-0\" (UID: \"e79e28c8-45cf-4fc6-a99e-59b024131415\") " pod="openstack/glance-default-external-api-0" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.297440 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e79e28c8-45cf-4fc6-a99e-59b024131415-config-data\") pod \"glance-default-external-api-0\" (UID: \"e79e28c8-45cf-4fc6-a99e-59b024131415\") " pod="openstack/glance-default-external-api-0" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.297513 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e79e28c8-45cf-4fc6-a99e-59b024131415-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e79e28c8-45cf-4fc6-a99e-59b024131415\") " pod="openstack/glance-default-external-api-0" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.297606 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e79e28c8-45cf-4fc6-a99e-59b024131415-logs\") pod \"glance-default-external-api-0\" (UID: \"e79e28c8-45cf-4fc6-a99e-59b024131415\") " pod="openstack/glance-default-external-api-0" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.297656 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tfz5\" (UniqueName: \"kubernetes.io/projected/e79e28c8-45cf-4fc6-a99e-59b024131415-kube-api-access-5tfz5\") pod \"glance-default-external-api-0\" (UID: \"e79e28c8-45cf-4fc6-a99e-59b024131415\") " pod="openstack/glance-default-external-api-0" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.297824 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e79e28c8-45cf-4fc6-a99e-59b024131415-scripts\") pod \"glance-default-external-api-0\" (UID: \"e79e28c8-45cf-4fc6-a99e-59b024131415\") " pod="openstack/glance-default-external-api-0" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.317199 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-94m9x" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.398799 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bcb36972-ce84-471b-92b5-7be45e7e2d1a-config\") pod \"bcb36972-ce84-471b-92b5-7be45e7e2d1a\" (UID: \"bcb36972-ce84-471b-92b5-7be45e7e2d1a\") " Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.399052 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rv9j\" (UniqueName: \"kubernetes.io/projected/bcb36972-ce84-471b-92b5-7be45e7e2d1a-kube-api-access-4rv9j\") pod \"bcb36972-ce84-471b-92b5-7be45e7e2d1a\" (UID: \"bcb36972-ce84-471b-92b5-7be45e7e2d1a\") " Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.399071 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcb36972-ce84-471b-92b5-7be45e7e2d1a-combined-ca-bundle\") pod \"bcb36972-ce84-471b-92b5-7be45e7e2d1a\" (UID: \"bcb36972-ce84-471b-92b5-7be45e7e2d1a\") " Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.399412 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e79e28c8-45cf-4fc6-a99e-59b024131415-scripts\") pod \"glance-default-external-api-0\" (UID: \"e79e28c8-45cf-4fc6-a99e-59b024131415\") " pod="openstack/glance-default-external-api-0" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.399455 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e79e28c8-45cf-4fc6-a99e-59b024131415-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e79e28c8-45cf-4fc6-a99e-59b024131415\") " pod="openstack/glance-default-external-api-0" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.399492 4896 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-032691af-a20f-4ded-a276-f85258d081f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-032691af-a20f-4ded-a276-f85258d081f4\") pod \"glance-default-external-api-0\" (UID: \"e79e28c8-45cf-4fc6-a99e-59b024131415\") " pod="openstack/glance-default-external-api-0" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.399514 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e79e28c8-45cf-4fc6-a99e-59b024131415-config-data\") pod \"glance-default-external-api-0\" (UID: \"e79e28c8-45cf-4fc6-a99e-59b024131415\") " pod="openstack/glance-default-external-api-0" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.399558 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e79e28c8-45cf-4fc6-a99e-59b024131415-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e79e28c8-45cf-4fc6-a99e-59b024131415\") " pod="openstack/glance-default-external-api-0" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.399784 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e79e28c8-45cf-4fc6-a99e-59b024131415-logs\") pod \"glance-default-external-api-0\" (UID: \"e79e28c8-45cf-4fc6-a99e-59b024131415\") " pod="openstack/glance-default-external-api-0" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.399822 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tfz5\" (UniqueName: \"kubernetes.io/projected/e79e28c8-45cf-4fc6-a99e-59b024131415-kube-api-access-5tfz5\") pod \"glance-default-external-api-0\" (UID: \"e79e28c8-45cf-4fc6-a99e-59b024131415\") " pod="openstack/glance-default-external-api-0" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.402950 4896 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e79e28c8-45cf-4fc6-a99e-59b024131415-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e79e28c8-45cf-4fc6-a99e-59b024131415\") " pod="openstack/glance-default-external-api-0" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.404761 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e79e28c8-45cf-4fc6-a99e-59b024131415-logs\") pod \"glance-default-external-api-0\" (UID: \"e79e28c8-45cf-4fc6-a99e-59b024131415\") " pod="openstack/glance-default-external-api-0" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.405744 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcb36972-ce84-471b-92b5-7be45e7e2d1a-kube-api-access-4rv9j" (OuterVolumeSpecName: "kube-api-access-4rv9j") pod "bcb36972-ce84-471b-92b5-7be45e7e2d1a" (UID: "bcb36972-ce84-471b-92b5-7be45e7e2d1a"). InnerVolumeSpecName "kube-api-access-4rv9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.408185 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e79e28c8-45cf-4fc6-a99e-59b024131415-scripts\") pod \"glance-default-external-api-0\" (UID: \"e79e28c8-45cf-4fc6-a99e-59b024131415\") " pod="openstack/glance-default-external-api-0" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.415066 4896 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.415116 4896 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-032691af-a20f-4ded-a276-f85258d081f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-032691af-a20f-4ded-a276-f85258d081f4\") pod \"glance-default-external-api-0\" (UID: \"e79e28c8-45cf-4fc6-a99e-59b024131415\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c6ee5c5ace645a0437237edccec1152ed0b5c152b57bef8f765a8fb7bcea3897/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.415445 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e79e28c8-45cf-4fc6-a99e-59b024131415-config-data\") pod \"glance-default-external-api-0\" (UID: \"e79e28c8-45cf-4fc6-a99e-59b024131415\") " pod="openstack/glance-default-external-api-0" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.430462 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e79e28c8-45cf-4fc6-a99e-59b024131415-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e79e28c8-45cf-4fc6-a99e-59b024131415\") " pod="openstack/glance-default-external-api-0" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.499492 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tfz5\" (UniqueName: \"kubernetes.io/projected/e79e28c8-45cf-4fc6-a99e-59b024131415-kube-api-access-5tfz5\") pod \"glance-default-external-api-0\" (UID: \"e79e28c8-45cf-4fc6-a99e-59b024131415\") " pod="openstack/glance-default-external-api-0" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.499949 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.501702 4896 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rv9j\" (UniqueName: \"kubernetes.io/projected/bcb36972-ce84-471b-92b5-7be45e7e2d1a-kube-api-access-4rv9j\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:32 crc kubenswrapper[4896]: E0126 16:00:32.507037 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcb36972-ce84-471b-92b5-7be45e7e2d1a" containerName="neutron-db-sync" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.507074 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb36972-ce84-471b-92b5-7be45e7e2d1a" containerName="neutron-db-sync" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.507598 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcb36972-ce84-471b-92b5-7be45e7e2d1a" containerName="neutron-db-sync" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.509216 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.526156 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.528739 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcb36972-ce84-471b-92b5-7be45e7e2d1a-config" (OuterVolumeSpecName: "config") pod "bcb36972-ce84-471b-92b5-7be45e7e2d1a" (UID: "bcb36972-ce84-471b-92b5-7be45e7e2d1a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.563203 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcb36972-ce84-471b-92b5-7be45e7e2d1a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bcb36972-ce84-471b-92b5-7be45e7e2d1a" (UID: "bcb36972-ce84-471b-92b5-7be45e7e2d1a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.653733 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-032691af-a20f-4ded-a276-f85258d081f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-032691af-a20f-4ded-a276-f85258d081f4\") pod \"glance-default-external-api-0\" (UID: \"e79e28c8-45cf-4fc6-a99e-59b024131415\") " pod="openstack/glance-default-external-api-0" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.655337 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6469d9d7-8c97-41fb-98c3-825fd3956ee7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6469d9d7-8c97-41fb-98c3-825fd3956ee7\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.655408 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea\") pod \"glance-default-internal-api-0\" (UID: \"6469d9d7-8c97-41fb-98c3-825fd3956ee7\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.655498 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6469d9d7-8c97-41fb-98c3-825fd3956ee7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6469d9d7-8c97-41fb-98c3-825fd3956ee7\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.655534 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6469d9d7-8c97-41fb-98c3-825fd3956ee7-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"6469d9d7-8c97-41fb-98c3-825fd3956ee7\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.655605 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6469d9d7-8c97-41fb-98c3-825fd3956ee7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6469d9d7-8c97-41fb-98c3-825fd3956ee7\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.655822 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6469d9d7-8c97-41fb-98c3-825fd3956ee7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6469d9d7-8c97-41fb-98c3-825fd3956ee7\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.655880 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28qrn\" (UniqueName: \"kubernetes.io/projected/6469d9d7-8c97-41fb-98c3-825fd3956ee7-kube-api-access-28qrn\") pod \"glance-default-internal-api-0\" (UID: \"6469d9d7-8c97-41fb-98c3-825fd3956ee7\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.655965 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcb36972-ce84-471b-92b5-7be45e7e2d1a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.655990 4896 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/bcb36972-ce84-471b-92b5-7be45e7e2d1a-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.657415 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-default-internal-api-0"] Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.682213 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wxhjt"] Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.685236 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wxhjt" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.707712 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wxhjt"] Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.746968 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-94m9x" event={"ID":"bcb36972-ce84-471b-92b5-7be45e7e2d1a","Type":"ContainerDied","Data":"3dcb6d886bdb78f4908c9771d13353e8efb2a79b7a738597ec4254c88146c128"} Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.747010 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3dcb6d886bdb78f4908c9771d13353e8efb2a79b7a738597ec4254c88146c128" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.747072 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-94m9x" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.757569 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6469d9d7-8c97-41fb-98c3-825fd3956ee7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6469d9d7-8c97-41fb-98c3-825fd3956ee7\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.757635 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28qrn\" (UniqueName: \"kubernetes.io/projected/6469d9d7-8c97-41fb-98c3-825fd3956ee7-kube-api-access-28qrn\") pod \"glance-default-internal-api-0\" (UID: \"6469d9d7-8c97-41fb-98c3-825fd3956ee7\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.757683 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6469d9d7-8c97-41fb-98c3-825fd3956ee7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6469d9d7-8c97-41fb-98c3-825fd3956ee7\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.757701 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea\") pod \"glance-default-internal-api-0\" (UID: \"6469d9d7-8c97-41fb-98c3-825fd3956ee7\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.757760 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6469d9d7-8c97-41fb-98c3-825fd3956ee7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6469d9d7-8c97-41fb-98c3-825fd3956ee7\") " 
pod="openstack/glance-default-internal-api-0" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.757793 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6469d9d7-8c97-41fb-98c3-825fd3956ee7-logs\") pod \"glance-default-internal-api-0\" (UID: \"6469d9d7-8c97-41fb-98c3-825fd3956ee7\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.757840 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6469d9d7-8c97-41fb-98c3-825fd3956ee7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6469d9d7-8c97-41fb-98c3-825fd3956ee7\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.758344 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6469d9d7-8c97-41fb-98c3-825fd3956ee7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6469d9d7-8c97-41fb-98c3-825fd3956ee7\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.758670 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6469d9d7-8c97-41fb-98c3-825fd3956ee7-logs\") pod \"glance-default-internal-api-0\" (UID: \"6469d9d7-8c97-41fb-98c3-825fd3956ee7\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:00:32 crc kubenswrapper[4896]: E0126 16:00:32.761351 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-l5784" podUID="590e8b81-a793-4143-9b0e-f2afb348dd91" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 
16:00:32.762670 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.763126 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6469d9d7-8c97-41fb-98c3-825fd3956ee7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6469d9d7-8c97-41fb-98c3-825fd3956ee7\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.765939 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6469d9d7-8c97-41fb-98c3-825fd3956ee7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6469d9d7-8c97-41fb-98c3-825fd3956ee7\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.766776 4896 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.766840 4896 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea\") pod \"glance-default-internal-api-0\" (UID: \"6469d9d7-8c97-41fb-98c3-825fd3956ee7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ca73410178097169f340b5a8c67a8781e7a4252415873019f27420073d85ffa1/globalmount\"" pod="openstack/glance-default-internal-api-0"
Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.776026 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6469d9d7-8c97-41fb-98c3-825fd3956ee7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6469d9d7-8c97-41fb-98c3-825fd3956ee7\") " pod="openstack/glance-default-internal-api-0"
Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.789427 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28qrn\" (UniqueName: \"kubernetes.io/projected/6469d9d7-8c97-41fb-98c3-825fd3956ee7-kube-api-access-28qrn\") pod \"glance-default-internal-api-0\" (UID: \"6469d9d7-8c97-41fb-98c3-825fd3956ee7\") " pod="openstack/glance-default-internal-api-0"
Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.874214 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bfxt\" (UniqueName: \"kubernetes.io/projected/c356aa29-1407-47fb-80f0-a7f1b4a58919-kube-api-access-5bfxt\") pod \"certified-operators-wxhjt\" (UID: \"c356aa29-1407-47fb-80f0-a7f1b4a58919\") " pod="openshift-marketplace/certified-operators-wxhjt"
Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.874726 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c356aa29-1407-47fb-80f0-a7f1b4a58919-catalog-content\") pod \"certified-operators-wxhjt\" (UID: \"c356aa29-1407-47fb-80f0-a7f1b4a58919\") " pod="openshift-marketplace/certified-operators-wxhjt"
Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.874901 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c356aa29-1407-47fb-80f0-a7f1b4a58919-utilities\") pod \"certified-operators-wxhjt\" (UID: \"c356aa29-1407-47fb-80f0-a7f1b4a58919\") " pod="openshift-marketplace/certified-operators-wxhjt"
Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.978838 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea\") pod \"glance-default-internal-api-0\" (UID: \"6469d9d7-8c97-41fb-98c3-825fd3956ee7\") " pod="openstack/glance-default-internal-api-0"
Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.982380 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490720-2dvdl"]
Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.992620 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bfxt\" (UniqueName: \"kubernetes.io/projected/c356aa29-1407-47fb-80f0-a7f1b4a58919-kube-api-access-5bfxt\") pod \"certified-operators-wxhjt\" (UID: \"c356aa29-1407-47fb-80f0-a7f1b4a58919\") " pod="openshift-marketplace/certified-operators-wxhjt"
Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.992710 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c356aa29-1407-47fb-80f0-a7f1b4a58919-catalog-content\") pod \"certified-operators-wxhjt\" (UID: \"c356aa29-1407-47fb-80f0-a7f1b4a58919\") " pod="openshift-marketplace/certified-operators-wxhjt"
Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.992776 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c356aa29-1407-47fb-80f0-a7f1b4a58919-utilities\") pod \"certified-operators-wxhjt\" (UID: \"c356aa29-1407-47fb-80f0-a7f1b4a58919\") " pod="openshift-marketplace/certified-operators-wxhjt"
Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.993864 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c356aa29-1407-47fb-80f0-a7f1b4a58919-catalog-content\") pod \"certified-operators-wxhjt\" (UID: \"c356aa29-1407-47fb-80f0-a7f1b4a58919\") " pod="openshift-marketplace/certified-operators-wxhjt"
Jan 26 16:00:32 crc kubenswrapper[4896]: I0126 16:00:32.994391 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c356aa29-1407-47fb-80f0-a7f1b4a58919-utilities\") pod \"certified-operators-wxhjt\" (UID: \"c356aa29-1407-47fb-80f0-a7f1b4a58919\") " pod="openshift-marketplace/certified-operators-wxhjt"
Jan 26 16:00:33 crc kubenswrapper[4896]: I0126 16:00:33.015284 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bfxt\" (UniqueName: \"kubernetes.io/projected/c356aa29-1407-47fb-80f0-a7f1b4a58919-kube-api-access-5bfxt\") pod \"certified-operators-wxhjt\" (UID: \"c356aa29-1407-47fb-80f0-a7f1b4a58919\") " pod="openshift-marketplace/certified-operators-wxhjt"
Jan 26 16:00:33 crc kubenswrapper[4896]: I0126 16:00:33.030836 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wxhjt"
Jan 26 16:00:33 crc kubenswrapper[4896]: I0126 16:00:33.198237 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-bhpbc"]
Jan 26 16:00:33 crc kubenswrapper[4896]: I0126 16:00:33.246068 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 26 16:00:33 crc kubenswrapper[4896]: W0126 16:00:33.505828 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d8e811a_f565_4dbb_846a_1a80d4832d44.slice/crio-f1b796b46f7f5ff14101ad4faec05be1476780de28b4784758a0b2edc7e5aa7f WatchSource:0}: Error finding container f1b796b46f7f5ff14101ad4faec05be1476780de28b4784758a0b2edc7e5aa7f: Status 404 returned error can't find the container with id f1b796b46f7f5ff14101ad4faec05be1476780de28b4784758a0b2edc7e5aa7f
Jan 26 16:00:33 crc kubenswrapper[4896]: I0126 16:00:33.820504 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-g7k7t"]
Jan 26 16:00:33 crc kubenswrapper[4896]: I0126 16:00:33.888659 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7748db89d4-5m4rm"]
Jan 26 16:00:33 crc kubenswrapper[4896]: I0126 16:00:33.890805 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7748db89d4-5m4rm"
Jan 26 16:00:33 crc kubenswrapper[4896]: I0126 16:00:33.899385 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Jan 26 16:00:33 crc kubenswrapper[4896]: I0126 16:00:33.899732 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-dvx2m"
Jan 26 16:00:33 crc kubenswrapper[4896]: I0126 16:00:33.900127 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Jan 26 16:00:33 crc kubenswrapper[4896]: I0126 16:00:33.902270 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-bhpbc" event={"ID":"0d8e811a-f565-4dbb-846a-1a80d4832d44","Type":"ContainerStarted","Data":"f1b796b46f7f5ff14101ad4faec05be1476780de28b4784758a0b2edc7e5aa7f"}
Jan 26 16:00:33 crc kubenswrapper[4896]: I0126 16:00:33.912556 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-2dvdl" event={"ID":"8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb","Type":"ContainerStarted","Data":"876a086964ad158825ba757775767e9ce999e4b1a20d0b0893def5d16cbfe71c"}
Jan 26 16:00:33 crc kubenswrapper[4896]: I0126 16:00:33.914748 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 26 16:00:33 crc kubenswrapper[4896]: I0126 16:00:33.916100 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7748db89d4-5m4rm"]
Jan 26 16:00:33 crc kubenswrapper[4896]: I0126 16:00:33.932570 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-jb4fq"]
Jan 26 16:00:33 crc kubenswrapper[4896]: I0126 16:00:33.934739 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-jb4fq"
Jan 26 16:00:33 crc kubenswrapper[4896]: I0126 16:00:33.943395 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wczr6\" (UniqueName: \"kubernetes.io/projected/863656bc-e25d-45c3-9e3a-101cbcdcac9d-kube-api-access-wczr6\") pod \"neutron-7748db89d4-5m4rm\" (UID: \"863656bc-e25d-45c3-9e3a-101cbcdcac9d\") " pod="openstack/neutron-7748db89d4-5m4rm"
Jan 26 16:00:33 crc kubenswrapper[4896]: I0126 16:00:33.943560 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/863656bc-e25d-45c3-9e3a-101cbcdcac9d-httpd-config\") pod \"neutron-7748db89d4-5m4rm\" (UID: \"863656bc-e25d-45c3-9e3a-101cbcdcac9d\") " pod="openstack/neutron-7748db89d4-5m4rm"
Jan 26 16:00:33 crc kubenswrapper[4896]: I0126 16:00:33.943621 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89998198-666d-4e9c-9213-8dd8fdcdd0d9-dns-svc\") pod \"dnsmasq-dns-55f844cf75-jb4fq\" (UID: \"89998198-666d-4e9c-9213-8dd8fdcdd0d9\") " pod="openstack/dnsmasq-dns-55f844cf75-jb4fq"
Jan 26 16:00:33 crc kubenswrapper[4896]: I0126 16:00:33.943705 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/863656bc-e25d-45c3-9e3a-101cbcdcac9d-ovndb-tls-certs\") pod \"neutron-7748db89d4-5m4rm\" (UID: \"863656bc-e25d-45c3-9e3a-101cbcdcac9d\") " pod="openstack/neutron-7748db89d4-5m4rm"
Jan 26 16:00:33 crc kubenswrapper[4896]: I0126 16:00:33.943742 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/863656bc-e25d-45c3-9e3a-101cbcdcac9d-combined-ca-bundle\") pod \"neutron-7748db89d4-5m4rm\" (UID: \"863656bc-e25d-45c3-9e3a-101cbcdcac9d\") " pod="openstack/neutron-7748db89d4-5m4rm"
Jan 26 16:00:33 crc kubenswrapper[4896]: I0126 16:00:33.943791 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/89998198-666d-4e9c-9213-8dd8fdcdd0d9-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-jb4fq\" (UID: \"89998198-666d-4e9c-9213-8dd8fdcdd0d9\") " pod="openstack/dnsmasq-dns-55f844cf75-jb4fq"
Jan 26 16:00:33 crc kubenswrapper[4896]: I0126 16:00:33.943812 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89998198-666d-4e9c-9213-8dd8fdcdd0d9-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-jb4fq\" (UID: \"89998198-666d-4e9c-9213-8dd8fdcdd0d9\") " pod="openstack/dnsmasq-dns-55f844cf75-jb4fq"
Jan 26 16:00:33 crc kubenswrapper[4896]: I0126 16:00:33.943848 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89998198-666d-4e9c-9213-8dd8fdcdd0d9-config\") pod \"dnsmasq-dns-55f844cf75-jb4fq\" (UID: \"89998198-666d-4e9c-9213-8dd8fdcdd0d9\") " pod="openstack/dnsmasq-dns-55f844cf75-jb4fq"
Jan 26 16:00:33 crc kubenswrapper[4896]: I0126 16:00:33.943901 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khrcj\" (UniqueName: \"kubernetes.io/projected/89998198-666d-4e9c-9213-8dd8fdcdd0d9-kube-api-access-khrcj\") pod \"dnsmasq-dns-55f844cf75-jb4fq\" (UID: \"89998198-666d-4e9c-9213-8dd8fdcdd0d9\") " pod="openstack/dnsmasq-dns-55f844cf75-jb4fq"
Jan 26 16:00:33 crc kubenswrapper[4896]: I0126 16:00:33.943933 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89998198-666d-4e9c-9213-8dd8fdcdd0d9-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-jb4fq\" (UID: \"89998198-666d-4e9c-9213-8dd8fdcdd0d9\") " pod="openstack/dnsmasq-dns-55f844cf75-jb4fq"
Jan 26 16:00:33 crc kubenswrapper[4896]: I0126 16:00:33.943962 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/863656bc-e25d-45c3-9e3a-101cbcdcac9d-config\") pod \"neutron-7748db89d4-5m4rm\" (UID: \"863656bc-e25d-45c3-9e3a-101cbcdcac9d\") " pod="openstack/neutron-7748db89d4-5m4rm"
Jan 26 16:00:33 crc kubenswrapper[4896]: I0126 16:00:33.983350 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-jb4fq"]
Jan 26 16:00:34 crc kubenswrapper[4896]: I0126 16:00:34.051497 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/863656bc-e25d-45c3-9e3a-101cbcdcac9d-combined-ca-bundle\") pod \"neutron-7748db89d4-5m4rm\" (UID: \"863656bc-e25d-45c3-9e3a-101cbcdcac9d\") " pod="openstack/neutron-7748db89d4-5m4rm"
Jan 26 16:00:34 crc kubenswrapper[4896]: I0126 16:00:34.051545 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/89998198-666d-4e9c-9213-8dd8fdcdd0d9-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-jb4fq\" (UID: \"89998198-666d-4e9c-9213-8dd8fdcdd0d9\") " pod="openstack/dnsmasq-dns-55f844cf75-jb4fq"
Jan 26 16:00:34 crc kubenswrapper[4896]: I0126 16:00:34.051568 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89998198-666d-4e9c-9213-8dd8fdcdd0d9-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-jb4fq\" (UID: \"89998198-666d-4e9c-9213-8dd8fdcdd0d9\") " pod="openstack/dnsmasq-dns-55f844cf75-jb4fq"
Jan 26 16:00:34 crc kubenswrapper[4896]: I0126 16:00:34.051615 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89998198-666d-4e9c-9213-8dd8fdcdd0d9-config\") pod \"dnsmasq-dns-55f844cf75-jb4fq\" (UID: \"89998198-666d-4e9c-9213-8dd8fdcdd0d9\") " pod="openstack/dnsmasq-dns-55f844cf75-jb4fq"
Jan 26 16:00:34 crc kubenswrapper[4896]: I0126 16:00:34.051659 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khrcj\" (UniqueName: \"kubernetes.io/projected/89998198-666d-4e9c-9213-8dd8fdcdd0d9-kube-api-access-khrcj\") pod \"dnsmasq-dns-55f844cf75-jb4fq\" (UID: \"89998198-666d-4e9c-9213-8dd8fdcdd0d9\") " pod="openstack/dnsmasq-dns-55f844cf75-jb4fq"
Jan 26 16:00:34 crc kubenswrapper[4896]: I0126 16:00:34.051684 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89998198-666d-4e9c-9213-8dd8fdcdd0d9-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-jb4fq\" (UID: \"89998198-666d-4e9c-9213-8dd8fdcdd0d9\") " pod="openstack/dnsmasq-dns-55f844cf75-jb4fq"
Jan 26 16:00:34 crc kubenswrapper[4896]: I0126 16:00:34.051706 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/863656bc-e25d-45c3-9e3a-101cbcdcac9d-config\") pod \"neutron-7748db89d4-5m4rm\" (UID: \"863656bc-e25d-45c3-9e3a-101cbcdcac9d\") " pod="openstack/neutron-7748db89d4-5m4rm"
Jan 26 16:00:34 crc kubenswrapper[4896]: I0126 16:00:34.051808 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wczr6\" (UniqueName: \"kubernetes.io/projected/863656bc-e25d-45c3-9e3a-101cbcdcac9d-kube-api-access-wczr6\") pod \"neutron-7748db89d4-5m4rm\" (UID: \"863656bc-e25d-45c3-9e3a-101cbcdcac9d\") " pod="openstack/neutron-7748db89d4-5m4rm"
Jan 26 16:00:34 crc kubenswrapper[4896]: I0126 16:00:34.051897 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/863656bc-e25d-45c3-9e3a-101cbcdcac9d-httpd-config\") pod \"neutron-7748db89d4-5m4rm\" (UID: \"863656bc-e25d-45c3-9e3a-101cbcdcac9d\") " pod="openstack/neutron-7748db89d4-5m4rm"
Jan 26 16:00:34 crc kubenswrapper[4896]: I0126 16:00:34.051927 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89998198-666d-4e9c-9213-8dd8fdcdd0d9-dns-svc\") pod \"dnsmasq-dns-55f844cf75-jb4fq\" (UID: \"89998198-666d-4e9c-9213-8dd8fdcdd0d9\") " pod="openstack/dnsmasq-dns-55f844cf75-jb4fq"
Jan 26 16:00:34 crc kubenswrapper[4896]: I0126 16:00:34.051990 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/863656bc-e25d-45c3-9e3a-101cbcdcac9d-ovndb-tls-certs\") pod \"neutron-7748db89d4-5m4rm\" (UID: \"863656bc-e25d-45c3-9e3a-101cbcdcac9d\") " pod="openstack/neutron-7748db89d4-5m4rm"
Jan 26 16:00:34 crc kubenswrapper[4896]: I0126 16:00:34.053636 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89998198-666d-4e9c-9213-8dd8fdcdd0d9-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-jb4fq\" (UID: \"89998198-666d-4e9c-9213-8dd8fdcdd0d9\") " pod="openstack/dnsmasq-dns-55f844cf75-jb4fq"
Jan 26 16:00:34 crc kubenswrapper[4896]: I0126 16:00:34.054168 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/89998198-666d-4e9c-9213-8dd8fdcdd0d9-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-jb4fq\" (UID: \"89998198-666d-4e9c-9213-8dd8fdcdd0d9\") " pod="openstack/dnsmasq-dns-55f844cf75-jb4fq"
Jan 26 16:00:34 crc kubenswrapper[4896]: I0126 16:00:34.055926 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89998198-666d-4e9c-9213-8dd8fdcdd0d9-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-jb4fq\" (UID: \"89998198-666d-4e9c-9213-8dd8fdcdd0d9\") " pod="openstack/dnsmasq-dns-55f844cf75-jb4fq"
Jan 26 16:00:34 crc kubenswrapper[4896]: I0126 16:00:34.056066 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89998198-666d-4e9c-9213-8dd8fdcdd0d9-config\") pod \"dnsmasq-dns-55f844cf75-jb4fq\" (UID: \"89998198-666d-4e9c-9213-8dd8fdcdd0d9\") " pod="openstack/dnsmasq-dns-55f844cf75-jb4fq"
Jan 26 16:00:34 crc kubenswrapper[4896]: I0126 16:00:34.056738 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89998198-666d-4e9c-9213-8dd8fdcdd0d9-dns-svc\") pod \"dnsmasq-dns-55f844cf75-jb4fq\" (UID: \"89998198-666d-4e9c-9213-8dd8fdcdd0d9\") " pod="openstack/dnsmasq-dns-55f844cf75-jb4fq"
Jan 26 16:00:34 crc kubenswrapper[4896]: I0126 16:00:34.061247 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/863656bc-e25d-45c3-9e3a-101cbcdcac9d-ovndb-tls-certs\") pod \"neutron-7748db89d4-5m4rm\" (UID: \"863656bc-e25d-45c3-9e3a-101cbcdcac9d\") " pod="openstack/neutron-7748db89d4-5m4rm"
Jan 26 16:00:34 crc kubenswrapper[4896]: I0126 16:00:34.064426 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/863656bc-e25d-45c3-9e3a-101cbcdcac9d-combined-ca-bundle\") pod \"neutron-7748db89d4-5m4rm\" (UID: \"863656bc-e25d-45c3-9e3a-101cbcdcac9d\") " pod="openstack/neutron-7748db89d4-5m4rm"
Jan 26 16:00:34 crc kubenswrapper[4896]: I0126 16:00:34.065217 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/863656bc-e25d-45c3-9e3a-101cbcdcac9d-config\") pod \"neutron-7748db89d4-5m4rm\" (UID: \"863656bc-e25d-45c3-9e3a-101cbcdcac9d\") " pod="openstack/neutron-7748db89d4-5m4rm"
Jan 26 16:00:34 crc kubenswrapper[4896]: I0126 16:00:34.067967 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/863656bc-e25d-45c3-9e3a-101cbcdcac9d-httpd-config\") pod \"neutron-7748db89d4-5m4rm\" (UID: \"863656bc-e25d-45c3-9e3a-101cbcdcac9d\") " pod="openstack/neutron-7748db89d4-5m4rm"
Jan 26 16:00:34 crc kubenswrapper[4896]: I0126 16:00:34.110906 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khrcj\" (UniqueName: \"kubernetes.io/projected/89998198-666d-4e9c-9213-8dd8fdcdd0d9-kube-api-access-khrcj\") pod \"dnsmasq-dns-55f844cf75-jb4fq\" (UID: \"89998198-666d-4e9c-9213-8dd8fdcdd0d9\") " pod="openstack/dnsmasq-dns-55f844cf75-jb4fq"
Jan 26 16:00:34 crc kubenswrapper[4896]: I0126 16:00:34.120340 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wczr6\" (UniqueName: \"kubernetes.io/projected/863656bc-e25d-45c3-9e3a-101cbcdcac9d-kube-api-access-wczr6\") pod \"neutron-7748db89d4-5m4rm\" (UID: \"863656bc-e25d-45c3-9e3a-101cbcdcac9d\") " pod="openstack/neutron-7748db89d4-5m4rm"
Jan 26 16:00:34 crc kubenswrapper[4896]: I0126 16:00:34.287908 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7748db89d4-5m4rm"
Jan 26 16:00:34 crc kubenswrapper[4896]: I0126 16:00:34.341707 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-jb4fq"
Jan 26 16:00:34 crc kubenswrapper[4896]: I0126 16:00:34.933087 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"15b95f90-b75a-43ab-9c54-acd4c3e658ab","Type":"ContainerStarted","Data":"e034d35672709eeb3721be76d430939640c1c205c5b009243142cb445a0a02ef"}
Jan 26 16:00:35 crc kubenswrapper[4896]: I0126 16:00:35.033833 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 26 16:00:35 crc kubenswrapper[4896]: I0126 16:00:35.181525 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-g7k7t"]
Jan 26 16:00:35 crc kubenswrapper[4896]: I0126 16:00:35.212510 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wxhjt"]
Jan 26 16:00:35 crc kubenswrapper[4896]: I0126 16:00:35.476648 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 26 16:00:35 crc kubenswrapper[4896]: I0126 16:00:35.529417 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 26 16:00:35 crc kubenswrapper[4896]: I0126 16:00:35.644318 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 26 16:00:35 crc kubenswrapper[4896]: W0126 16:00:35.717143 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode79e28c8_45cf_4fc6_a99e_59b024131415.slice/crio-36509bbbff9408afe39100d884d6455a1142e23e0ca77344a4bdc5b77458e6c6 WatchSource:0}: Error finding container 36509bbbff9408afe39100d884d6455a1142e23e0ca77344a4bdc5b77458e6c6: Status 404 returned error can't find the container with id 36509bbbff9408afe39100d884d6455a1142e23e0ca77344a4bdc5b77458e6c6
Jan 26 16:00:35 crc kubenswrapper[4896]: I0126 16:00:35.983549 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rv5xj" event={"ID":"d0eef199-8f69-4f92-9435-ff0fd74dd854","Type":"ContainerStarted","Data":"08423b45173db189e8eec3cd9fdbb559bfacaa9533cf24942af9735e6ea79cc8"}
Jan 26 16:00:35 crc kubenswrapper[4896]: I0126 16:00:35.999299 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rd44b" event={"ID":"9b4a2eac-2950-4747-bc43-f287adafb4e2","Type":"ContainerStarted","Data":"8cf0b42e2aafdba5acdfad74ab5208e84b9c347fb2cf1d8075479332526cbb50"}
Jan 26 16:00:36 crc kubenswrapper[4896]: I0126 16:00:36.042653 4896 generic.go:334] "Generic (PLEG): container finished" podID="8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb" containerID="85468301d7e2946a5d33f0a2bbfdcd2e62fcb6abb066da3309e8786305429542" exitCode=0
Jan 26 16:00:36 crc kubenswrapper[4896]: I0126 16:00:36.042807 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-jb4fq"]
Jan 26 16:00:36 crc kubenswrapper[4896]: I0126 16:00:36.043272 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-2dvdl" event={"ID":"8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb","Type":"ContainerDied","Data":"85468301d7e2946a5d33f0a2bbfdcd2e62fcb6abb066da3309e8786305429542"}
Jan 26 16:00:36 crc kubenswrapper[4896]: I0126 16:00:36.071108 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-rv5xj" podStartSLOduration=7.543876083 podStartE2EDuration="50.071060016s" podCreationTimestamp="2026-01-26 15:59:46 +0000 UTC" firstStartedPulling="2026-01-26 15:59:49.598538028 +0000 UTC m=+1547.380418421" lastFinishedPulling="2026-01-26 16:00:32.125721961 +0000 UTC m=+1589.907602354" observedRunningTime="2026-01-26 16:00:36.003186869 +0000 UTC m=+1593.785067262" watchObservedRunningTime="2026-01-26 16:00:36.071060016 +0000 UTC m=+1593.852940409"
Jan 26 16:00:36 crc kubenswrapper[4896]: I0126 16:00:36.092535 4896 generic.go:334] "Generic (PLEG): container finished" podID="c356aa29-1407-47fb-80f0-a7f1b4a58919" containerID="b5a74b1646d77b75b934fbbd487ab5320b58a23d4e90ba362ddf2a697cd1128f" exitCode=0
Jan 26 16:00:36 crc kubenswrapper[4896]: I0126 16:00:36.092686 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e79e28c8-45cf-4fc6-a99e-59b024131415","Type":"ContainerStarted","Data":"36509bbbff9408afe39100d884d6455a1142e23e0ca77344a4bdc5b77458e6c6"}
Jan 26 16:00:36 crc kubenswrapper[4896]: I0126 16:00:36.092752 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wxhjt" event={"ID":"c356aa29-1407-47fb-80f0-a7f1b4a58919","Type":"ContainerDied","Data":"b5a74b1646d77b75b934fbbd487ab5320b58a23d4e90ba362ddf2a697cd1128f"}
Jan 26 16:00:36 crc kubenswrapper[4896]: I0126 16:00:36.092770 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wxhjt" event={"ID":"c356aa29-1407-47fb-80f0-a7f1b4a58919","Type":"ContainerStarted","Data":"b15a691bcf0f1a74e5617f83dc224d485c6a0397ae705d0515c5c49995b0be55"}
Jan 26 16:00:36 crc kubenswrapper[4896]: I0126 16:00:36.125777 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-g7k7t" event={"ID":"a5ad52ae-3537-4139-9492-ad6a9251608b","Type":"ContainerStarted","Data":"83f926fb653db82a4326fdd73e87d7e79ed8b4b109f7ad5938966b2fe8a59aa5"}
Jan 26 16:00:36 crc kubenswrapper[4896]: I0126 16:00:36.142094 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-rd44b" podStartSLOduration=7.6780745459999995 podStartE2EDuration="50.14206783s" podCreationTimestamp="2026-01-26 15:59:46 +0000 UTC" firstStartedPulling="2026-01-26 15:59:49.598152408 +0000 UTC m=+1547.380032801" lastFinishedPulling="2026-01-26 16:00:32.062145692 +0000 UTC m=+1589.844026085" observedRunningTime="2026-01-26 16:00:36.033046697 +0000 UTC m=+1593.814927090" watchObservedRunningTime="2026-01-26 16:00:36.14206783 +0000 UTC m=+1593.923948223"
Jan 26 16:00:36 crc kubenswrapper[4896]: I0126 16:00:36.161192 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-bhpbc" event={"ID":"0d8e811a-f565-4dbb-846a-1a80d4832d44","Type":"ContainerStarted","Data":"bc0f10791d815fdaa77458041583c06938dd033c508916aeb8a1a3783634eeac"}
Jan 26 16:00:36 crc kubenswrapper[4896]: I0126 16:00:36.194024 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6469d9d7-8c97-41fb-98c3-825fd3956ee7","Type":"ContainerStarted","Data":"728c7b3a17975c1c014677fda3abad56169eb6718fdedcaa007abe4b273a8f31"}
Jan 26 16:00:36 crc kubenswrapper[4896]: I0126 16:00:36.208986 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7748db89d4-5m4rm"]
Jan 26 16:00:36 crc kubenswrapper[4896]: I0126 16:00:36.364163 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-bhpbc" podStartSLOduration=20.364139034 podStartE2EDuration="20.364139034s" podCreationTimestamp="2026-01-26 16:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:00:36.213145335 +0000 UTC m=+1593.995025798" watchObservedRunningTime="2026-01-26 16:00:36.364139034 +0000 UTC m=+1594.146019427"
Jan 26 16:00:37 crc kubenswrapper[4896]: E0126 16:00:37.188034 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89998198_666d_4e9c_9213_8dd8fdcdd0d9.slice/crio-conmon-72ac40e8fd004b566ef52984493eb2acc326d3c1b0b5fd9b2dd57ecccf830993.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89998198_666d_4e9c_9213_8dd8fdcdd0d9.slice/crio-72ac40e8fd004b566ef52984493eb2acc326d3c1b0b5fd9b2dd57ecccf830993.scope\": RecentStats: unable to find data in memory cache]"
Jan 26 16:00:37 crc kubenswrapper[4896]: I0126 16:00:37.232515 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41bebe25-46fb-4c06-9977-e39a32407c42","Type":"ContainerStarted","Data":"7d22bf68b6d9f2f702185d0edc40b21d55ac24b50a4ed6bd26bac70cf984f96b"}
Jan 26 16:00:37 crc kubenswrapper[4896]: I0126 16:00:37.235951 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7748db89d4-5m4rm" event={"ID":"863656bc-e25d-45c3-9e3a-101cbcdcac9d","Type":"ContainerStarted","Data":"13038f4b1fc861b0fb52f7a8a7fbca95f80bc8c3a2579a9f7d180e5953a06057"}
Jan 26 16:00:37 crc kubenswrapper[4896]: I0126 16:00:37.236002 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7748db89d4-5m4rm" event={"ID":"863656bc-e25d-45c3-9e3a-101cbcdcac9d","Type":"ContainerStarted","Data":"e8c4460bcbab41706d5fbe10ba6a5409ce8a4625c46ca752ae53ba7b1368add0"}
Jan 26 16:00:37 crc kubenswrapper[4896]: I0126 16:00:37.251345 4896 generic.go:334] "Generic (PLEG): container finished" podID="a5ad52ae-3537-4139-9492-ad6a9251608b" containerID="768687681df22da6ac0dfc0def622876e71a36e529281eeb6c0918e669bc50cc" exitCode=0
Jan 26 16:00:37 crc kubenswrapper[4896]: I0126 16:00:37.251440 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-g7k7t" event={"ID":"a5ad52ae-3537-4139-9492-ad6a9251608b","Type":"ContainerDied","Data":"768687681df22da6ac0dfc0def622876e71a36e529281eeb6c0918e669bc50cc"}
Jan 26 16:00:37 crc kubenswrapper[4896]: I0126 16:00:37.301636 4896 generic.go:334] "Generic (PLEG): container finished" podID="89998198-666d-4e9c-9213-8dd8fdcdd0d9" containerID="72ac40e8fd004b566ef52984493eb2acc326d3c1b0b5fd9b2dd57ecccf830993" exitCode=0
Jan 26 16:00:37 crc kubenswrapper[4896]: I0126 16:00:37.302353 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-jb4fq" event={"ID":"89998198-666d-4e9c-9213-8dd8fdcdd0d9","Type":"ContainerDied","Data":"72ac40e8fd004b566ef52984493eb2acc326d3c1b0b5fd9b2dd57ecccf830993"}
Jan 26 16:00:37 crc kubenswrapper[4896]: I0126 16:00:37.302411 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-jb4fq" event={"ID":"89998198-666d-4e9c-9213-8dd8fdcdd0d9","Type":"ContainerStarted","Data":"4bda0e115f21eafe5bd1b2c706a7027cd8e8486ad52410e43467cc2b27f2eed7"}
Jan 26 16:00:37 crc kubenswrapper[4896]: I0126 16:00:37.554429 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-g7k7t"
Jan 26 16:00:37 crc kubenswrapper[4896]: I0126 16:00:37.674441 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a5ad52ae-3537-4139-9492-ad6a9251608b-dns-swift-storage-0\") pod \"a5ad52ae-3537-4139-9492-ad6a9251608b\" (UID: \"a5ad52ae-3537-4139-9492-ad6a9251608b\") "
Jan 26 16:00:37 crc kubenswrapper[4896]: I0126 16:00:37.674897 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5ad52ae-3537-4139-9492-ad6a9251608b-config\") pod \"a5ad52ae-3537-4139-9492-ad6a9251608b\" (UID: \"a5ad52ae-3537-4139-9492-ad6a9251608b\") "
Jan 26 16:00:37 crc kubenswrapper[4896]: I0126 16:00:37.674933 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-795d7\" (UniqueName: \"kubernetes.io/projected/a5ad52ae-3537-4139-9492-ad6a9251608b-kube-api-access-795d7\") pod \"a5ad52ae-3537-4139-9492-ad6a9251608b\" (UID: \"a5ad52ae-3537-4139-9492-ad6a9251608b\") "
Jan 26 16:00:37 crc kubenswrapper[4896]: I0126 16:00:37.674983 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a5ad52ae-3537-4139-9492-ad6a9251608b-ovsdbserver-nb\") pod \"a5ad52ae-3537-4139-9492-ad6a9251608b\" (UID: \"a5ad52ae-3537-4139-9492-ad6a9251608b\") "
Jan 26 16:00:37 crc kubenswrapper[4896]: I0126 16:00:37.675042 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a5ad52ae-3537-4139-9492-ad6a9251608b-ovsdbserver-sb\") pod \"a5ad52ae-3537-4139-9492-ad6a9251608b\" (UID: \"a5ad52ae-3537-4139-9492-ad6a9251608b\") "
Jan 26 16:00:37 crc kubenswrapper[4896]: I0126 16:00:37.675137 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5ad52ae-3537-4139-9492-ad6a9251608b-dns-svc\") pod \"a5ad52ae-3537-4139-9492-ad6a9251608b\" (UID: \"a5ad52ae-3537-4139-9492-ad6a9251608b\") "
Jan 26 16:00:37 crc kubenswrapper[4896]: I0126 16:00:37.695335 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5ad52ae-3537-4139-9492-ad6a9251608b-kube-api-access-795d7" (OuterVolumeSpecName: "kube-api-access-795d7") pod "a5ad52ae-3537-4139-9492-ad6a9251608b" (UID: "a5ad52ae-3537-4139-9492-ad6a9251608b"). InnerVolumeSpecName "kube-api-access-795d7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:00:37 crc kubenswrapper[4896]: I0126 16:00:37.761806 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5ad52ae-3537-4139-9492-ad6a9251608b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a5ad52ae-3537-4139-9492-ad6a9251608b" (UID: "a5ad52ae-3537-4139-9492-ad6a9251608b"). InnerVolumeSpecName "dns-swift-storage-0".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:00:37 crc kubenswrapper[4896]: I0126 16:00:37.808375 4896 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a5ad52ae-3537-4139-9492-ad6a9251608b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:37 crc kubenswrapper[4896]: I0126 16:00:37.808438 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-795d7\" (UniqueName: \"kubernetes.io/projected/a5ad52ae-3537-4139-9492-ad6a9251608b-kube-api-access-795d7\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:37 crc kubenswrapper[4896]: I0126 16:00:37.824360 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5ad52ae-3537-4139-9492-ad6a9251608b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a5ad52ae-3537-4139-9492-ad6a9251608b" (UID: "a5ad52ae-3537-4139-9492-ad6a9251608b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:00:37 crc kubenswrapper[4896]: I0126 16:00:37.825161 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5ad52ae-3537-4139-9492-ad6a9251608b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a5ad52ae-3537-4139-9492-ad6a9251608b" (UID: "a5ad52ae-3537-4139-9492-ad6a9251608b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:00:37 crc kubenswrapper[4896]: I0126 16:00:37.829039 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5ad52ae-3537-4139-9492-ad6a9251608b-config" (OuterVolumeSpecName: "config") pod "a5ad52ae-3537-4139-9492-ad6a9251608b" (UID: "a5ad52ae-3537-4139-9492-ad6a9251608b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:00:37 crc kubenswrapper[4896]: I0126 16:00:37.838084 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5ad52ae-3537-4139-9492-ad6a9251608b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a5ad52ae-3537-4139-9492-ad6a9251608b" (UID: "a5ad52ae-3537-4139-9492-ad6a9251608b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:00:37 crc kubenswrapper[4896]: I0126 16:00:37.911114 4896 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a5ad52ae-3537-4139-9492-ad6a9251608b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:37 crc kubenswrapper[4896]: I0126 16:00:37.911450 4896 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5ad52ae-3537-4139-9492-ad6a9251608b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:37 crc kubenswrapper[4896]: I0126 16:00:37.911465 4896 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5ad52ae-3537-4139-9492-ad6a9251608b-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:37 crc kubenswrapper[4896]: I0126 16:00:37.911476 4896 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a5ad52ae-3537-4139-9492-ad6a9251608b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:37 crc kubenswrapper[4896]: I0126 16:00:37.944660 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-2dvdl" Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.007672 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5f5b95478f-8qxzd"] Jan 26 16:00:38 crc kubenswrapper[4896]: E0126 16:00:38.008138 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb" containerName="collect-profiles" Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.008155 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb" containerName="collect-profiles" Jan 26 16:00:38 crc kubenswrapper[4896]: E0126 16:00:38.008192 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5ad52ae-3537-4139-9492-ad6a9251608b" containerName="init" Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.008198 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5ad52ae-3537-4139-9492-ad6a9251608b" containerName="init" Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.008417 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb" containerName="collect-profiles" Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.008433 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5ad52ae-3537-4139-9492-ad6a9251608b" containerName="init" Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.009558 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5f5b95478f-8qxzd" Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.013176 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb-config-volume\") pod \"8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb\" (UID: \"8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb\") " Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.013374 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4zr2\" (UniqueName: \"kubernetes.io/projected/8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb-kube-api-access-f4zr2\") pod \"8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb\" (UID: \"8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb\") " Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.013537 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb-secret-volume\") pod \"8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb\" (UID: \"8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb\") " Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.015044 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb-config-volume" (OuterVolumeSpecName: "config-volume") pod "8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb" (UID: "8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.019338 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.024515 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.055005 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb" (UID: "8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.081824 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb-kube-api-access-f4zr2" (OuterVolumeSpecName: "kube-api-access-f4zr2") pod "8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb" (UID: "8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb"). InnerVolumeSpecName "kube-api-access-f4zr2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.098085 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5f5b95478f-8qxzd"] Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.117068 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdbdc1e0-1624-4300-91bb-1bfe556567c6-combined-ca-bundle\") pod \"neutron-5f5b95478f-8qxzd\" (UID: \"bdbdc1e0-1624-4300-91bb-1bfe556567c6\") " pod="openstack/neutron-5f5b95478f-8qxzd" Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.117127 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd78w\" (UniqueName: \"kubernetes.io/projected/bdbdc1e0-1624-4300-91bb-1bfe556567c6-kube-api-access-fd78w\") pod \"neutron-5f5b95478f-8qxzd\" (UID: \"bdbdc1e0-1624-4300-91bb-1bfe556567c6\") " pod="openstack/neutron-5f5b95478f-8qxzd" Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.117216 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdbdc1e0-1624-4300-91bb-1bfe556567c6-ovndb-tls-certs\") pod \"neutron-5f5b95478f-8qxzd\" (UID: \"bdbdc1e0-1624-4300-91bb-1bfe556567c6\") " pod="openstack/neutron-5f5b95478f-8qxzd" Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.117276 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdbdc1e0-1624-4300-91bb-1bfe556567c6-public-tls-certs\") pod \"neutron-5f5b95478f-8qxzd\" (UID: \"bdbdc1e0-1624-4300-91bb-1bfe556567c6\") " pod="openstack/neutron-5f5b95478f-8qxzd" Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.117327 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/secret/bdbdc1e0-1624-4300-91bb-1bfe556567c6-config\") pod \"neutron-5f5b95478f-8qxzd\" (UID: \"bdbdc1e0-1624-4300-91bb-1bfe556567c6\") " pod="openstack/neutron-5f5b95478f-8qxzd" Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.118141 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdbdc1e0-1624-4300-91bb-1bfe556567c6-internal-tls-certs\") pod \"neutron-5f5b95478f-8qxzd\" (UID: \"bdbdc1e0-1624-4300-91bb-1bfe556567c6\") " pod="openstack/neutron-5f5b95478f-8qxzd" Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.118397 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bdbdc1e0-1624-4300-91bb-1bfe556567c6-httpd-config\") pod \"neutron-5f5b95478f-8qxzd\" (UID: \"bdbdc1e0-1624-4300-91bb-1bfe556567c6\") " pod="openstack/neutron-5f5b95478f-8qxzd" Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.118490 4896 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.118508 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4zr2\" (UniqueName: \"kubernetes.io/projected/8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb-kube-api-access-f4zr2\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.118522 4896 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.220341 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" 
(UniqueName: \"kubernetes.io/secret/bdbdc1e0-1624-4300-91bb-1bfe556567c6-httpd-config\") pod \"neutron-5f5b95478f-8qxzd\" (UID: \"bdbdc1e0-1624-4300-91bb-1bfe556567c6\") " pod="openstack/neutron-5f5b95478f-8qxzd" Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.220388 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdbdc1e0-1624-4300-91bb-1bfe556567c6-combined-ca-bundle\") pod \"neutron-5f5b95478f-8qxzd\" (UID: \"bdbdc1e0-1624-4300-91bb-1bfe556567c6\") " pod="openstack/neutron-5f5b95478f-8qxzd" Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.220414 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd78w\" (UniqueName: \"kubernetes.io/projected/bdbdc1e0-1624-4300-91bb-1bfe556567c6-kube-api-access-fd78w\") pod \"neutron-5f5b95478f-8qxzd\" (UID: \"bdbdc1e0-1624-4300-91bb-1bfe556567c6\") " pod="openstack/neutron-5f5b95478f-8qxzd" Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.220454 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdbdc1e0-1624-4300-91bb-1bfe556567c6-ovndb-tls-certs\") pod \"neutron-5f5b95478f-8qxzd\" (UID: \"bdbdc1e0-1624-4300-91bb-1bfe556567c6\") " pod="openstack/neutron-5f5b95478f-8qxzd" Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.220488 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdbdc1e0-1624-4300-91bb-1bfe556567c6-public-tls-certs\") pod \"neutron-5f5b95478f-8qxzd\" (UID: \"bdbdc1e0-1624-4300-91bb-1bfe556567c6\") " pod="openstack/neutron-5f5b95478f-8qxzd" Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.220517 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bdbdc1e0-1624-4300-91bb-1bfe556567c6-config\") pod 
\"neutron-5f5b95478f-8qxzd\" (UID: \"bdbdc1e0-1624-4300-91bb-1bfe556567c6\") " pod="openstack/neutron-5f5b95478f-8qxzd" Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.220619 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdbdc1e0-1624-4300-91bb-1bfe556567c6-internal-tls-certs\") pod \"neutron-5f5b95478f-8qxzd\" (UID: \"bdbdc1e0-1624-4300-91bb-1bfe556567c6\") " pod="openstack/neutron-5f5b95478f-8qxzd" Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.232463 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdbdc1e0-1624-4300-91bb-1bfe556567c6-ovndb-tls-certs\") pod \"neutron-5f5b95478f-8qxzd\" (UID: \"bdbdc1e0-1624-4300-91bb-1bfe556567c6\") " pod="openstack/neutron-5f5b95478f-8qxzd" Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.236778 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdbdc1e0-1624-4300-91bb-1bfe556567c6-public-tls-certs\") pod \"neutron-5f5b95478f-8qxzd\" (UID: \"bdbdc1e0-1624-4300-91bb-1bfe556567c6\") " pod="openstack/neutron-5f5b95478f-8qxzd" Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.237969 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bdbdc1e0-1624-4300-91bb-1bfe556567c6-httpd-config\") pod \"neutron-5f5b95478f-8qxzd\" (UID: \"bdbdc1e0-1624-4300-91bb-1bfe556567c6\") " pod="openstack/neutron-5f5b95478f-8qxzd" Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.244879 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdbdc1e0-1624-4300-91bb-1bfe556567c6-combined-ca-bundle\") pod \"neutron-5f5b95478f-8qxzd\" (UID: \"bdbdc1e0-1624-4300-91bb-1bfe556567c6\") " pod="openstack/neutron-5f5b95478f-8qxzd" Jan 
26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.245984 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdbdc1e0-1624-4300-91bb-1bfe556567c6-internal-tls-certs\") pod \"neutron-5f5b95478f-8qxzd\" (UID: \"bdbdc1e0-1624-4300-91bb-1bfe556567c6\") " pod="openstack/neutron-5f5b95478f-8qxzd" Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.246612 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/bdbdc1e0-1624-4300-91bb-1bfe556567c6-config\") pod \"neutron-5f5b95478f-8qxzd\" (UID: \"bdbdc1e0-1624-4300-91bb-1bfe556567c6\") " pod="openstack/neutron-5f5b95478f-8qxzd" Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.256144 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd78w\" (UniqueName: \"kubernetes.io/projected/bdbdc1e0-1624-4300-91bb-1bfe556567c6-kube-api-access-fd78w\") pod \"neutron-5f5b95478f-8qxzd\" (UID: \"bdbdc1e0-1624-4300-91bb-1bfe556567c6\") " pod="openstack/neutron-5f5b95478f-8qxzd" Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.322775 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6469d9d7-8c97-41fb-98c3-825fd3956ee7","Type":"ContainerStarted","Data":"35aa45c2d66b894b0bef3de9cd3a7db9024f870a2fdb376e9741d46a5b0bccc3"} Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.325444 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-2dvdl" Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.325634 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-2dvdl" event={"ID":"8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb","Type":"ContainerDied","Data":"876a086964ad158825ba757775767e9ce999e4b1a20d0b0893def5d16cbfe71c"} Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.325744 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="876a086964ad158825ba757775767e9ce999e4b1a20d0b0893def5d16cbfe71c" Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.339084 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e79e28c8-45cf-4fc6-a99e-59b024131415","Type":"ContainerStarted","Data":"b23c59ab68fc957f3fe183a19e9c191a646596732cf0a1d6e8de4083cc7a4c8a"} Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.364294 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7748db89d4-5m4rm" event={"ID":"863656bc-e25d-45c3-9e3a-101cbcdcac9d","Type":"ContainerStarted","Data":"98f0b276d69dd247dadc68513e3b0a32c265aaba50665acfcc9ddd1e793452fd"} Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.364623 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5f5b95478f-8qxzd" Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.375121 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-g7k7t" event={"ID":"a5ad52ae-3537-4139-9492-ad6a9251608b","Type":"ContainerDied","Data":"83f926fb653db82a4326fdd73e87d7e79ed8b4b109f7ad5938966b2fe8a59aa5"} Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.375183 4896 scope.go:117] "RemoveContainer" containerID="768687681df22da6ac0dfc0def622876e71a36e529281eeb6c0918e669bc50cc" Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.375377 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-g7k7t" Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.544753 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-g7k7t"] Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.555863 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-g7k7t"] Jan 26 16:00:38 crc kubenswrapper[4896]: I0126 16:00:38.796572 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5ad52ae-3537-4139-9492-ad6a9251608b" path="/var/lib/kubelet/pods/a5ad52ae-3537-4139-9492-ad6a9251608b/volumes" Jan 26 16:00:39 crc kubenswrapper[4896]: I0126 16:00:39.481336 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5f5b95478f-8qxzd"] Jan 26 16:00:43 crc kubenswrapper[4896]: I0126 16:00:43.437095 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-jb4fq" event={"ID":"89998198-666d-4e9c-9213-8dd8fdcdd0d9","Type":"ContainerStarted","Data":"04db21ccbca74daecbec9d15e26110f8d4a3b9f7d53b5587c897d1a92e5626d8"} Jan 26 16:00:43 crc kubenswrapper[4896]: I0126 16:00:43.437804 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-jb4fq" Jan 26 16:00:43 crc 
kubenswrapper[4896]: I0126 16:00:43.437823 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7748db89d4-5m4rm" Jan 26 16:00:43 crc kubenswrapper[4896]: I0126 16:00:43.442943 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-7748db89d4-5m4rm" podUID="863656bc-e25d-45c3-9e3a-101cbcdcac9d" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 26 16:00:43 crc kubenswrapper[4896]: I0126 16:00:43.475223 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7748db89d4-5m4rm" podStartSLOduration=10.475201428 podStartE2EDuration="10.475201428s" podCreationTimestamp="2026-01-26 16:00:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:00:43.459314796 +0000 UTC m=+1601.241195209" watchObservedRunningTime="2026-01-26 16:00:43.475201428 +0000 UTC m=+1601.257081841" Jan 26 16:00:43 crc kubenswrapper[4896]: I0126 16:00:43.499346 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-jb4fq" podStartSLOduration=10.499327474 podStartE2EDuration="10.499327474s" podCreationTimestamp="2026-01-26 16:00:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:00:43.489225174 +0000 UTC m=+1601.271105577" watchObservedRunningTime="2026-01-26 16:00:43.499327474 +0000 UTC m=+1601.281207867" Jan 26 16:00:44 crc kubenswrapper[4896]: I0126 16:00:44.451432 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f5b95478f-8qxzd" event={"ID":"bdbdc1e0-1624-4300-91bb-1bfe556567c6","Type":"ContainerStarted","Data":"49095e0e8aef89c3e8881fbd43394fbf067f8abc795c52f41881f52df9ff18a2"} Jan 26 16:00:44 crc kubenswrapper[4896]: I0126 16:00:44.453555 4896 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6469d9d7-8c97-41fb-98c3-825fd3956ee7","Type":"ContainerStarted","Data":"daa548ad23ffda5fca96367b0d48b35b81dcb09412b48deddbfe1bbfa4123f83"} Jan 26 16:00:44 crc kubenswrapper[4896]: I0126 16:00:44.453794 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6469d9d7-8c97-41fb-98c3-825fd3956ee7" containerName="glance-log" containerID="cri-o://35aa45c2d66b894b0bef3de9cd3a7db9024f870a2fdb376e9741d46a5b0bccc3" gracePeriod=30 Jan 26 16:00:44 crc kubenswrapper[4896]: I0126 16:00:44.454499 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6469d9d7-8c97-41fb-98c3-825fd3956ee7" containerName="glance-httpd" containerID="cri-o://daa548ad23ffda5fca96367b0d48b35b81dcb09412b48deddbfe1bbfa4123f83" gracePeriod=30 Jan 26 16:00:44 crc kubenswrapper[4896]: I0126 16:00:44.457382 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-7748db89d4-5m4rm" podUID="863656bc-e25d-45c3-9e3a-101cbcdcac9d" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 26 16:00:44 crc kubenswrapper[4896]: I0126 16:00:44.494327 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=13.494301968 podStartE2EDuration="13.494301968s" podCreationTimestamp="2026-01-26 16:00:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:00:44.478027936 +0000 UTC m=+1602.259908329" watchObservedRunningTime="2026-01-26 16:00:44.494301968 +0000 UTC m=+1602.276182361" Jan 26 16:00:45 crc kubenswrapper[4896]: I0126 16:00:45.486291 4896 generic.go:334] "Generic (PLEG): container finished" podID="0d8e811a-f565-4dbb-846a-1a80d4832d44" 
containerID="bc0f10791d815fdaa77458041583c06938dd033c508916aeb8a1a3783634eeac" exitCode=0 Jan 26 16:00:45 crc kubenswrapper[4896]: I0126 16:00:45.486703 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-bhpbc" event={"ID":"0d8e811a-f565-4dbb-846a-1a80d4832d44","Type":"ContainerDied","Data":"bc0f10791d815fdaa77458041583c06938dd033c508916aeb8a1a3783634eeac"} Jan 26 16:00:45 crc kubenswrapper[4896]: I0126 16:00:45.492876 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"15b95f90-b75a-43ab-9c54-acd4c3e658ab","Type":"ContainerStarted","Data":"99232daf1b3a68807a3e4b38996a447324af29bb073115af51d3472a4d8255c7"} Jan 26 16:00:45 crc kubenswrapper[4896]: I0126 16:00:45.495925 4896 generic.go:334] "Generic (PLEG): container finished" podID="6469d9d7-8c97-41fb-98c3-825fd3956ee7" containerID="daa548ad23ffda5fca96367b0d48b35b81dcb09412b48deddbfe1bbfa4123f83" exitCode=0 Jan 26 16:00:45 crc kubenswrapper[4896]: I0126 16:00:45.495944 4896 generic.go:334] "Generic (PLEG): container finished" podID="6469d9d7-8c97-41fb-98c3-825fd3956ee7" containerID="35aa45c2d66b894b0bef3de9cd3a7db9024f870a2fdb376e9741d46a5b0bccc3" exitCode=143 Jan 26 16:00:45 crc kubenswrapper[4896]: I0126 16:00:45.495961 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6469d9d7-8c97-41fb-98c3-825fd3956ee7","Type":"ContainerDied","Data":"daa548ad23ffda5fca96367b0d48b35b81dcb09412b48deddbfe1bbfa4123f83"} Jan 26 16:00:45 crc kubenswrapper[4896]: I0126 16:00:45.495979 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6469d9d7-8c97-41fb-98c3-825fd3956ee7","Type":"ContainerDied","Data":"35aa45c2d66b894b0bef3de9cd3a7db9024f870a2fdb376e9741d46a5b0bccc3"} Jan 26 16:00:45 crc kubenswrapper[4896]: I0126 16:00:45.616972 4896 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-nwtqn"] Jan 26 16:00:45 crc kubenswrapper[4896]: I0126 16:00:45.619426 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nwtqn" Jan 26 16:00:45 crc kubenswrapper[4896]: I0126 16:00:45.644069 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nwtqn"] Jan 26 16:00:45 crc kubenswrapper[4896]: I0126 16:00:45.708157 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4450b82-6d66-4109-8fec-6b979256d032-utilities\") pod \"redhat-marketplace-nwtqn\" (UID: \"d4450b82-6d66-4109-8fec-6b979256d032\") " pod="openshift-marketplace/redhat-marketplace-nwtqn" Jan 26 16:00:45 crc kubenswrapper[4896]: I0126 16:00:45.708278 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j775b\" (UniqueName: \"kubernetes.io/projected/d4450b82-6d66-4109-8fec-6b979256d032-kube-api-access-j775b\") pod \"redhat-marketplace-nwtqn\" (UID: \"d4450b82-6d66-4109-8fec-6b979256d032\") " pod="openshift-marketplace/redhat-marketplace-nwtqn" Jan 26 16:00:45 crc kubenswrapper[4896]: I0126 16:00:45.708478 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4450b82-6d66-4109-8fec-6b979256d032-catalog-content\") pod \"redhat-marketplace-nwtqn\" (UID: \"d4450b82-6d66-4109-8fec-6b979256d032\") " pod="openshift-marketplace/redhat-marketplace-nwtqn" Jan 26 16:00:45 crc kubenswrapper[4896]: I0126 16:00:45.810461 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4450b82-6d66-4109-8fec-6b979256d032-catalog-content\") pod \"redhat-marketplace-nwtqn\" (UID: 
\"d4450b82-6d66-4109-8fec-6b979256d032\") " pod="openshift-marketplace/redhat-marketplace-nwtqn" Jan 26 16:00:45 crc kubenswrapper[4896]: I0126 16:00:45.810563 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4450b82-6d66-4109-8fec-6b979256d032-utilities\") pod \"redhat-marketplace-nwtqn\" (UID: \"d4450b82-6d66-4109-8fec-6b979256d032\") " pod="openshift-marketplace/redhat-marketplace-nwtqn" Jan 26 16:00:45 crc kubenswrapper[4896]: I0126 16:00:45.810662 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j775b\" (UniqueName: \"kubernetes.io/projected/d4450b82-6d66-4109-8fec-6b979256d032-kube-api-access-j775b\") pod \"redhat-marketplace-nwtqn\" (UID: \"d4450b82-6d66-4109-8fec-6b979256d032\") " pod="openshift-marketplace/redhat-marketplace-nwtqn" Jan 26 16:00:45 crc kubenswrapper[4896]: I0126 16:00:45.811142 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4450b82-6d66-4109-8fec-6b979256d032-catalog-content\") pod \"redhat-marketplace-nwtqn\" (UID: \"d4450b82-6d66-4109-8fec-6b979256d032\") " pod="openshift-marketplace/redhat-marketplace-nwtqn" Jan 26 16:00:45 crc kubenswrapper[4896]: I0126 16:00:45.811160 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4450b82-6d66-4109-8fec-6b979256d032-utilities\") pod \"redhat-marketplace-nwtqn\" (UID: \"d4450b82-6d66-4109-8fec-6b979256d032\") " pod="openshift-marketplace/redhat-marketplace-nwtqn" Jan 26 16:00:45 crc kubenswrapper[4896]: I0126 16:00:45.847052 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j775b\" (UniqueName: \"kubernetes.io/projected/d4450b82-6d66-4109-8fec-6b979256d032-kube-api-access-j775b\") pod \"redhat-marketplace-nwtqn\" (UID: \"d4450b82-6d66-4109-8fec-6b979256d032\") " 
pod="openshift-marketplace/redhat-marketplace-nwtqn" Jan 26 16:00:45 crc kubenswrapper[4896]: I0126 16:00:45.956940 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nwtqn" Jan 26 16:00:46 crc kubenswrapper[4896]: I0126 16:00:46.507788 4896 generic.go:334] "Generic (PLEG): container finished" podID="9b4a2eac-2950-4747-bc43-f287adafb4e2" containerID="8cf0b42e2aafdba5acdfad74ab5208e84b9c347fb2cf1d8075479332526cbb50" exitCode=0 Jan 26 16:00:46 crc kubenswrapper[4896]: I0126 16:00:46.507820 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rd44b" event={"ID":"9b4a2eac-2950-4747-bc43-f287adafb4e2","Type":"ContainerDied","Data":"8cf0b42e2aafdba5acdfad74ab5208e84b9c347fb2cf1d8075479332526cbb50"} Jan 26 16:00:47 crc kubenswrapper[4896]: I0126 16:00:47.265264 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-bhpbc" Jan 26 16:00:47 crc kubenswrapper[4896]: I0126 16:00:47.357774 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d8e811a-f565-4dbb-846a-1a80d4832d44-config-data\") pod \"0d8e811a-f565-4dbb-846a-1a80d4832d44\" (UID: \"0d8e811a-f565-4dbb-846a-1a80d4832d44\") " Jan 26 16:00:47 crc kubenswrapper[4896]: I0126 16:00:47.358110 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d8e811a-f565-4dbb-846a-1a80d4832d44-combined-ca-bundle\") pod \"0d8e811a-f565-4dbb-846a-1a80d4832d44\" (UID: \"0d8e811a-f565-4dbb-846a-1a80d4832d44\") " Jan 26 16:00:47 crc kubenswrapper[4896]: I0126 16:00:47.358188 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d8e811a-f565-4dbb-846a-1a80d4832d44-scripts\") pod \"0d8e811a-f565-4dbb-846a-1a80d4832d44\" (UID: 
\"0d8e811a-f565-4dbb-846a-1a80d4832d44\") " Jan 26 16:00:47 crc kubenswrapper[4896]: I0126 16:00:47.358237 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0d8e811a-f565-4dbb-846a-1a80d4832d44-credential-keys\") pod \"0d8e811a-f565-4dbb-846a-1a80d4832d44\" (UID: \"0d8e811a-f565-4dbb-846a-1a80d4832d44\") " Jan 26 16:00:47 crc kubenswrapper[4896]: I0126 16:00:47.358342 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0d8e811a-f565-4dbb-846a-1a80d4832d44-fernet-keys\") pod \"0d8e811a-f565-4dbb-846a-1a80d4832d44\" (UID: \"0d8e811a-f565-4dbb-846a-1a80d4832d44\") " Jan 26 16:00:47 crc kubenswrapper[4896]: I0126 16:00:47.358391 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbr9f\" (UniqueName: \"kubernetes.io/projected/0d8e811a-f565-4dbb-846a-1a80d4832d44-kube-api-access-xbr9f\") pod \"0d8e811a-f565-4dbb-846a-1a80d4832d44\" (UID: \"0d8e811a-f565-4dbb-846a-1a80d4832d44\") " Jan 26 16:00:47 crc kubenswrapper[4896]: I0126 16:00:47.368756 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d8e811a-f565-4dbb-846a-1a80d4832d44-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "0d8e811a-f565-4dbb-846a-1a80d4832d44" (UID: "0d8e811a-f565-4dbb-846a-1a80d4832d44"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:47 crc kubenswrapper[4896]: I0126 16:00:47.368828 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d8e811a-f565-4dbb-846a-1a80d4832d44-scripts" (OuterVolumeSpecName: "scripts") pod "0d8e811a-f565-4dbb-846a-1a80d4832d44" (UID: "0d8e811a-f565-4dbb-846a-1a80d4832d44"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:47 crc kubenswrapper[4896]: I0126 16:00:47.368842 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d8e811a-f565-4dbb-846a-1a80d4832d44-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0d8e811a-f565-4dbb-846a-1a80d4832d44" (UID: "0d8e811a-f565-4dbb-846a-1a80d4832d44"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:47 crc kubenswrapper[4896]: I0126 16:00:47.376909 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d8e811a-f565-4dbb-846a-1a80d4832d44-kube-api-access-xbr9f" (OuterVolumeSpecName: "kube-api-access-xbr9f") pod "0d8e811a-f565-4dbb-846a-1a80d4832d44" (UID: "0d8e811a-f565-4dbb-846a-1a80d4832d44"). InnerVolumeSpecName "kube-api-access-xbr9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:00:47 crc kubenswrapper[4896]: I0126 16:00:47.463601 4896 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0d8e811a-f565-4dbb-846a-1a80d4832d44-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:47 crc kubenswrapper[4896]: I0126 16:00:47.463940 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbr9f\" (UniqueName: \"kubernetes.io/projected/0d8e811a-f565-4dbb-846a-1a80d4832d44-kube-api-access-xbr9f\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:47 crc kubenswrapper[4896]: I0126 16:00:47.464123 4896 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d8e811a-f565-4dbb-846a-1a80d4832d44-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:47 crc kubenswrapper[4896]: I0126 16:00:47.464139 4896 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0d8e811a-f565-4dbb-846a-1a80d4832d44-credential-keys\") on node \"crc\" DevicePath 
\"\"" Jan 26 16:00:47 crc kubenswrapper[4896]: I0126 16:00:47.486895 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d8e811a-f565-4dbb-846a-1a80d4832d44-config-data" (OuterVolumeSpecName: "config-data") pod "0d8e811a-f565-4dbb-846a-1a80d4832d44" (UID: "0d8e811a-f565-4dbb-846a-1a80d4832d44"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:47 crc kubenswrapper[4896]: I0126 16:00:47.487003 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d8e811a-f565-4dbb-846a-1a80d4832d44-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d8e811a-f565-4dbb-846a-1a80d4832d44" (UID: "0d8e811a-f565-4dbb-846a-1a80d4832d44"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:47 crc kubenswrapper[4896]: I0126 16:00:47.601952 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-bhpbc" event={"ID":"0d8e811a-f565-4dbb-846a-1a80d4832d44","Type":"ContainerDied","Data":"f1b796b46f7f5ff14101ad4faec05be1476780de28b4784758a0b2edc7e5aa7f"} Jan 26 16:00:47 crc kubenswrapper[4896]: I0126 16:00:47.602018 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1b796b46f7f5ff14101ad4faec05be1476780de28b4784758a0b2edc7e5aa7f" Jan 26 16:00:47 crc kubenswrapper[4896]: I0126 16:00:47.602125 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-bhpbc" Jan 26 16:00:47 crc kubenswrapper[4896]: I0126 16:00:47.611065 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d8e811a-f565-4dbb-846a-1a80d4832d44-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:47 crc kubenswrapper[4896]: I0126 16:00:47.611102 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d8e811a-f565-4dbb-846a-1a80d4832d44-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:47 crc kubenswrapper[4896]: I0126 16:00:47.623611 4896 generic.go:334] "Generic (PLEG): container finished" podID="d0eef199-8f69-4f92-9435-ff0fd74dd854" containerID="08423b45173db189e8eec3cd9fdbb559bfacaa9533cf24942af9735e6ea79cc8" exitCode=0 Jan 26 16:00:47 crc kubenswrapper[4896]: I0126 16:00:47.624041 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rv5xj" event={"ID":"d0eef199-8f69-4f92-9435-ff0fd74dd854","Type":"ContainerDied","Data":"08423b45173db189e8eec3cd9fdbb559bfacaa9533cf24942af9735e6ea79cc8"} Jan 26 16:00:47 crc kubenswrapper[4896]: I0126 16:00:47.780713 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-69569b65bc-qdnx9"] Jan 26 16:00:47 crc kubenswrapper[4896]: E0126 16:00:47.781755 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d8e811a-f565-4dbb-846a-1a80d4832d44" containerName="keystone-bootstrap" Jan 26 16:00:47 crc kubenswrapper[4896]: I0126 16:00:47.781784 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d8e811a-f565-4dbb-846a-1a80d4832d44" containerName="keystone-bootstrap" Jan 26 16:00:47 crc kubenswrapper[4896]: I0126 16:00:47.782068 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d8e811a-f565-4dbb-846a-1a80d4832d44" containerName="keystone-bootstrap" Jan 26 16:00:47 crc kubenswrapper[4896]: I0126 16:00:47.783059 4896 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-69569b65bc-qdnx9" Jan 26 16:00:47 crc kubenswrapper[4896]: I0126 16:00:47.826666 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 26 16:00:47 crc kubenswrapper[4896]: I0126 16:00:47.828504 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 16:00:47 crc kubenswrapper[4896]: I0126 16:00:47.828590 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 16:00:47 crc kubenswrapper[4896]: I0126 16:00:47.828824 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 16:00:47 crc kubenswrapper[4896]: I0126 16:00:47.831282 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5fkw6" Jan 26 16:00:47 crc kubenswrapper[4896]: I0126 16:00:47.831370 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 26 16:00:47 crc kubenswrapper[4896]: I0126 16:00:47.846051 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-69569b65bc-qdnx9"] Jan 26 16:00:47 crc kubenswrapper[4896]: I0126 16:00:47.902835 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 16:00:47 crc kubenswrapper[4896]: I0126 16:00:47.931890 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75348c37-fb63-49c8-95d3-b666eb3d1086-combined-ca-bundle\") pod \"keystone-69569b65bc-qdnx9\" (UID: \"75348c37-fb63-49c8-95d3-b666eb3d1086\") " pod="openstack/keystone-69569b65bc-qdnx9" Jan 26 16:00:47 crc kubenswrapper[4896]: I0126 16:00:47.931995 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75348c37-fb63-49c8-95d3-b666eb3d1086-config-data\") pod \"keystone-69569b65bc-qdnx9\" (UID: \"75348c37-fb63-49c8-95d3-b666eb3d1086\") " pod="openstack/keystone-69569b65bc-qdnx9" Jan 26 16:00:47 crc kubenswrapper[4896]: I0126 16:00:47.932040 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/75348c37-fb63-49c8-95d3-b666eb3d1086-fernet-keys\") pod \"keystone-69569b65bc-qdnx9\" (UID: \"75348c37-fb63-49c8-95d3-b666eb3d1086\") " pod="openstack/keystone-69569b65bc-qdnx9" Jan 26 16:00:47 crc kubenswrapper[4896]: I0126 16:00:47.932064 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75348c37-fb63-49c8-95d3-b666eb3d1086-internal-tls-certs\") pod \"keystone-69569b65bc-qdnx9\" (UID: \"75348c37-fb63-49c8-95d3-b666eb3d1086\") " pod="openstack/keystone-69569b65bc-qdnx9" Jan 26 16:00:47 crc kubenswrapper[4896]: I0126 16:00:47.932083 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75348c37-fb63-49c8-95d3-b666eb3d1086-public-tls-certs\") pod \"keystone-69569b65bc-qdnx9\" (UID: 
\"75348c37-fb63-49c8-95d3-b666eb3d1086\") " pod="openstack/keystone-69569b65bc-qdnx9" Jan 26 16:00:47 crc kubenswrapper[4896]: I0126 16:00:47.932198 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75348c37-fb63-49c8-95d3-b666eb3d1086-scripts\") pod \"keystone-69569b65bc-qdnx9\" (UID: \"75348c37-fb63-49c8-95d3-b666eb3d1086\") " pod="openstack/keystone-69569b65bc-qdnx9" Jan 26 16:00:47 crc kubenswrapper[4896]: I0126 16:00:47.932252 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpg5t\" (UniqueName: \"kubernetes.io/projected/75348c37-fb63-49c8-95d3-b666eb3d1086-kube-api-access-gpg5t\") pod \"keystone-69569b65bc-qdnx9\" (UID: \"75348c37-fb63-49c8-95d3-b666eb3d1086\") " pod="openstack/keystone-69569b65bc-qdnx9" Jan 26 16:00:47 crc kubenswrapper[4896]: I0126 16:00:47.932284 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/75348c37-fb63-49c8-95d3-b666eb3d1086-credential-keys\") pod \"keystone-69569b65bc-qdnx9\" (UID: \"75348c37-fb63-49c8-95d3-b666eb3d1086\") " pod="openstack/keystone-69569b65bc-qdnx9" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.034990 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6469d9d7-8c97-41fb-98c3-825fd3956ee7-httpd-run\") pod \"6469d9d7-8c97-41fb-98c3-825fd3956ee7\" (UID: \"6469d9d7-8c97-41fb-98c3-825fd3956ee7\") " Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.035087 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6469d9d7-8c97-41fb-98c3-825fd3956ee7-logs\") pod \"6469d9d7-8c97-41fb-98c3-825fd3956ee7\" (UID: \"6469d9d7-8c97-41fb-98c3-825fd3956ee7\") " Jan 26 16:00:48 crc 
kubenswrapper[4896]: I0126 16:00:48.035272 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea\") pod \"6469d9d7-8c97-41fb-98c3-825fd3956ee7\" (UID: \"6469d9d7-8c97-41fb-98c3-825fd3956ee7\") " Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.035309 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6469d9d7-8c97-41fb-98c3-825fd3956ee7-combined-ca-bundle\") pod \"6469d9d7-8c97-41fb-98c3-825fd3956ee7\" (UID: \"6469d9d7-8c97-41fb-98c3-825fd3956ee7\") " Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.035394 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28qrn\" (UniqueName: \"kubernetes.io/projected/6469d9d7-8c97-41fb-98c3-825fd3956ee7-kube-api-access-28qrn\") pod \"6469d9d7-8c97-41fb-98c3-825fd3956ee7\" (UID: \"6469d9d7-8c97-41fb-98c3-825fd3956ee7\") " Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.035414 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6469d9d7-8c97-41fb-98c3-825fd3956ee7-scripts\") pod \"6469d9d7-8c97-41fb-98c3-825fd3956ee7\" (UID: \"6469d9d7-8c97-41fb-98c3-825fd3956ee7\") " Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.035699 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6469d9d7-8c97-41fb-98c3-825fd3956ee7-config-data\") pod \"6469d9d7-8c97-41fb-98c3-825fd3956ee7\" (UID: \"6469d9d7-8c97-41fb-98c3-825fd3956ee7\") " Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.036028 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/75348c37-fb63-49c8-95d3-b666eb3d1086-credential-keys\") pod \"keystone-69569b65bc-qdnx9\" (UID: \"75348c37-fb63-49c8-95d3-b666eb3d1086\") " pod="openstack/keystone-69569b65bc-qdnx9" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.036187 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75348c37-fb63-49c8-95d3-b666eb3d1086-combined-ca-bundle\") pod \"keystone-69569b65bc-qdnx9\" (UID: \"75348c37-fb63-49c8-95d3-b666eb3d1086\") " pod="openstack/keystone-69569b65bc-qdnx9" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.036226 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75348c37-fb63-49c8-95d3-b666eb3d1086-config-data\") pod \"keystone-69569b65bc-qdnx9\" (UID: \"75348c37-fb63-49c8-95d3-b666eb3d1086\") " pod="openstack/keystone-69569b65bc-qdnx9" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.036267 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/75348c37-fb63-49c8-95d3-b666eb3d1086-fernet-keys\") pod \"keystone-69569b65bc-qdnx9\" (UID: \"75348c37-fb63-49c8-95d3-b666eb3d1086\") " pod="openstack/keystone-69569b65bc-qdnx9" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.036292 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75348c37-fb63-49c8-95d3-b666eb3d1086-internal-tls-certs\") pod \"keystone-69569b65bc-qdnx9\" (UID: \"75348c37-fb63-49c8-95d3-b666eb3d1086\") " pod="openstack/keystone-69569b65bc-qdnx9" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.036314 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75348c37-fb63-49c8-95d3-b666eb3d1086-public-tls-certs\") pod 
\"keystone-69569b65bc-qdnx9\" (UID: \"75348c37-fb63-49c8-95d3-b666eb3d1086\") " pod="openstack/keystone-69569b65bc-qdnx9" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.036423 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75348c37-fb63-49c8-95d3-b666eb3d1086-scripts\") pod \"keystone-69569b65bc-qdnx9\" (UID: \"75348c37-fb63-49c8-95d3-b666eb3d1086\") " pod="openstack/keystone-69569b65bc-qdnx9" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.036497 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpg5t\" (UniqueName: \"kubernetes.io/projected/75348c37-fb63-49c8-95d3-b666eb3d1086-kube-api-access-gpg5t\") pod \"keystone-69569b65bc-qdnx9\" (UID: \"75348c37-fb63-49c8-95d3-b666eb3d1086\") " pod="openstack/keystone-69569b65bc-qdnx9" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.037519 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6469d9d7-8c97-41fb-98c3-825fd3956ee7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6469d9d7-8c97-41fb-98c3-825fd3956ee7" (UID: "6469d9d7-8c97-41fb-98c3-825fd3956ee7"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.037824 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6469d9d7-8c97-41fb-98c3-825fd3956ee7-logs" (OuterVolumeSpecName: "logs") pod "6469d9d7-8c97-41fb-98c3-825fd3956ee7" (UID: "6469d9d7-8c97-41fb-98c3-825fd3956ee7"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.053447 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75348c37-fb63-49c8-95d3-b666eb3d1086-scripts\") pod \"keystone-69569b65bc-qdnx9\" (UID: \"75348c37-fb63-49c8-95d3-b666eb3d1086\") " pod="openstack/keystone-69569b65bc-qdnx9" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.053736 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75348c37-fb63-49c8-95d3-b666eb3d1086-public-tls-certs\") pod \"keystone-69569b65bc-qdnx9\" (UID: \"75348c37-fb63-49c8-95d3-b666eb3d1086\") " pod="openstack/keystone-69569b65bc-qdnx9" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.053784 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6469d9d7-8c97-41fb-98c3-825fd3956ee7-kube-api-access-28qrn" (OuterVolumeSpecName: "kube-api-access-28qrn") pod "6469d9d7-8c97-41fb-98c3-825fd3956ee7" (UID: "6469d9d7-8c97-41fb-98c3-825fd3956ee7"). InnerVolumeSpecName "kube-api-access-28qrn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.057258 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75348c37-fb63-49c8-95d3-b666eb3d1086-internal-tls-certs\") pod \"keystone-69569b65bc-qdnx9\" (UID: \"75348c37-fb63-49c8-95d3-b666eb3d1086\") " pod="openstack/keystone-69569b65bc-qdnx9" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.061444 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/75348c37-fb63-49c8-95d3-b666eb3d1086-credential-keys\") pod \"keystone-69569b65bc-qdnx9\" (UID: \"75348c37-fb63-49c8-95d3-b666eb3d1086\") " pod="openstack/keystone-69569b65bc-qdnx9" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.063354 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75348c37-fb63-49c8-95d3-b666eb3d1086-config-data\") pod \"keystone-69569b65bc-qdnx9\" (UID: \"75348c37-fb63-49c8-95d3-b666eb3d1086\") " pod="openstack/keystone-69569b65bc-qdnx9" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.070208 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6469d9d7-8c97-41fb-98c3-825fd3956ee7-scripts" (OuterVolumeSpecName: "scripts") pod "6469d9d7-8c97-41fb-98c3-825fd3956ee7" (UID: "6469d9d7-8c97-41fb-98c3-825fd3956ee7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.080656 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75348c37-fb63-49c8-95d3-b666eb3d1086-combined-ca-bundle\") pod \"keystone-69569b65bc-qdnx9\" (UID: \"75348c37-fb63-49c8-95d3-b666eb3d1086\") " pod="openstack/keystone-69569b65bc-qdnx9" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.081544 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/75348c37-fb63-49c8-95d3-b666eb3d1086-fernet-keys\") pod \"keystone-69569b65bc-qdnx9\" (UID: \"75348c37-fb63-49c8-95d3-b666eb3d1086\") " pod="openstack/keystone-69569b65bc-qdnx9" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.114756 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea" (OuterVolumeSpecName: "glance") pod "6469d9d7-8c97-41fb-98c3-825fd3956ee7" (UID: "6469d9d7-8c97-41fb-98c3-825fd3956ee7"). InnerVolumeSpecName "pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.134171 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpg5t\" (UniqueName: \"kubernetes.io/projected/75348c37-fb63-49c8-95d3-b666eb3d1086-kube-api-access-gpg5t\") pod \"keystone-69569b65bc-qdnx9\" (UID: \"75348c37-fb63-49c8-95d3-b666eb3d1086\") " pod="openstack/keystone-69569b65bc-qdnx9" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.139975 4896 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6469d9d7-8c97-41fb-98c3-825fd3956ee7-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.140002 4896 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6469d9d7-8c97-41fb-98c3-825fd3956ee7-logs\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.140039 4896 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea\") on node \"crc\" " Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.140050 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28qrn\" (UniqueName: \"kubernetes.io/projected/6469d9d7-8c97-41fb-98c3-825fd3956ee7-kube-api-access-28qrn\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.140060 4896 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6469d9d7-8c97-41fb-98c3-825fd3956ee7-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.171634 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-69569b65bc-qdnx9" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.197000 4896 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.198782 4896 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea") on node "crc" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.221313 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6469d9d7-8c97-41fb-98c3-825fd3956ee7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6469d9d7-8c97-41fb-98c3-825fd3956ee7" (UID: "6469d9d7-8c97-41fb-98c3-825fd3956ee7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.242220 4896 reconciler_common.go:293] "Volume detached for volume \"pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.242268 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6469d9d7-8c97-41fb-98c3-825fd3956ee7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.247500 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nwtqn"] Jan 26 16:00:48 crc kubenswrapper[4896]: W0126 16:00:48.297135 4896 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4450b82_6d66_4109_8fec_6b979256d032.slice/crio-e02be973a61d9a0f2b82f3495860278160d9a4682daf7eea8d3c2353489a0fe0 WatchSource:0}: Error finding container e02be973a61d9a0f2b82f3495860278160d9a4682daf7eea8d3c2353489a0fe0: Status 404 returned error can't find the container with id e02be973a61d9a0f2b82f3495860278160d9a4682daf7eea8d3c2353489a0fe0 Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.357936 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6469d9d7-8c97-41fb-98c3-825fd3956ee7-config-data" (OuterVolumeSpecName: "config-data") pod "6469d9d7-8c97-41fb-98c3-825fd3956ee7" (UID: "6469d9d7-8c97-41fb-98c3-825fd3956ee7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.451357 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6469d9d7-8c97-41fb-98c3-825fd3956ee7-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.516272 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-rd44b" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.648904 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nwtqn" event={"ID":"d4450b82-6d66-4109-8fec-6b979256d032","Type":"ContainerStarted","Data":"e02be973a61d9a0f2b82f3495860278160d9a4682daf7eea8d3c2353489a0fe0"} Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.654633 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b4a2eac-2950-4747-bc43-f287adafb4e2-logs\") pod \"9b4a2eac-2950-4747-bc43-f287adafb4e2\" (UID: \"9b4a2eac-2950-4747-bc43-f287adafb4e2\") " Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.655162 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b4a2eac-2950-4747-bc43-f287adafb4e2-logs" (OuterVolumeSpecName: "logs") pod "9b4a2eac-2950-4747-bc43-f287adafb4e2" (UID: "9b4a2eac-2950-4747-bc43-f287adafb4e2"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.654827 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b4a2eac-2950-4747-bc43-f287adafb4e2-scripts\") pod \"9b4a2eac-2950-4747-bc43-f287adafb4e2\" (UID: \"9b4a2eac-2950-4747-bc43-f287adafb4e2\") " Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.656351 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dn62w\" (UniqueName: \"kubernetes.io/projected/9b4a2eac-2950-4747-bc43-f287adafb4e2-kube-api-access-dn62w\") pod \"9b4a2eac-2950-4747-bc43-f287adafb4e2\" (UID: \"9b4a2eac-2950-4747-bc43-f287adafb4e2\") " Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.656653 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b4a2eac-2950-4747-bc43-f287adafb4e2-config-data\") pod \"9b4a2eac-2950-4747-bc43-f287adafb4e2\" (UID: \"9b4a2eac-2950-4747-bc43-f287adafb4e2\") " Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.656683 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b4a2eac-2950-4747-bc43-f287adafb4e2-combined-ca-bundle\") pod \"9b4a2eac-2950-4747-bc43-f287adafb4e2\" (UID: \"9b4a2eac-2950-4747-bc43-f287adafb4e2\") " Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.658678 4896 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b4a2eac-2950-4747-bc43-f287adafb4e2-logs\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.660670 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"6469d9d7-8c97-41fb-98c3-825fd3956ee7","Type":"ContainerDied","Data":"728c7b3a17975c1c014677fda3abad56169eb6718fdedcaa007abe4b273a8f31"} Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.660725 4896 scope.go:117] "RemoveContainer" containerID="daa548ad23ffda5fca96367b0d48b35b81dcb09412b48deddbfe1bbfa4123f83" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.660912 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.726226 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b4a2eac-2950-4747-bc43-f287adafb4e2-kube-api-access-dn62w" (OuterVolumeSpecName: "kube-api-access-dn62w") pod "9b4a2eac-2950-4747-bc43-f287adafb4e2" (UID: "9b4a2eac-2950-4747-bc43-f287adafb4e2"). InnerVolumeSpecName "kube-api-access-dn62w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.731478 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b4a2eac-2950-4747-bc43-f287adafb4e2-scripts" (OuterVolumeSpecName: "scripts") pod "9b4a2eac-2950-4747-bc43-f287adafb4e2" (UID: "9b4a2eac-2950-4747-bc43-f287adafb4e2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.738649 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"15b95f90-b75a-43ab-9c54-acd4c3e658ab","Type":"ContainerStarted","Data":"4c21d431ae5bff19ae531d6cd5f2536b413250de880de868b7fa91060f753fc5"} Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.762627 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dn62w\" (UniqueName: \"kubernetes.io/projected/9b4a2eac-2950-4747-bc43-f287adafb4e2-kube-api-access-dn62w\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.767805 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-rd44b" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.768005 4896 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b4a2eac-2950-4747-bc43-f287adafb4e2-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.768014 4896 scope.go:117] "RemoveContainer" containerID="35aa45c2d66b894b0bef3de9cd3a7db9024f870a2fdb376e9741d46a5b0bccc3" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.792260 4896 generic.go:334] "Generic (PLEG): container finished" podID="c356aa29-1407-47fb-80f0-a7f1b4a58919" containerID="ba08707ddb693337e21a4104a25eb7861d014e9eac171131f75873a407d3cecd" exitCode=0 Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.793808 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.793848 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rd44b" event={"ID":"9b4a2eac-2950-4747-bc43-f287adafb4e2","Type":"ContainerDied","Data":"1f19e71ee7a05bf84cda1c325e7400495cfe2c7d9411e1a2b172b1eaea11423e"} Jan 26 
16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.793874 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f19e71ee7a05bf84cda1c325e7400495cfe2c7d9411e1a2b172b1eaea11423e" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.793891 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wxhjt" event={"ID":"c356aa29-1407-47fb-80f0-a7f1b4a58919","Type":"ContainerDied","Data":"ba08707ddb693337e21a4104a25eb7861d014e9eac171131f75873a407d3cecd"} Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.794679 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-77d5697764-wvv6n"] Jan 26 16:00:48 crc kubenswrapper[4896]: E0126 16:00:48.795229 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6469d9d7-8c97-41fb-98c3-825fd3956ee7" containerName="glance-log" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.795250 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="6469d9d7-8c97-41fb-98c3-825fd3956ee7" containerName="glance-log" Jan 26 16:00:48 crc kubenswrapper[4896]: E0126 16:00:48.795271 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b4a2eac-2950-4747-bc43-f287adafb4e2" containerName="placement-db-sync" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.795279 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b4a2eac-2950-4747-bc43-f287adafb4e2" containerName="placement-db-sync" Jan 26 16:00:48 crc kubenswrapper[4896]: E0126 16:00:48.795291 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6469d9d7-8c97-41fb-98c3-825fd3956ee7" containerName="glance-httpd" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.795306 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="6469d9d7-8c97-41fb-98c3-825fd3956ee7" containerName="glance-httpd" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.795537 4896 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9b4a2eac-2950-4747-bc43-f287adafb4e2" containerName="placement-db-sync" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.795560 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="6469d9d7-8c97-41fb-98c3-825fd3956ee7" containerName="glance-log" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.795590 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="6469d9d7-8c97-41fb-98c3-825fd3956ee7" containerName="glance-httpd" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.801378 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-77d5697764-wvv6n" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.811970 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.812083 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f5b95478f-8qxzd" event={"ID":"bdbdc1e0-1624-4300-91bb-1bfe556567c6","Type":"ContainerStarted","Data":"f05b41cc8dd83a60275733bd2dc2acd106d94635dfff7728a8604bf74efa0d4c"} Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.815024 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.841762 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.885357 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-77d5697764-wvv6n"] Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.961453 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.975129 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.989406 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.989464 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6133f02e-6901-41cd-ac62-9450747a6d98-config-data\") pod \"placement-77d5697764-wvv6n\" (UID: \"6133f02e-6901-41cd-ac62-9450747a6d98\") " pod="openstack/placement-77d5697764-wvv6n" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.989645 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6133f02e-6901-41cd-ac62-9450747a6d98-internal-tls-certs\") pod \"placement-77d5697764-wvv6n\" (UID: \"6133f02e-6901-41cd-ac62-9450747a6d98\") " pod="openstack/placement-77d5697764-wvv6n" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.989779 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6133f02e-6901-41cd-ac62-9450747a6d98-public-tls-certs\") pod \"placement-77d5697764-wvv6n\" (UID: \"6133f02e-6901-41cd-ac62-9450747a6d98\") " pod="openstack/placement-77d5697764-wvv6n" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.989927 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6133f02e-6901-41cd-ac62-9450747a6d98-scripts\") pod \"placement-77d5697764-wvv6n\" (UID: \"6133f02e-6901-41cd-ac62-9450747a6d98\") " pod="openstack/placement-77d5697764-wvv6n" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.989978 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-8zp5t\" (UniqueName: \"kubernetes.io/projected/6133f02e-6901-41cd-ac62-9450747a6d98-kube-api-access-8zp5t\") pod \"placement-77d5697764-wvv6n\" (UID: \"6133f02e-6901-41cd-ac62-9450747a6d98\") " pod="openstack/placement-77d5697764-wvv6n" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.990030 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.990058 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6133f02e-6901-41cd-ac62-9450747a6d98-logs\") pod \"placement-77d5697764-wvv6n\" (UID: \"6133f02e-6901-41cd-ac62-9450747a6d98\") " pod="openstack/placement-77d5697764-wvv6n" Jan 26 16:00:48 crc kubenswrapper[4896]: I0126 16:00:48.990219 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6133f02e-6901-41cd-ac62-9450747a6d98-combined-ca-bundle\") pod \"placement-77d5697764-wvv6n\" (UID: \"6133f02e-6901-41cd-ac62-9450747a6d98\") " pod="openstack/placement-77d5697764-wvv6n" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.008099 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.024335 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=70.024315363 podStartE2EDuration="1m10.024315363s" podCreationTimestamp="2026-01-26 15:59:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:00:48.819838663 +0000 UTC m=+1606.601719076" watchObservedRunningTime="2026-01-26 16:00:49.024315363 +0000 UTC m=+1606.806195756" Jan 26 16:00:49 crc 
kubenswrapper[4896]: I0126 16:00:49.110067 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt99g\" (UniqueName: \"kubernetes.io/projected/de7d907a-484d-42e5-88d9-61b398fe83a5-kube-api-access-wt99g\") pod \"glance-default-internal-api-0\" (UID: \"de7d907a-484d-42e5-88d9-61b398fe83a5\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.110168 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6133f02e-6901-41cd-ac62-9450747a6d98-internal-tls-certs\") pod \"placement-77d5697764-wvv6n\" (UID: \"6133f02e-6901-41cd-ac62-9450747a6d98\") " pod="openstack/placement-77d5697764-wvv6n" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.110192 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de7d907a-484d-42e5-88d9-61b398fe83a5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"de7d907a-484d-42e5-88d9-61b398fe83a5\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.110231 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6133f02e-6901-41cd-ac62-9450747a6d98-public-tls-certs\") pod \"placement-77d5697764-wvv6n\" (UID: \"6133f02e-6901-41cd-ac62-9450747a6d98\") " pod="openstack/placement-77d5697764-wvv6n" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.110276 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de7d907a-484d-42e5-88d9-61b398fe83a5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"de7d907a-484d-42e5-88d9-61b398fe83a5\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:00:49 crc 
kubenswrapper[4896]: I0126 16:00:49.110327 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/de7d907a-484d-42e5-88d9-61b398fe83a5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"de7d907a-484d-42e5-88d9-61b398fe83a5\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.110412 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6133f02e-6901-41cd-ac62-9450747a6d98-scripts\") pod \"placement-77d5697764-wvv6n\" (UID: \"6133f02e-6901-41cd-ac62-9450747a6d98\") " pod="openstack/placement-77d5697764-wvv6n" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.110451 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zp5t\" (UniqueName: \"kubernetes.io/projected/6133f02e-6901-41cd-ac62-9450747a6d98-kube-api-access-8zp5t\") pod \"placement-77d5697764-wvv6n\" (UID: \"6133f02e-6901-41cd-ac62-9450747a6d98\") " pod="openstack/placement-77d5697764-wvv6n" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.110506 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/de7d907a-484d-42e5-88d9-61b398fe83a5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"de7d907a-484d-42e5-88d9-61b398fe83a5\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.110547 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6133f02e-6901-41cd-ac62-9450747a6d98-logs\") pod \"placement-77d5697764-wvv6n\" (UID: \"6133f02e-6901-41cd-ac62-9450747a6d98\") " pod="openstack/placement-77d5697764-wvv6n" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.110615 4896 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de7d907a-484d-42e5-88d9-61b398fe83a5-logs\") pod \"glance-default-internal-api-0\" (UID: \"de7d907a-484d-42e5-88d9-61b398fe83a5\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.110684 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de7d907a-484d-42e5-88d9-61b398fe83a5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"de7d907a-484d-42e5-88d9-61b398fe83a5\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.110709 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6133f02e-6901-41cd-ac62-9450747a6d98-combined-ca-bundle\") pod \"placement-77d5697764-wvv6n\" (UID: \"6133f02e-6901-41cd-ac62-9450747a6d98\") " pod="openstack/placement-77d5697764-wvv6n" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.110836 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6133f02e-6901-41cd-ac62-9450747a6d98-config-data\") pod \"placement-77d5697764-wvv6n\" (UID: \"6133f02e-6901-41cd-ac62-9450747a6d98\") " pod="openstack/placement-77d5697764-wvv6n" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.110905 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea\") pod \"glance-default-internal-api-0\" (UID: \"de7d907a-484d-42e5-88d9-61b398fe83a5\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:00:49 crc kubenswrapper[4896]: 
I0126 16:00:49.112137 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6133f02e-6901-41cd-ac62-9450747a6d98-logs\") pod \"placement-77d5697764-wvv6n\" (UID: \"6133f02e-6901-41cd-ac62-9450747a6d98\") " pod="openstack/placement-77d5697764-wvv6n" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.123996 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6133f02e-6901-41cd-ac62-9450747a6d98-scripts\") pod \"placement-77d5697764-wvv6n\" (UID: \"6133f02e-6901-41cd-ac62-9450747a6d98\") " pod="openstack/placement-77d5697764-wvv6n" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.125000 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6133f02e-6901-41cd-ac62-9450747a6d98-config-data\") pod \"placement-77d5697764-wvv6n\" (UID: \"6133f02e-6901-41cd-ac62-9450747a6d98\") " pod="openstack/placement-77d5697764-wvv6n" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.166516 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6133f02e-6901-41cd-ac62-9450747a6d98-combined-ca-bundle\") pod \"placement-77d5697764-wvv6n\" (UID: \"6133f02e-6901-41cd-ac62-9450747a6d98\") " pod="openstack/placement-77d5697764-wvv6n" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.167258 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6133f02e-6901-41cd-ac62-9450747a6d98-public-tls-certs\") pod \"placement-77d5697764-wvv6n\" (UID: \"6133f02e-6901-41cd-ac62-9450747a6d98\") " pod="openstack/placement-77d5697764-wvv6n" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.169362 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6133f02e-6901-41cd-ac62-9450747a6d98-internal-tls-certs\") pod \"placement-77d5697764-wvv6n\" (UID: \"6133f02e-6901-41cd-ac62-9450747a6d98\") " pod="openstack/placement-77d5697764-wvv6n" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.226538 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de7d907a-484d-42e5-88d9-61b398fe83a5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"de7d907a-484d-42e5-88d9-61b398fe83a5\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.226643 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/de7d907a-484d-42e5-88d9-61b398fe83a5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"de7d907a-484d-42e5-88d9-61b398fe83a5\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.226744 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/de7d907a-484d-42e5-88d9-61b398fe83a5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"de7d907a-484d-42e5-88d9-61b398fe83a5\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.226798 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de7d907a-484d-42e5-88d9-61b398fe83a5-logs\") pod \"glance-default-internal-api-0\" (UID: \"de7d907a-484d-42e5-88d9-61b398fe83a5\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.226854 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de7d907a-484d-42e5-88d9-61b398fe83a5-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"de7d907a-484d-42e5-88d9-61b398fe83a5\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.226951 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea\") pod \"glance-default-internal-api-0\" (UID: \"de7d907a-484d-42e5-88d9-61b398fe83a5\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.227009 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt99g\" (UniqueName: \"kubernetes.io/projected/de7d907a-484d-42e5-88d9-61b398fe83a5-kube-api-access-wt99g\") pod \"glance-default-internal-api-0\" (UID: \"de7d907a-484d-42e5-88d9-61b398fe83a5\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.227049 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de7d907a-484d-42e5-88d9-61b398fe83a5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"de7d907a-484d-42e5-88d9-61b398fe83a5\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.229152 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de7d907a-484d-42e5-88d9-61b398fe83a5-logs\") pod \"glance-default-internal-api-0\" (UID: \"de7d907a-484d-42e5-88d9-61b398fe83a5\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.231463 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zp5t\" (UniqueName: \"kubernetes.io/projected/6133f02e-6901-41cd-ac62-9450747a6d98-kube-api-access-8zp5t\") pod 
\"placement-77d5697764-wvv6n\" (UID: \"6133f02e-6901-41cd-ac62-9450747a6d98\") " pod="openstack/placement-77d5697764-wvv6n" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.267183 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-69569b65bc-qdnx9"] Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.281755 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/de7d907a-484d-42e5-88d9-61b398fe83a5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"de7d907a-484d-42e5-88d9-61b398fe83a5\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.287414 4896 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.287460 4896 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea\") pod \"glance-default-internal-api-0\" (UID: \"de7d907a-484d-42e5-88d9-61b398fe83a5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ca73410178097169f340b5a8c67a8781e7a4252415873019f27420073d85ffa1/globalmount\"" pod="openstack/glance-default-internal-api-0" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.346670 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-jb4fq" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.349939 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de7d907a-484d-42e5-88d9-61b398fe83a5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"de7d907a-484d-42e5-88d9-61b398fe83a5\") " 
pod="openstack/glance-default-internal-api-0" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.350334 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de7d907a-484d-42e5-88d9-61b398fe83a5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"de7d907a-484d-42e5-88d9-61b398fe83a5\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.353496 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/de7d907a-484d-42e5-88d9-61b398fe83a5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"de7d907a-484d-42e5-88d9-61b398fe83a5\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.381371 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt99g\" (UniqueName: \"kubernetes.io/projected/de7d907a-484d-42e5-88d9-61b398fe83a5-kube-api-access-wt99g\") pod \"glance-default-internal-api-0\" (UID: \"de7d907a-484d-42e5-88d9-61b398fe83a5\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.389542 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de7d907a-484d-42e5-88d9-61b398fe83a5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"de7d907a-484d-42e5-88d9-61b398fe83a5\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.450242 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-77d5697764-wvv6n" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.569347 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b4a2eac-2950-4747-bc43-f287adafb4e2-config-data" (OuterVolumeSpecName: "config-data") pod "9b4a2eac-2950-4747-bc43-f287adafb4e2" (UID: "9b4a2eac-2950-4747-bc43-f287adafb4e2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.587249 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea\") pod \"glance-default-internal-api-0\" (UID: \"de7d907a-484d-42e5-88d9-61b398fe83a5\") " pod="openstack/glance-default-internal-api-0" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.588037 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-z5rts"] Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.588308 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58dd9ff6bc-z5rts" podUID="8f8cfa23-5804-4f61-815b-287e23958ff9" containerName="dnsmasq-dns" containerID="cri-o://6229ba8dd1754a451776ad34befa5547df990edc017daacb5d87369e1b1f31fa" gracePeriod=10 Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.627281 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.627411 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b4a2eac-2950-4747-bc43-f287adafb4e2-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.707653 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b4a2eac-2950-4747-bc43-f287adafb4e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b4a2eac-2950-4747-bc43-f287adafb4e2" (UID: "9b4a2eac-2950-4747-bc43-f287adafb4e2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.729397 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b4a2eac-2950-4747-bc43-f287adafb4e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.955915 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e79e28c8-45cf-4fc6-a99e-59b024131415","Type":"ContainerStarted","Data":"ff055f2de20d26f9dd6a6bb904ecd7f239ecdbccdfe8232ff499b9b94da38b83"} Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.956597 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e79e28c8-45cf-4fc6-a99e-59b024131415" containerName="glance-log" containerID="cri-o://b23c59ab68fc957f3fe183a19e9c191a646596732cf0a1d6e8de4083cc7a4c8a" gracePeriod=30 Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.957237 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e79e28c8-45cf-4fc6-a99e-59b024131415" containerName="glance-httpd" 
containerID="cri-o://ff055f2de20d26f9dd6a6bb904ecd7f239ecdbccdfe8232ff499b9b94da38b83" gracePeriod=30 Jan 26 16:00:49 crc kubenswrapper[4896]: I0126 16:00:49.990345 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41bebe25-46fb-4c06-9977-e39a32407c42","Type":"ContainerStarted","Data":"4d4af3a94d447111c7c6c7fc45bdc5fddb1660c1ccb8ad22497fe272ff4714a3"} Jan 26 16:00:50 crc kubenswrapper[4896]: I0126 16:00:50.000260 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=19.000239267 podStartE2EDuration="19.000239267s" podCreationTimestamp="2026-01-26 16:00:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:00:49.994962688 +0000 UTC m=+1607.776843101" watchObservedRunningTime="2026-01-26 16:00:50.000239267 +0000 UTC m=+1607.782119660" Jan 26 16:00:50 crc kubenswrapper[4896]: I0126 16:00:50.030783 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-69569b65bc-qdnx9" event={"ID":"75348c37-fb63-49c8-95d3-b666eb3d1086","Type":"ContainerStarted","Data":"7cd4a4271efd601ea9df9649c8b96ec3e8d91ed274230c6619c9cf98b7938499"} Jan 26 16:00:50 crc kubenswrapper[4896]: I0126 16:00:50.098112 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-7pf4j" event={"ID":"c8f4e140-bab4-479c-a97b-4a5aa49a47d3","Type":"ContainerStarted","Data":"2062042a5e34295a16e9261ee5602c003cb198f6c68330944d0b5b1e061b11f9"} Jan 26 16:00:50 crc kubenswrapper[4896]: I0126 16:00:50.174215 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-7pf4j" podStartSLOduration=5.01974893 podStartE2EDuration="1m4.174191414s" podCreationTimestamp="2026-01-26 15:59:46 +0000 UTC" firstStartedPulling="2026-01-26 15:59:48.200781655 +0000 UTC m=+1545.982662048" lastFinishedPulling="2026-01-26 
16:00:47.355224139 +0000 UTC m=+1605.137104532" observedRunningTime="2026-01-26 16:00:50.156427655 +0000 UTC m=+1607.938308058" watchObservedRunningTime="2026-01-26 16:00:50.174191414 +0000 UTC m=+1607.956071807" Jan 26 16:00:50 crc kubenswrapper[4896]: I0126 16:00:50.221408 4896 generic.go:334] "Generic (PLEG): container finished" podID="8f8cfa23-5804-4f61-815b-287e23958ff9" containerID="6229ba8dd1754a451776ad34befa5547df990edc017daacb5d87369e1b1f31fa" exitCode=0 Jan 26 16:00:50 crc kubenswrapper[4896]: I0126 16:00:50.221552 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-z5rts" event={"ID":"8f8cfa23-5804-4f61-815b-287e23958ff9","Type":"ContainerDied","Data":"6229ba8dd1754a451776ad34befa5547df990edc017daacb5d87369e1b1f31fa"} Jan 26 16:00:50 crc kubenswrapper[4896]: I0126 16:00:50.235687 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f5b95478f-8qxzd" event={"ID":"bdbdc1e0-1624-4300-91bb-1bfe556567c6","Type":"ContainerStarted","Data":"4e264dbcad01037f9351410055d4d3e148e3a2943c48af41afb8d55c4686ac85"} Jan 26 16:00:50 crc kubenswrapper[4896]: I0126 16:00:50.241724 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5f5b95478f-8qxzd" Jan 26 16:00:50 crc kubenswrapper[4896]: I0126 16:00:50.244221 4896 generic.go:334] "Generic (PLEG): container finished" podID="d4450b82-6d66-4109-8fec-6b979256d032" containerID="55c633f65061a1c9661f12ac7b71f3b143ceb77e6cfd314e88254ba4e1b3adf9" exitCode=0 Jan 26 16:00:50 crc kubenswrapper[4896]: I0126 16:00:50.246354 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nwtqn" event={"ID":"d4450b82-6d66-4109-8fec-6b979256d032","Type":"ContainerDied","Data":"55c633f65061a1c9661f12ac7b71f3b143ceb77e6cfd314e88254ba4e1b3adf9"} Jan 26 16:00:50 crc kubenswrapper[4896]: I0126 16:00:50.284303 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/prometheus-metric-storage-0" Jan 26 16:00:50 crc kubenswrapper[4896]: I0126 16:00:50.304969 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5f5b95478f-8qxzd" podStartSLOduration=13.304943363 podStartE2EDuration="13.304943363s" podCreationTimestamp="2026-01-26 16:00:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:00:50.291932392 +0000 UTC m=+1608.073812785" watchObservedRunningTime="2026-01-26 16:00:50.304943363 +0000 UTC m=+1608.086823756" Jan 26 16:00:50 crc kubenswrapper[4896]: I0126 16:00:50.617431 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-rv5xj" Jan 26 16:00:50 crc kubenswrapper[4896]: I0126 16:00:50.646485 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d0eef199-8f69-4f92-9435-ff0fd74dd854-db-sync-config-data\") pod \"d0eef199-8f69-4f92-9435-ff0fd74dd854\" (UID: \"d0eef199-8f69-4f92-9435-ff0fd74dd854\") " Jan 26 16:00:50 crc kubenswrapper[4896]: I0126 16:00:50.646720 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0eef199-8f69-4f92-9435-ff0fd74dd854-combined-ca-bundle\") pod \"d0eef199-8f69-4f92-9435-ff0fd74dd854\" (UID: \"d0eef199-8f69-4f92-9435-ff0fd74dd854\") " Jan 26 16:00:50 crc kubenswrapper[4896]: I0126 16:00:50.646802 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5khhf\" (UniqueName: \"kubernetes.io/projected/d0eef199-8f69-4f92-9435-ff0fd74dd854-kube-api-access-5khhf\") pod \"d0eef199-8f69-4f92-9435-ff0fd74dd854\" (UID: \"d0eef199-8f69-4f92-9435-ff0fd74dd854\") " Jan 26 16:00:50 crc kubenswrapper[4896]: I0126 16:00:50.657139 4896 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0eef199-8f69-4f92-9435-ff0fd74dd854-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d0eef199-8f69-4f92-9435-ff0fd74dd854" (UID: "d0eef199-8f69-4f92-9435-ff0fd74dd854"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:50 crc kubenswrapper[4896]: I0126 16:00:50.657370 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0eef199-8f69-4f92-9435-ff0fd74dd854-kube-api-access-5khhf" (OuterVolumeSpecName: "kube-api-access-5khhf") pod "d0eef199-8f69-4f92-9435-ff0fd74dd854" (UID: "d0eef199-8f69-4f92-9435-ff0fd74dd854"). InnerVolumeSpecName "kube-api-access-5khhf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:00:50 crc kubenswrapper[4896]: I0126 16:00:50.716624 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0eef199-8f69-4f92-9435-ff0fd74dd854-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d0eef199-8f69-4f92-9435-ff0fd74dd854" (UID: "d0eef199-8f69-4f92-9435-ff0fd74dd854"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:50 crc kubenswrapper[4896]: I0126 16:00:50.750674 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0eef199-8f69-4f92-9435-ff0fd74dd854-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:50 crc kubenswrapper[4896]: I0126 16:00:50.750715 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5khhf\" (UniqueName: \"kubernetes.io/projected/d0eef199-8f69-4f92-9435-ff0fd74dd854-kube-api-access-5khhf\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:50 crc kubenswrapper[4896]: I0126 16:00:50.750729 4896 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d0eef199-8f69-4f92-9435-ff0fd74dd854-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:50 crc kubenswrapper[4896]: I0126 16:00:50.786300 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6469d9d7-8c97-41fb-98c3-825fd3956ee7" path="/var/lib/kubelet/pods/6469d9d7-8c97-41fb-98c3-825fd3956ee7/volumes" Jan 26 16:00:50 crc kubenswrapper[4896]: I0126 16:00:50.795777 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-77d5697764-wvv6n"] Jan 26 16:00:50 crc kubenswrapper[4896]: I0126 16:00:50.946965 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-z5rts" Jan 26 16:00:50 crc kubenswrapper[4896]: I0126 16:00:50.955258 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f8cfa23-5804-4f61-815b-287e23958ff9-config\") pod \"8f8cfa23-5804-4f61-815b-287e23958ff9\" (UID: \"8f8cfa23-5804-4f61-815b-287e23958ff9\") " Jan 26 16:00:50 crc kubenswrapper[4896]: I0126 16:00:50.955376 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f8cfa23-5804-4f61-815b-287e23958ff9-dns-svc\") pod \"8f8cfa23-5804-4f61-815b-287e23958ff9\" (UID: \"8f8cfa23-5804-4f61-815b-287e23958ff9\") " Jan 26 16:00:50 crc kubenswrapper[4896]: I0126 16:00:50.955410 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8f8cfa23-5804-4f61-815b-287e23958ff9-ovsdbserver-nb\") pod \"8f8cfa23-5804-4f61-815b-287e23958ff9\" (UID: \"8f8cfa23-5804-4f61-815b-287e23958ff9\") " Jan 26 16:00:50 crc kubenswrapper[4896]: I0126 16:00:50.955475 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8f8cfa23-5804-4f61-815b-287e23958ff9-dns-swift-storage-0\") pod \"8f8cfa23-5804-4f61-815b-287e23958ff9\" (UID: \"8f8cfa23-5804-4f61-815b-287e23958ff9\") " Jan 26 16:00:50 crc kubenswrapper[4896]: I0126 16:00:50.955504 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nx7x9\" (UniqueName: \"kubernetes.io/projected/8f8cfa23-5804-4f61-815b-287e23958ff9-kube-api-access-nx7x9\") pod \"8f8cfa23-5804-4f61-815b-287e23958ff9\" (UID: \"8f8cfa23-5804-4f61-815b-287e23958ff9\") " Jan 26 16:00:50 crc kubenswrapper[4896]: I0126 16:00:50.955632 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/8f8cfa23-5804-4f61-815b-287e23958ff9-ovsdbserver-sb\") pod \"8f8cfa23-5804-4f61-815b-287e23958ff9\" (UID: \"8f8cfa23-5804-4f61-815b-287e23958ff9\") " Jan 26 16:00:50 crc kubenswrapper[4896]: I0126 16:00:50.969237 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f8cfa23-5804-4f61-815b-287e23958ff9-kube-api-access-nx7x9" (OuterVolumeSpecName: "kube-api-access-nx7x9") pod "8f8cfa23-5804-4f61-815b-287e23958ff9" (UID: "8f8cfa23-5804-4f61-815b-287e23958ff9"). InnerVolumeSpecName "kube-api-access-nx7x9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.066109 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nx7x9\" (UniqueName: \"kubernetes.io/projected/8f8cfa23-5804-4f61-815b-287e23958ff9-kube-api-access-nx7x9\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.157535 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f8cfa23-5804-4f61-815b-287e23958ff9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8f8cfa23-5804-4f61-815b-287e23958ff9" (UID: "8f8cfa23-5804-4f61-815b-287e23958ff9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.157869 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f8cfa23-5804-4f61-815b-287e23958ff9-config" (OuterVolumeSpecName: "config") pod "8f8cfa23-5804-4f61-815b-287e23958ff9" (UID: "8f8cfa23-5804-4f61-815b-287e23958ff9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.167569 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f8cfa23-5804-4f61-815b-287e23958ff9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8f8cfa23-5804-4f61-815b-287e23958ff9" (UID: "8f8cfa23-5804-4f61-815b-287e23958ff9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.169132 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8f8cfa23-5804-4f61-815b-287e23958ff9-ovsdbserver-sb\") pod \"8f8cfa23-5804-4f61-815b-287e23958ff9\" (UID: \"8f8cfa23-5804-4f61-815b-287e23958ff9\") " Jan 26 16:00:51 crc kubenswrapper[4896]: W0126 16:00:51.170061 4896 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/8f8cfa23-5804-4f61-815b-287e23958ff9/volumes/kubernetes.io~configmap/ovsdbserver-sb Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.170089 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f8cfa23-5804-4f61-815b-287e23958ff9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8f8cfa23-5804-4f61-815b-287e23958ff9" (UID: "8f8cfa23-5804-4f61-815b-287e23958ff9"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.170460 4896 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8f8cfa23-5804-4f61-815b-287e23958ff9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.170490 4896 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f8cfa23-5804-4f61-815b-287e23958ff9-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.170506 4896 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8f8cfa23-5804-4f61-815b-287e23958ff9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.199757 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f8cfa23-5804-4f61-815b-287e23958ff9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8f8cfa23-5804-4f61-815b-287e23958ff9" (UID: "8f8cfa23-5804-4f61-815b-287e23958ff9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.202383 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f8cfa23-5804-4f61-815b-287e23958ff9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8f8cfa23-5804-4f61-815b-287e23958ff9" (UID: "8f8cfa23-5804-4f61-815b-287e23958ff9"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.221355 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.276061 4896 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f8cfa23-5804-4f61-815b-287e23958ff9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.276094 4896 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8f8cfa23-5804-4f61-815b-287e23958ff9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.314009 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-69569b65bc-qdnx9" event={"ID":"75348c37-fb63-49c8-95d3-b666eb3d1086","Type":"ContainerStarted","Data":"2eba08e8572ad264a01bc74443753b290134fa6a56a0f4e97b281d8c8f2a16f4"} Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.314119 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-69569b65bc-qdnx9" Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.319801 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"de7d907a-484d-42e5-88d9-61b398fe83a5","Type":"ContainerStarted","Data":"656d0795cbf34ec1608f2439afa8694a9964f65299cbf8ab4f9587163e6cbdd5"} Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.376238 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-69569b65bc-qdnx9" podStartSLOduration=4.376213134 podStartE2EDuration="4.376213134s" podCreationTimestamp="2026-01-26 16:00:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:00:51.374746377 
+0000 UTC m=+1609.156626780" watchObservedRunningTime="2026-01-26 16:00:51.376213134 +0000 UTC m=+1609.158093527" Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.378654 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rv5xj" event={"ID":"d0eef199-8f69-4f92-9435-ff0fd74dd854","Type":"ContainerDied","Data":"83e91d164b07f80f9ebbffb67a2acea7eb76df60793d1b8e2638c2747f7e6366"} Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.378702 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83e91d164b07f80f9ebbffb67a2acea7eb76df60793d1b8e2638c2747f7e6366" Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.383479 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-rv5xj" Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.408623 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-z5rts" event={"ID":"8f8cfa23-5804-4f61-815b-287e23958ff9","Type":"ContainerDied","Data":"49b298b6a8226c6ee810934454585deee549f098966a2347c7b6e1367e2baa6e"} Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.408695 4896 scope.go:117] "RemoveContainer" containerID="6229ba8dd1754a451776ad34befa5547df990edc017daacb5d87369e1b1f31fa" Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.409010 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-z5rts" Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.428672 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wxhjt" event={"ID":"c356aa29-1407-47fb-80f0-a7f1b4a58919","Type":"ContainerStarted","Data":"6e618c9af4f492d6dbf1fbd5aa14ddc702a7c025a28f0d5dae6079e8478b3353"} Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.448263 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-l5784" event={"ID":"590e8b81-a793-4143-9b0e-f2afb348dd91","Type":"ContainerStarted","Data":"aeb3ca13e42994f58c03408cbfac03b951d5ff3efa906e5ea149f45402f5efd8"} Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.452938 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-77d5697764-wvv6n" event={"ID":"6133f02e-6901-41cd-ac62-9450747a6d98","Type":"ContainerStarted","Data":"68d5e427fa013f38dc0083f9177871d8f7b2f456f8fdae6fec0db5a00cc12c3b"} Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.475315 4896 generic.go:334] "Generic (PLEG): container finished" podID="e79e28c8-45cf-4fc6-a99e-59b024131415" containerID="ff055f2de20d26f9dd6a6bb904ecd7f239ecdbccdfe8232ff499b9b94da38b83" exitCode=143 Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.475353 4896 generic.go:334] "Generic (PLEG): container finished" podID="e79e28c8-45cf-4fc6-a99e-59b024131415" containerID="b23c59ab68fc957f3fe183a19e9c191a646596732cf0a1d6e8de4083cc7a4c8a" exitCode=143 Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.476385 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e79e28c8-45cf-4fc6-a99e-59b024131415","Type":"ContainerDied","Data":"ff055f2de20d26f9dd6a6bb904ecd7f239ecdbccdfe8232ff499b9b94da38b83"} Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.476415 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"e79e28c8-45cf-4fc6-a99e-59b024131415","Type":"ContainerDied","Data":"b23c59ab68fc957f3fe183a19e9c191a646596732cf0a1d6e8de4083cc7a4c8a"} Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.504019 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wxhjt" podStartSLOduration=6.097873125 podStartE2EDuration="19.503989058s" podCreationTimestamp="2026-01-26 16:00:32 +0000 UTC" firstStartedPulling="2026-01-26 16:00:36.109092485 +0000 UTC m=+1593.890972878" lastFinishedPulling="2026-01-26 16:00:49.515208418 +0000 UTC m=+1607.297088811" observedRunningTime="2026-01-26 16:00:51.468317407 +0000 UTC m=+1609.250197810" watchObservedRunningTime="2026-01-26 16:00:51.503989058 +0000 UTC m=+1609.285869461" Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.534436 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-l5784" podStartSLOduration=6.809679609 podStartE2EDuration="1m5.53441242s" podCreationTimestamp="2026-01-26 15:59:46 +0000 UTC" firstStartedPulling="2026-01-26 15:59:49.14234419 +0000 UTC m=+1546.924224583" lastFinishedPulling="2026-01-26 16:00:47.867077001 +0000 UTC m=+1605.648957394" observedRunningTime="2026-01-26 16:00:51.501767003 +0000 UTC m=+1609.283647406" watchObservedRunningTime="2026-01-26 16:00:51.53441242 +0000 UTC m=+1609.316292813" Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.563139 4896 scope.go:117] "RemoveContainer" containerID="c0b0a61f86a45917521f6171b4209be6939dd5cefa4287de07b35fbbd2fe8686" Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.579853 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-z5rts"] Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.617949 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-z5rts"] Jan 26 16:00:51 crc kubenswrapper[4896]: 
I0126 16:00:51.882535 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-bb7d4b5f9-6jmz5"] Jan 26 16:00:51 crc kubenswrapper[4896]: E0126 16:00:51.883645 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f8cfa23-5804-4f61-815b-287e23958ff9" containerName="init" Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.883660 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f8cfa23-5804-4f61-815b-287e23958ff9" containerName="init" Jan 26 16:00:51 crc kubenswrapper[4896]: E0126 16:00:51.883682 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f8cfa23-5804-4f61-815b-287e23958ff9" containerName="dnsmasq-dns" Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.883688 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f8cfa23-5804-4f61-815b-287e23958ff9" containerName="dnsmasq-dns" Jan 26 16:00:51 crc kubenswrapper[4896]: E0126 16:00:51.883703 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0eef199-8f69-4f92-9435-ff0fd74dd854" containerName="barbican-db-sync" Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.883709 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0eef199-8f69-4f92-9435-ff0fd74dd854" containerName="barbican-db-sync" Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.883910 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0eef199-8f69-4f92-9435-ff0fd74dd854" containerName="barbican-db-sync" Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.883935 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f8cfa23-5804-4f61-815b-287e23958ff9" containerName="dnsmasq-dns" Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.885193 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-bb7d4b5f9-6jmz5" Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.901943 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.902002 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.902195 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-5z65h" Jan 26 16:00:51 crc kubenswrapper[4896]: I0126 16:00:51.935092 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-bb7d4b5f9-6jmz5"] Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.034154 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ab5e517-a751-433e-9503-db39609aa439-logs\") pod \"barbican-worker-bb7d4b5f9-6jmz5\" (UID: \"4ab5e517-a751-433e-9503-db39609aa439\") " pod="openstack/barbican-worker-bb7d4b5f9-6jmz5" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.034195 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ab5e517-a751-433e-9503-db39609aa439-config-data\") pod \"barbican-worker-bb7d4b5f9-6jmz5\" (UID: \"4ab5e517-a751-433e-9503-db39609aa439\") " pod="openstack/barbican-worker-bb7d4b5f9-6jmz5" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.034330 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ab5e517-a751-433e-9503-db39609aa439-combined-ca-bundle\") pod \"barbican-worker-bb7d4b5f9-6jmz5\" (UID: \"4ab5e517-a751-433e-9503-db39609aa439\") " pod="openstack/barbican-worker-bb7d4b5f9-6jmz5" Jan 26 16:00:52 crc 
kubenswrapper[4896]: I0126 16:00:52.034352 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n2h5\" (UniqueName: \"kubernetes.io/projected/4ab5e517-a751-433e-9503-db39609aa439-kube-api-access-7n2h5\") pod \"barbican-worker-bb7d4b5f9-6jmz5\" (UID: \"4ab5e517-a751-433e-9503-db39609aa439\") " pod="openstack/barbican-worker-bb7d4b5f9-6jmz5" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.034384 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ab5e517-a751-433e-9503-db39609aa439-config-data-custom\") pod \"barbican-worker-bb7d4b5f9-6jmz5\" (UID: \"4ab5e517-a751-433e-9503-db39609aa439\") " pod="openstack/barbican-worker-bb7d4b5f9-6jmz5" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.035846 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-566d4946fd-fbmrv"] Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.037622 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-566d4946fd-fbmrv" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.043432 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.072063 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-57slp"] Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.073914 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-57slp" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.108867 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-566d4946fd-fbmrv"] Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.144377 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ab5e517-a751-433e-9503-db39609aa439-combined-ca-bundle\") pod \"barbican-worker-bb7d4b5f9-6jmz5\" (UID: \"4ab5e517-a751-433e-9503-db39609aa439\") " pod="openstack/barbican-worker-bb7d4b5f9-6jmz5" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.144413 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n2h5\" (UniqueName: \"kubernetes.io/projected/4ab5e517-a751-433e-9503-db39609aa439-kube-api-access-7n2h5\") pod \"barbican-worker-bb7d4b5f9-6jmz5\" (UID: \"4ab5e517-a751-433e-9503-db39609aa439\") " pod="openstack/barbican-worker-bb7d4b5f9-6jmz5" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.144461 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ab5e517-a751-433e-9503-db39609aa439-config-data-custom\") pod \"barbican-worker-bb7d4b5f9-6jmz5\" (UID: \"4ab5e517-a751-433e-9503-db39609aa439\") " pod="openstack/barbican-worker-bb7d4b5f9-6jmz5" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.144546 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ab5e517-a751-433e-9503-db39609aa439-logs\") pod \"barbican-worker-bb7d4b5f9-6jmz5\" (UID: \"4ab5e517-a751-433e-9503-db39609aa439\") " pod="openstack/barbican-worker-bb7d4b5f9-6jmz5" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.144565 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/4ab5e517-a751-433e-9503-db39609aa439-config-data\") pod \"barbican-worker-bb7d4b5f9-6jmz5\" (UID: \"4ab5e517-a751-433e-9503-db39609aa439\") " pod="openstack/barbican-worker-bb7d4b5f9-6jmz5" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.144616 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/640faf58-91b8-46d1-9956-60383f61abc2-logs\") pod \"barbican-keystone-listener-566d4946fd-fbmrv\" (UID: \"640faf58-91b8-46d1-9956-60383f61abc2\") " pod="openstack/barbican-keystone-listener-566d4946fd-fbmrv" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.144675 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/640faf58-91b8-46d1-9956-60383f61abc2-config-data-custom\") pod \"barbican-keystone-listener-566d4946fd-fbmrv\" (UID: \"640faf58-91b8-46d1-9956-60383f61abc2\") " pod="openstack/barbican-keystone-listener-566d4946fd-fbmrv" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.144699 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/640faf58-91b8-46d1-9956-60383f61abc2-config-data\") pod \"barbican-keystone-listener-566d4946fd-fbmrv\" (UID: \"640faf58-91b8-46d1-9956-60383f61abc2\") " pod="openstack/barbican-keystone-listener-566d4946fd-fbmrv" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.144746 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p9dk\" (UniqueName: \"kubernetes.io/projected/640faf58-91b8-46d1-9956-60383f61abc2-kube-api-access-4p9dk\") pod \"barbican-keystone-listener-566d4946fd-fbmrv\" (UID: \"640faf58-91b8-46d1-9956-60383f61abc2\") " pod="openstack/barbican-keystone-listener-566d4946fd-fbmrv" Jan 26 16:00:52 crc 
kubenswrapper[4896]: I0126 16:00:52.144792 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/640faf58-91b8-46d1-9956-60383f61abc2-combined-ca-bundle\") pod \"barbican-keystone-listener-566d4946fd-fbmrv\" (UID: \"640faf58-91b8-46d1-9956-60383f61abc2\") " pod="openstack/barbican-keystone-listener-566d4946fd-fbmrv" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.174630 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-57slp"] Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.183136 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ab5e517-a751-433e-9503-db39609aa439-logs\") pod \"barbican-worker-bb7d4b5f9-6jmz5\" (UID: \"4ab5e517-a751-433e-9503-db39609aa439\") " pod="openstack/barbican-worker-bb7d4b5f9-6jmz5" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.185400 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ab5e517-a751-433e-9503-db39609aa439-config-data-custom\") pod \"barbican-worker-bb7d4b5f9-6jmz5\" (UID: \"4ab5e517-a751-433e-9503-db39609aa439\") " pod="openstack/barbican-worker-bb7d4b5f9-6jmz5" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.211467 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n2h5\" (UniqueName: \"kubernetes.io/projected/4ab5e517-a751-433e-9503-db39609aa439-kube-api-access-7n2h5\") pod \"barbican-worker-bb7d4b5f9-6jmz5\" (UID: \"4ab5e517-a751-433e-9503-db39609aa439\") " pod="openstack/barbican-worker-bb7d4b5f9-6jmz5" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.216267 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4ab5e517-a751-433e-9503-db39609aa439-combined-ca-bundle\") pod \"barbican-worker-bb7d4b5f9-6jmz5\" (UID: \"4ab5e517-a751-433e-9503-db39609aa439\") " pod="openstack/barbican-worker-bb7d4b5f9-6jmz5" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.229594 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ab5e517-a751-433e-9503-db39609aa439-config-data\") pod \"barbican-worker-bb7d4b5f9-6jmz5\" (UID: \"4ab5e517-a751-433e-9503-db39609aa439\") " pod="openstack/barbican-worker-bb7d4b5f9-6jmz5" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.257071 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p9dk\" (UniqueName: \"kubernetes.io/projected/640faf58-91b8-46d1-9956-60383f61abc2-kube-api-access-4p9dk\") pod \"barbican-keystone-listener-566d4946fd-fbmrv\" (UID: \"640faf58-91b8-46d1-9956-60383f61abc2\") " pod="openstack/barbican-keystone-listener-566d4946fd-fbmrv" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.257408 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7992c2f6-c973-4b0e-a0a5-6035c715dc72-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-57slp\" (UID: \"7992c2f6-c973-4b0e-a0a5-6035c715dc72\") " pod="openstack/dnsmasq-dns-85ff748b95-57slp" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.257463 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7992c2f6-c973-4b0e-a0a5-6035c715dc72-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-57slp\" (UID: \"7992c2f6-c973-4b0e-a0a5-6035c715dc72\") " pod="openstack/dnsmasq-dns-85ff748b95-57slp" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.257509 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/640faf58-91b8-46d1-9956-60383f61abc2-combined-ca-bundle\") pod \"barbican-keystone-listener-566d4946fd-fbmrv\" (UID: \"640faf58-91b8-46d1-9956-60383f61abc2\") " pod="openstack/barbican-keystone-listener-566d4946fd-fbmrv" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.257666 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7992c2f6-c973-4b0e-a0a5-6035c715dc72-config\") pod \"dnsmasq-dns-85ff748b95-57slp\" (UID: \"7992c2f6-c973-4b0e-a0a5-6035c715dc72\") " pod="openstack/dnsmasq-dns-85ff748b95-57slp" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.257736 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7992c2f6-c973-4b0e-a0a5-6035c715dc72-dns-svc\") pod \"dnsmasq-dns-85ff748b95-57slp\" (UID: \"7992c2f6-c973-4b0e-a0a5-6035c715dc72\") " pod="openstack/dnsmasq-dns-85ff748b95-57slp" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.257873 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/640faf58-91b8-46d1-9956-60383f61abc2-logs\") pod \"barbican-keystone-listener-566d4946fd-fbmrv\" (UID: \"640faf58-91b8-46d1-9956-60383f61abc2\") " pod="openstack/barbican-keystone-listener-566d4946fd-fbmrv" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.257911 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dqwp\" (UniqueName: \"kubernetes.io/projected/7992c2f6-c973-4b0e-a0a5-6035c715dc72-kube-api-access-9dqwp\") pod \"dnsmasq-dns-85ff748b95-57slp\" (UID: \"7992c2f6-c973-4b0e-a0a5-6035c715dc72\") " pod="openstack/dnsmasq-dns-85ff748b95-57slp" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.257968 4896 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/640faf58-91b8-46d1-9956-60383f61abc2-config-data-custom\") pod \"barbican-keystone-listener-566d4946fd-fbmrv\" (UID: \"640faf58-91b8-46d1-9956-60383f61abc2\") " pod="openstack/barbican-keystone-listener-566d4946fd-fbmrv" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.257996 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/640faf58-91b8-46d1-9956-60383f61abc2-config-data\") pod \"barbican-keystone-listener-566d4946fd-fbmrv\" (UID: \"640faf58-91b8-46d1-9956-60383f61abc2\") " pod="openstack/barbican-keystone-listener-566d4946fd-fbmrv" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.258027 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7992c2f6-c973-4b0e-a0a5-6035c715dc72-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-57slp\" (UID: \"7992c2f6-c973-4b0e-a0a5-6035c715dc72\") " pod="openstack/dnsmasq-dns-85ff748b95-57slp" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.261637 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/640faf58-91b8-46d1-9956-60383f61abc2-logs\") pod \"barbican-keystone-listener-566d4946fd-fbmrv\" (UID: \"640faf58-91b8-46d1-9956-60383f61abc2\") " pod="openstack/barbican-keystone-listener-566d4946fd-fbmrv" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.290219 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/640faf58-91b8-46d1-9956-60383f61abc2-config-data\") pod \"barbican-keystone-listener-566d4946fd-fbmrv\" (UID: \"640faf58-91b8-46d1-9956-60383f61abc2\") " pod="openstack/barbican-keystone-listener-566d4946fd-fbmrv" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 
16:00:52.294255 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/640faf58-91b8-46d1-9956-60383f61abc2-config-data-custom\") pod \"barbican-keystone-listener-566d4946fd-fbmrv\" (UID: \"640faf58-91b8-46d1-9956-60383f61abc2\") " pod="openstack/barbican-keystone-listener-566d4946fd-fbmrv" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.322153 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-bb7d4b5f9-6jmz5" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.329934 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/640faf58-91b8-46d1-9956-60383f61abc2-combined-ca-bundle\") pod \"barbican-keystone-listener-566d4946fd-fbmrv\" (UID: \"640faf58-91b8-46d1-9956-60383f61abc2\") " pod="openstack/barbican-keystone-listener-566d4946fd-fbmrv" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.340726 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p9dk\" (UniqueName: \"kubernetes.io/projected/640faf58-91b8-46d1-9956-60383f61abc2-kube-api-access-4p9dk\") pod \"barbican-keystone-listener-566d4946fd-fbmrv\" (UID: \"640faf58-91b8-46d1-9956-60383f61abc2\") " pod="openstack/barbican-keystone-listener-566d4946fd-fbmrv" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.362291 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7992c2f6-c973-4b0e-a0a5-6035c715dc72-config\") pod \"dnsmasq-dns-85ff748b95-57slp\" (UID: \"7992c2f6-c973-4b0e-a0a5-6035c715dc72\") " pod="openstack/dnsmasq-dns-85ff748b95-57slp" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.362361 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/7992c2f6-c973-4b0e-a0a5-6035c715dc72-dns-svc\") pod \"dnsmasq-dns-85ff748b95-57slp\" (UID: \"7992c2f6-c973-4b0e-a0a5-6035c715dc72\") " pod="openstack/dnsmasq-dns-85ff748b95-57slp" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.362464 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dqwp\" (UniqueName: \"kubernetes.io/projected/7992c2f6-c973-4b0e-a0a5-6035c715dc72-kube-api-access-9dqwp\") pod \"dnsmasq-dns-85ff748b95-57slp\" (UID: \"7992c2f6-c973-4b0e-a0a5-6035c715dc72\") " pod="openstack/dnsmasq-dns-85ff748b95-57slp" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.362509 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7992c2f6-c973-4b0e-a0a5-6035c715dc72-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-57slp\" (UID: \"7992c2f6-c973-4b0e-a0a5-6035c715dc72\") " pod="openstack/dnsmasq-dns-85ff748b95-57slp" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.362545 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7992c2f6-c973-4b0e-a0a5-6035c715dc72-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-57slp\" (UID: \"7992c2f6-c973-4b0e-a0a5-6035c715dc72\") " pod="openstack/dnsmasq-dns-85ff748b95-57slp" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.362588 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7992c2f6-c973-4b0e-a0a5-6035c715dc72-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-57slp\" (UID: \"7992c2f6-c973-4b0e-a0a5-6035c715dc72\") " pod="openstack/dnsmasq-dns-85ff748b95-57slp" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.363484 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/7992c2f6-c973-4b0e-a0a5-6035c715dc72-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-57slp\" (UID: \"7992c2f6-c973-4b0e-a0a5-6035c715dc72\") " pod="openstack/dnsmasq-dns-85ff748b95-57slp" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.363659 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7992c2f6-c973-4b0e-a0a5-6035c715dc72-config\") pod \"dnsmasq-dns-85ff748b95-57slp\" (UID: \"7992c2f6-c973-4b0e-a0a5-6035c715dc72\") " pod="openstack/dnsmasq-dns-85ff748b95-57slp" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.364022 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7992c2f6-c973-4b0e-a0a5-6035c715dc72-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-57slp\" (UID: \"7992c2f6-c973-4b0e-a0a5-6035c715dc72\") " pod="openstack/dnsmasq-dns-85ff748b95-57slp" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.364547 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7992c2f6-c973-4b0e-a0a5-6035c715dc72-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-57slp\" (UID: \"7992c2f6-c973-4b0e-a0a5-6035c715dc72\") " pod="openstack/dnsmasq-dns-85ff748b95-57slp" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.365768 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7992c2f6-c973-4b0e-a0a5-6035c715dc72-dns-svc\") pod \"dnsmasq-dns-85ff748b95-57slp\" (UID: \"7992c2f6-c973-4b0e-a0a5-6035c715dc72\") " pod="openstack/dnsmasq-dns-85ff748b95-57slp" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.416383 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-64fcf9448b-45l75"] Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.418242 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-64fcf9448b-45l75" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.431332 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.477178 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-64fcf9448b-45l75"] Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.551378 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dqwp\" (UniqueName: \"kubernetes.io/projected/7992c2f6-c973-4b0e-a0a5-6035c715dc72-kube-api-access-9dqwp\") pod \"dnsmasq-dns-85ff748b95-57slp\" (UID: \"7992c2f6-c973-4b0e-a0a5-6035c715dc72\") " pod="openstack/dnsmasq-dns-85ff748b95-57slp" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.580043 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc21396d-5abd-42fb-b33c-5099769ea73f-config-data\") pod \"barbican-api-64fcf9448b-45l75\" (UID: \"bc21396d-5abd-42fb-b33c-5099769ea73f\") " pod="openstack/barbican-api-64fcf9448b-45l75" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.580129 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bc21396d-5abd-42fb-b33c-5099769ea73f-config-data-custom\") pod \"barbican-api-64fcf9448b-45l75\" (UID: \"bc21396d-5abd-42fb-b33c-5099769ea73f\") " pod="openstack/barbican-api-64fcf9448b-45l75" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.580207 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nw6f\" (UniqueName: \"kubernetes.io/projected/bc21396d-5abd-42fb-b33c-5099769ea73f-kube-api-access-7nw6f\") pod \"barbican-api-64fcf9448b-45l75\" (UID: \"bc21396d-5abd-42fb-b33c-5099769ea73f\") " 
pod="openstack/barbican-api-64fcf9448b-45l75" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.580266 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc21396d-5abd-42fb-b33c-5099769ea73f-logs\") pod \"barbican-api-64fcf9448b-45l75\" (UID: \"bc21396d-5abd-42fb-b33c-5099769ea73f\") " pod="openstack/barbican-api-64fcf9448b-45l75" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.580617 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc21396d-5abd-42fb-b33c-5099769ea73f-combined-ca-bundle\") pod \"barbican-api-64fcf9448b-45l75\" (UID: \"bc21396d-5abd-42fb-b33c-5099769ea73f\") " pod="openstack/barbican-api-64fcf9448b-45l75" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.585063 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-566d4946fd-fbmrv" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.652227 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-57slp" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.667680 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nwtqn" event={"ID":"d4450b82-6d66-4109-8fec-6b979256d032","Type":"ContainerStarted","Data":"269b40b28acc24c912b35a7cfec98b13a709140a9694792ea8bdf200f743f419"} Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.688838 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc21396d-5abd-42fb-b33c-5099769ea73f-combined-ca-bundle\") pod \"barbican-api-64fcf9448b-45l75\" (UID: \"bc21396d-5abd-42fb-b33c-5099769ea73f\") " pod="openstack/barbican-api-64fcf9448b-45l75" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.688961 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc21396d-5abd-42fb-b33c-5099769ea73f-config-data\") pod \"barbican-api-64fcf9448b-45l75\" (UID: \"bc21396d-5abd-42fb-b33c-5099769ea73f\") " pod="openstack/barbican-api-64fcf9448b-45l75" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.688997 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bc21396d-5abd-42fb-b33c-5099769ea73f-config-data-custom\") pod \"barbican-api-64fcf9448b-45l75\" (UID: \"bc21396d-5abd-42fb-b33c-5099769ea73f\") " pod="openstack/barbican-api-64fcf9448b-45l75" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.689052 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nw6f\" (UniqueName: \"kubernetes.io/projected/bc21396d-5abd-42fb-b33c-5099769ea73f-kube-api-access-7nw6f\") pod \"barbican-api-64fcf9448b-45l75\" (UID: \"bc21396d-5abd-42fb-b33c-5099769ea73f\") " pod="openstack/barbican-api-64fcf9448b-45l75" Jan 26 16:00:52 crc 
kubenswrapper[4896]: I0126 16:00:52.689099 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc21396d-5abd-42fb-b33c-5099769ea73f-logs\") pod \"barbican-api-64fcf9448b-45l75\" (UID: \"bc21396d-5abd-42fb-b33c-5099769ea73f\") " pod="openstack/barbican-api-64fcf9448b-45l75" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.689784 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc21396d-5abd-42fb-b33c-5099769ea73f-logs\") pod \"barbican-api-64fcf9448b-45l75\" (UID: \"bc21396d-5abd-42fb-b33c-5099769ea73f\") " pod="openstack/barbican-api-64fcf9448b-45l75" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.706910 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-77d5697764-wvv6n" event={"ID":"6133f02e-6901-41cd-ac62-9450747a6d98","Type":"ContainerStarted","Data":"f51f69f52005d1b3eb8901ea6f49cc3c40967a0a6358e6f1b7d1e01aa96b31ad"} Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.723499 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bc21396d-5abd-42fb-b33c-5099769ea73f-config-data-custom\") pod \"barbican-api-64fcf9448b-45l75\" (UID: \"bc21396d-5abd-42fb-b33c-5099769ea73f\") " pod="openstack/barbican-api-64fcf9448b-45l75" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.724730 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nw6f\" (UniqueName: \"kubernetes.io/projected/bc21396d-5abd-42fb-b33c-5099769ea73f-kube-api-access-7nw6f\") pod \"barbican-api-64fcf9448b-45l75\" (UID: \"bc21396d-5abd-42fb-b33c-5099769ea73f\") " pod="openstack/barbican-api-64fcf9448b-45l75" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.726359 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bc21396d-5abd-42fb-b33c-5099769ea73f-combined-ca-bundle\") pod \"barbican-api-64fcf9448b-45l75\" (UID: \"bc21396d-5abd-42fb-b33c-5099769ea73f\") " pod="openstack/barbican-api-64fcf9448b-45l75" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.741414 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc21396d-5abd-42fb-b33c-5099769ea73f-config-data\") pod \"barbican-api-64fcf9448b-45l75\" (UID: \"bc21396d-5abd-42fb-b33c-5099769ea73f\") " pod="openstack/barbican-api-64fcf9448b-45l75" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.841223 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-64fcf9448b-45l75" Jan 26 16:00:52 crc kubenswrapper[4896]: I0126 16:00:52.888350 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f8cfa23-5804-4f61-815b-287e23958ff9" path="/var/lib/kubelet/pods/8f8cfa23-5804-4f61-815b-287e23958ff9/volumes" Jan 26 16:00:53 crc kubenswrapper[4896]: I0126 16:00:53.037536 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wxhjt" Jan 26 16:00:53 crc kubenswrapper[4896]: I0126 16:00:53.037752 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wxhjt" Jan 26 16:00:53 crc kubenswrapper[4896]: I0126 16:00:53.449361 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-bb7d4b5f9-6jmz5"] Jan 26 16:00:53 crc kubenswrapper[4896]: I0126 16:00:53.561524 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 16:00:53 crc kubenswrapper[4896]: I0126 16:00:53.794850 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-032691af-a20f-4ded-a276-f85258d081f4\") pod \"e79e28c8-45cf-4fc6-a99e-59b024131415\" (UID: \"e79e28c8-45cf-4fc6-a99e-59b024131415\") " Jan 26 16:00:53 crc kubenswrapper[4896]: I0126 16:00:53.795182 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e79e28c8-45cf-4fc6-a99e-59b024131415-scripts\") pod \"e79e28c8-45cf-4fc6-a99e-59b024131415\" (UID: \"e79e28c8-45cf-4fc6-a99e-59b024131415\") " Jan 26 16:00:53 crc kubenswrapper[4896]: I0126 16:00:53.795216 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e79e28c8-45cf-4fc6-a99e-59b024131415-combined-ca-bundle\") pod \"e79e28c8-45cf-4fc6-a99e-59b024131415\" (UID: \"e79e28c8-45cf-4fc6-a99e-59b024131415\") " Jan 26 16:00:53 crc kubenswrapper[4896]: I0126 16:00:53.795264 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tfz5\" (UniqueName: \"kubernetes.io/projected/e79e28c8-45cf-4fc6-a99e-59b024131415-kube-api-access-5tfz5\") pod \"e79e28c8-45cf-4fc6-a99e-59b024131415\" (UID: \"e79e28c8-45cf-4fc6-a99e-59b024131415\") " Jan 26 16:00:53 crc kubenswrapper[4896]: I0126 16:00:53.795345 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e79e28c8-45cf-4fc6-a99e-59b024131415-config-data\") pod \"e79e28c8-45cf-4fc6-a99e-59b024131415\" (UID: \"e79e28c8-45cf-4fc6-a99e-59b024131415\") " Jan 26 16:00:53 crc kubenswrapper[4896]: I0126 16:00:53.795420 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/e79e28c8-45cf-4fc6-a99e-59b024131415-logs\") pod \"e79e28c8-45cf-4fc6-a99e-59b024131415\" (UID: \"e79e28c8-45cf-4fc6-a99e-59b024131415\") " Jan 26 16:00:53 crc kubenswrapper[4896]: I0126 16:00:53.795449 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e79e28c8-45cf-4fc6-a99e-59b024131415-httpd-run\") pod \"e79e28c8-45cf-4fc6-a99e-59b024131415\" (UID: \"e79e28c8-45cf-4fc6-a99e-59b024131415\") " Jan 26 16:00:53 crc kubenswrapper[4896]: I0126 16:00:53.796343 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e79e28c8-45cf-4fc6-a99e-59b024131415-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e79e28c8-45cf-4fc6-a99e-59b024131415" (UID: "e79e28c8-45cf-4fc6-a99e-59b024131415"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:00:53 crc kubenswrapper[4896]: I0126 16:00:53.805949 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e79e28c8-45cf-4fc6-a99e-59b024131415-logs" (OuterVolumeSpecName: "logs") pod "e79e28c8-45cf-4fc6-a99e-59b024131415" (UID: "e79e28c8-45cf-4fc6-a99e-59b024131415"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:00:53 crc kubenswrapper[4896]: I0126 16:00:53.868595 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"de7d907a-484d-42e5-88d9-61b398fe83a5","Type":"ContainerStarted","Data":"7f20a2a1df2c49dd3758d5f4f320d69c630c4cfd7853c5df28da5848064e3be2"} Jan 26 16:00:53 crc kubenswrapper[4896]: I0126 16:00:53.876208 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-bb7d4b5f9-6jmz5" event={"ID":"4ab5e517-a751-433e-9503-db39609aa439","Type":"ContainerStarted","Data":"1a6684c4af836da7c270ca1779673f7c1bbc24717f6e95baa248a4973a441899"} Jan 26 16:00:53 crc kubenswrapper[4896]: I0126 16:00:53.878812 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e79e28c8-45cf-4fc6-a99e-59b024131415-scripts" (OuterVolumeSpecName: "scripts") pod "e79e28c8-45cf-4fc6-a99e-59b024131415" (UID: "e79e28c8-45cf-4fc6-a99e-59b024131415"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:53 crc kubenswrapper[4896]: I0126 16:00:53.882008 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e79e28c8-45cf-4fc6-a99e-59b024131415-kube-api-access-5tfz5" (OuterVolumeSpecName: "kube-api-access-5tfz5") pod "e79e28c8-45cf-4fc6-a99e-59b024131415" (UID: "e79e28c8-45cf-4fc6-a99e-59b024131415"). InnerVolumeSpecName "kube-api-access-5tfz5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:00:53 crc kubenswrapper[4896]: I0126 16:00:53.898875 4896 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e79e28c8-45cf-4fc6-a99e-59b024131415-logs\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:53 crc kubenswrapper[4896]: I0126 16:00:53.898912 4896 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e79e28c8-45cf-4fc6-a99e-59b024131415-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:53 crc kubenswrapper[4896]: I0126 16:00:53.898925 4896 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e79e28c8-45cf-4fc6-a99e-59b024131415-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:53 crc kubenswrapper[4896]: I0126 16:00:53.898938 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tfz5\" (UniqueName: \"kubernetes.io/projected/e79e28c8-45cf-4fc6-a99e-59b024131415-kube-api-access-5tfz5\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:53 crc kubenswrapper[4896]: I0126 16:00:53.905742 4896 generic.go:334] "Generic (PLEG): container finished" podID="d4450b82-6d66-4109-8fec-6b979256d032" containerID="269b40b28acc24c912b35a7cfec98b13a709140a9694792ea8bdf200f743f419" exitCode=0 Jan 26 16:00:53 crc kubenswrapper[4896]: I0126 16:00:53.905841 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nwtqn" event={"ID":"d4450b82-6d66-4109-8fec-6b979256d032","Type":"ContainerDied","Data":"269b40b28acc24c912b35a7cfec98b13a709140a9694792ea8bdf200f743f419"} Jan 26 16:00:53 crc kubenswrapper[4896]: I0126 16:00:53.920702 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-77d5697764-wvv6n" event={"ID":"6133f02e-6901-41cd-ac62-9450747a6d98","Type":"ContainerStarted","Data":"77077286e52b90162343e71ccc412f2996e98c2531651bd3f870a61dbce1452a"} Jan 26 16:00:53 
crc kubenswrapper[4896]: I0126 16:00:53.921719 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-77d5697764-wvv6n" Jan 26 16:00:53 crc kubenswrapper[4896]: I0126 16:00:53.921935 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-77d5697764-wvv6n" Jan 26 16:00:53 crc kubenswrapper[4896]: I0126 16:00:53.941682 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 16:00:53 crc kubenswrapper[4896]: I0126 16:00:53.941680 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e79e28c8-45cf-4fc6-a99e-59b024131415","Type":"ContainerDied","Data":"36509bbbff9408afe39100d884d6455a1142e23e0ca77344a4bdc5b77458e6c6"} Jan 26 16:00:53 crc kubenswrapper[4896]: I0126 16:00:53.942775 4896 scope.go:117] "RemoveContainer" containerID="ff055f2de20d26f9dd6a6bb904ecd7f239ecdbccdfe8232ff499b9b94da38b83" Jan 26 16:00:53 crc kubenswrapper[4896]: I0126 16:00:53.951771 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e79e28c8-45cf-4fc6-a99e-59b024131415-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e79e28c8-45cf-4fc6-a99e-59b024131415" (UID: "e79e28c8-45cf-4fc6-a99e-59b024131415"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.001146 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e79e28c8-45cf-4fc6-a99e-59b024131415-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.002645 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e79e28c8-45cf-4fc6-a99e-59b024131415-config-data" (OuterVolumeSpecName: "config-data") pod "e79e28c8-45cf-4fc6-a99e-59b024131415" (UID: "e79e28c8-45cf-4fc6-a99e-59b024131415"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.038813 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-77d5697764-wvv6n" podStartSLOduration=6.038786584 podStartE2EDuration="6.038786584s" podCreationTimestamp="2026-01-26 16:00:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:00:54.003318688 +0000 UTC m=+1611.785199081" watchObservedRunningTime="2026-01-26 16:00:54.038786584 +0000 UTC m=+1611.820666977" Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.103604 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e79e28c8-45cf-4fc6-a99e-59b024131415-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.128045 4896 scope.go:117] "RemoveContainer" containerID="b23c59ab68fc957f3fe183a19e9c191a646596732cf0a1d6e8de4083cc7a4c8a" Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.178262 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-032691af-a20f-4ded-a276-f85258d081f4" 
(OuterVolumeSpecName: "glance") pod "e79e28c8-45cf-4fc6-a99e-59b024131415" (UID: "e79e28c8-45cf-4fc6-a99e-59b024131415"). InnerVolumeSpecName "pvc-032691af-a20f-4ded-a276-f85258d081f4". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.205465 4896 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-032691af-a20f-4ded-a276-f85258d081f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-032691af-a20f-4ded-a276-f85258d081f4\") on node \"crc\" " Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.233097 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-566d4946fd-fbmrv"] Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.242296 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-wxhjt" podUID="c356aa29-1407-47fb-80f0-a7f1b4a58919" containerName="registry-server" probeResult="failure" output=< Jan 26 16:00:54 crc kubenswrapper[4896]: timeout: failed to connect service ":50051" within 1s Jan 26 16:00:54 crc kubenswrapper[4896]: > Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.270898 4896 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.271082 4896 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-032691af-a20f-4ded-a276-f85258d081f4" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-032691af-a20f-4ded-a276-f85258d081f4") on node "crc" Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.306951 4896 reconciler_common.go:293] "Volume detached for volume \"pvc-032691af-a20f-4ded-a276-f85258d081f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-032691af-a20f-4ded-a276-f85258d081f4\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.333025 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.360662 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.407357 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 16:00:54 crc kubenswrapper[4896]: E0126 16:00:54.408512 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e79e28c8-45cf-4fc6-a99e-59b024131415" containerName="glance-log" Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.408640 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="e79e28c8-45cf-4fc6-a99e-59b024131415" containerName="glance-log" Jan 26 16:00:54 crc kubenswrapper[4896]: E0126 16:00:54.408778 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e79e28c8-45cf-4fc6-a99e-59b024131415" containerName="glance-httpd" Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.408850 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="e79e28c8-45cf-4fc6-a99e-59b024131415" containerName="glance-httpd" Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.409171 4896 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="e79e28c8-45cf-4fc6-a99e-59b024131415" containerName="glance-log" Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.409318 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="e79e28c8-45cf-4fc6-a99e-59b024131415" containerName="glance-httpd" Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.410991 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.426445 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.427563 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.479337 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.517243 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6ef4bfd-df0d-434c-b869-5890c7950600-config-data\") pod \"glance-default-external-api-0\" (UID: \"d6ef4bfd-df0d-434c-b869-5890c7950600\") " pod="openstack/glance-default-external-api-0" Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.517561 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6ef4bfd-df0d-434c-b869-5890c7950600-logs\") pod \"glance-default-external-api-0\" (UID: \"d6ef4bfd-df0d-434c-b869-5890c7950600\") " pod="openstack/glance-default-external-api-0" Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.517700 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-032691af-a20f-4ded-a276-f85258d081f4\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-032691af-a20f-4ded-a276-f85258d081f4\") pod \"glance-default-external-api-0\" (UID: \"d6ef4bfd-df0d-434c-b869-5890c7950600\") " pod="openstack/glance-default-external-api-0" Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.517806 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6ef4bfd-df0d-434c-b869-5890c7950600-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d6ef4bfd-df0d-434c-b869-5890c7950600\") " pod="openstack/glance-default-external-api-0" Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.517929 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6ef4bfd-df0d-434c-b869-5890c7950600-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d6ef4bfd-df0d-434c-b869-5890c7950600\") " pod="openstack/glance-default-external-api-0" Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.518175 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkbwq\" (UniqueName: \"kubernetes.io/projected/d6ef4bfd-df0d-434c-b869-5890c7950600-kube-api-access-qkbwq\") pod \"glance-default-external-api-0\" (UID: \"d6ef4bfd-df0d-434c-b869-5890c7950600\") " pod="openstack/glance-default-external-api-0" Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.518312 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6ef4bfd-df0d-434c-b869-5890c7950600-scripts\") pod \"glance-default-external-api-0\" (UID: \"d6ef4bfd-df0d-434c-b869-5890c7950600\") " pod="openstack/glance-default-external-api-0" Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.518441 4896 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d6ef4bfd-df0d-434c-b869-5890c7950600-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d6ef4bfd-df0d-434c-b869-5890c7950600\") " pod="openstack/glance-default-external-api-0" Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.530319 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-57slp"] Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.620694 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkbwq\" (UniqueName: \"kubernetes.io/projected/d6ef4bfd-df0d-434c-b869-5890c7950600-kube-api-access-qkbwq\") pod \"glance-default-external-api-0\" (UID: \"d6ef4bfd-df0d-434c-b869-5890c7950600\") " pod="openstack/glance-default-external-api-0" Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.620767 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6ef4bfd-df0d-434c-b869-5890c7950600-scripts\") pod \"glance-default-external-api-0\" (UID: \"d6ef4bfd-df0d-434c-b869-5890c7950600\") " pod="openstack/glance-default-external-api-0" Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.620818 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d6ef4bfd-df0d-434c-b869-5890c7950600-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d6ef4bfd-df0d-434c-b869-5890c7950600\") " pod="openstack/glance-default-external-api-0" Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.620936 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6ef4bfd-df0d-434c-b869-5890c7950600-config-data\") pod \"glance-default-external-api-0\" (UID: \"d6ef4bfd-df0d-434c-b869-5890c7950600\") " 
pod="openstack/glance-default-external-api-0" Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.620994 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6ef4bfd-df0d-434c-b869-5890c7950600-logs\") pod \"glance-default-external-api-0\" (UID: \"d6ef4bfd-df0d-434c-b869-5890c7950600\") " pod="openstack/glance-default-external-api-0" Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.621027 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-032691af-a20f-4ded-a276-f85258d081f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-032691af-a20f-4ded-a276-f85258d081f4\") pod \"glance-default-external-api-0\" (UID: \"d6ef4bfd-df0d-434c-b869-5890c7950600\") " pod="openstack/glance-default-external-api-0" Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.621053 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6ef4bfd-df0d-434c-b869-5890c7950600-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d6ef4bfd-df0d-434c-b869-5890c7950600\") " pod="openstack/glance-default-external-api-0" Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.621081 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6ef4bfd-df0d-434c-b869-5890c7950600-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d6ef4bfd-df0d-434c-b869-5890c7950600\") " pod="openstack/glance-default-external-api-0" Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.629035 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d6ef4bfd-df0d-434c-b869-5890c7950600-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d6ef4bfd-df0d-434c-b869-5890c7950600\") " 
pod="openstack/glance-default-external-api-0" Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.633969 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6ef4bfd-df0d-434c-b869-5890c7950600-logs\") pod \"glance-default-external-api-0\" (UID: \"d6ef4bfd-df0d-434c-b869-5890c7950600\") " pod="openstack/glance-default-external-api-0" Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.635284 4896 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.635320 4896 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-032691af-a20f-4ded-a276-f85258d081f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-032691af-a20f-4ded-a276-f85258d081f4\") pod \"glance-default-external-api-0\" (UID: \"d6ef4bfd-df0d-434c-b869-5890c7950600\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c6ee5c5ace645a0437237edccec1152ed0b5c152b57bef8f765a8fb7bcea3897/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.641107 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6ef4bfd-df0d-434c-b869-5890c7950600-config-data\") pod \"glance-default-external-api-0\" (UID: \"d6ef4bfd-df0d-434c-b869-5890c7950600\") " pod="openstack/glance-default-external-api-0" Jan 26 16:00:54 crc kubenswrapper[4896]: W0126 16:00:54.643715 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc21396d_5abd_42fb_b33c_5099769ea73f.slice/crio-737c2d991aca8339995d1ff43d7ab79a72be2bb32a5d13cc8cec5cf23e4b994e WatchSource:0}: Error finding container 737c2d991aca8339995d1ff43d7ab79a72be2bb32a5d13cc8cec5cf23e4b994e: 
Status 404 returned error can't find the container with id 737c2d991aca8339995d1ff43d7ab79a72be2bb32a5d13cc8cec5cf23e4b994e Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.647539 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6ef4bfd-df0d-434c-b869-5890c7950600-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d6ef4bfd-df0d-434c-b869-5890c7950600\") " pod="openstack/glance-default-external-api-0" Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.650516 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkbwq\" (UniqueName: \"kubernetes.io/projected/d6ef4bfd-df0d-434c-b869-5890c7950600-kube-api-access-qkbwq\") pod \"glance-default-external-api-0\" (UID: \"d6ef4bfd-df0d-434c-b869-5890c7950600\") " pod="openstack/glance-default-external-api-0" Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.662900 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6ef4bfd-df0d-434c-b869-5890c7950600-scripts\") pod \"glance-default-external-api-0\" (UID: \"d6ef4bfd-df0d-434c-b869-5890c7950600\") " pod="openstack/glance-default-external-api-0" Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.669253 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6ef4bfd-df0d-434c-b869-5890c7950600-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d6ef4bfd-df0d-434c-b869-5890c7950600\") " pod="openstack/glance-default-external-api-0" Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.688441 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-64fcf9448b-45l75"] Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.819588 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e79e28c8-45cf-4fc6-a99e-59b024131415" 
path="/var/lib/kubelet/pods/e79e28c8-45cf-4fc6-a99e-59b024131415/volumes" Jan 26 16:00:54 crc kubenswrapper[4896]: I0126 16:00:54.820762 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-032691af-a20f-4ded-a276-f85258d081f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-032691af-a20f-4ded-a276-f85258d081f4\") pod \"glance-default-external-api-0\" (UID: \"d6ef4bfd-df0d-434c-b869-5890c7950600\") " pod="openstack/glance-default-external-api-0" Jan 26 16:00:55 crc kubenswrapper[4896]: I0126 16:00:55.012825 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64fcf9448b-45l75" event={"ID":"bc21396d-5abd-42fb-b33c-5099769ea73f","Type":"ContainerStarted","Data":"737c2d991aca8339995d1ff43d7ab79a72be2bb32a5d13cc8cec5cf23e4b994e"} Jan 26 16:00:55 crc kubenswrapper[4896]: I0126 16:00:55.027057 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-57slp" event={"ID":"7992c2f6-c973-4b0e-a0a5-6035c715dc72","Type":"ContainerStarted","Data":"036b14615f84e5f40755b8673bb371753c39aa4af51db85a7d9c42082c49db57"} Jan 26 16:00:55 crc kubenswrapper[4896]: I0126 16:00:55.052928 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-566d4946fd-fbmrv" event={"ID":"640faf58-91b8-46d1-9956-60383f61abc2","Type":"ContainerStarted","Data":"3aeecd6fd96a4d7d456675bb095988d5ecd5251d5cba87d13e21d65fba7b428c"} Jan 26 16:00:55 crc kubenswrapper[4896]: I0126 16:00:55.109930 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 16:00:55 crc kubenswrapper[4896]: I0126 16:00:55.286501 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 26 16:00:55 crc kubenswrapper[4896]: I0126 16:00:55.300902 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 26 16:00:56 crc kubenswrapper[4896]: I0126 16:00:56.133392 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64fcf9448b-45l75" event={"ID":"bc21396d-5abd-42fb-b33c-5099769ea73f","Type":"ContainerStarted","Data":"3f55a80ef515b8848dd1812c1d377cc9066c9e86c7581abef97d343be1a42c4f"} Jan 26 16:00:56 crc kubenswrapper[4896]: I0126 16:00:56.163728 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nwtqn" event={"ID":"d4450b82-6d66-4109-8fec-6b979256d032","Type":"ContainerStarted","Data":"d9e9f7b4028cec78a1317071a48cccfd0903a09d7df043564c860272c3ca34e4"} Jan 26 16:00:56 crc kubenswrapper[4896]: I0126 16:00:56.169667 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-57slp" event={"ID":"7992c2f6-c973-4b0e-a0a5-6035c715dc72","Type":"ContainerStarted","Data":"a0d98e3a654eadf33f85f1377f1b38965ab02e59b9066d7459ce725645bde209"} Jan 26 16:00:56 crc kubenswrapper[4896]: I0126 16:00:56.208852 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"de7d907a-484d-42e5-88d9-61b398fe83a5","Type":"ContainerStarted","Data":"f04950240259b2ed157dead4448203e2b879ce9069d371a418df7458269f8cd9"} Jan 26 16:00:56 crc kubenswrapper[4896]: I0126 16:00:56.228015 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nwtqn" podStartSLOduration=6.786779722 podStartE2EDuration="11.227991164s" podCreationTimestamp="2026-01-26 
16:00:45 +0000 UTC" firstStartedPulling="2026-01-26 16:00:50.269819726 +0000 UTC m=+1608.051700119" lastFinishedPulling="2026-01-26 16:00:54.711031168 +0000 UTC m=+1612.492911561" observedRunningTime="2026-01-26 16:00:56.207633072 +0000 UTC m=+1613.989513465" watchObservedRunningTime="2026-01-26 16:00:56.227991164 +0000 UTC m=+1614.009871557" Jan 26 16:00:56 crc kubenswrapper[4896]: I0126 16:00:56.228096 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 26 16:00:56 crc kubenswrapper[4896]: I0126 16:00:56.289540 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.289519624 podStartE2EDuration="8.289519624s" podCreationTimestamp="2026-01-26 16:00:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:00:56.279672381 +0000 UTC m=+1614.061552774" watchObservedRunningTime="2026-01-26 16:00:56.289519624 +0000 UTC m=+1614.071400017" Jan 26 16:00:56 crc kubenswrapper[4896]: I0126 16:00:56.353526 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 16:00:57 crc kubenswrapper[4896]: I0126 16:00:57.212185 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-b66574cb6-d2c7c"] Jan 26 16:00:57 crc kubenswrapper[4896]: I0126 16:00:57.215164 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-b66574cb6-d2c7c" Jan 26 16:00:57 crc kubenswrapper[4896]: I0126 16:00:57.227648 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 26 16:00:57 crc kubenswrapper[4896]: I0126 16:00:57.227725 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 26 16:00:57 crc kubenswrapper[4896]: I0126 16:00:57.232479 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-b66574cb6-d2c7c"] Jan 26 16:00:57 crc kubenswrapper[4896]: I0126 16:00:57.273204 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64fcf9448b-45l75" event={"ID":"bc21396d-5abd-42fb-b33c-5099769ea73f","Type":"ContainerStarted","Data":"7a6b1609775c9d916058bff704f5dcf8bbb6a7d1dcf0cfa730d62001255b6deb"} Jan 26 16:00:57 crc kubenswrapper[4896]: I0126 16:00:57.274701 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-64fcf9448b-45l75" Jan 26 16:00:57 crc kubenswrapper[4896]: I0126 16:00:57.274743 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-64fcf9448b-45l75" Jan 26 16:00:57 crc kubenswrapper[4896]: I0126 16:00:57.281650 4896 generic.go:334] "Generic (PLEG): container finished" podID="7992c2f6-c973-4b0e-a0a5-6035c715dc72" containerID="a0d98e3a654eadf33f85f1377f1b38965ab02e59b9066d7459ce725645bde209" exitCode=0 Jan 26 16:00:57 crc kubenswrapper[4896]: I0126 16:00:57.281757 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-57slp" event={"ID":"7992c2f6-c973-4b0e-a0a5-6035c715dc72","Type":"ContainerDied","Data":"a0d98e3a654eadf33f85f1377f1b38965ab02e59b9066d7459ce725645bde209"} Jan 26 16:00:57 crc kubenswrapper[4896]: I0126 16:00:57.281794 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-57slp" 
event={"ID":"7992c2f6-c973-4b0e-a0a5-6035c715dc72","Type":"ContainerStarted","Data":"9f3b8a8f89855b5c7a729e508562463e915cdbd5696bdf369122371b357cfa43"} Jan 26 16:00:57 crc kubenswrapper[4896]: I0126 16:00:57.283368 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-57slp" Jan 26 16:00:57 crc kubenswrapper[4896]: I0126 16:00:57.293648 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d6ef4bfd-df0d-434c-b869-5890c7950600","Type":"ContainerStarted","Data":"3fd77197850c4a430f625fdf606735b91ab22c4f0b2cd06070c622166b4e5d52"} Jan 26 16:00:57 crc kubenswrapper[4896]: I0126 16:00:57.336966 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-64fcf9448b-45l75" podStartSLOduration=5.336941914 podStartE2EDuration="5.336941914s" podCreationTimestamp="2026-01-26 16:00:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:00:57.323507362 +0000 UTC m=+1615.105387755" watchObservedRunningTime="2026-01-26 16:00:57.336941914 +0000 UTC m=+1615.118822307" Jan 26 16:00:57 crc kubenswrapper[4896]: I0126 16:00:57.392466 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqrtx\" (UniqueName: \"kubernetes.io/projected/4207f20a-c3a4-42fe-a6d2-09314620e63e-kube-api-access-pqrtx\") pod \"barbican-api-b66574cb6-d2c7c\" (UID: \"4207f20a-c3a4-42fe-a6d2-09314620e63e\") " pod="openstack/barbican-api-b66574cb6-d2c7c" Jan 26 16:00:57 crc kubenswrapper[4896]: I0126 16:00:57.392873 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4207f20a-c3a4-42fe-a6d2-09314620e63e-combined-ca-bundle\") pod \"barbican-api-b66574cb6-d2c7c\" (UID: 
\"4207f20a-c3a4-42fe-a6d2-09314620e63e\") " pod="openstack/barbican-api-b66574cb6-d2c7c" Jan 26 16:00:57 crc kubenswrapper[4896]: I0126 16:00:57.393090 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4207f20a-c3a4-42fe-a6d2-09314620e63e-config-data-custom\") pod \"barbican-api-b66574cb6-d2c7c\" (UID: \"4207f20a-c3a4-42fe-a6d2-09314620e63e\") " pod="openstack/barbican-api-b66574cb6-d2c7c" Jan 26 16:00:57 crc kubenswrapper[4896]: I0126 16:00:57.393207 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4207f20a-c3a4-42fe-a6d2-09314620e63e-internal-tls-certs\") pod \"barbican-api-b66574cb6-d2c7c\" (UID: \"4207f20a-c3a4-42fe-a6d2-09314620e63e\") " pod="openstack/barbican-api-b66574cb6-d2c7c" Jan 26 16:00:57 crc kubenswrapper[4896]: I0126 16:00:57.393358 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4207f20a-c3a4-42fe-a6d2-09314620e63e-config-data\") pod \"barbican-api-b66574cb6-d2c7c\" (UID: \"4207f20a-c3a4-42fe-a6d2-09314620e63e\") " pod="openstack/barbican-api-b66574cb6-d2c7c" Jan 26 16:00:57 crc kubenswrapper[4896]: I0126 16:00:57.393482 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4207f20a-c3a4-42fe-a6d2-09314620e63e-logs\") pod \"barbican-api-b66574cb6-d2c7c\" (UID: \"4207f20a-c3a4-42fe-a6d2-09314620e63e\") " pod="openstack/barbican-api-b66574cb6-d2c7c" Jan 26 16:00:57 crc kubenswrapper[4896]: I0126 16:00:57.393624 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4207f20a-c3a4-42fe-a6d2-09314620e63e-public-tls-certs\") pod 
\"barbican-api-b66574cb6-d2c7c\" (UID: \"4207f20a-c3a4-42fe-a6d2-09314620e63e\") " pod="openstack/barbican-api-b66574cb6-d2c7c" Jan 26 16:00:57 crc kubenswrapper[4896]: I0126 16:00:57.399787 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-57slp" podStartSLOduration=6.399769366 podStartE2EDuration="6.399769366s" podCreationTimestamp="2026-01-26 16:00:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:00:57.354034606 +0000 UTC m=+1615.135914999" watchObservedRunningTime="2026-01-26 16:00:57.399769366 +0000 UTC m=+1615.181649759" Jan 26 16:00:57 crc kubenswrapper[4896]: I0126 16:00:57.496327 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqrtx\" (UniqueName: \"kubernetes.io/projected/4207f20a-c3a4-42fe-a6d2-09314620e63e-kube-api-access-pqrtx\") pod \"barbican-api-b66574cb6-d2c7c\" (UID: \"4207f20a-c3a4-42fe-a6d2-09314620e63e\") " pod="openstack/barbican-api-b66574cb6-d2c7c" Jan 26 16:00:57 crc kubenswrapper[4896]: I0126 16:00:57.496708 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4207f20a-c3a4-42fe-a6d2-09314620e63e-combined-ca-bundle\") pod \"barbican-api-b66574cb6-d2c7c\" (UID: \"4207f20a-c3a4-42fe-a6d2-09314620e63e\") " pod="openstack/barbican-api-b66574cb6-d2c7c" Jan 26 16:00:57 crc kubenswrapper[4896]: I0126 16:00:57.497028 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4207f20a-c3a4-42fe-a6d2-09314620e63e-config-data-custom\") pod \"barbican-api-b66574cb6-d2c7c\" (UID: \"4207f20a-c3a4-42fe-a6d2-09314620e63e\") " pod="openstack/barbican-api-b66574cb6-d2c7c" Jan 26 16:00:57 crc kubenswrapper[4896]: I0126 16:00:57.497203 4896 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4207f20a-c3a4-42fe-a6d2-09314620e63e-internal-tls-certs\") pod \"barbican-api-b66574cb6-d2c7c\" (UID: \"4207f20a-c3a4-42fe-a6d2-09314620e63e\") " pod="openstack/barbican-api-b66574cb6-d2c7c" Jan 26 16:00:57 crc kubenswrapper[4896]: I0126 16:00:57.498592 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4207f20a-c3a4-42fe-a6d2-09314620e63e-config-data\") pod \"barbican-api-b66574cb6-d2c7c\" (UID: \"4207f20a-c3a4-42fe-a6d2-09314620e63e\") " pod="openstack/barbican-api-b66574cb6-d2c7c" Jan 26 16:00:57 crc kubenswrapper[4896]: I0126 16:00:57.499809 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4207f20a-c3a4-42fe-a6d2-09314620e63e-logs\") pod \"barbican-api-b66574cb6-d2c7c\" (UID: \"4207f20a-c3a4-42fe-a6d2-09314620e63e\") " pod="openstack/barbican-api-b66574cb6-d2c7c" Jan 26 16:00:57 crc kubenswrapper[4896]: I0126 16:00:57.499992 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4207f20a-c3a4-42fe-a6d2-09314620e63e-public-tls-certs\") pod \"barbican-api-b66574cb6-d2c7c\" (UID: \"4207f20a-c3a4-42fe-a6d2-09314620e63e\") " pod="openstack/barbican-api-b66574cb6-d2c7c" Jan 26 16:00:57 crc kubenswrapper[4896]: I0126 16:00:57.501732 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4207f20a-c3a4-42fe-a6d2-09314620e63e-logs\") pod \"barbican-api-b66574cb6-d2c7c\" (UID: \"4207f20a-c3a4-42fe-a6d2-09314620e63e\") " pod="openstack/barbican-api-b66574cb6-d2c7c" Jan 26 16:00:57 crc kubenswrapper[4896]: I0126 16:00:57.514868 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4207f20a-c3a4-42fe-a6d2-09314620e63e-config-data\") pod \"barbican-api-b66574cb6-d2c7c\" (UID: \"4207f20a-c3a4-42fe-a6d2-09314620e63e\") " pod="openstack/barbican-api-b66574cb6-d2c7c" Jan 26 16:00:57 crc kubenswrapper[4896]: I0126 16:00:57.515275 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4207f20a-c3a4-42fe-a6d2-09314620e63e-internal-tls-certs\") pod \"barbican-api-b66574cb6-d2c7c\" (UID: \"4207f20a-c3a4-42fe-a6d2-09314620e63e\") " pod="openstack/barbican-api-b66574cb6-d2c7c" Jan 26 16:00:57 crc kubenswrapper[4896]: I0126 16:00:57.515620 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4207f20a-c3a4-42fe-a6d2-09314620e63e-public-tls-certs\") pod \"barbican-api-b66574cb6-d2c7c\" (UID: \"4207f20a-c3a4-42fe-a6d2-09314620e63e\") " pod="openstack/barbican-api-b66574cb6-d2c7c" Jan 26 16:00:57 crc kubenswrapper[4896]: I0126 16:00:57.516422 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4207f20a-c3a4-42fe-a6d2-09314620e63e-config-data-custom\") pod \"barbican-api-b66574cb6-d2c7c\" (UID: \"4207f20a-c3a4-42fe-a6d2-09314620e63e\") " pod="openstack/barbican-api-b66574cb6-d2c7c" Jan 26 16:00:57 crc kubenswrapper[4896]: I0126 16:00:57.518124 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4207f20a-c3a4-42fe-a6d2-09314620e63e-combined-ca-bundle\") pod \"barbican-api-b66574cb6-d2c7c\" (UID: \"4207f20a-c3a4-42fe-a6d2-09314620e63e\") " pod="openstack/barbican-api-b66574cb6-d2c7c" Jan 26 16:00:57 crc kubenswrapper[4896]: I0126 16:00:57.520481 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqrtx\" (UniqueName: 
\"kubernetes.io/projected/4207f20a-c3a4-42fe-a6d2-09314620e63e-kube-api-access-pqrtx\") pod \"barbican-api-b66574cb6-d2c7c\" (UID: \"4207f20a-c3a4-42fe-a6d2-09314620e63e\") " pod="openstack/barbican-api-b66574cb6-d2c7c" Jan 26 16:00:57 crc kubenswrapper[4896]: I0126 16:00:57.581370 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-b66574cb6-d2c7c" Jan 26 16:00:58 crc kubenswrapper[4896]: I0126 16:00:58.310158 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d6ef4bfd-df0d-434c-b869-5890c7950600","Type":"ContainerStarted","Data":"b41bb5e603893cc3aa9c16e94a3c02abfb8c32fb59fca28fb1f3b05a94ec03b6"} Jan 26 16:00:59 crc kubenswrapper[4896]: I0126 16:00:59.252142 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-77d5697764-wvv6n" Jan 26 16:00:59 crc kubenswrapper[4896]: I0126 16:00:59.627833 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 26 16:00:59 crc kubenswrapper[4896]: I0126 16:00:59.628250 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 26 16:00:59 crc kubenswrapper[4896]: I0126 16:00:59.697466 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 26 16:00:59 crc kubenswrapper[4896]: I0126 16:00:59.742858 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 26 16:00:59 crc kubenswrapper[4896]: I0126 16:00:59.909484 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-b66574cb6-d2c7c"] Jan 26 16:01:00 crc kubenswrapper[4896]: I0126 16:01:00.167118 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29490721-dbd8t"] Jan 26 16:01:00 crc 
kubenswrapper[4896]: I0126 16:01:00.168971 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29490721-dbd8t" Jan 26 16:01:00 crc kubenswrapper[4896]: I0126 16:01:00.185064 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29490721-dbd8t"] Jan 26 16:01:00 crc kubenswrapper[4896]: I0126 16:01:00.292428 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a30fdc4-b069-4bdf-b901-8f382050037b-combined-ca-bundle\") pod \"keystone-cron-29490721-dbd8t\" (UID: \"5a30fdc4-b069-4bdf-b901-8f382050037b\") " pod="openstack/keystone-cron-29490721-dbd8t" Jan 26 16:01:00 crc kubenswrapper[4896]: I0126 16:01:00.292520 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a30fdc4-b069-4bdf-b901-8f382050037b-config-data\") pod \"keystone-cron-29490721-dbd8t\" (UID: \"5a30fdc4-b069-4bdf-b901-8f382050037b\") " pod="openstack/keystone-cron-29490721-dbd8t" Jan 26 16:01:00 crc kubenswrapper[4896]: I0126 16:01:00.292652 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5a30fdc4-b069-4bdf-b901-8f382050037b-fernet-keys\") pod \"keystone-cron-29490721-dbd8t\" (UID: \"5a30fdc4-b069-4bdf-b901-8f382050037b\") " pod="openstack/keystone-cron-29490721-dbd8t" Jan 26 16:01:00 crc kubenswrapper[4896]: I0126 16:01:00.292738 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnmb7\" (UniqueName: \"kubernetes.io/projected/5a30fdc4-b069-4bdf-b901-8f382050037b-kube-api-access-rnmb7\") pod \"keystone-cron-29490721-dbd8t\" (UID: \"5a30fdc4-b069-4bdf-b901-8f382050037b\") " pod="openstack/keystone-cron-29490721-dbd8t" Jan 26 16:01:00 crc 
kubenswrapper[4896]: I0126 16:01:00.361926 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-566d4946fd-fbmrv" event={"ID":"640faf58-91b8-46d1-9956-60383f61abc2","Type":"ContainerStarted","Data":"cb4d3e5e8a132a9be403db3852b93ae2deb81c7d26afa874b726811e70aced4a"} Jan 26 16:01:00 crc kubenswrapper[4896]: I0126 16:01:00.368841 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b66574cb6-d2c7c" event={"ID":"4207f20a-c3a4-42fe-a6d2-09314620e63e","Type":"ContainerStarted","Data":"fe0c0dbc16000e1fb0cc98626aba88975a9ed8c1595c2621fd666f9286a6b8ff"} Jan 26 16:01:00 crc kubenswrapper[4896]: I0126 16:01:00.368887 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b66574cb6-d2c7c" event={"ID":"4207f20a-c3a4-42fe-a6d2-09314620e63e","Type":"ContainerStarted","Data":"736d7e5ef45b62d2104a5c8ae827a83924f2d748afb41c66adcf90de3850842e"} Jan 26 16:01:00 crc kubenswrapper[4896]: I0126 16:01:00.389649 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-bb7d4b5f9-6jmz5" event={"ID":"4ab5e517-a751-433e-9503-db39609aa439","Type":"ContainerStarted","Data":"d9935fb579b1b430cb273b86c849193b88dca4bdd06c51549861e4c4eca49d89"} Jan 26 16:01:00 crc kubenswrapper[4896]: I0126 16:01:00.390223 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 16:01:00 crc kubenswrapper[4896]: I0126 16:01:00.393184 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 16:01:00 crc kubenswrapper[4896]: I0126 16:01:00.440485 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5a30fdc4-b069-4bdf-b901-8f382050037b-fernet-keys\") pod \"keystone-cron-29490721-dbd8t\" (UID: \"5a30fdc4-b069-4bdf-b901-8f382050037b\") " pod="openstack/keystone-cron-29490721-dbd8t" Jan 
26 16:01:00 crc kubenswrapper[4896]: I0126 16:01:00.440925 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnmb7\" (UniqueName: \"kubernetes.io/projected/5a30fdc4-b069-4bdf-b901-8f382050037b-kube-api-access-rnmb7\") pod \"keystone-cron-29490721-dbd8t\" (UID: \"5a30fdc4-b069-4bdf-b901-8f382050037b\") " pod="openstack/keystone-cron-29490721-dbd8t" Jan 26 16:01:00 crc kubenswrapper[4896]: I0126 16:01:00.441094 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a30fdc4-b069-4bdf-b901-8f382050037b-combined-ca-bundle\") pod \"keystone-cron-29490721-dbd8t\" (UID: \"5a30fdc4-b069-4bdf-b901-8f382050037b\") " pod="openstack/keystone-cron-29490721-dbd8t" Jan 26 16:01:00 crc kubenswrapper[4896]: I0126 16:01:00.441215 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a30fdc4-b069-4bdf-b901-8f382050037b-config-data\") pod \"keystone-cron-29490721-dbd8t\" (UID: \"5a30fdc4-b069-4bdf-b901-8f382050037b\") " pod="openstack/keystone-cron-29490721-dbd8t" Jan 26 16:01:00 crc kubenswrapper[4896]: I0126 16:01:00.448265 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.448238012 podStartE2EDuration="6.448238012s" podCreationTimestamp="2026-01-26 16:00:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:01:00.407503997 +0000 UTC m=+1618.189384400" watchObservedRunningTime="2026-01-26 16:01:00.448238012 +0000 UTC m=+1618.230118405" Jan 26 16:01:00 crc kubenswrapper[4896]: I0126 16:01:00.452510 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5a30fdc4-b069-4bdf-b901-8f382050037b-fernet-keys\") pod 
\"keystone-cron-29490721-dbd8t\" (UID: \"5a30fdc4-b069-4bdf-b901-8f382050037b\") " pod="openstack/keystone-cron-29490721-dbd8t" Jan 26 16:01:00 crc kubenswrapper[4896]: I0126 16:01:00.453318 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a30fdc4-b069-4bdf-b901-8f382050037b-combined-ca-bundle\") pod \"keystone-cron-29490721-dbd8t\" (UID: \"5a30fdc4-b069-4bdf-b901-8f382050037b\") " pod="openstack/keystone-cron-29490721-dbd8t" Jan 26 16:01:00 crc kubenswrapper[4896]: I0126 16:01:00.454311 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a30fdc4-b069-4bdf-b901-8f382050037b-config-data\") pod \"keystone-cron-29490721-dbd8t\" (UID: \"5a30fdc4-b069-4bdf-b901-8f382050037b\") " pod="openstack/keystone-cron-29490721-dbd8t" Jan 26 16:01:00 crc kubenswrapper[4896]: I0126 16:01:00.471345 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnmb7\" (UniqueName: \"kubernetes.io/projected/5a30fdc4-b069-4bdf-b901-8f382050037b-kube-api-access-rnmb7\") pod \"keystone-cron-29490721-dbd8t\" (UID: \"5a30fdc4-b069-4bdf-b901-8f382050037b\") " pod="openstack/keystone-cron-29490721-dbd8t" Jan 26 16:01:00 crc kubenswrapper[4896]: I0126 16:01:00.520200 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29490721-dbd8t" Jan 26 16:01:01 crc kubenswrapper[4896]: W0126 16:01:01.190937 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a30fdc4_b069_4bdf_b901_8f382050037b.slice/crio-a1b6d9f6f4b7408ed953bbfb0393f9c3a2f2f0339995a9e5f201c891a2c865d9 WatchSource:0}: Error finding container a1b6d9f6f4b7408ed953bbfb0393f9c3a2f2f0339995a9e5f201c891a2c865d9: Status 404 returned error can't find the container with id a1b6d9f6f4b7408ed953bbfb0393f9c3a2f2f0339995a9e5f201c891a2c865d9 Jan 26 16:01:01 crc kubenswrapper[4896]: I0126 16:01:01.196271 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29490721-dbd8t"] Jan 26 16:01:01 crc kubenswrapper[4896]: I0126 16:01:01.358900 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-77d5697764-wvv6n" Jan 26 16:01:01 crc kubenswrapper[4896]: I0126 16:01:01.464612 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b66574cb6-d2c7c" event={"ID":"4207f20a-c3a4-42fe-a6d2-09314620e63e","Type":"ContainerStarted","Data":"40b9715d3b715d556670c5aa216c83062e0efe3f61eb41e6db6e34ad3a1abf6c"} Jan 26 16:01:01 crc kubenswrapper[4896]: I0126 16:01:01.466101 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-b66574cb6-d2c7c" Jan 26 16:01:01 crc kubenswrapper[4896]: I0126 16:01:01.466147 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-b66574cb6-d2c7c" Jan 26 16:01:01 crc kubenswrapper[4896]: I0126 16:01:01.476534 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490721-dbd8t" event={"ID":"5a30fdc4-b069-4bdf-b901-8f382050037b","Type":"ContainerStarted","Data":"a1b6d9f6f4b7408ed953bbfb0393f9c3a2f2f0339995a9e5f201c891a2c865d9"} Jan 26 16:01:01 crc kubenswrapper[4896]: I0126 
16:01:01.510462 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-b66574cb6-d2c7c" podStartSLOduration=4.510445166 podStartE2EDuration="4.510445166s" podCreationTimestamp="2026-01-26 16:00:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:01:01.509660756 +0000 UTC m=+1619.291541149" watchObservedRunningTime="2026-01-26 16:01:01.510445166 +0000 UTC m=+1619.292325559" Jan 26 16:01:01 crc kubenswrapper[4896]: I0126 16:01:01.513490 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d6ef4bfd-df0d-434c-b869-5890c7950600","Type":"ContainerStarted","Data":"54f84ac432b72822fa3bca3258b874d0b0748c5abd656e9d98520dff062c7ac3"} Jan 26 16:01:01 crc kubenswrapper[4896]: I0126 16:01:01.561061 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-bb7d4b5f9-6jmz5" event={"ID":"4ab5e517-a751-433e-9503-db39609aa439","Type":"ContainerStarted","Data":"23e7c499997f1844878dbd22c0fd260e36e8a44e4a342391ff470a9cd6f654c1"} Jan 26 16:01:01 crc kubenswrapper[4896]: I0126 16:01:01.584618 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-566d4946fd-fbmrv" event={"ID":"640faf58-91b8-46d1-9956-60383f61abc2","Type":"ContainerStarted","Data":"20baaf55a8a34a4ad8cde278d29319826a909d60528786c9a32abbfa14d31ac8"} Jan 26 16:01:01 crc kubenswrapper[4896]: I0126 16:01:01.601882 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-bb7d4b5f9-6jmz5" podStartSLOduration=4.799018286 podStartE2EDuration="10.601861989s" podCreationTimestamp="2026-01-26 16:00:51 +0000 UTC" firstStartedPulling="2026-01-26 16:00:53.520429411 +0000 UTC m=+1611.302309804" lastFinishedPulling="2026-01-26 16:00:59.323273114 +0000 UTC m=+1617.105153507" observedRunningTime="2026-01-26 
16:01:01.590407948 +0000 UTC m=+1619.372288351" watchObservedRunningTime="2026-01-26 16:01:01.601861989 +0000 UTC m=+1619.383742382" Jan 26 16:01:01 crc kubenswrapper[4896]: I0126 16:01:01.638317 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-566d4946fd-fbmrv" podStartSLOduration=5.835164815 podStartE2EDuration="10.638283975s" podCreationTimestamp="2026-01-26 16:00:51 +0000 UTC" firstStartedPulling="2026-01-26 16:00:54.530045878 +0000 UTC m=+1612.311926271" lastFinishedPulling="2026-01-26 16:00:59.333165028 +0000 UTC m=+1617.115045431" observedRunningTime="2026-01-26 16:01:01.618949184 +0000 UTC m=+1619.400829577" watchObservedRunningTime="2026-01-26 16:01:01.638283975 +0000 UTC m=+1619.420164368" Jan 26 16:01:02 crc kubenswrapper[4896]: I0126 16:01:02.603368 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490721-dbd8t" event={"ID":"5a30fdc4-b069-4bdf-b901-8f382050037b","Type":"ContainerStarted","Data":"b411af1c48efa9c037635b1761f49612180736b6ca83293a28e3a876166b91d7"} Jan 26 16:01:02 crc kubenswrapper[4896]: I0126 16:01:02.659763 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85ff748b95-57slp" Jan 26 16:01:02 crc kubenswrapper[4896]: I0126 16:01:02.714182 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29490721-dbd8t" podStartSLOduration=2.714153217 podStartE2EDuration="2.714153217s" podCreationTimestamp="2026-01-26 16:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:01:02.657934868 +0000 UTC m=+1620.439815271" watchObservedRunningTime="2026-01-26 16:01:02.714153217 +0000 UTC m=+1620.496033610" Jan 26 16:01:02 crc kubenswrapper[4896]: I0126 16:01:02.914561 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-55f844cf75-jb4fq"] Jan 26 16:01:02 crc kubenswrapper[4896]: I0126 16:01:02.915046 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-jb4fq" podUID="89998198-666d-4e9c-9213-8dd8fdcdd0d9" containerName="dnsmasq-dns" containerID="cri-o://04db21ccbca74daecbec9d15e26110f8d4a3b9f7d53b5587c897d1a92e5626d8" gracePeriod=10 Jan 26 16:01:03 crc kubenswrapper[4896]: I0126 16:01:03.122295 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wxhjt" Jan 26 16:01:03 crc kubenswrapper[4896]: I0126 16:01:03.185179 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wxhjt" Jan 26 16:01:03 crc kubenswrapper[4896]: I0126 16:01:03.625850 4896 generic.go:334] "Generic (PLEG): container finished" podID="89998198-666d-4e9c-9213-8dd8fdcdd0d9" containerID="04db21ccbca74daecbec9d15e26110f8d4a3b9f7d53b5587c897d1a92e5626d8" exitCode=0 Jan 26 16:01:03 crc kubenswrapper[4896]: I0126 16:01:03.625910 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-jb4fq" event={"ID":"89998198-666d-4e9c-9213-8dd8fdcdd0d9","Type":"ContainerDied","Data":"04db21ccbca74daecbec9d15e26110f8d4a3b9f7d53b5587c897d1a92e5626d8"} Jan 26 16:01:03 crc kubenswrapper[4896]: I0126 16:01:03.762736 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wxhjt"] Jan 26 16:01:04 crc kubenswrapper[4896]: I0126 16:01:04.317015 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7748db89d4-5m4rm" Jan 26 16:01:04 crc kubenswrapper[4896]: I0126 16:01:04.344436 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55f844cf75-jb4fq" podUID="89998198-666d-4e9c-9213-8dd8fdcdd0d9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 
10.217.0.197:5353: connect: connection refused" Jan 26 16:01:04 crc kubenswrapper[4896]: I0126 16:01:04.658456 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5f5b95478f-8qxzd"] Jan 26 16:01:04 crc kubenswrapper[4896]: I0126 16:01:04.660347 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5f5b95478f-8qxzd" podUID="bdbdc1e0-1624-4300-91bb-1bfe556567c6" containerName="neutron-api" containerID="cri-o://f05b41cc8dd83a60275733bd2dc2acd106d94635dfff7728a8604bf74efa0d4c" gracePeriod=30 Jan 26 16:01:04 crc kubenswrapper[4896]: I0126 16:01:04.660719 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5f5b95478f-8qxzd" podUID="bdbdc1e0-1624-4300-91bb-1bfe556567c6" containerName="neutron-httpd" containerID="cri-o://4e264dbcad01037f9351410055d4d3e148e3a2943c48af41afb8d55c4686ac85" gracePeriod=30 Jan 26 16:01:04 crc kubenswrapper[4896]: I0126 16:01:04.662645 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wxhjt" podUID="c356aa29-1407-47fb-80f0-a7f1b4a58919" containerName="registry-server" containerID="cri-o://6e618c9af4f492d6dbf1fbd5aa14ddc702a7c025a28f0d5dae6079e8478b3353" gracePeriod=2 Jan 26 16:01:04 crc kubenswrapper[4896]: I0126 16:01:04.709743 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-5f5b95478f-8qxzd" podUID="bdbdc1e0-1624-4300-91bb-1bfe556567c6" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.198:9696/\": EOF" Jan 26 16:01:04 crc kubenswrapper[4896]: I0126 16:01:04.720968 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-54d4db4449-vlmh7"] Jan 26 16:01:04 crc kubenswrapper[4896]: I0126 16:01:04.733239 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-54d4db4449-vlmh7" Jan 26 16:01:04 crc kubenswrapper[4896]: I0126 16:01:04.811664 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-54d4db4449-vlmh7"] Jan 26 16:01:04 crc kubenswrapper[4896]: I0126 16:01:04.837850 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blvq8\" (UniqueName: \"kubernetes.io/projected/cf40a9b0-1e7e-43c9-afa9-571170cc8285-kube-api-access-blvq8\") pod \"neutron-54d4db4449-vlmh7\" (UID: \"cf40a9b0-1e7e-43c9-afa9-571170cc8285\") " pod="openstack/neutron-54d4db4449-vlmh7" Jan 26 16:01:04 crc kubenswrapper[4896]: I0126 16:01:04.837944 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cf40a9b0-1e7e-43c9-afa9-571170cc8285-config\") pod \"neutron-54d4db4449-vlmh7\" (UID: \"cf40a9b0-1e7e-43c9-afa9-571170cc8285\") " pod="openstack/neutron-54d4db4449-vlmh7" Jan 26 16:01:04 crc kubenswrapper[4896]: I0126 16:01:04.838025 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf40a9b0-1e7e-43c9-afa9-571170cc8285-public-tls-certs\") pod \"neutron-54d4db4449-vlmh7\" (UID: \"cf40a9b0-1e7e-43c9-afa9-571170cc8285\") " pod="openstack/neutron-54d4db4449-vlmh7" Jan 26 16:01:04 crc kubenswrapper[4896]: I0126 16:01:04.838329 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf40a9b0-1e7e-43c9-afa9-571170cc8285-internal-tls-certs\") pod \"neutron-54d4db4449-vlmh7\" (UID: \"cf40a9b0-1e7e-43c9-afa9-571170cc8285\") " pod="openstack/neutron-54d4db4449-vlmh7" Jan 26 16:01:04 crc kubenswrapper[4896]: I0126 16:01:04.838444 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf40a9b0-1e7e-43c9-afa9-571170cc8285-ovndb-tls-certs\") pod \"neutron-54d4db4449-vlmh7\" (UID: \"cf40a9b0-1e7e-43c9-afa9-571170cc8285\") " pod="openstack/neutron-54d4db4449-vlmh7" Jan 26 16:01:04 crc kubenswrapper[4896]: I0126 16:01:04.854998 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cf40a9b0-1e7e-43c9-afa9-571170cc8285-httpd-config\") pod \"neutron-54d4db4449-vlmh7\" (UID: \"cf40a9b0-1e7e-43c9-afa9-571170cc8285\") " pod="openstack/neutron-54d4db4449-vlmh7" Jan 26 16:01:04 crc kubenswrapper[4896]: I0126 16:01:04.855102 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf40a9b0-1e7e-43c9-afa9-571170cc8285-combined-ca-bundle\") pod \"neutron-54d4db4449-vlmh7\" (UID: \"cf40a9b0-1e7e-43c9-afa9-571170cc8285\") " pod="openstack/neutron-54d4db4449-vlmh7" Jan 26 16:01:04 crc kubenswrapper[4896]: I0126 16:01:04.958067 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cf40a9b0-1e7e-43c9-afa9-571170cc8285-config\") pod \"neutron-54d4db4449-vlmh7\" (UID: \"cf40a9b0-1e7e-43c9-afa9-571170cc8285\") " pod="openstack/neutron-54d4db4449-vlmh7" Jan 26 16:01:04 crc kubenswrapper[4896]: I0126 16:01:04.958155 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf40a9b0-1e7e-43c9-afa9-571170cc8285-public-tls-certs\") pod \"neutron-54d4db4449-vlmh7\" (UID: \"cf40a9b0-1e7e-43c9-afa9-571170cc8285\") " pod="openstack/neutron-54d4db4449-vlmh7" Jan 26 16:01:04 crc kubenswrapper[4896]: I0126 16:01:04.958309 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/cf40a9b0-1e7e-43c9-afa9-571170cc8285-internal-tls-certs\") pod \"neutron-54d4db4449-vlmh7\" (UID: \"cf40a9b0-1e7e-43c9-afa9-571170cc8285\") " pod="openstack/neutron-54d4db4449-vlmh7" Jan 26 16:01:04 crc kubenswrapper[4896]: I0126 16:01:04.958368 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf40a9b0-1e7e-43c9-afa9-571170cc8285-ovndb-tls-certs\") pod \"neutron-54d4db4449-vlmh7\" (UID: \"cf40a9b0-1e7e-43c9-afa9-571170cc8285\") " pod="openstack/neutron-54d4db4449-vlmh7" Jan 26 16:01:04 crc kubenswrapper[4896]: I0126 16:01:04.958499 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cf40a9b0-1e7e-43c9-afa9-571170cc8285-httpd-config\") pod \"neutron-54d4db4449-vlmh7\" (UID: \"cf40a9b0-1e7e-43c9-afa9-571170cc8285\") " pod="openstack/neutron-54d4db4449-vlmh7" Jan 26 16:01:04 crc kubenswrapper[4896]: I0126 16:01:04.958532 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf40a9b0-1e7e-43c9-afa9-571170cc8285-combined-ca-bundle\") pod \"neutron-54d4db4449-vlmh7\" (UID: \"cf40a9b0-1e7e-43c9-afa9-571170cc8285\") " pod="openstack/neutron-54d4db4449-vlmh7" Jan 26 16:01:04 crc kubenswrapper[4896]: I0126 16:01:04.958606 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blvq8\" (UniqueName: \"kubernetes.io/projected/cf40a9b0-1e7e-43c9-afa9-571170cc8285-kube-api-access-blvq8\") pod \"neutron-54d4db4449-vlmh7\" (UID: \"cf40a9b0-1e7e-43c9-afa9-571170cc8285\") " pod="openstack/neutron-54d4db4449-vlmh7" Jan 26 16:01:04 crc kubenswrapper[4896]: I0126 16:01:04.968018 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf40a9b0-1e7e-43c9-afa9-571170cc8285-internal-tls-certs\") pod 
\"neutron-54d4db4449-vlmh7\" (UID: \"cf40a9b0-1e7e-43c9-afa9-571170cc8285\") " pod="openstack/neutron-54d4db4449-vlmh7" Jan 26 16:01:04 crc kubenswrapper[4896]: I0126 16:01:04.968018 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf40a9b0-1e7e-43c9-afa9-571170cc8285-public-tls-certs\") pod \"neutron-54d4db4449-vlmh7\" (UID: \"cf40a9b0-1e7e-43c9-afa9-571170cc8285\") " pod="openstack/neutron-54d4db4449-vlmh7" Jan 26 16:01:04 crc kubenswrapper[4896]: I0126 16:01:04.979931 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/cf40a9b0-1e7e-43c9-afa9-571170cc8285-config\") pod \"neutron-54d4db4449-vlmh7\" (UID: \"cf40a9b0-1e7e-43c9-afa9-571170cc8285\") " pod="openstack/neutron-54d4db4449-vlmh7" Jan 26 16:01:04 crc kubenswrapper[4896]: I0126 16:01:04.980126 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf40a9b0-1e7e-43c9-afa9-571170cc8285-ovndb-tls-certs\") pod \"neutron-54d4db4449-vlmh7\" (UID: \"cf40a9b0-1e7e-43c9-afa9-571170cc8285\") " pod="openstack/neutron-54d4db4449-vlmh7" Jan 26 16:01:04 crc kubenswrapper[4896]: I0126 16:01:04.984019 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blvq8\" (UniqueName: \"kubernetes.io/projected/cf40a9b0-1e7e-43c9-afa9-571170cc8285-kube-api-access-blvq8\") pod \"neutron-54d4db4449-vlmh7\" (UID: \"cf40a9b0-1e7e-43c9-afa9-571170cc8285\") " pod="openstack/neutron-54d4db4449-vlmh7" Jan 26 16:01:04 crc kubenswrapper[4896]: I0126 16:01:04.984367 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cf40a9b0-1e7e-43c9-afa9-571170cc8285-httpd-config\") pod \"neutron-54d4db4449-vlmh7\" (UID: \"cf40a9b0-1e7e-43c9-afa9-571170cc8285\") " pod="openstack/neutron-54d4db4449-vlmh7" Jan 26 16:01:05 crc 
kubenswrapper[4896]: I0126 16:01:05.023326 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf40a9b0-1e7e-43c9-afa9-571170cc8285-combined-ca-bundle\") pod \"neutron-54d4db4449-vlmh7\" (UID: \"cf40a9b0-1e7e-43c9-afa9-571170cc8285\") " pod="openstack/neutron-54d4db4449-vlmh7" Jan 26 16:01:05 crc kubenswrapper[4896]: I0126 16:01:05.111378 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-54d4db4449-vlmh7" Jan 26 16:01:05 crc kubenswrapper[4896]: I0126 16:01:05.112398 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 26 16:01:05 crc kubenswrapper[4896]: I0126 16:01:05.112820 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 26 16:01:05 crc kubenswrapper[4896]: I0126 16:01:05.183899 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 16:01:05 crc kubenswrapper[4896]: I0126 16:01:05.246875 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 16:01:05 crc kubenswrapper[4896]: I0126 16:01:05.676100 4896 generic.go:334] "Generic (PLEG): container finished" podID="c356aa29-1407-47fb-80f0-a7f1b4a58919" containerID="6e618c9af4f492d6dbf1fbd5aa14ddc702a7c025a28f0d5dae6079e8478b3353" exitCode=0 Jan 26 16:01:05 crc kubenswrapper[4896]: I0126 16:01:05.676188 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wxhjt" event={"ID":"c356aa29-1407-47fb-80f0-a7f1b4a58919","Type":"ContainerDied","Data":"6e618c9af4f492d6dbf1fbd5aa14ddc702a7c025a28f0d5dae6079e8478b3353"} Jan 26 16:01:05 crc kubenswrapper[4896]: I0126 16:01:05.676435 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/glance-default-external-api-0" Jan 26 16:01:05 crc kubenswrapper[4896]: I0126 16:01:05.676487 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 16:01:05 crc kubenswrapper[4896]: I0126 16:01:05.958054 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nwtqn" Jan 26 16:01:05 crc kubenswrapper[4896]: I0126 16:01:05.958101 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nwtqn" Jan 26 16:01:06 crc kubenswrapper[4896]: I0126 16:01:06.027945 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nwtqn" Jan 26 16:01:06 crc kubenswrapper[4896]: I0126 16:01:06.697575 4896 generic.go:334] "Generic (PLEG): container finished" podID="bdbdc1e0-1624-4300-91bb-1bfe556567c6" containerID="4e264dbcad01037f9351410055d4d3e148e3a2943c48af41afb8d55c4686ac85" exitCode=0 Jan 26 16:01:06 crc kubenswrapper[4896]: I0126 16:01:06.697648 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f5b95478f-8qxzd" event={"ID":"bdbdc1e0-1624-4300-91bb-1bfe556567c6","Type":"ContainerDied","Data":"4e264dbcad01037f9351410055d4d3e148e3a2943c48af41afb8d55c4686ac85"} Jan 26 16:01:06 crc kubenswrapper[4896]: I0126 16:01:06.777819 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nwtqn" Jan 26 16:01:06 crc kubenswrapper[4896]: I0126 16:01:06.895836 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-64fcf9448b-45l75" podUID="bc21396d-5abd-42fb-b33c-5099769ea73f" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.206:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 16:01:06 crc kubenswrapper[4896]: I0126 16:01:06.895846 
4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-64fcf9448b-45l75" podUID="bc21396d-5abd-42fb-b33c-5099769ea73f" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.206:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 16:01:06 crc kubenswrapper[4896]: I0126 16:01:06.946083 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-64fcf9448b-45l75" Jan 26 16:01:06 crc kubenswrapper[4896]: I0126 16:01:06.967223 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-64fcf9448b-45l75" Jan 26 16:01:07 crc kubenswrapper[4896]: I0126 16:01:07.168733 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nwtqn"] Jan 26 16:01:07 crc kubenswrapper[4896]: I0126 16:01:07.712878 4896 generic.go:334] "Generic (PLEG): container finished" podID="5a30fdc4-b069-4bdf-b901-8f382050037b" containerID="b411af1c48efa9c037635b1761f49612180736b6ca83293a28e3a876166b91d7" exitCode=0 Jan 26 16:01:07 crc kubenswrapper[4896]: I0126 16:01:07.713404 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490721-dbd8t" event={"ID":"5a30fdc4-b069-4bdf-b901-8f382050037b","Type":"ContainerDied","Data":"b411af1c48efa9c037635b1761f49612180736b6ca83293a28e3a876166b91d7"} Jan 26 16:01:07 crc kubenswrapper[4896]: I0126 16:01:07.742500 4896 generic.go:334] "Generic (PLEG): container finished" podID="bdbdc1e0-1624-4300-91bb-1bfe556567c6" containerID="f05b41cc8dd83a60275733bd2dc2acd106d94635dfff7728a8604bf74efa0d4c" exitCode=0 Jan 26 16:01:07 crc kubenswrapper[4896]: I0126 16:01:07.743307 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f5b95478f-8qxzd" 
event={"ID":"bdbdc1e0-1624-4300-91bb-1bfe556567c6","Type":"ContainerDied","Data":"f05b41cc8dd83a60275733bd2dc2acd106d94635dfff7728a8604bf74efa0d4c"} Jan 26 16:01:07 crc kubenswrapper[4896]: I0126 16:01:07.757375 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 26 16:01:07 crc kubenswrapper[4896]: I0126 16:01:07.757590 4896 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 16:01:07 crc kubenswrapper[4896]: I0126 16:01:07.762032 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 26 16:01:08 crc kubenswrapper[4896]: I0126 16:01:08.367452 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-5f5b95478f-8qxzd" podUID="bdbdc1e0-1624-4300-91bb-1bfe556567c6" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.198:9696/\": dial tcp 10.217.0.198:9696: connect: connection refused" Jan 26 16:01:08 crc kubenswrapper[4896]: I0126 16:01:08.773860 4896 generic.go:334] "Generic (PLEG): container finished" podID="c8f4e140-bab4-479c-a97b-4a5aa49a47d3" containerID="2062042a5e34295a16e9261ee5602c003cb198f6c68330944d0b5b1e061b11f9" exitCode=0 Jan 26 16:01:08 crc kubenswrapper[4896]: I0126 16:01:08.791962 4896 generic.go:334] "Generic (PLEG): container finished" podID="590e8b81-a793-4143-9b0e-f2afb348dd91" containerID="aeb3ca13e42994f58c03408cbfac03b951d5ff3efa906e5ea149f45402f5efd8" exitCode=0 Jan 26 16:01:08 crc kubenswrapper[4896]: I0126 16:01:08.792191 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nwtqn" podUID="d4450b82-6d66-4109-8fec-6b979256d032" containerName="registry-server" containerID="cri-o://d9e9f7b4028cec78a1317071a48cccfd0903a09d7df043564c860272c3ca34e4" gracePeriod=2 Jan 26 16:01:08 crc kubenswrapper[4896]: I0126 16:01:08.834443 4896 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/heat-db-sync-7pf4j" event={"ID":"c8f4e140-bab4-479c-a97b-4a5aa49a47d3","Type":"ContainerDied","Data":"2062042a5e34295a16e9261ee5602c003cb198f6c68330944d0b5b1e061b11f9"} Jan 26 16:01:08 crc kubenswrapper[4896]: I0126 16:01:08.834492 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-l5784" event={"ID":"590e8b81-a793-4143-9b0e-f2afb348dd91","Type":"ContainerDied","Data":"aeb3ca13e42994f58c03408cbfac03b951d5ff3efa906e5ea149f45402f5efd8"} Jan 26 16:01:09 crc kubenswrapper[4896]: I0126 16:01:09.095929 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 26 16:01:09 crc kubenswrapper[4896]: I0126 16:01:09.096354 4896 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 16:01:09 crc kubenswrapper[4896]: I0126 16:01:09.113189 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 26 16:01:09 crc kubenswrapper[4896]: I0126 16:01:09.343265 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55f844cf75-jb4fq" podUID="89998198-666d-4e9c-9213-8dd8fdcdd0d9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.197:5353: connect: connection refused" Jan 26 16:01:09 crc kubenswrapper[4896]: I0126 16:01:09.817451 4896 generic.go:334] "Generic (PLEG): container finished" podID="d4450b82-6d66-4109-8fec-6b979256d032" containerID="d9e9f7b4028cec78a1317071a48cccfd0903a09d7df043564c860272c3ca34e4" exitCode=0 Jan 26 16:01:09 crc kubenswrapper[4896]: I0126 16:01:09.817549 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nwtqn" event={"ID":"d4450b82-6d66-4109-8fec-6b979256d032","Type":"ContainerDied","Data":"d9e9f7b4028cec78a1317071a48cccfd0903a09d7df043564c860272c3ca34e4"} Jan 26 16:01:10 crc kubenswrapper[4896]: I0126 16:01:10.805075 4896 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-b66574cb6-d2c7c" Jan 26 16:01:11 crc kubenswrapper[4896]: I0126 16:01:11.061167 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-b66574cb6-d2c7c" Jan 26 16:01:11 crc kubenswrapper[4896]: I0126 16:01:11.124187 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-64fcf9448b-45l75"] Jan 26 16:01:11 crc kubenswrapper[4896]: I0126 16:01:11.124438 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-64fcf9448b-45l75" podUID="bc21396d-5abd-42fb-b33c-5099769ea73f" containerName="barbican-api-log" containerID="cri-o://3f55a80ef515b8848dd1812c1d377cc9066c9e86c7581abef97d343be1a42c4f" gracePeriod=30 Jan 26 16:01:11 crc kubenswrapper[4896]: I0126 16:01:11.125028 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-64fcf9448b-45l75" podUID="bc21396d-5abd-42fb-b33c-5099769ea73f" containerName="barbican-api" containerID="cri-o://7a6b1609775c9d916058bff704f5dcf8bbb6a7d1dcf0cfa730d62001255b6deb" gracePeriod=30 Jan 26 16:01:11 crc kubenswrapper[4896]: I0126 16:01:11.870621 4896 generic.go:334] "Generic (PLEG): container finished" podID="bc21396d-5abd-42fb-b33c-5099769ea73f" containerID="3f55a80ef515b8848dd1812c1d377cc9066c9e86c7581abef97d343be1a42c4f" exitCode=143 Jan 26 16:01:11 crc kubenswrapper[4896]: I0126 16:01:11.870705 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64fcf9448b-45l75" event={"ID":"bc21396d-5abd-42fb-b33c-5099769ea73f","Type":"ContainerDied","Data":"3f55a80ef515b8848dd1812c1d377cc9066c9e86c7581abef97d343be1a42c4f"} Jan 26 16:01:13 crc kubenswrapper[4896]: E0126 16:01:13.031926 4896 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
6e618c9af4f492d6dbf1fbd5aa14ddc702a7c025a28f0d5dae6079e8478b3353 is running failed: container process not found" containerID="6e618c9af4f492d6dbf1fbd5aa14ddc702a7c025a28f0d5dae6079e8478b3353" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 16:01:13 crc kubenswrapper[4896]: E0126 16:01:13.032414 4896 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6e618c9af4f492d6dbf1fbd5aa14ddc702a7c025a28f0d5dae6079e8478b3353 is running failed: container process not found" containerID="6e618c9af4f492d6dbf1fbd5aa14ddc702a7c025a28f0d5dae6079e8478b3353" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 16:01:13 crc kubenswrapper[4896]: E0126 16:01:13.032883 4896 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6e618c9af4f492d6dbf1fbd5aa14ddc702a7c025a28f0d5dae6079e8478b3353 is running failed: container process not found" containerID="6e618c9af4f492d6dbf1fbd5aa14ddc702a7c025a28f0d5dae6079e8478b3353" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 16:01:13 crc kubenswrapper[4896]: E0126 16:01:13.032914 4896 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6e618c9af4f492d6dbf1fbd5aa14ddc702a7c025a28f0d5dae6079e8478b3353 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-wxhjt" podUID="c356aa29-1407-47fb-80f0-a7f1b4a58919" containerName="registry-server" Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.319770 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-64fcf9448b-45l75" podUID="bc21396d-5abd-42fb-b33c-5099769ea73f" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.206:9311/healthcheck\": read tcp 10.217.0.2:38730->10.217.0.206:9311: read: connection reset by peer" Jan 26 
16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.320614 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-64fcf9448b-45l75" podUID="bc21396d-5abd-42fb-b33c-5099769ea73f" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.206:9311/healthcheck\": read tcp 10.217.0.2:38728->10.217.0.206:9311: read: connection reset by peer" Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.343333 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55f844cf75-jb4fq" podUID="89998198-666d-4e9c-9213-8dd8fdcdd0d9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.197:5353: connect: connection refused" Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.346852 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-jb4fq" Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.463758 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-7pf4j" Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.474939 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29490721-dbd8t" Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.497546 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-l5784" Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.567780 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/590e8b81-a793-4143-9b0e-f2afb348dd91-db-sync-config-data\") pod \"590e8b81-a793-4143-9b0e-f2afb348dd91\" (UID: \"590e8b81-a793-4143-9b0e-f2afb348dd91\") " Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.567935 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5a30fdc4-b069-4bdf-b901-8f382050037b-fernet-keys\") pod \"5a30fdc4-b069-4bdf-b901-8f382050037b\" (UID: \"5a30fdc4-b069-4bdf-b901-8f382050037b\") " Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.567980 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/590e8b81-a793-4143-9b0e-f2afb348dd91-etc-machine-id\") pod \"590e8b81-a793-4143-9b0e-f2afb348dd91\" (UID: \"590e8b81-a793-4143-9b0e-f2afb348dd91\") " Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.568105 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8f4e140-bab4-479c-a97b-4a5aa49a47d3-combined-ca-bundle\") pod \"c8f4e140-bab4-479c-a97b-4a5aa49a47d3\" (UID: \"c8f4e140-bab4-479c-a97b-4a5aa49a47d3\") " Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.568207 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a30fdc4-b069-4bdf-b901-8f382050037b-config-data\") pod \"5a30fdc4-b069-4bdf-b901-8f382050037b\" (UID: \"5a30fdc4-b069-4bdf-b901-8f382050037b\") " Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.568288 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/590e8b81-a793-4143-9b0e-f2afb348dd91-config-data\") pod \"590e8b81-a793-4143-9b0e-f2afb348dd91\" (UID: \"590e8b81-a793-4143-9b0e-f2afb348dd91\") " Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.568404 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a30fdc4-b069-4bdf-b901-8f382050037b-combined-ca-bundle\") pod \"5a30fdc4-b069-4bdf-b901-8f382050037b\" (UID: \"5a30fdc4-b069-4bdf-b901-8f382050037b\") " Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.568469 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8f4e140-bab4-479c-a97b-4a5aa49a47d3-config-data\") pod \"c8f4e140-bab4-479c-a97b-4a5aa49a47d3\" (UID: \"c8f4e140-bab4-479c-a97b-4a5aa49a47d3\") " Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.568501 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bbl5\" (UniqueName: \"kubernetes.io/projected/590e8b81-a793-4143-9b0e-f2afb348dd91-kube-api-access-9bbl5\") pod \"590e8b81-a793-4143-9b0e-f2afb348dd91\" (UID: \"590e8b81-a793-4143-9b0e-f2afb348dd91\") " Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.568530 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dr6x8\" (UniqueName: \"kubernetes.io/projected/c8f4e140-bab4-479c-a97b-4a5aa49a47d3-kube-api-access-dr6x8\") pod \"c8f4e140-bab4-479c-a97b-4a5aa49a47d3\" (UID: \"c8f4e140-bab4-479c-a97b-4a5aa49a47d3\") " Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.568564 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnmb7\" (UniqueName: \"kubernetes.io/projected/5a30fdc4-b069-4bdf-b901-8f382050037b-kube-api-access-rnmb7\") pod \"5a30fdc4-b069-4bdf-b901-8f382050037b\" (UID: \"5a30fdc4-b069-4bdf-b901-8f382050037b\") " Jan 26 
16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.568647 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/590e8b81-a793-4143-9b0e-f2afb348dd91-scripts\") pod \"590e8b81-a793-4143-9b0e-f2afb348dd91\" (UID: \"590e8b81-a793-4143-9b0e-f2afb348dd91\") " Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.568702 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/590e8b81-a793-4143-9b0e-f2afb348dd91-combined-ca-bundle\") pod \"590e8b81-a793-4143-9b0e-f2afb348dd91\" (UID: \"590e8b81-a793-4143-9b0e-f2afb348dd91\") " Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.570186 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/590e8b81-a793-4143-9b0e-f2afb348dd91-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "590e8b81-a793-4143-9b0e-f2afb348dd91" (UID: "590e8b81-a793-4143-9b0e-f2afb348dd91"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.581645 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a30fdc4-b069-4bdf-b901-8f382050037b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "5a30fdc4-b069-4bdf-b901-8f382050037b" (UID: "5a30fdc4-b069-4bdf-b901-8f382050037b"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.584956 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8f4e140-bab4-479c-a97b-4a5aa49a47d3-kube-api-access-dr6x8" (OuterVolumeSpecName: "kube-api-access-dr6x8") pod "c8f4e140-bab4-479c-a97b-4a5aa49a47d3" (UID: "c8f4e140-bab4-479c-a97b-4a5aa49a47d3"). InnerVolumeSpecName "kube-api-access-dr6x8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.587015 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/590e8b81-a793-4143-9b0e-f2afb348dd91-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "590e8b81-a793-4143-9b0e-f2afb348dd91" (UID: "590e8b81-a793-4143-9b0e-f2afb348dd91"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.591420 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/590e8b81-a793-4143-9b0e-f2afb348dd91-scripts" (OuterVolumeSpecName: "scripts") pod "590e8b81-a793-4143-9b0e-f2afb348dd91" (UID: "590e8b81-a793-4143-9b0e-f2afb348dd91"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.630383 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/590e8b81-a793-4143-9b0e-f2afb348dd91-kube-api-access-9bbl5" (OuterVolumeSpecName: "kube-api-access-9bbl5") pod "590e8b81-a793-4143-9b0e-f2afb348dd91" (UID: "590e8b81-a793-4143-9b0e-f2afb348dd91"). InnerVolumeSpecName "kube-api-access-9bbl5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.649318 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a30fdc4-b069-4bdf-b901-8f382050037b-kube-api-access-rnmb7" (OuterVolumeSpecName: "kube-api-access-rnmb7") pod "5a30fdc4-b069-4bdf-b901-8f382050037b" (UID: "5a30fdc4-b069-4bdf-b901-8f382050037b"). InnerVolumeSpecName "kube-api-access-rnmb7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.673958 4896 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/590e8b81-a793-4143-9b0e-f2afb348dd91-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.674293 4896 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/590e8b81-a793-4143-9b0e-f2afb348dd91-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.674397 4896 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5a30fdc4-b069-4bdf-b901-8f382050037b-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.674482 4896 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/590e8b81-a793-4143-9b0e-f2afb348dd91-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.674610 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bbl5\" (UniqueName: \"kubernetes.io/projected/590e8b81-a793-4143-9b0e-f2afb348dd91-kube-api-access-9bbl5\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.674714 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dr6x8\" (UniqueName: \"kubernetes.io/projected/c8f4e140-bab4-479c-a97b-4a5aa49a47d3-kube-api-access-dr6x8\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.674805 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnmb7\" (UniqueName: \"kubernetes.io/projected/5a30fdc4-b069-4bdf-b901-8f382050037b-kube-api-access-rnmb7\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:14 crc kubenswrapper[4896]: 
I0126 16:01:14.698833 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a30fdc4-b069-4bdf-b901-8f382050037b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a30fdc4-b069-4bdf-b901-8f382050037b" (UID: "5a30fdc4-b069-4bdf-b901-8f382050037b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.714938 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8f4e140-bab4-479c-a97b-4a5aa49a47d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c8f4e140-bab4-479c-a97b-4a5aa49a47d3" (UID: "c8f4e140-bab4-479c-a97b-4a5aa49a47d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.726622 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/590e8b81-a793-4143-9b0e-f2afb348dd91-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "590e8b81-a793-4143-9b0e-f2afb348dd91" (UID: "590e8b81-a793-4143-9b0e-f2afb348dd91"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.739128 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a30fdc4-b069-4bdf-b901-8f382050037b-config-data" (OuterVolumeSpecName: "config-data") pod "5a30fdc4-b069-4bdf-b901-8f382050037b" (UID: "5a30fdc4-b069-4bdf-b901-8f382050037b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.805886 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a30fdc4-b069-4bdf-b901-8f382050037b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.806249 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/590e8b81-a793-4143-9b0e-f2afb348dd91-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.806288 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8f4e140-bab4-479c-a97b-4a5aa49a47d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.806300 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a30fdc4-b069-4bdf-b901-8f382050037b-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.807890 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8f4e140-bab4-479c-a97b-4a5aa49a47d3-config-data" (OuterVolumeSpecName: "config-data") pod "c8f4e140-bab4-479c-a97b-4a5aa49a47d3" (UID: "c8f4e140-bab4-479c-a97b-4a5aa49a47d3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.808041 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/590e8b81-a793-4143-9b0e-f2afb348dd91-config-data" (OuterVolumeSpecName: "config-data") pod "590e8b81-a793-4143-9b0e-f2afb348dd91" (UID: "590e8b81-a793-4143-9b0e-f2afb348dd91"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.908062 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/590e8b81-a793-4143-9b0e-f2afb348dd91-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.908096 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8f4e140-bab4-479c-a97b-4a5aa49a47d3-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.918286 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-7pf4j" event={"ID":"c8f4e140-bab4-479c-a97b-4a5aa49a47d3","Type":"ContainerDied","Data":"96ab72afd67da2a10352899cd87127ecbbad094f95fc2a543f5d9326cb40a2f4"} Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.918322 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96ab72afd67da2a10352899cd87127ecbbad094f95fc2a543f5d9326cb40a2f4" Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.918356 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-7pf4j" Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.924985 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-l5784" event={"ID":"590e8b81-a793-4143-9b0e-f2afb348dd91","Type":"ContainerDied","Data":"848c998855445957774fba16231abd3b3d98dfad9dcf3ae4e475db0fa24c6db9"} Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.925016 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-l5784" Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.925037 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="848c998855445957774fba16231abd3b3d98dfad9dcf3ae4e475db0fa24c6db9" Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.931492 4896 generic.go:334] "Generic (PLEG): container finished" podID="bc21396d-5abd-42fb-b33c-5099769ea73f" containerID="7a6b1609775c9d916058bff704f5dcf8bbb6a7d1dcf0cfa730d62001255b6deb" exitCode=0 Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.931613 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64fcf9448b-45l75" event={"ID":"bc21396d-5abd-42fb-b33c-5099769ea73f","Type":"ContainerDied","Data":"7a6b1609775c9d916058bff704f5dcf8bbb6a7d1dcf0cfa730d62001255b6deb"} Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.933841 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490721-dbd8t" event={"ID":"5a30fdc4-b069-4bdf-b901-8f382050037b","Type":"ContainerDied","Data":"a1b6d9f6f4b7408ed953bbfb0393f9c3a2f2f0339995a9e5f201c891a2c865d9"} Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.933869 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1b6d9f6f4b7408ed953bbfb0393f9c3a2f2f0339995a9e5f201c891a2c865d9" Jan 26 16:01:14 crc kubenswrapper[4896]: I0126 16:01:14.933951 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29490721-dbd8t" Jan 26 16:01:15 crc kubenswrapper[4896]: E0126 16:01:15.115545 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod590e8b81_a793_4143_9b0e_f2afb348dd91.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod590e8b81_a793_4143_9b0e_f2afb348dd91.slice/crio-848c998855445957774fba16231abd3b3d98dfad9dcf3ae4e475db0fa24c6db9\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a30fdc4_b069_4bdf_b901_8f382050037b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a30fdc4_b069_4bdf_b901_8f382050037b.slice/crio-a1b6d9f6f4b7408ed953bbfb0393f9c3a2f2f0339995a9e5f201c891a2c865d9\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8f4e140_bab4_479c_a97b_4a5aa49a47d3.slice/crio-96ab72afd67da2a10352899cd87127ecbbad094f95fc2a543f5d9326cb40a2f4\": RecentStats: unable to find data in memory cache]" Jan 26 16:01:15 crc kubenswrapper[4896]: I0126 16:01:15.780801 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 16:01:15 crc kubenswrapper[4896]: E0126 16:01:15.781668 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8f4e140-bab4-479c-a97b-4a5aa49a47d3" containerName="heat-db-sync" Jan 26 16:01:15 crc kubenswrapper[4896]: I0126 16:01:15.781686 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8f4e140-bab4-479c-a97b-4a5aa49a47d3" containerName="heat-db-sync" Jan 26 16:01:15 crc kubenswrapper[4896]: E0126 16:01:15.781732 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="590e8b81-a793-4143-9b0e-f2afb348dd91" 
containerName="cinder-db-sync" Jan 26 16:01:15 crc kubenswrapper[4896]: I0126 16:01:15.781743 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="590e8b81-a793-4143-9b0e-f2afb348dd91" containerName="cinder-db-sync" Jan 26 16:01:15 crc kubenswrapper[4896]: E0126 16:01:15.781763 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a30fdc4-b069-4bdf-b901-8f382050037b" containerName="keystone-cron" Jan 26 16:01:15 crc kubenswrapper[4896]: I0126 16:01:15.781771 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a30fdc4-b069-4bdf-b901-8f382050037b" containerName="keystone-cron" Jan 26 16:01:15 crc kubenswrapper[4896]: I0126 16:01:15.782243 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a30fdc4-b069-4bdf-b901-8f382050037b" containerName="keystone-cron" Jan 26 16:01:15 crc kubenswrapper[4896]: I0126 16:01:15.782255 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="590e8b81-a793-4143-9b0e-f2afb348dd91" containerName="cinder-db-sync" Jan 26 16:01:15 crc kubenswrapper[4896]: I0126 16:01:15.782269 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8f4e140-bab4-479c-a97b-4a5aa49a47d3" containerName="heat-db-sync" Jan 26 16:01:15 crc kubenswrapper[4896]: I0126 16:01:15.784680 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 16:01:15 crc kubenswrapper[4896]: I0126 16:01:15.793644 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-6tnp9" Jan 26 16:01:15 crc kubenswrapper[4896]: I0126 16:01:15.793954 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 26 16:01:15 crc kubenswrapper[4896]: I0126 16:01:15.794044 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 26 16:01:15 crc kubenswrapper[4896]: I0126 16:01:15.797671 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 26 16:01:15 crc kubenswrapper[4896]: I0126 16:01:15.806109 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 16:01:15 crc kubenswrapper[4896]: I0126 16:01:15.832127 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/49801199-d283-4061-bc35-6f1be4984b64-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"49801199-d283-4061-bc35-6f1be4984b64\") " pod="openstack/cinder-scheduler-0" Jan 26 16:01:15 crc kubenswrapper[4896]: I0126 16:01:15.832213 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49801199-d283-4061-bc35-6f1be4984b64-scripts\") pod \"cinder-scheduler-0\" (UID: \"49801199-d283-4061-bc35-6f1be4984b64\") " pod="openstack/cinder-scheduler-0" Jan 26 16:01:15 crc kubenswrapper[4896]: I0126 16:01:15.832399 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49801199-d283-4061-bc35-6f1be4984b64-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"49801199-d283-4061-bc35-6f1be4984b64\") " 
pod="openstack/cinder-scheduler-0" Jan 26 16:01:15 crc kubenswrapper[4896]: I0126 16:01:15.832424 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49801199-d283-4061-bc35-6f1be4984b64-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"49801199-d283-4061-bc35-6f1be4984b64\") " pod="openstack/cinder-scheduler-0" Jan 26 16:01:15 crc kubenswrapper[4896]: I0126 16:01:15.832473 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49801199-d283-4061-bc35-6f1be4984b64-config-data\") pod \"cinder-scheduler-0\" (UID: \"49801199-d283-4061-bc35-6f1be4984b64\") " pod="openstack/cinder-scheduler-0" Jan 26 16:01:15 crc kubenswrapper[4896]: I0126 16:01:15.832652 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62qdg\" (UniqueName: \"kubernetes.io/projected/49801199-d283-4061-bc35-6f1be4984b64-kube-api-access-62qdg\") pod \"cinder-scheduler-0\" (UID: \"49801199-d283-4061-bc35-6f1be4984b64\") " pod="openstack/cinder-scheduler-0" Jan 26 16:01:15 crc kubenswrapper[4896]: I0126 16:01:15.887570 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-5v2tk"] Jan 26 16:01:15 crc kubenswrapper[4896]: I0126 16:01:15.889866 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-5v2tk" Jan 26 16:01:15 crc kubenswrapper[4896]: I0126 16:01:15.919929 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-5v2tk"] Jan 26 16:01:15 crc kubenswrapper[4896]: I0126 16:01:15.934180 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/08dd6673-2fbb-4bb0-ab7b-5b441d18684d-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-5v2tk\" (UID: \"08dd6673-2fbb-4bb0-ab7b-5b441d18684d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5v2tk" Jan 26 16:01:15 crc kubenswrapper[4896]: I0126 16:01:15.934232 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49801199-d283-4061-bc35-6f1be4984b64-scripts\") pod \"cinder-scheduler-0\" (UID: \"49801199-d283-4061-bc35-6f1be4984b64\") " pod="openstack/cinder-scheduler-0" Jan 26 16:01:15 crc kubenswrapper[4896]: I0126 16:01:15.934260 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08dd6673-2fbb-4bb0-ab7b-5b441d18684d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-5v2tk\" (UID: \"08dd6673-2fbb-4bb0-ab7b-5b441d18684d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5v2tk" Jan 26 16:01:15 crc kubenswrapper[4896]: I0126 16:01:15.934354 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49801199-d283-4061-bc35-6f1be4984b64-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"49801199-d283-4061-bc35-6f1be4984b64\") " pod="openstack/cinder-scheduler-0" Jan 26 16:01:15 crc kubenswrapper[4896]: I0126 16:01:15.934371 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/49801199-d283-4061-bc35-6f1be4984b64-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"49801199-d283-4061-bc35-6f1be4984b64\") " pod="openstack/cinder-scheduler-0" Jan 26 16:01:15 crc kubenswrapper[4896]: I0126 16:01:15.934402 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08dd6673-2fbb-4bb0-ab7b-5b441d18684d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-5v2tk\" (UID: \"08dd6673-2fbb-4bb0-ab7b-5b441d18684d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5v2tk" Jan 26 16:01:15 crc kubenswrapper[4896]: I0126 16:01:15.934419 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49801199-d283-4061-bc35-6f1be4984b64-config-data\") pod \"cinder-scheduler-0\" (UID: \"49801199-d283-4061-bc35-6f1be4984b64\") " pod="openstack/cinder-scheduler-0" Jan 26 16:01:15 crc kubenswrapper[4896]: I0126 16:01:15.934460 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jvtj\" (UniqueName: \"kubernetes.io/projected/08dd6673-2fbb-4bb0-ab7b-5b441d18684d-kube-api-access-6jvtj\") pod \"dnsmasq-dns-5c9776ccc5-5v2tk\" (UID: \"08dd6673-2fbb-4bb0-ab7b-5b441d18684d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5v2tk" Jan 26 16:01:15 crc kubenswrapper[4896]: I0126 16:01:15.934507 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62qdg\" (UniqueName: \"kubernetes.io/projected/49801199-d283-4061-bc35-6f1be4984b64-kube-api-access-62qdg\") pod \"cinder-scheduler-0\" (UID: \"49801199-d283-4061-bc35-6f1be4984b64\") " pod="openstack/cinder-scheduler-0" Jan 26 16:01:15 crc kubenswrapper[4896]: I0126 16:01:15.934548 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/08dd6673-2fbb-4bb0-ab7b-5b441d18684d-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-5v2tk\" (UID: \"08dd6673-2fbb-4bb0-ab7b-5b441d18684d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5v2tk" Jan 26 16:01:15 crc kubenswrapper[4896]: I0126 16:01:15.934623 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/49801199-d283-4061-bc35-6f1be4984b64-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"49801199-d283-4061-bc35-6f1be4984b64\") " pod="openstack/cinder-scheduler-0" Jan 26 16:01:15 crc kubenswrapper[4896]: I0126 16:01:15.934672 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08dd6673-2fbb-4bb0-ab7b-5b441d18684d-config\") pod \"dnsmasq-dns-5c9776ccc5-5v2tk\" (UID: \"08dd6673-2fbb-4bb0-ab7b-5b441d18684d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5v2tk" Jan 26 16:01:15 crc kubenswrapper[4896]: I0126 16:01:15.940973 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/49801199-d283-4061-bc35-6f1be4984b64-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"49801199-d283-4061-bc35-6f1be4984b64\") " pod="openstack/cinder-scheduler-0" Jan 26 16:01:15 crc kubenswrapper[4896]: I0126 16:01:15.948637 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49801199-d283-4061-bc35-6f1be4984b64-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"49801199-d283-4061-bc35-6f1be4984b64\") " pod="openstack/cinder-scheduler-0" Jan 26 16:01:15 crc kubenswrapper[4896]: I0126 16:01:15.950632 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49801199-d283-4061-bc35-6f1be4984b64-scripts\") pod \"cinder-scheduler-0\" (UID: 
\"49801199-d283-4061-bc35-6f1be4984b64\") " pod="openstack/cinder-scheduler-0" Jan 26 16:01:15 crc kubenswrapper[4896]: I0126 16:01:15.950683 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49801199-d283-4061-bc35-6f1be4984b64-config-data\") pod \"cinder-scheduler-0\" (UID: \"49801199-d283-4061-bc35-6f1be4984b64\") " pod="openstack/cinder-scheduler-0" Jan 26 16:01:15 crc kubenswrapper[4896]: I0126 16:01:15.951294 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49801199-d283-4061-bc35-6f1be4984b64-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"49801199-d283-4061-bc35-6f1be4984b64\") " pod="openstack/cinder-scheduler-0" Jan 26 16:01:15 crc kubenswrapper[4896]: E0126 16:01:15.967743 4896 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d9e9f7b4028cec78a1317071a48cccfd0903a09d7df043564c860272c3ca34e4 is running failed: container process not found" containerID="d9e9f7b4028cec78a1317071a48cccfd0903a09d7df043564c860272c3ca34e4" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 16:01:15 crc kubenswrapper[4896]: I0126 16:01:15.971962 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62qdg\" (UniqueName: \"kubernetes.io/projected/49801199-d283-4061-bc35-6f1be4984b64-kube-api-access-62qdg\") pod \"cinder-scheduler-0\" (UID: \"49801199-d283-4061-bc35-6f1be4984b64\") " pod="openstack/cinder-scheduler-0" Jan 26 16:01:15 crc kubenswrapper[4896]: E0126 16:01:15.988103 4896 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d9e9f7b4028cec78a1317071a48cccfd0903a09d7df043564c860272c3ca34e4 is running failed: container process not found" 
containerID="d9e9f7b4028cec78a1317071a48cccfd0903a09d7df043564c860272c3ca34e4" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 16:01:15 crc kubenswrapper[4896]: E0126 16:01:15.993057 4896 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d9e9f7b4028cec78a1317071a48cccfd0903a09d7df043564c860272c3ca34e4 is running failed: container process not found" containerID="d9e9f7b4028cec78a1317071a48cccfd0903a09d7df043564c860272c3ca34e4" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 16:01:15 crc kubenswrapper[4896]: E0126 16:01:15.993149 4896 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d9e9f7b4028cec78a1317071a48cccfd0903a09d7df043564c860272c3ca34e4 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-nwtqn" podUID="d4450b82-6d66-4109-8fec-6b979256d032" containerName="registry-server" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.037624 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08dd6673-2fbb-4bb0-ab7b-5b441d18684d-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-5v2tk\" (UID: \"08dd6673-2fbb-4bb0-ab7b-5b441d18684d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5v2tk" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.037759 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08dd6673-2fbb-4bb0-ab7b-5b441d18684d-config\") pod \"dnsmasq-dns-5c9776ccc5-5v2tk\" (UID: \"08dd6673-2fbb-4bb0-ab7b-5b441d18684d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5v2tk" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.037804 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/08dd6673-2fbb-4bb0-ab7b-5b441d18684d-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-5v2tk\" (UID: \"08dd6673-2fbb-4bb0-ab7b-5b441d18684d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5v2tk" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.037845 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08dd6673-2fbb-4bb0-ab7b-5b441d18684d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-5v2tk\" (UID: \"08dd6673-2fbb-4bb0-ab7b-5b441d18684d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5v2tk" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.038017 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08dd6673-2fbb-4bb0-ab7b-5b441d18684d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-5v2tk\" (UID: \"08dd6673-2fbb-4bb0-ab7b-5b441d18684d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5v2tk" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.038078 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jvtj\" (UniqueName: \"kubernetes.io/projected/08dd6673-2fbb-4bb0-ab7b-5b441d18684d-kube-api-access-6jvtj\") pod \"dnsmasq-dns-5c9776ccc5-5v2tk\" (UID: \"08dd6673-2fbb-4bb0-ab7b-5b441d18684d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5v2tk" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.040605 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/08dd6673-2fbb-4bb0-ab7b-5b441d18684d-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-5v2tk\" (UID: \"08dd6673-2fbb-4bb0-ab7b-5b441d18684d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5v2tk" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.040892 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/08dd6673-2fbb-4bb0-ab7b-5b441d18684d-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-5v2tk\" (UID: \"08dd6673-2fbb-4bb0-ab7b-5b441d18684d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5v2tk" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.044146 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08dd6673-2fbb-4bb0-ab7b-5b441d18684d-config\") pod \"dnsmasq-dns-5c9776ccc5-5v2tk\" (UID: \"08dd6673-2fbb-4bb0-ab7b-5b441d18684d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5v2tk" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.051084 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08dd6673-2fbb-4bb0-ab7b-5b441d18684d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-5v2tk\" (UID: \"08dd6673-2fbb-4bb0-ab7b-5b441d18684d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5v2tk" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.052450 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08dd6673-2fbb-4bb0-ab7b-5b441d18684d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-5v2tk\" (UID: \"08dd6673-2fbb-4bb0-ab7b-5b441d18684d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5v2tk" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.084294 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jvtj\" (UniqueName: \"kubernetes.io/projected/08dd6673-2fbb-4bb0-ab7b-5b441d18684d-kube-api-access-6jvtj\") pod \"dnsmasq-dns-5c9776ccc5-5v2tk\" (UID: \"08dd6673-2fbb-4bb0-ab7b-5b441d18684d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5v2tk" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.110212 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.166233 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.169130 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.172168 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.198792 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.229158 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-5v2tk" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.249083 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05468d2e-2ac7-45f3-973a-ff9c4559701e-config-data-custom\") pod \"cinder-api-0\" (UID: \"05468d2e-2ac7-45f3-973a-ff9c4559701e\") " pod="openstack/cinder-api-0" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.249461 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05468d2e-2ac7-45f3-973a-ff9c4559701e-config-data\") pod \"cinder-api-0\" (UID: \"05468d2e-2ac7-45f3-973a-ff9c4559701e\") " pod="openstack/cinder-api-0" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.249715 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05468d2e-2ac7-45f3-973a-ff9c4559701e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"05468d2e-2ac7-45f3-973a-ff9c4559701e\") " 
pod="openstack/cinder-api-0" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.250907 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05468d2e-2ac7-45f3-973a-ff9c4559701e-logs\") pod \"cinder-api-0\" (UID: \"05468d2e-2ac7-45f3-973a-ff9c4559701e\") " pod="openstack/cinder-api-0" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.251012 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05468d2e-2ac7-45f3-973a-ff9c4559701e-scripts\") pod \"cinder-api-0\" (UID: \"05468d2e-2ac7-45f3-973a-ff9c4559701e\") " pod="openstack/cinder-api-0" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.251109 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcltp\" (UniqueName: \"kubernetes.io/projected/05468d2e-2ac7-45f3-973a-ff9c4559701e-kube-api-access-fcltp\") pod \"cinder-api-0\" (UID: \"05468d2e-2ac7-45f3-973a-ff9c4559701e\") " pod="openstack/cinder-api-0" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.251346 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/05468d2e-2ac7-45f3-973a-ff9c4559701e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"05468d2e-2ac7-45f3-973a-ff9c4559701e\") " pod="openstack/cinder-api-0" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.353205 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/05468d2e-2ac7-45f3-973a-ff9c4559701e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"05468d2e-2ac7-45f3-973a-ff9c4559701e\") " pod="openstack/cinder-api-0" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.353287 4896 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05468d2e-2ac7-45f3-973a-ff9c4559701e-config-data-custom\") pod \"cinder-api-0\" (UID: \"05468d2e-2ac7-45f3-973a-ff9c4559701e\") " pod="openstack/cinder-api-0" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.353306 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05468d2e-2ac7-45f3-973a-ff9c4559701e-config-data\") pod \"cinder-api-0\" (UID: \"05468d2e-2ac7-45f3-973a-ff9c4559701e\") " pod="openstack/cinder-api-0" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.353366 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05468d2e-2ac7-45f3-973a-ff9c4559701e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"05468d2e-2ac7-45f3-973a-ff9c4559701e\") " pod="openstack/cinder-api-0" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.353444 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05468d2e-2ac7-45f3-973a-ff9c4559701e-logs\") pod \"cinder-api-0\" (UID: \"05468d2e-2ac7-45f3-973a-ff9c4559701e\") " pod="openstack/cinder-api-0" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.353478 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05468d2e-2ac7-45f3-973a-ff9c4559701e-scripts\") pod \"cinder-api-0\" (UID: \"05468d2e-2ac7-45f3-973a-ff9c4559701e\") " pod="openstack/cinder-api-0" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.353521 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcltp\" (UniqueName: \"kubernetes.io/projected/05468d2e-2ac7-45f3-973a-ff9c4559701e-kube-api-access-fcltp\") pod \"cinder-api-0\" (UID: \"05468d2e-2ac7-45f3-973a-ff9c4559701e\") " pod="openstack/cinder-api-0" Jan 26 
16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.353930 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/05468d2e-2ac7-45f3-973a-ff9c4559701e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"05468d2e-2ac7-45f3-973a-ff9c4559701e\") " pod="openstack/cinder-api-0" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.355372 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05468d2e-2ac7-45f3-973a-ff9c4559701e-logs\") pod \"cinder-api-0\" (UID: \"05468d2e-2ac7-45f3-973a-ff9c4559701e\") " pod="openstack/cinder-api-0" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.357899 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05468d2e-2ac7-45f3-973a-ff9c4559701e-scripts\") pod \"cinder-api-0\" (UID: \"05468d2e-2ac7-45f3-973a-ff9c4559701e\") " pod="openstack/cinder-api-0" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.359821 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05468d2e-2ac7-45f3-973a-ff9c4559701e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"05468d2e-2ac7-45f3-973a-ff9c4559701e\") " pod="openstack/cinder-api-0" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.360131 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05468d2e-2ac7-45f3-973a-ff9c4559701e-config-data\") pod \"cinder-api-0\" (UID: \"05468d2e-2ac7-45f3-973a-ff9c4559701e\") " pod="openstack/cinder-api-0" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.361306 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05468d2e-2ac7-45f3-973a-ff9c4559701e-config-data-custom\") pod \"cinder-api-0\" (UID: 
\"05468d2e-2ac7-45f3-973a-ff9c4559701e\") " pod="openstack/cinder-api-0" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.380726 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcltp\" (UniqueName: \"kubernetes.io/projected/05468d2e-2ac7-45f3-973a-ff9c4559701e-kube-api-access-fcltp\") pod \"cinder-api-0\" (UID: \"05468d2e-2ac7-45f3-973a-ff9c4559701e\") " pod="openstack/cinder-api-0" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.500187 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.674739 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5f5b95478f-8qxzd" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.765014 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdbdc1e0-1624-4300-91bb-1bfe556567c6-ovndb-tls-certs\") pod \"bdbdc1e0-1624-4300-91bb-1bfe556567c6\" (UID: \"bdbdc1e0-1624-4300-91bb-1bfe556567c6\") " Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.765058 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdbdc1e0-1624-4300-91bb-1bfe556567c6-internal-tls-certs\") pod \"bdbdc1e0-1624-4300-91bb-1bfe556567c6\" (UID: \"bdbdc1e0-1624-4300-91bb-1bfe556567c6\") " Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.765219 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdbdc1e0-1624-4300-91bb-1bfe556567c6-combined-ca-bundle\") pod \"bdbdc1e0-1624-4300-91bb-1bfe556567c6\" (UID: \"bdbdc1e0-1624-4300-91bb-1bfe556567c6\") " Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.765313 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdbdc1e0-1624-4300-91bb-1bfe556567c6-public-tls-certs\") pod \"bdbdc1e0-1624-4300-91bb-1bfe556567c6\" (UID: \"bdbdc1e0-1624-4300-91bb-1bfe556567c6\") " Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.765366 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fd78w\" (UniqueName: \"kubernetes.io/projected/bdbdc1e0-1624-4300-91bb-1bfe556567c6-kube-api-access-fd78w\") pod \"bdbdc1e0-1624-4300-91bb-1bfe556567c6\" (UID: \"bdbdc1e0-1624-4300-91bb-1bfe556567c6\") " Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.765393 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bdbdc1e0-1624-4300-91bb-1bfe556567c6-config\") pod \"bdbdc1e0-1624-4300-91bb-1bfe556567c6\" (UID: \"bdbdc1e0-1624-4300-91bb-1bfe556567c6\") " Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.765612 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bdbdc1e0-1624-4300-91bb-1bfe556567c6-httpd-config\") pod \"bdbdc1e0-1624-4300-91bb-1bfe556567c6\" (UID: \"bdbdc1e0-1624-4300-91bb-1bfe556567c6\") " Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.775890 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdbdc1e0-1624-4300-91bb-1bfe556567c6-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "bdbdc1e0-1624-4300-91bb-1bfe556567c6" (UID: "bdbdc1e0-1624-4300-91bb-1bfe556567c6"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.785005 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdbdc1e0-1624-4300-91bb-1bfe556567c6-kube-api-access-fd78w" (OuterVolumeSpecName: "kube-api-access-fd78w") pod "bdbdc1e0-1624-4300-91bb-1bfe556567c6" (UID: "bdbdc1e0-1624-4300-91bb-1bfe556567c6"). InnerVolumeSpecName "kube-api-access-fd78w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.865830 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdbdc1e0-1624-4300-91bb-1bfe556567c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bdbdc1e0-1624-4300-91bb-1bfe556567c6" (UID: "bdbdc1e0-1624-4300-91bb-1bfe556567c6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.870363 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdbdc1e0-1624-4300-91bb-1bfe556567c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.870418 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fd78w\" (UniqueName: \"kubernetes.io/projected/bdbdc1e0-1624-4300-91bb-1bfe556567c6-kube-api-access-fd78w\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.870438 4896 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bdbdc1e0-1624-4300-91bb-1bfe556567c6-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.929259 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdbdc1e0-1624-4300-91bb-1bfe556567c6-config" 
(OuterVolumeSpecName: "config") pod "bdbdc1e0-1624-4300-91bb-1bfe556567c6" (UID: "bdbdc1e0-1624-4300-91bb-1bfe556567c6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.932436 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdbdc1e0-1624-4300-91bb-1bfe556567c6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bdbdc1e0-1624-4300-91bb-1bfe556567c6" (UID: "bdbdc1e0-1624-4300-91bb-1bfe556567c6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.958172 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdbdc1e0-1624-4300-91bb-1bfe556567c6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bdbdc1e0-1624-4300-91bb-1bfe556567c6" (UID: "bdbdc1e0-1624-4300-91bb-1bfe556567c6"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.976522 4896 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdbdc1e0-1624-4300-91bb-1bfe556567c6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.976599 4896 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/bdbdc1e0-1624-4300-91bb-1bfe556567c6-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.976617 4896 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdbdc1e0-1624-4300-91bb-1bfe556567c6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.995647 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdbdc1e0-1624-4300-91bb-1bfe556567c6-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "bdbdc1e0-1624-4300-91bb-1bfe556567c6" (UID: "bdbdc1e0-1624-4300-91bb-1bfe556567c6"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.999562 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f5b95478f-8qxzd" event={"ID":"bdbdc1e0-1624-4300-91bb-1bfe556567c6","Type":"ContainerDied","Data":"49095e0e8aef89c3e8881fbd43394fbf067f8abc795c52f41881f52df9ff18a2"} Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.999634 4896 scope.go:117] "RemoveContainer" containerID="4e264dbcad01037f9351410055d4d3e148e3a2943c48af41afb8d55c4686ac85" Jan 26 16:01:16 crc kubenswrapper[4896]: I0126 16:01:16.999813 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5f5b95478f-8qxzd" Jan 26 16:01:17 crc kubenswrapper[4896]: I0126 16:01:17.050431 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5f5b95478f-8qxzd"] Jan 26 16:01:17 crc kubenswrapper[4896]: I0126 16:01:17.064799 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5f5b95478f-8qxzd"] Jan 26 16:01:17 crc kubenswrapper[4896]: I0126 16:01:17.079059 4896 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdbdc1e0-1624-4300-91bb-1bfe556567c6-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:17 crc kubenswrapper[4896]: E0126 16:01:17.665490 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 26 16:01:17 crc kubenswrapper[4896]: E0126 16:01:17.665986 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5c8w9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(41bebe25-46fb-4c06-9977-e39a32407c42): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 16:01:17 crc kubenswrapper[4896]: E0126 16:01:17.667376 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="41bebe25-46fb-4c06-9977-e39a32407c42" Jan 26 16:01:17 crc kubenswrapper[4896]: I0126 16:01:17.803157 4896 scope.go:117] "RemoveContainer" containerID="f05b41cc8dd83a60275733bd2dc2acd106d94635dfff7728a8604bf74efa0d4c" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.027318 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nwtqn" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.031802 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wxhjt" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.040731 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-jb4fq" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.047924 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wxhjt" event={"ID":"c356aa29-1407-47fb-80f0-a7f1b4a58919","Type":"ContainerDied","Data":"b15a691bcf0f1a74e5617f83dc224d485c6a0397ae705d0515c5c49995b0be55"} Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.048049 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wxhjt" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.060284 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64fcf9448b-45l75" event={"ID":"bc21396d-5abd-42fb-b33c-5099769ea73f","Type":"ContainerDied","Data":"737c2d991aca8339995d1ff43d7ab79a72be2bb32a5d13cc8cec5cf23e4b994e"} Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.060336 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="737c2d991aca8339995d1ff43d7ab79a72be2bb32a5d13cc8cec5cf23e4b994e" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.065901 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-64fcf9448b-45l75" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.080938 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nwtqn" event={"ID":"d4450b82-6d66-4109-8fec-6b979256d032","Type":"ContainerDied","Data":"e02be973a61d9a0f2b82f3495860278160d9a4682daf7eea8d3c2353489a0fe0"} Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.081064 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nwtqn" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.095624 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="41bebe25-46fb-4c06-9977-e39a32407c42" containerName="ceilometer-notification-agent" containerID="cri-o://7d22bf68b6d9f2f702185d0edc40b21d55ac24b50a4ed6bd26bac70cf984f96b" gracePeriod=30 Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.095764 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-jb4fq" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.096425 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-jb4fq" event={"ID":"89998198-666d-4e9c-9213-8dd8fdcdd0d9","Type":"ContainerDied","Data":"4bda0e115f21eafe5bd1b2c706a7027cd8e8486ad52410e43467cc2b27f2eed7"} Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.096501 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="41bebe25-46fb-4c06-9977-e39a32407c42" containerName="sg-core" containerID="cri-o://4d4af3a94d447111c7c6c7fc45bdc5fddb1660c1ccb8ad22497fe272ff4714a3" gracePeriod=30 Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.111368 4896 scope.go:117] "RemoveContainer" containerID="6e618c9af4f492d6dbf1fbd5aa14ddc702a7c025a28f0d5dae6079e8478b3353" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.114781 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/89998198-666d-4e9c-9213-8dd8fdcdd0d9-dns-swift-storage-0\") pod \"89998198-666d-4e9c-9213-8dd8fdcdd0d9\" (UID: \"89998198-666d-4e9c-9213-8dd8fdcdd0d9\") " Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.114843 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc21396d-5abd-42fb-b33c-5099769ea73f-combined-ca-bundle\") pod \"bc21396d-5abd-42fb-b33c-5099769ea73f\" (UID: \"bc21396d-5abd-42fb-b33c-5099769ea73f\") " Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.114865 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89998198-666d-4e9c-9213-8dd8fdcdd0d9-ovsdbserver-sb\") pod \"89998198-666d-4e9c-9213-8dd8fdcdd0d9\" (UID: \"89998198-666d-4e9c-9213-8dd8fdcdd0d9\") " Jan 26 16:01:18 crc 
kubenswrapper[4896]: I0126 16:01:18.114888 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nw6f\" (UniqueName: \"kubernetes.io/projected/bc21396d-5abd-42fb-b33c-5099769ea73f-kube-api-access-7nw6f\") pod \"bc21396d-5abd-42fb-b33c-5099769ea73f\" (UID: \"bc21396d-5abd-42fb-b33c-5099769ea73f\") " Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.114910 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khrcj\" (UniqueName: \"kubernetes.io/projected/89998198-666d-4e9c-9213-8dd8fdcdd0d9-kube-api-access-khrcj\") pod \"89998198-666d-4e9c-9213-8dd8fdcdd0d9\" (UID: \"89998198-666d-4e9c-9213-8dd8fdcdd0d9\") " Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.114982 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j775b\" (UniqueName: \"kubernetes.io/projected/d4450b82-6d66-4109-8fec-6b979256d032-kube-api-access-j775b\") pod \"d4450b82-6d66-4109-8fec-6b979256d032\" (UID: \"d4450b82-6d66-4109-8fec-6b979256d032\") " Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.115065 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c356aa29-1407-47fb-80f0-a7f1b4a58919-catalog-content\") pod \"c356aa29-1407-47fb-80f0-a7f1b4a58919\" (UID: \"c356aa29-1407-47fb-80f0-a7f1b4a58919\") " Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.115192 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bfxt\" (UniqueName: \"kubernetes.io/projected/c356aa29-1407-47fb-80f0-a7f1b4a58919-kube-api-access-5bfxt\") pod \"c356aa29-1407-47fb-80f0-a7f1b4a58919\" (UID: \"c356aa29-1407-47fb-80f0-a7f1b4a58919\") " Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.115227 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c356aa29-1407-47fb-80f0-a7f1b4a58919-utilities\") pod \"c356aa29-1407-47fb-80f0-a7f1b4a58919\" (UID: \"c356aa29-1407-47fb-80f0-a7f1b4a58919\") " Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.115285 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4450b82-6d66-4109-8fec-6b979256d032-utilities\") pod \"d4450b82-6d66-4109-8fec-6b979256d032\" (UID: \"d4450b82-6d66-4109-8fec-6b979256d032\") " Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.115317 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc21396d-5abd-42fb-b33c-5099769ea73f-config-data\") pod \"bc21396d-5abd-42fb-b33c-5099769ea73f\" (UID: \"bc21396d-5abd-42fb-b33c-5099769ea73f\") " Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.115346 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89998198-666d-4e9c-9213-8dd8fdcdd0d9-ovsdbserver-nb\") pod \"89998198-666d-4e9c-9213-8dd8fdcdd0d9\" (UID: \"89998198-666d-4e9c-9213-8dd8fdcdd0d9\") " Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.115405 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bc21396d-5abd-42fb-b33c-5099769ea73f-config-data-custom\") pod \"bc21396d-5abd-42fb-b33c-5099769ea73f\" (UID: \"bc21396d-5abd-42fb-b33c-5099769ea73f\") " Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.115534 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4450b82-6d66-4109-8fec-6b979256d032-catalog-content\") pod \"d4450b82-6d66-4109-8fec-6b979256d032\" (UID: \"d4450b82-6d66-4109-8fec-6b979256d032\") " Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.115595 4896 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89998198-666d-4e9c-9213-8dd8fdcdd0d9-config\") pod \"89998198-666d-4e9c-9213-8dd8fdcdd0d9\" (UID: \"89998198-666d-4e9c-9213-8dd8fdcdd0d9\") " Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.115632 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89998198-666d-4e9c-9213-8dd8fdcdd0d9-dns-svc\") pod \"89998198-666d-4e9c-9213-8dd8fdcdd0d9\" (UID: \"89998198-666d-4e9c-9213-8dd8fdcdd0d9\") " Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.115665 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc21396d-5abd-42fb-b33c-5099769ea73f-logs\") pod \"bc21396d-5abd-42fb-b33c-5099769ea73f\" (UID: \"bc21396d-5abd-42fb-b33c-5099769ea73f\") " Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.121267 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc21396d-5abd-42fb-b33c-5099769ea73f-logs" (OuterVolumeSpecName: "logs") pod "bc21396d-5abd-42fb-b33c-5099769ea73f" (UID: "bc21396d-5abd-42fb-b33c-5099769ea73f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.140628 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc21396d-5abd-42fb-b33c-5099769ea73f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bc21396d-5abd-42fb-b33c-5099769ea73f" (UID: "bc21396d-5abd-42fb-b33c-5099769ea73f"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.142024 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c356aa29-1407-47fb-80f0-a7f1b4a58919-utilities" (OuterVolumeSpecName: "utilities") pod "c356aa29-1407-47fb-80f0-a7f1b4a58919" (UID: "c356aa29-1407-47fb-80f0-a7f1b4a58919"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.151791 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4450b82-6d66-4109-8fec-6b979256d032-utilities" (OuterVolumeSpecName: "utilities") pod "d4450b82-6d66-4109-8fec-6b979256d032" (UID: "d4450b82-6d66-4109-8fec-6b979256d032"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.195485 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c356aa29-1407-47fb-80f0-a7f1b4a58919-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c356aa29-1407-47fb-80f0-a7f1b4a58919" (UID: "c356aa29-1407-47fb-80f0-a7f1b4a58919"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.200957 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4450b82-6d66-4109-8fec-6b979256d032-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4450b82-6d66-4109-8fec-6b979256d032" (UID: "d4450b82-6d66-4109-8fec-6b979256d032"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.203789 4896 scope.go:117] "RemoveContainer" containerID="ba08707ddb693337e21a4104a25eb7861d014e9eac171131f75873a407d3cecd" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.219175 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c356aa29-1407-47fb-80f0-a7f1b4a58919-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.219207 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c356aa29-1407-47fb-80f0-a7f1b4a58919-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.219219 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4450b82-6d66-4109-8fec-6b979256d032-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.219230 4896 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bc21396d-5abd-42fb-b33c-5099769ea73f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.219241 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4450b82-6d66-4109-8fec-6b979256d032-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.219253 4896 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc21396d-5abd-42fb-b33c-5099769ea73f-logs\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.225637 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/d4450b82-6d66-4109-8fec-6b979256d032-kube-api-access-j775b" (OuterVolumeSpecName: "kube-api-access-j775b") pod "d4450b82-6d66-4109-8fec-6b979256d032" (UID: "d4450b82-6d66-4109-8fec-6b979256d032"). InnerVolumeSpecName "kube-api-access-j775b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.235968 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c356aa29-1407-47fb-80f0-a7f1b4a58919-kube-api-access-5bfxt" (OuterVolumeSpecName: "kube-api-access-5bfxt") pod "c356aa29-1407-47fb-80f0-a7f1b4a58919" (UID: "c356aa29-1407-47fb-80f0-a7f1b4a58919"). InnerVolumeSpecName "kube-api-access-5bfxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.236052 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89998198-666d-4e9c-9213-8dd8fdcdd0d9-kube-api-access-khrcj" (OuterVolumeSpecName: "kube-api-access-khrcj") pod "89998198-666d-4e9c-9213-8dd8fdcdd0d9" (UID: "89998198-666d-4e9c-9213-8dd8fdcdd0d9"). InnerVolumeSpecName "kube-api-access-khrcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.236339 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc21396d-5abd-42fb-b33c-5099769ea73f-kube-api-access-7nw6f" (OuterVolumeSpecName: "kube-api-access-7nw6f") pod "bc21396d-5abd-42fb-b33c-5099769ea73f" (UID: "bc21396d-5abd-42fb-b33c-5099769ea73f"). InnerVolumeSpecName "kube-api-access-7nw6f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.250759 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc21396d-5abd-42fb-b33c-5099769ea73f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc21396d-5abd-42fb-b33c-5099769ea73f" (UID: "bc21396d-5abd-42fb-b33c-5099769ea73f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.286024 4896 scope.go:117] "RemoveContainer" containerID="b5a74b1646d77b75b934fbbd487ab5320b58a23d4e90ba362ddf2a697cd1128f" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.305214 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.335713 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j775b\" (UniqueName: \"kubernetes.io/projected/d4450b82-6d66-4109-8fec-6b979256d032-kube-api-access-j775b\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.335752 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bfxt\" (UniqueName: \"kubernetes.io/projected/c356aa29-1407-47fb-80f0-a7f1b4a58919-kube-api-access-5bfxt\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.335772 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc21396d-5abd-42fb-b33c-5099769ea73f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.335790 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nw6f\" (UniqueName: \"kubernetes.io/projected/bc21396d-5abd-42fb-b33c-5099769ea73f-kube-api-access-7nw6f\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 
16:01:18.335807 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khrcj\" (UniqueName: \"kubernetes.io/projected/89998198-666d-4e9c-9213-8dd8fdcdd0d9-kube-api-access-khrcj\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.351264 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc21396d-5abd-42fb-b33c-5099769ea73f-config-data" (OuterVolumeSpecName: "config-data") pod "bc21396d-5abd-42fb-b33c-5099769ea73f" (UID: "bc21396d-5abd-42fb-b33c-5099769ea73f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.376492 4896 scope.go:117] "RemoveContainer" containerID="d9e9f7b4028cec78a1317071a48cccfd0903a09d7df043564c860272c3ca34e4" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.377099 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89998198-666d-4e9c-9213-8dd8fdcdd0d9-config" (OuterVolumeSpecName: "config") pod "89998198-666d-4e9c-9213-8dd8fdcdd0d9" (UID: "89998198-666d-4e9c-9213-8dd8fdcdd0d9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.386375 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89998198-666d-4e9c-9213-8dd8fdcdd0d9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "89998198-666d-4e9c-9213-8dd8fdcdd0d9" (UID: "89998198-666d-4e9c-9213-8dd8fdcdd0d9"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.442857 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc21396d-5abd-42fb-b33c-5099769ea73f-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.442887 4896 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89998198-666d-4e9c-9213-8dd8fdcdd0d9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.442898 4896 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89998198-666d-4e9c-9213-8dd8fdcdd0d9-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.501143 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89998198-666d-4e9c-9213-8dd8fdcdd0d9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "89998198-666d-4e9c-9213-8dd8fdcdd0d9" (UID: "89998198-666d-4e9c-9213-8dd8fdcdd0d9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.519805 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89998198-666d-4e9c-9213-8dd8fdcdd0d9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "89998198-666d-4e9c-9213-8dd8fdcdd0d9" (UID: "89998198-666d-4e9c-9213-8dd8fdcdd0d9"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.519867 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89998198-666d-4e9c-9213-8dd8fdcdd0d9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "89998198-666d-4e9c-9213-8dd8fdcdd0d9" (UID: "89998198-666d-4e9c-9213-8dd8fdcdd0d9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.546454 4896 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89998198-666d-4e9c-9213-8dd8fdcdd0d9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.546501 4896 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/89998198-666d-4e9c-9213-8dd8fdcdd0d9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.546518 4896 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89998198-666d-4e9c-9213-8dd8fdcdd0d9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.654103 4896 scope.go:117] "RemoveContainer" containerID="269b40b28acc24c912b35a7cfec98b13a709140a9694792ea8bdf200f743f419" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.678673 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nwtqn"] Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.737126 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nwtqn"] Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.762881 4896 scope.go:117] "RemoveContainer" 
containerID="55c633f65061a1c9661f12ac7b71f3b143ceb77e6cfd314e88254ba4e1b3adf9" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.815971 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.822959 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:01:18 crc kubenswrapper[4896]: W0126 16:01:18.822174 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08dd6673_2fbb_4bb0_ab7b_5b441d18684d.slice/crio-f6dd40597a4347b34a874d4119729db77505aab5d546ebc9b534026f93bdc820 WatchSource:0}: Error finding container f6dd40597a4347b34a874d4119729db77505aab5d546ebc9b534026f93bdc820: Status 404 returned error can't find the container with id f6dd40597a4347b34a874d4119729db77505aab5d546ebc9b534026f93bdc820 Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.831786 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdbdc1e0-1624-4300-91bb-1bfe556567c6" path="/var/lib/kubelet/pods/bdbdc1e0-1624-4300-91bb-1bfe556567c6/volumes" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.839596 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4450b82-6d66-4109-8fec-6b979256d032" path="/var/lib/kubelet/pods/d4450b82-6d66-4109-8fec-6b979256d032/volumes" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.841387 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-54d4db4449-vlmh7"] Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.856176 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wxhjt"] Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.867682 4896 scope.go:117] "RemoveContainer" containerID="04db21ccbca74daecbec9d15e26110f8d4a3b9f7d53b5587c897d1a92e5626d8" Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.893385 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wxhjt"] Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.938537 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.973355 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-5v2tk"] Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.984915 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-jb4fq"] Jan 26 16:01:18 crc kubenswrapper[4896]: I0126 16:01:18.986775 4896 scope.go:117] "RemoveContainer" containerID="72ac40e8fd004b566ef52984493eb2acc326d3c1b0b5fd9b2dd57ecccf830993" Jan 26 16:01:19 crc kubenswrapper[4896]: I0126 16:01:19.001055 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-jb4fq"] Jan 26 16:01:19 crc kubenswrapper[4896]: I0126 16:01:19.010236 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 26 16:01:19 crc kubenswrapper[4896]: I0126 16:01:19.120239 4896 generic.go:334] "Generic (PLEG): container finished" podID="41bebe25-46fb-4c06-9977-e39a32407c42" containerID="4d4af3a94d447111c7c6c7fc45bdc5fddb1660c1ccb8ad22497fe272ff4714a3" exitCode=2 Jan 26 16:01:19 crc kubenswrapper[4896]: I0126 16:01:19.120310 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"41bebe25-46fb-4c06-9977-e39a32407c42","Type":"ContainerDied","Data":"4d4af3a94d447111c7c6c7fc45bdc5fddb1660c1ccb8ad22497fe272ff4714a3"}
Jan 26 16:01:19 crc kubenswrapper[4896]: I0126 16:01:19.125686 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"49801199-d283-4061-bc35-6f1be4984b64","Type":"ContainerStarted","Data":"1566ead345517904f08e4a7059485501854eccabce0d6b34c358437483ab83f0"}
Jan 26 16:01:19 crc kubenswrapper[4896]: I0126 16:01:19.133625 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-5v2tk" event={"ID":"08dd6673-2fbb-4bb0-ab7b-5b441d18684d","Type":"ContainerStarted","Data":"f6dd40597a4347b34a874d4119729db77505aab5d546ebc9b534026f93bdc820"}
Jan 26 16:01:19 crc kubenswrapper[4896]: I0126 16:01:19.143791 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54d4db4449-vlmh7" event={"ID":"cf40a9b0-1e7e-43c9-afa9-571170cc8285","Type":"ContainerStarted","Data":"b30677731512cbcc261dc72482dd3d458bf6a215cea5666e94409bfcb953225d"}
Jan 26 16:01:19 crc kubenswrapper[4896]: I0126 16:01:19.144871 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"05468d2e-2ac7-45f3-973a-ff9c4559701e","Type":"ContainerStarted","Data":"508c45d4cdb4691ab33794ebaab5148972cce5347cc27b933dce3e2f8fdf6540"}
Jan 26 16:01:19 crc kubenswrapper[4896]: I0126 16:01:19.149223 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-64fcf9448b-45l75"
Jan 26 16:01:19 crc kubenswrapper[4896]: I0126 16:01:19.199283 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-64fcf9448b-45l75"]
Jan 26 16:01:19 crc kubenswrapper[4896]: I0126 16:01:19.227120 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-64fcf9448b-45l75"]
Jan 26 16:01:20 crc kubenswrapper[4896]: I0126 16:01:20.231361 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"05468d2e-2ac7-45f3-973a-ff9c4559701e","Type":"ContainerStarted","Data":"888feea7b66b78554c3bec85e17ccb1ef5b34e56fd003197d92558c89f4c3ec3"}
Jan 26 16:01:20 crc kubenswrapper[4896]: I0126 16:01:20.272731 4896 generic.go:334] "Generic (PLEG): container finished" podID="08dd6673-2fbb-4bb0-ab7b-5b441d18684d" containerID="9e0b0bf388e3869092f6535c2e906143f41dd31e5b5c5d03222d8c3f0ae4654e" exitCode=0
Jan 26 16:01:20 crc kubenswrapper[4896]: I0126 16:01:20.272847 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-5v2tk" event={"ID":"08dd6673-2fbb-4bb0-ab7b-5b441d18684d","Type":"ContainerDied","Data":"9e0b0bf388e3869092f6535c2e906143f41dd31e5b5c5d03222d8c3f0ae4654e"}
Jan 26 16:01:20 crc kubenswrapper[4896]: I0126 16:01:20.276282 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54d4db4449-vlmh7" event={"ID":"cf40a9b0-1e7e-43c9-afa9-571170cc8285","Type":"ContainerStarted","Data":"2d099e728aed07b1a322347cba0a730328f4216fda2750a5685170a5d138100a"}
Jan 26 16:01:20 crc kubenswrapper[4896]: I0126 16:01:20.276316 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54d4db4449-vlmh7" event={"ID":"cf40a9b0-1e7e-43c9-afa9-571170cc8285","Type":"ContainerStarted","Data":"5568443d0d23f0c0cf66d3976de1ef0eb39effc2cfb39f8b2684aba33cb65001"}
Jan 26 16:01:20 crc kubenswrapper[4896]: I0126 16:01:20.276589 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-54d4db4449-vlmh7"
Jan 26 16:01:20 crc kubenswrapper[4896]: I0126 16:01:20.382481 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-54d4db4449-vlmh7" podStartSLOduration=16.382462167 podStartE2EDuration="16.382462167s" podCreationTimestamp="2026-01-26 16:01:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:01:20.355864531 +0000 UTC m=+1638.137744924" watchObservedRunningTime="2026-01-26 16:01:20.382462167 +0000 UTC m=+1638.164342560"
Jan 26 16:01:20 crc kubenswrapper[4896]: I0126 16:01:20.784113 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89998198-666d-4e9c-9213-8dd8fdcdd0d9" path="/var/lib/kubelet/pods/89998198-666d-4e9c-9213-8dd8fdcdd0d9/volumes"
Jan 26 16:01:20 crc kubenswrapper[4896]: I0126 16:01:20.792232 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc21396d-5abd-42fb-b33c-5099769ea73f" path="/var/lib/kubelet/pods/bc21396d-5abd-42fb-b33c-5099769ea73f/volumes"
Jan 26 16:01:20 crc kubenswrapper[4896]: I0126 16:01:20.793324 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c356aa29-1407-47fb-80f0-a7f1b4a58919" path="/var/lib/kubelet/pods/c356aa29-1407-47fb-80f0-a7f1b4a58919/volumes"
Jan 26 16:01:21 crc kubenswrapper[4896]: I0126 16:01:21.306903 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-69569b65bc-qdnx9"
Jan 26 16:01:21 crc kubenswrapper[4896]: I0126 16:01:21.309491 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"49801199-d283-4061-bc35-6f1be4984b64","Type":"ContainerStarted","Data":"f3b6acb35a7782688ba62d8a2815b1e68bcec9d750cf217a7aa2cbb4bc0e7f90"}
Jan 26 16:01:21 crc kubenswrapper[4896]: I0126 16:01:21.313131 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"05468d2e-2ac7-45f3-973a-ff9c4559701e","Type":"ContainerStarted","Data":"5d19857ed92b9b0e8b025b983d4f2307acff2a3fb76ae28611366e1eef231c2b"}
Jan 26 16:01:21 crc kubenswrapper[4896]: I0126 16:01:21.313698 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="05468d2e-2ac7-45f3-973a-ff9c4559701e" containerName="cinder-api-log" containerID="cri-o://888feea7b66b78554c3bec85e17ccb1ef5b34e56fd003197d92558c89f4c3ec3" gracePeriod=30
Jan 26 16:01:21 crc kubenswrapper[4896]: I0126 16:01:21.314157 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Jan 26 16:01:21 crc kubenswrapper[4896]: I0126 16:01:21.314227 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="05468d2e-2ac7-45f3-973a-ff9c4559701e" containerName="cinder-api" containerID="cri-o://5d19857ed92b9b0e8b025b983d4f2307acff2a3fb76ae28611366e1eef231c2b" gracePeriod=30
Jan 26 16:01:21 crc kubenswrapper[4896]: I0126 16:01:21.320077 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-5v2tk" event={"ID":"08dd6673-2fbb-4bb0-ab7b-5b441d18684d","Type":"ContainerStarted","Data":"920c46c1a87b39a70834c4fe0e1c2a403592a7ead2395483281a55fafdcdd729"}
Jan 26 16:01:21 crc kubenswrapper[4896]: I0126 16:01:21.320872 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-5v2tk"
Jan 26 16:01:21 crc kubenswrapper[4896]: I0126 16:01:21.383198 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-5v2tk" podStartSLOduration=6.383148397 podStartE2EDuration="6.383148397s" podCreationTimestamp="2026-01-26 16:01:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:01:21.361262101 +0000 UTC m=+1639.143142514" watchObservedRunningTime="2026-01-26 16:01:21.383148397 +0000 UTC m=+1639.165028790"
Jan 26 16:01:21 crc kubenswrapper[4896]: I0126 16:01:21.411308 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.411281202 podStartE2EDuration="5.411281202s" podCreationTimestamp="2026-01-26 16:01:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:01:21.38719487 +0000 UTC m=+1639.169075263" watchObservedRunningTime="2026-01-26 16:01:21.411281202 +0000 UTC m=+1639.193161595"
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.340592 4896 generic.go:334] "Generic (PLEG): container finished" podID="41bebe25-46fb-4c06-9977-e39a32407c42" containerID="7d22bf68b6d9f2f702185d0edc40b21d55ac24b50a4ed6bd26bac70cf984f96b" exitCode=0
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.341138 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41bebe25-46fb-4c06-9977-e39a32407c42","Type":"ContainerDied","Data":"7d22bf68b6d9f2f702185d0edc40b21d55ac24b50a4ed6bd26bac70cf984f96b"}
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.345817 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"49801199-d283-4061-bc35-6f1be4984b64","Type":"ContainerStarted","Data":"8440e67ab95867c0d7df93ad7f9ec8896697f06c063a76b386b8ee6c2e356319"}
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.355450 4896 generic.go:334] "Generic (PLEG): container finished" podID="05468d2e-2ac7-45f3-973a-ff9c4559701e" containerID="888feea7b66b78554c3bec85e17ccb1ef5b34e56fd003197d92558c89f4c3ec3" exitCode=143
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.356774 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"05468d2e-2ac7-45f3-973a-ff9c4559701e","Type":"ContainerDied","Data":"888feea7b66b78554c3bec85e17ccb1ef5b34e56fd003197d92558c89f4c3ec3"}
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.371429 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.8973476609999995 podStartE2EDuration="7.371406742s" podCreationTimestamp="2026-01-26 16:01:15 +0000 UTC" firstStartedPulling="2026-01-26 16:01:18.82317939 +0000 UTC m=+1636.605059783" lastFinishedPulling="2026-01-26 16:01:20.297238471 +0000 UTC m=+1638.079118864" observedRunningTime="2026-01-26 16:01:22.368490808 +0000 UTC m=+1640.150371211" watchObservedRunningTime="2026-01-26 16:01:22.371406742 +0000 UTC m=+1640.153287135"
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.440666 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Jan 26 16:01:22 crc kubenswrapper[4896]: E0126 16:01:22.441224 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4450b82-6d66-4109-8fec-6b979256d032" containerName="registry-server"
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.441238 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4450b82-6d66-4109-8fec-6b979256d032" containerName="registry-server"
Jan 26 16:01:22 crc kubenswrapper[4896]: E0126 16:01:22.441258 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89998198-666d-4e9c-9213-8dd8fdcdd0d9" containerName="dnsmasq-dns"
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.441264 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="89998198-666d-4e9c-9213-8dd8fdcdd0d9" containerName="dnsmasq-dns"
Jan 26 16:01:22 crc kubenswrapper[4896]: E0126 16:01:22.441276 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdbdc1e0-1624-4300-91bb-1bfe556567c6" containerName="neutron-httpd"
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.441282 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdbdc1e0-1624-4300-91bb-1bfe556567c6" containerName="neutron-httpd"
Jan 26 16:01:22 crc kubenswrapper[4896]: E0126 16:01:22.441290 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdbdc1e0-1624-4300-91bb-1bfe556567c6" containerName="neutron-api"
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.441296 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdbdc1e0-1624-4300-91bb-1bfe556567c6" containerName="neutron-api"
Jan 26 16:01:22 crc kubenswrapper[4896]: E0126 16:01:22.441303 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc21396d-5abd-42fb-b33c-5099769ea73f" containerName="barbican-api"
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.441308 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc21396d-5abd-42fb-b33c-5099769ea73f" containerName="barbican-api"
Jan 26 16:01:22 crc kubenswrapper[4896]: E0126 16:01:22.441325 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4450b82-6d66-4109-8fec-6b979256d032" containerName="extract-utilities"
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.441331 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4450b82-6d66-4109-8fec-6b979256d032" containerName="extract-utilities"
Jan 26 16:01:22 crc kubenswrapper[4896]: E0126 16:01:22.441341 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c356aa29-1407-47fb-80f0-a7f1b4a58919" containerName="extract-content"
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.441347 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="c356aa29-1407-47fb-80f0-a7f1b4a58919" containerName="extract-content"
Jan 26 16:01:22 crc kubenswrapper[4896]: E0126 16:01:22.441356 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc21396d-5abd-42fb-b33c-5099769ea73f" containerName="barbican-api-log"
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.441362 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc21396d-5abd-42fb-b33c-5099769ea73f" containerName="barbican-api-log"
Jan 26 16:01:22 crc kubenswrapper[4896]: E0126 16:01:22.441373 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89998198-666d-4e9c-9213-8dd8fdcdd0d9" containerName="init"
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.441380 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="89998198-666d-4e9c-9213-8dd8fdcdd0d9" containerName="init"
Jan 26 16:01:22 crc kubenswrapper[4896]: E0126 16:01:22.441388 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c356aa29-1407-47fb-80f0-a7f1b4a58919" containerName="registry-server"
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.441393 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="c356aa29-1407-47fb-80f0-a7f1b4a58919" containerName="registry-server"
Jan 26 16:01:22 crc kubenswrapper[4896]: E0126 16:01:22.441400 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4450b82-6d66-4109-8fec-6b979256d032" containerName="extract-content"
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.441406 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4450b82-6d66-4109-8fec-6b979256d032" containerName="extract-content"
Jan 26 16:01:22 crc kubenswrapper[4896]: E0126 16:01:22.441421 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c356aa29-1407-47fb-80f0-a7f1b4a58919" containerName="extract-utilities"
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.441426 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="c356aa29-1407-47fb-80f0-a7f1b4a58919" containerName="extract-utilities"
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.441684 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc21396d-5abd-42fb-b33c-5099769ea73f" containerName="barbican-api-log"
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.441701 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc21396d-5abd-42fb-b33c-5099769ea73f" containerName="barbican-api"
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.441709 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="89998198-666d-4e9c-9213-8dd8fdcdd0d9" containerName="dnsmasq-dns"
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.441717 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="c356aa29-1407-47fb-80f0-a7f1b4a58919" containerName="registry-server"
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.441722 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4450b82-6d66-4109-8fec-6b979256d032" containerName="registry-server"
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.441732 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdbdc1e0-1624-4300-91bb-1bfe556567c6" containerName="neutron-httpd"
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.441740 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdbdc1e0-1624-4300-91bb-1bfe556567c6" containerName="neutron-api"
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.442678 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.445389 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.445682 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.445853 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-zlpb2"
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.486652 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.504248 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj5kt\" (UniqueName: \"kubernetes.io/projected/29c92273-a6ba-4661-8315-39a8c56e624d-kube-api-access-tj5kt\") pod \"openstackclient\" (UID: \"29c92273-a6ba-4661-8315-39a8c56e624d\") " pod="openstack/openstackclient"
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.504569 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/29c92273-a6ba-4661-8315-39a8c56e624d-openstack-config-secret\") pod \"openstackclient\" (UID: \"29c92273-a6ba-4661-8315-39a8c56e624d\") " pod="openstack/openstackclient"
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.504934 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/29c92273-a6ba-4661-8315-39a8c56e624d-openstack-config\") pod \"openstackclient\" (UID: \"29c92273-a6ba-4661-8315-39a8c56e624d\") " pod="openstack/openstackclient"
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.505015 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29c92273-a6ba-4661-8315-39a8c56e624d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"29c92273-a6ba-4661-8315-39a8c56e624d\") " pod="openstack/openstackclient"
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.608907 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/29c92273-a6ba-4661-8315-39a8c56e624d-openstack-config-secret\") pod \"openstackclient\" (UID: \"29c92273-a6ba-4661-8315-39a8c56e624d\") " pod="openstack/openstackclient"
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.609131 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/29c92273-a6ba-4661-8315-39a8c56e624d-openstack-config\") pod \"openstackclient\" (UID: \"29c92273-a6ba-4661-8315-39a8c56e624d\") " pod="openstack/openstackclient"
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.609188 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29c92273-a6ba-4661-8315-39a8c56e624d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"29c92273-a6ba-4661-8315-39a8c56e624d\") " pod="openstack/openstackclient"
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.609317 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tj5kt\" (UniqueName: \"kubernetes.io/projected/29c92273-a6ba-4661-8315-39a8c56e624d-kube-api-access-tj5kt\") pod \"openstackclient\" (UID: \"29c92273-a6ba-4661-8315-39a8c56e624d\") " pod="openstack/openstackclient"
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.610987 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/29c92273-a6ba-4661-8315-39a8c56e624d-openstack-config\") pod \"openstackclient\" (UID: \"29c92273-a6ba-4661-8315-39a8c56e624d\") " pod="openstack/openstackclient"
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.626071 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29c92273-a6ba-4661-8315-39a8c56e624d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"29c92273-a6ba-4661-8315-39a8c56e624d\") " pod="openstack/openstackclient"
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.626543 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/29c92273-a6ba-4661-8315-39a8c56e624d-openstack-config-secret\") pod \"openstackclient\" (UID: \"29c92273-a6ba-4661-8315-39a8c56e624d\") " pod="openstack/openstackclient"
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.652349 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tj5kt\" (UniqueName: \"kubernetes.io/projected/29c92273-a6ba-4661-8315-39a8c56e624d-kube-api-access-tj5kt\") pod \"openstackclient\" (UID: \"29c92273-a6ba-4661-8315-39a8c56e624d\") " pod="openstack/openstackclient"
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.766622 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.842980 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-64fcf9448b-45l75" podUID="bc21396d-5abd-42fb-b33c-5099769ea73f" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.206:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.843083 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-64fcf9448b-45l75" podUID="bc21396d-5abd-42fb-b33c-5099769ea73f" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.206:9311/healthcheck\": dial tcp 10.217.0.206:9311: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.937394 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"]
Jan 26 16:01:22 crc kubenswrapper[4896]: I0126 16:01:22.971638 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"]
Jan 26 16:01:23 crc kubenswrapper[4896]: I0126 16:01:23.001318 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Jan 26 16:01:23 crc kubenswrapper[4896]: I0126 16:01:23.002923 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 26 16:01:23 crc kubenswrapper[4896]: I0126 16:01:23.018540 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 26 16:01:23 crc kubenswrapper[4896]: I0126 16:01:23.123447 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgk9v\" (UniqueName: \"kubernetes.io/projected/5809e3c3-ef95-4db2-a2eb-16ca58b2f3e3-kube-api-access-xgk9v\") pod \"openstackclient\" (UID: \"5809e3c3-ef95-4db2-a2eb-16ca58b2f3e3\") " pod="openstack/openstackclient"
Jan 26 16:01:23 crc kubenswrapper[4896]: I0126 16:01:23.123505 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5809e3c3-ef95-4db2-a2eb-16ca58b2f3e3-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5809e3c3-ef95-4db2-a2eb-16ca58b2f3e3\") " pod="openstack/openstackclient"
Jan 26 16:01:23 crc kubenswrapper[4896]: I0126 16:01:23.123736 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5809e3c3-ef95-4db2-a2eb-16ca58b2f3e3-openstack-config\") pod \"openstackclient\" (UID: \"5809e3c3-ef95-4db2-a2eb-16ca58b2f3e3\") " pod="openstack/openstackclient"
Jan 26 16:01:23 crc kubenswrapper[4896]: I0126 16:01:23.123804 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5809e3c3-ef95-4db2-a2eb-16ca58b2f3e3-openstack-config-secret\") pod \"openstackclient\" (UID: \"5809e3c3-ef95-4db2-a2eb-16ca58b2f3e3\") " pod="openstack/openstackclient"
Jan 26 16:01:23 crc kubenswrapper[4896]: I0126 16:01:23.225724 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5809e3c3-ef95-4db2-a2eb-16ca58b2f3e3-openstack-config\") pod \"openstackclient\" (UID: \"5809e3c3-ef95-4db2-a2eb-16ca58b2f3e3\") " pod="openstack/openstackclient"
Jan 26 16:01:23 crc kubenswrapper[4896]: I0126 16:01:23.225813 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5809e3c3-ef95-4db2-a2eb-16ca58b2f3e3-openstack-config-secret\") pod \"openstackclient\" (UID: \"5809e3c3-ef95-4db2-a2eb-16ca58b2f3e3\") " pod="openstack/openstackclient"
Jan 26 16:01:23 crc kubenswrapper[4896]: I0126 16:01:23.225875 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgk9v\" (UniqueName: \"kubernetes.io/projected/5809e3c3-ef95-4db2-a2eb-16ca58b2f3e3-kube-api-access-xgk9v\") pod \"openstackclient\" (UID: \"5809e3c3-ef95-4db2-a2eb-16ca58b2f3e3\") " pod="openstack/openstackclient"
Jan 26 16:01:23 crc kubenswrapper[4896]: I0126 16:01:23.225908 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5809e3c3-ef95-4db2-a2eb-16ca58b2f3e3-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5809e3c3-ef95-4db2-a2eb-16ca58b2f3e3\") " pod="openstack/openstackclient"
Jan 26 16:01:23 crc kubenswrapper[4896]: I0126 16:01:23.227511 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5809e3c3-ef95-4db2-a2eb-16ca58b2f3e3-openstack-config\") pod \"openstackclient\" (UID: \"5809e3c3-ef95-4db2-a2eb-16ca58b2f3e3\") " pod="openstack/openstackclient"
Jan 26 16:01:23 crc kubenswrapper[4896]: I0126 16:01:23.230445 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5809e3c3-ef95-4db2-a2eb-16ca58b2f3e3-openstack-config-secret\") pod \"openstackclient\" (UID: \"5809e3c3-ef95-4db2-a2eb-16ca58b2f3e3\") " pod="openstack/openstackclient"
Jan 26 16:01:23 crc kubenswrapper[4896]: I0126 16:01:23.231449 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5809e3c3-ef95-4db2-a2eb-16ca58b2f3e3-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5809e3c3-ef95-4db2-a2eb-16ca58b2f3e3\") " pod="openstack/openstackclient"
Jan 26 16:01:23 crc kubenswrapper[4896]: I0126 16:01:23.250807 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgk9v\" (UniqueName: \"kubernetes.io/projected/5809e3c3-ef95-4db2-a2eb-16ca58b2f3e3-kube-api-access-xgk9v\") pod \"openstackclient\" (UID: \"5809e3c3-ef95-4db2-a2eb-16ca58b2f3e3\") " pod="openstack/openstackclient"
Jan 26 16:01:23 crc kubenswrapper[4896]: I0126 16:01:23.333641 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.030205 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.152513 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41bebe25-46fb-4c06-9977-e39a32407c42-log-httpd\") pod \"41bebe25-46fb-4c06-9977-e39a32407c42\" (UID: \"41bebe25-46fb-4c06-9977-e39a32407c42\") "
Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.153198 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41bebe25-46fb-4c06-9977-e39a32407c42-run-httpd\") pod \"41bebe25-46fb-4c06-9977-e39a32407c42\" (UID: \"41bebe25-46fb-4c06-9977-e39a32407c42\") "
Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.153238 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41bebe25-46fb-4c06-9977-e39a32407c42-scripts\") pod \"41bebe25-46fb-4c06-9977-e39a32407c42\" (UID: \"41bebe25-46fb-4c06-9977-e39a32407c42\") "
Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.153277 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41bebe25-46fb-4c06-9977-e39a32407c42-combined-ca-bundle\") pod \"41bebe25-46fb-4c06-9977-e39a32407c42\" (UID: \"41bebe25-46fb-4c06-9977-e39a32407c42\") "
Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.153355 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5c8w9\" (UniqueName: \"kubernetes.io/projected/41bebe25-46fb-4c06-9977-e39a32407c42-kube-api-access-5c8w9\") pod \"41bebe25-46fb-4c06-9977-e39a32407c42\" (UID: \"41bebe25-46fb-4c06-9977-e39a32407c42\") "
Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.153394 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/41bebe25-46fb-4c06-9977-e39a32407c42-sg-core-conf-yaml\") pod \"41bebe25-46fb-4c06-9977-e39a32407c42\" (UID: \"41bebe25-46fb-4c06-9977-e39a32407c42\") "
Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.153483 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41bebe25-46fb-4c06-9977-e39a32407c42-config-data\") pod \"41bebe25-46fb-4c06-9977-e39a32407c42\" (UID: \"41bebe25-46fb-4c06-9977-e39a32407c42\") "
Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.158022 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41bebe25-46fb-4c06-9977-e39a32407c42-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "41bebe25-46fb-4c06-9977-e39a32407c42" (UID: "41bebe25-46fb-4c06-9977-e39a32407c42"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.158898 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41bebe25-46fb-4c06-9977-e39a32407c42-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "41bebe25-46fb-4c06-9977-e39a32407c42" (UID: "41bebe25-46fb-4c06-9977-e39a32407c42"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.175488 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41bebe25-46fb-4c06-9977-e39a32407c42-kube-api-access-5c8w9" (OuterVolumeSpecName: "kube-api-access-5c8w9") pod "41bebe25-46fb-4c06-9977-e39a32407c42" (UID: "41bebe25-46fb-4c06-9977-e39a32407c42"). InnerVolumeSpecName "kube-api-access-5c8w9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.212756 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41bebe25-46fb-4c06-9977-e39a32407c42-scripts" (OuterVolumeSpecName: "scripts") pod "41bebe25-46fb-4c06-9977-e39a32407c42" (UID: "41bebe25-46fb-4c06-9977-e39a32407c42"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.259209 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41bebe25-46fb-4c06-9977-e39a32407c42-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "41bebe25-46fb-4c06-9977-e39a32407c42" (UID: "41bebe25-46fb-4c06-9977-e39a32407c42"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.261061 4896 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41bebe25-46fb-4c06-9977-e39a32407c42-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.261181 4896 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41bebe25-46fb-4c06-9977-e39a32407c42-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.261258 4896 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41bebe25-46fb-4c06-9977-e39a32407c42-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.261328 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5c8w9\" (UniqueName: \"kubernetes.io/projected/41bebe25-46fb-4c06-9977-e39a32407c42-kube-api-access-5c8w9\") on node \"crc\" DevicePath \"\""
Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.261712 4896 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/41bebe25-46fb-4c06-9977-e39a32407c42-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.266767 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41bebe25-46fb-4c06-9977-e39a32407c42-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41bebe25-46fb-4c06-9977-e39a32407c42" (UID: "41bebe25-46fb-4c06-9977-e39a32407c42"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.305616 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41bebe25-46fb-4c06-9977-e39a32407c42-config-data" (OuterVolumeSpecName: "config-data") pod "41bebe25-46fb-4c06-9977-e39a32407c42" (UID: "41bebe25-46fb-4c06-9977-e39a32407c42"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.365302 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41bebe25-46fb-4c06-9977-e39a32407c42-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.365751 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41bebe25-46fb-4c06-9977-e39a32407c42-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.399676 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41bebe25-46fb-4c06-9977-e39a32407c42","Type":"ContainerDied","Data":"4a8da1e23df670bf4e038b9aaeae350163374a3f1b32fd7e6f4f50d347118855"}
Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.400046 4896 scope.go:117] "RemoveContainer" containerID="4d4af3a94d447111c7c6c7fc45bdc5fddb1660c1ccb8ad22497fe272ff4714a3"
Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.399753 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.514635 4896 scope.go:117] "RemoveContainer" containerID="7d22bf68b6d9f2f702185d0edc40b21d55ac24b50a4ed6bd26bac70cf984f96b"
Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.615765 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.632056 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.654138 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 26 16:01:24 crc kubenswrapper[4896]: E0126 16:01:24.654616 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41bebe25-46fb-4c06-9977-e39a32407c42" containerName="ceilometer-notification-agent"
Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.654629 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="41bebe25-46fb-4c06-9977-e39a32407c42" containerName="ceilometer-notification-agent"
Jan 26 16:01:24 crc kubenswrapper[4896]: E0126 16:01:24.654668 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41bebe25-46fb-4c06-9977-e39a32407c42" containerName="sg-core"
Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.654674 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="41bebe25-46fb-4c06-9977-e39a32407c42" containerName="sg-core"
Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.654887 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="41bebe25-46fb-4c06-9977-e39a32407c42" containerName="ceilometer-notification-agent"
Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.654901 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="41bebe25-46fb-4c06-9977-e39a32407c42" containerName="sg-core"
Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.658964 4896 util.go:30] "No sandbox for pod can be
found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.666411 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.672435 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.675813 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.691965 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.701156 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b00d7b2-be29-49a2-8d7f-57511bac6549-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6b00d7b2-be29-49a2-8d7f-57511bac6549\") " pod="openstack/ceilometer-0" Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.701477 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b00d7b2-be29-49a2-8d7f-57511bac6549-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6b00d7b2-be29-49a2-8d7f-57511bac6549\") " pod="openstack/ceilometer-0" Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.701561 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b00d7b2-be29-49a2-8d7f-57511bac6549-log-httpd\") pod \"ceilometer-0\" (UID: \"6b00d7b2-be29-49a2-8d7f-57511bac6549\") " pod="openstack/ceilometer-0" Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.701687 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/6b00d7b2-be29-49a2-8d7f-57511bac6549-run-httpd\") pod \"ceilometer-0\" (UID: \"6b00d7b2-be29-49a2-8d7f-57511bac6549\") " pod="openstack/ceilometer-0" Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.701768 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cqrc\" (UniqueName: \"kubernetes.io/projected/6b00d7b2-be29-49a2-8d7f-57511bac6549-kube-api-access-4cqrc\") pod \"ceilometer-0\" (UID: \"6b00d7b2-be29-49a2-8d7f-57511bac6549\") " pod="openstack/ceilometer-0" Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.701918 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b00d7b2-be29-49a2-8d7f-57511bac6549-config-data\") pod \"ceilometer-0\" (UID: \"6b00d7b2-be29-49a2-8d7f-57511bac6549\") " pod="openstack/ceilometer-0" Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.702086 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b00d7b2-be29-49a2-8d7f-57511bac6549-scripts\") pod \"ceilometer-0\" (UID: \"6b00d7b2-be29-49a2-8d7f-57511bac6549\") " pod="openstack/ceilometer-0" Jan 26 16:01:24 crc kubenswrapper[4896]: E0126 16:01:24.719361 4896 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 26 16:01:24 crc kubenswrapper[4896]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_29c92273-a6ba-4661-8315-39a8c56e624d_0(062e57571954ff6c050531a8233a4cdfb6a30a3b41c763d83e8592d71022bbfa): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"062e57571954ff6c050531a8233a4cdfb6a30a3b41c763d83e8592d71022bbfa" 
Netns:"/var/run/netns/08dba94f-1d98-4128-a9fb-f488f2cc0cdd" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=062e57571954ff6c050531a8233a4cdfb6a30a3b41c763d83e8592d71022bbfa;K8S_POD_UID=29c92273-a6ba-4661-8315-39a8c56e624d" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/29c92273-a6ba-4661-8315-39a8c56e624d]: expected pod UID "29c92273-a6ba-4661-8315-39a8c56e624d" but got "5809e3c3-ef95-4db2-a2eb-16ca58b2f3e3" from Kube API Jan 26 16:01:24 crc kubenswrapper[4896]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 26 16:01:24 crc kubenswrapper[4896]: > Jan 26 16:01:24 crc kubenswrapper[4896]: E0126 16:01:24.719455 4896 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 26 16:01:24 crc kubenswrapper[4896]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_29c92273-a6ba-4661-8315-39a8c56e624d_0(062e57571954ff6c050531a8233a4cdfb6a30a3b41c763d83e8592d71022bbfa): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"062e57571954ff6c050531a8233a4cdfb6a30a3b41c763d83e8592d71022bbfa" Netns:"/var/run/netns/08dba94f-1d98-4128-a9fb-f488f2cc0cdd" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=062e57571954ff6c050531a8233a4cdfb6a30a3b41c763d83e8592d71022bbfa;K8S_POD_UID=29c92273-a6ba-4661-8315-39a8c56e624d" Path:"" 
ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/29c92273-a6ba-4661-8315-39a8c56e624d]: expected pod UID "29c92273-a6ba-4661-8315-39a8c56e624d" but got "5809e3c3-ef95-4db2-a2eb-16ca58b2f3e3" from Kube API Jan 26 16:01:24 crc kubenswrapper[4896]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 26 16:01:24 crc kubenswrapper[4896]: > pod="openstack/openstackclient" Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.854080 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b00d7b2-be29-49a2-8d7f-57511bac6549-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6b00d7b2-be29-49a2-8d7f-57511bac6549\") " pod="openstack/ceilometer-0" Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.854351 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b00d7b2-be29-49a2-8d7f-57511bac6549-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6b00d7b2-be29-49a2-8d7f-57511bac6549\") " pod="openstack/ceilometer-0" Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.854377 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b00d7b2-be29-49a2-8d7f-57511bac6549-log-httpd\") pod \"ceilometer-0\" (UID: \"6b00d7b2-be29-49a2-8d7f-57511bac6549\") " pod="openstack/ceilometer-0" Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.854423 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/6b00d7b2-be29-49a2-8d7f-57511bac6549-run-httpd\") pod \"ceilometer-0\" (UID: \"6b00d7b2-be29-49a2-8d7f-57511bac6549\") " pod="openstack/ceilometer-0" Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.854443 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cqrc\" (UniqueName: \"kubernetes.io/projected/6b00d7b2-be29-49a2-8d7f-57511bac6549-kube-api-access-4cqrc\") pod \"ceilometer-0\" (UID: \"6b00d7b2-be29-49a2-8d7f-57511bac6549\") " pod="openstack/ceilometer-0" Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.854558 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b00d7b2-be29-49a2-8d7f-57511bac6549-config-data\") pod \"ceilometer-0\" (UID: \"6b00d7b2-be29-49a2-8d7f-57511bac6549\") " pod="openstack/ceilometer-0" Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.854656 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b00d7b2-be29-49a2-8d7f-57511bac6549-scripts\") pod \"ceilometer-0\" (UID: \"6b00d7b2-be29-49a2-8d7f-57511bac6549\") " pod="openstack/ceilometer-0" Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.857897 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b00d7b2-be29-49a2-8d7f-57511bac6549-run-httpd\") pod \"ceilometer-0\" (UID: \"6b00d7b2-be29-49a2-8d7f-57511bac6549\") " pod="openstack/ceilometer-0" Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.860789 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b00d7b2-be29-49a2-8d7f-57511bac6549-log-httpd\") pod \"ceilometer-0\" (UID: \"6b00d7b2-be29-49a2-8d7f-57511bac6549\") " pod="openstack/ceilometer-0" Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.862316 4896 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b00d7b2-be29-49a2-8d7f-57511bac6549-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6b00d7b2-be29-49a2-8d7f-57511bac6549\") " pod="openstack/ceilometer-0" Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.862498 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b00d7b2-be29-49a2-8d7f-57511bac6549-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6b00d7b2-be29-49a2-8d7f-57511bac6549\") " pod="openstack/ceilometer-0" Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.863899 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b00d7b2-be29-49a2-8d7f-57511bac6549-scripts\") pod \"ceilometer-0\" (UID: \"6b00d7b2-be29-49a2-8d7f-57511bac6549\") " pod="openstack/ceilometer-0" Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.863951 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b00d7b2-be29-49a2-8d7f-57511bac6549-config-data\") pod \"ceilometer-0\" (UID: \"6b00d7b2-be29-49a2-8d7f-57511bac6549\") " pod="openstack/ceilometer-0" Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.891809 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41bebe25-46fb-4c06-9977-e39a32407c42" path="/var/lib/kubelet/pods/41bebe25-46fb-4c06-9977-e39a32407c42/volumes" Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.901261 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cqrc\" (UniqueName: \"kubernetes.io/projected/6b00d7b2-be29-49a2-8d7f-57511bac6549-kube-api-access-4cqrc\") pod \"ceilometer-0\" (UID: \"6b00d7b2-be29-49a2-8d7f-57511bac6549\") " pod="openstack/ceilometer-0" Jan 26 16:01:24 crc kubenswrapper[4896]: I0126 16:01:24.990744 4896 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:01:25 crc kubenswrapper[4896]: I0126 16:01:25.427255 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5809e3c3-ef95-4db2-a2eb-16ca58b2f3e3","Type":"ContainerStarted","Data":"cbb202ea8e4f423793e03fede60e41c1bcdcccf0e25be2ecb0191d8ae1d6a75c"} Jan 26 16:01:25 crc kubenswrapper[4896]: I0126 16:01:25.428688 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 26 16:01:25 crc kubenswrapper[4896]: I0126 16:01:25.443409 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 26 16:01:25 crc kubenswrapper[4896]: I0126 16:01:25.446612 4896 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="29c92273-a6ba-4661-8315-39a8c56e624d" podUID="5809e3c3-ef95-4db2-a2eb-16ca58b2f3e3" Jan 26 16:01:25 crc kubenswrapper[4896]: I0126 16:01:25.571957 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tj5kt\" (UniqueName: \"kubernetes.io/projected/29c92273-a6ba-4661-8315-39a8c56e624d-kube-api-access-tj5kt\") pod \"29c92273-a6ba-4661-8315-39a8c56e624d\" (UID: \"29c92273-a6ba-4661-8315-39a8c56e624d\") " Jan 26 16:01:25 crc kubenswrapper[4896]: I0126 16:01:25.572308 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/29c92273-a6ba-4661-8315-39a8c56e624d-openstack-config-secret\") pod \"29c92273-a6ba-4661-8315-39a8c56e624d\" (UID: \"29c92273-a6ba-4661-8315-39a8c56e624d\") " Jan 26 16:01:25 crc kubenswrapper[4896]: I0126 16:01:25.572395 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/29c92273-a6ba-4661-8315-39a8c56e624d-combined-ca-bundle\") pod \"29c92273-a6ba-4661-8315-39a8c56e624d\" (UID: \"29c92273-a6ba-4661-8315-39a8c56e624d\") " Jan 26 16:01:25 crc kubenswrapper[4896]: I0126 16:01:25.572457 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/29c92273-a6ba-4661-8315-39a8c56e624d-openstack-config\") pod \"29c92273-a6ba-4661-8315-39a8c56e624d\" (UID: \"29c92273-a6ba-4661-8315-39a8c56e624d\") " Jan 26 16:01:25 crc kubenswrapper[4896]: I0126 16:01:25.574606 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29c92273-a6ba-4661-8315-39a8c56e624d-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "29c92273-a6ba-4661-8315-39a8c56e624d" (UID: "29c92273-a6ba-4661-8315-39a8c56e624d"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:01:25 crc kubenswrapper[4896]: I0126 16:01:25.578444 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29c92273-a6ba-4661-8315-39a8c56e624d-kube-api-access-tj5kt" (OuterVolumeSpecName: "kube-api-access-tj5kt") pod "29c92273-a6ba-4661-8315-39a8c56e624d" (UID: "29c92273-a6ba-4661-8315-39a8c56e624d"). InnerVolumeSpecName "kube-api-access-tj5kt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:01:25 crc kubenswrapper[4896]: I0126 16:01:25.582177 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29c92273-a6ba-4661-8315-39a8c56e624d-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "29c92273-a6ba-4661-8315-39a8c56e624d" (UID: "29c92273-a6ba-4661-8315-39a8c56e624d"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:25 crc kubenswrapper[4896]: I0126 16:01:25.584996 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29c92273-a6ba-4661-8315-39a8c56e624d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29c92273-a6ba-4661-8315-39a8c56e624d" (UID: "29c92273-a6ba-4661-8315-39a8c56e624d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:25 crc kubenswrapper[4896]: I0126 16:01:25.694081 4896 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/29c92273-a6ba-4661-8315-39a8c56e624d-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:25 crc kubenswrapper[4896]: I0126 16:01:25.694138 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29c92273-a6ba-4661-8315-39a8c56e624d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:25 crc kubenswrapper[4896]: I0126 16:01:25.694152 4896 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/29c92273-a6ba-4661-8315-39a8c56e624d-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:25 crc kubenswrapper[4896]: I0126 16:01:25.694163 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tj5kt\" (UniqueName: \"kubernetes.io/projected/29c92273-a6ba-4661-8315-39a8c56e624d-kube-api-access-tj5kt\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:25 crc kubenswrapper[4896]: I0126 16:01:25.719780 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:01:25 crc kubenswrapper[4896]: W0126 16:01:25.748461 4896 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b00d7b2_be29_49a2_8d7f_57511bac6549.slice/crio-9817c999d9a298536bf2e3bafae769206b16e77bf5cefe8e415e96f728743cf9 WatchSource:0}: Error finding container 9817c999d9a298536bf2e3bafae769206b16e77bf5cefe8e415e96f728743cf9: Status 404 returned error can't find the container with id 9817c999d9a298536bf2e3bafae769206b16e77bf5cefe8e415e96f728743cf9 Jan 26 16:01:26 crc kubenswrapper[4896]: I0126 16:01:26.111806 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 26 16:01:26 crc kubenswrapper[4896]: I0126 16:01:26.231794 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-5v2tk" Jan 26 16:01:26 crc kubenswrapper[4896]: I0126 16:01:26.308369 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-57slp"] Jan 26 16:01:26 crc kubenswrapper[4896]: I0126 16:01:26.308881 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-57slp" podUID="7992c2f6-c973-4b0e-a0a5-6035c715dc72" containerName="dnsmasq-dns" containerID="cri-o://9f3b8a8f89855b5c7a729e508562463e915cdbd5696bdf369122371b357cfa43" gracePeriod=10 Jan 26 16:01:26 crc kubenswrapper[4896]: I0126 16:01:26.493090 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="49801199-d283-4061-bc35-6f1be4984b64" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 16:01:26 crc kubenswrapper[4896]: I0126 16:01:26.511471 4896 generic.go:334] "Generic (PLEG): container finished" podID="7992c2f6-c973-4b0e-a0a5-6035c715dc72" containerID="9f3b8a8f89855b5c7a729e508562463e915cdbd5696bdf369122371b357cfa43" exitCode=0 Jan 26 16:01:26 crc kubenswrapper[4896]: I0126 16:01:26.511613 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-85ff748b95-57slp" event={"ID":"7992c2f6-c973-4b0e-a0a5-6035c715dc72","Type":"ContainerDied","Data":"9f3b8a8f89855b5c7a729e508562463e915cdbd5696bdf369122371b357cfa43"} Jan 26 16:01:26 crc kubenswrapper[4896]: I0126 16:01:26.519375 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 26 16:01:26 crc kubenswrapper[4896]: I0126 16:01:26.519675 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b00d7b2-be29-49a2-8d7f-57511bac6549","Type":"ContainerStarted","Data":"9817c999d9a298536bf2e3bafae769206b16e77bf5cefe8e415e96f728743cf9"} Jan 26 16:01:26 crc kubenswrapper[4896]: I0126 16:01:26.650845 4896 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="29c92273-a6ba-4661-8315-39a8c56e624d" podUID="5809e3c3-ef95-4db2-a2eb-16ca58b2f3e3" Jan 26 16:01:26 crc kubenswrapper[4896]: I0126 16:01:26.783443 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29c92273-a6ba-4661-8315-39a8c56e624d" path="/var/lib/kubelet/pods/29c92273-a6ba-4661-8315-39a8c56e624d/volumes" Jan 26 16:01:27 crc kubenswrapper[4896]: I0126 16:01:27.345718 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-57slp" Jan 26 16:01:27 crc kubenswrapper[4896]: I0126 16:01:27.456291 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7992c2f6-c973-4b0e-a0a5-6035c715dc72-ovsdbserver-sb\") pod \"7992c2f6-c973-4b0e-a0a5-6035c715dc72\" (UID: \"7992c2f6-c973-4b0e-a0a5-6035c715dc72\") " Jan 26 16:01:27 crc kubenswrapper[4896]: I0126 16:01:27.456690 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dqwp\" (UniqueName: \"kubernetes.io/projected/7992c2f6-c973-4b0e-a0a5-6035c715dc72-kube-api-access-9dqwp\") pod \"7992c2f6-c973-4b0e-a0a5-6035c715dc72\" (UID: \"7992c2f6-c973-4b0e-a0a5-6035c715dc72\") " Jan 26 16:01:27 crc kubenswrapper[4896]: I0126 16:01:27.457697 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7992c2f6-c973-4b0e-a0a5-6035c715dc72-config\") pod \"7992c2f6-c973-4b0e-a0a5-6035c715dc72\" (UID: \"7992c2f6-c973-4b0e-a0a5-6035c715dc72\") " Jan 26 16:01:27 crc kubenswrapper[4896]: I0126 16:01:27.457849 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7992c2f6-c973-4b0e-a0a5-6035c715dc72-dns-svc\") pod \"7992c2f6-c973-4b0e-a0a5-6035c715dc72\" (UID: \"7992c2f6-c973-4b0e-a0a5-6035c715dc72\") " Jan 26 16:01:27 crc kubenswrapper[4896]: I0126 16:01:27.458012 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7992c2f6-c973-4b0e-a0a5-6035c715dc72-ovsdbserver-nb\") pod \"7992c2f6-c973-4b0e-a0a5-6035c715dc72\" (UID: \"7992c2f6-c973-4b0e-a0a5-6035c715dc72\") " Jan 26 16:01:27 crc kubenswrapper[4896]: I0126 16:01:27.458175 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/7992c2f6-c973-4b0e-a0a5-6035c715dc72-dns-swift-storage-0\") pod \"7992c2f6-c973-4b0e-a0a5-6035c715dc72\" (UID: \"7992c2f6-c973-4b0e-a0a5-6035c715dc72\") " Jan 26 16:01:27 crc kubenswrapper[4896]: I0126 16:01:27.486469 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7992c2f6-c973-4b0e-a0a5-6035c715dc72-kube-api-access-9dqwp" (OuterVolumeSpecName: "kube-api-access-9dqwp") pod "7992c2f6-c973-4b0e-a0a5-6035c715dc72" (UID: "7992c2f6-c973-4b0e-a0a5-6035c715dc72"). InnerVolumeSpecName "kube-api-access-9dqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:01:27 crc kubenswrapper[4896]: I0126 16:01:27.594787 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dqwp\" (UniqueName: \"kubernetes.io/projected/7992c2f6-c973-4b0e-a0a5-6035c715dc72-kube-api-access-9dqwp\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:27 crc kubenswrapper[4896]: I0126 16:01:27.607146 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7992c2f6-c973-4b0e-a0a5-6035c715dc72-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7992c2f6-c973-4b0e-a0a5-6035c715dc72" (UID: "7992c2f6-c973-4b0e-a0a5-6035c715dc72"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:01:27 crc kubenswrapper[4896]: I0126 16:01:27.635119 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b00d7b2-be29-49a2-8d7f-57511bac6549","Type":"ContainerStarted","Data":"485c0c4d9bcd5a65cc308af61f59b4bf181d1fb9aa2d4a63ff162ab39120b4aa"} Jan 26 16:01:27 crc kubenswrapper[4896]: I0126 16:01:27.667393 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7992c2f6-c973-4b0e-a0a5-6035c715dc72-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7992c2f6-c973-4b0e-a0a5-6035c715dc72" (UID: "7992c2f6-c973-4b0e-a0a5-6035c715dc72"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:01:27 crc kubenswrapper[4896]: I0126 16:01:27.674863 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-57slp" event={"ID":"7992c2f6-c973-4b0e-a0a5-6035c715dc72","Type":"ContainerDied","Data":"036b14615f84e5f40755b8673bb371753c39aa4af51db85a7d9c42082c49db57"} Jan 26 16:01:27 crc kubenswrapper[4896]: I0126 16:01:27.675091 4896 scope.go:117] "RemoveContainer" containerID="9f3b8a8f89855b5c7a729e508562463e915cdbd5696bdf369122371b357cfa43" Jan 26 16:01:27 crc kubenswrapper[4896]: I0126 16:01:27.675372 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-57slp" Jan 26 16:01:27 crc kubenswrapper[4896]: I0126 16:01:27.698413 4896 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7992c2f6-c973-4b0e-a0a5-6035c715dc72-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:27 crc kubenswrapper[4896]: I0126 16:01:27.698440 4896 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7992c2f6-c973-4b0e-a0a5-6035c715dc72-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:27 crc kubenswrapper[4896]: I0126 16:01:27.712603 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7992c2f6-c973-4b0e-a0a5-6035c715dc72-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7992c2f6-c973-4b0e-a0a5-6035c715dc72" (UID: "7992c2f6-c973-4b0e-a0a5-6035c715dc72"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:01:27 crc kubenswrapper[4896]: I0126 16:01:27.733900 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7992c2f6-c973-4b0e-a0a5-6035c715dc72-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7992c2f6-c973-4b0e-a0a5-6035c715dc72" (UID: "7992c2f6-c973-4b0e-a0a5-6035c715dc72"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:01:27 crc kubenswrapper[4896]: I0126 16:01:27.754522 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7992c2f6-c973-4b0e-a0a5-6035c715dc72-config" (OuterVolumeSpecName: "config") pod "7992c2f6-c973-4b0e-a0a5-6035c715dc72" (UID: "7992c2f6-c973-4b0e-a0a5-6035c715dc72"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:01:27 crc kubenswrapper[4896]: I0126 16:01:27.756335 4896 scope.go:117] "RemoveContainer" containerID="a0d98e3a654eadf33f85f1377f1b38965ab02e59b9066d7459ce725645bde209" Jan 26 16:01:27 crc kubenswrapper[4896]: I0126 16:01:27.801422 4896 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7992c2f6-c973-4b0e-a0a5-6035c715dc72-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:27 crc kubenswrapper[4896]: I0126 16:01:27.801464 4896 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7992c2f6-c973-4b0e-a0a5-6035c715dc72-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:27 crc kubenswrapper[4896]: I0126 16:01:27.801478 4896 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7992c2f6-c973-4b0e-a0a5-6035c715dc72-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:28 crc kubenswrapper[4896]: I0126 16:01:28.045772 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-57slp"] Jan 26 16:01:28 crc kubenswrapper[4896]: I0126 16:01:28.052895 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-57slp"] Jan 26 16:01:28 crc kubenswrapper[4896]: I0126 16:01:28.716373 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b00d7b2-be29-49a2-8d7f-57511bac6549","Type":"ContainerStarted","Data":"b05d0c546a052d5e13b47eb07108a437fcf8c3fee17d385e693b889ba9703a49"} Jan 26 16:01:28 crc kubenswrapper[4896]: I0126 16:01:28.808334 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7992c2f6-c973-4b0e-a0a5-6035c715dc72" path="/var/lib/kubelet/pods/7992c2f6-c973-4b0e-a0a5-6035c715dc72/volumes" Jan 26 16:01:29 crc kubenswrapper[4896]: I0126 16:01:29.761110 4896 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b00d7b2-be29-49a2-8d7f-57511bac6549","Type":"ContainerStarted","Data":"995cca58c8613959666dd8758e56e96ca669ad767aa9c8d0ae22d01ef26e737d"} Jan 26 16:01:30 crc kubenswrapper[4896]: I0126 16:01:30.801134 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b00d7b2-be29-49a2-8d7f-57511bac6549","Type":"ContainerStarted","Data":"07f9f764fd990d60ea21df920d7812fcc38581d863721ab76ded81ba8d26f844"} Jan 26 16:01:30 crc kubenswrapper[4896]: I0126 16:01:30.805097 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 16:01:30 crc kubenswrapper[4896]: I0126 16:01:30.811914 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 26 16:01:30 crc kubenswrapper[4896]: I0126 16:01:30.850798 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.216414436 podStartE2EDuration="6.85077103s" podCreationTimestamp="2026-01-26 16:01:24 +0000 UTC" firstStartedPulling="2026-01-26 16:01:25.752599259 +0000 UTC m=+1643.534479652" lastFinishedPulling="2026-01-26 16:01:30.386955853 +0000 UTC m=+1648.168836246" observedRunningTime="2026-01-26 16:01:30.834941458 +0000 UTC m=+1648.616821851" watchObservedRunningTime="2026-01-26 16:01:30.85077103 +0000 UTC m=+1648.632651423" Jan 26 16:01:31 crc kubenswrapper[4896]: I0126 16:01:31.127435 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 26 16:01:31 crc kubenswrapper[4896]: I0126 16:01:31.211828 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 16:01:31 crc kubenswrapper[4896]: I0126 16:01:31.326936 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-686bd9bf85-wbdcn"] Jan 26 16:01:31 crc kubenswrapper[4896]: E0126 16:01:31.327672 
4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7992c2f6-c973-4b0e-a0a5-6035c715dc72" containerName="dnsmasq-dns" Jan 26 16:01:31 crc kubenswrapper[4896]: I0126 16:01:31.327696 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="7992c2f6-c973-4b0e-a0a5-6035c715dc72" containerName="dnsmasq-dns" Jan 26 16:01:31 crc kubenswrapper[4896]: E0126 16:01:31.327740 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7992c2f6-c973-4b0e-a0a5-6035c715dc72" containerName="init" Jan 26 16:01:31 crc kubenswrapper[4896]: I0126 16:01:31.327749 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="7992c2f6-c973-4b0e-a0a5-6035c715dc72" containerName="init" Jan 26 16:01:31 crc kubenswrapper[4896]: I0126 16:01:31.328192 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="7992c2f6-c973-4b0e-a0a5-6035c715dc72" containerName="dnsmasq-dns" Jan 26 16:01:31 crc kubenswrapper[4896]: I0126 16:01:31.330159 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-686bd9bf85-wbdcn" Jan 26 16:01:31 crc kubenswrapper[4896]: I0126 16:01:31.337298 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 26 16:01:31 crc kubenswrapper[4896]: I0126 16:01:31.337444 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 26 16:01:31 crc kubenswrapper[4896]: I0126 16:01:31.338171 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 26 16:01:31 crc kubenswrapper[4896]: I0126 16:01:31.338944 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-686bd9bf85-wbdcn"] Jan 26 16:01:31 crc kubenswrapper[4896]: I0126 16:01:31.438896 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad-log-httpd\") pod \"swift-proxy-686bd9bf85-wbdcn\" (UID: \"c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad\") " pod="openstack/swift-proxy-686bd9bf85-wbdcn" Jan 26 16:01:31 crc kubenswrapper[4896]: I0126 16:01:31.439301 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad-config-data\") pod \"swift-proxy-686bd9bf85-wbdcn\" (UID: \"c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad\") " pod="openstack/swift-proxy-686bd9bf85-wbdcn" Jan 26 16:01:31 crc kubenswrapper[4896]: I0126 16:01:31.439397 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad-combined-ca-bundle\") pod \"swift-proxy-686bd9bf85-wbdcn\" (UID: \"c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad\") " pod="openstack/swift-proxy-686bd9bf85-wbdcn" Jan 26 16:01:31 crc kubenswrapper[4896]: I0126 
16:01:31.439458 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad-internal-tls-certs\") pod \"swift-proxy-686bd9bf85-wbdcn\" (UID: \"c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad\") " pod="openstack/swift-proxy-686bd9bf85-wbdcn" Jan 26 16:01:31 crc kubenswrapper[4896]: I0126 16:01:31.439568 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad-etc-swift\") pod \"swift-proxy-686bd9bf85-wbdcn\" (UID: \"c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad\") " pod="openstack/swift-proxy-686bd9bf85-wbdcn" Jan 26 16:01:31 crc kubenswrapper[4896]: I0126 16:01:31.439655 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad-run-httpd\") pod \"swift-proxy-686bd9bf85-wbdcn\" (UID: \"c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad\") " pod="openstack/swift-proxy-686bd9bf85-wbdcn" Jan 26 16:01:31 crc kubenswrapper[4896]: I0126 16:01:31.439703 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdzvx\" (UniqueName: \"kubernetes.io/projected/c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad-kube-api-access-mdzvx\") pod \"swift-proxy-686bd9bf85-wbdcn\" (UID: \"c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad\") " pod="openstack/swift-proxy-686bd9bf85-wbdcn" Jan 26 16:01:31 crc kubenswrapper[4896]: I0126 16:01:31.439819 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad-public-tls-certs\") pod \"swift-proxy-686bd9bf85-wbdcn\" (UID: \"c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad\") " pod="openstack/swift-proxy-686bd9bf85-wbdcn" Jan 26 
16:01:31 crc kubenswrapper[4896]: I0126 16:01:31.542213 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad-internal-tls-certs\") pod \"swift-proxy-686bd9bf85-wbdcn\" (UID: \"c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad\") " pod="openstack/swift-proxy-686bd9bf85-wbdcn" Jan 26 16:01:31 crc kubenswrapper[4896]: I0126 16:01:31.542324 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad-etc-swift\") pod \"swift-proxy-686bd9bf85-wbdcn\" (UID: \"c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad\") " pod="openstack/swift-proxy-686bd9bf85-wbdcn" Jan 26 16:01:31 crc kubenswrapper[4896]: I0126 16:01:31.542363 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad-run-httpd\") pod \"swift-proxy-686bd9bf85-wbdcn\" (UID: \"c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad\") " pod="openstack/swift-proxy-686bd9bf85-wbdcn" Jan 26 16:01:31 crc kubenswrapper[4896]: I0126 16:01:31.542961 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad-run-httpd\") pod \"swift-proxy-686bd9bf85-wbdcn\" (UID: \"c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad\") " pod="openstack/swift-proxy-686bd9bf85-wbdcn" Jan 26 16:01:31 crc kubenswrapper[4896]: I0126 16:01:31.543407 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdzvx\" (UniqueName: \"kubernetes.io/projected/c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad-kube-api-access-mdzvx\") pod \"swift-proxy-686bd9bf85-wbdcn\" (UID: \"c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad\") " pod="openstack/swift-proxy-686bd9bf85-wbdcn" Jan 26 16:01:31 crc kubenswrapper[4896]: I0126 16:01:31.543742 4896 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad-public-tls-certs\") pod \"swift-proxy-686bd9bf85-wbdcn\" (UID: \"c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad\") " pod="openstack/swift-proxy-686bd9bf85-wbdcn" Jan 26 16:01:31 crc kubenswrapper[4896]: I0126 16:01:31.544194 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad-log-httpd\") pod \"swift-proxy-686bd9bf85-wbdcn\" (UID: \"c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad\") " pod="openstack/swift-proxy-686bd9bf85-wbdcn" Jan 26 16:01:31 crc kubenswrapper[4896]: I0126 16:01:31.544251 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad-config-data\") pod \"swift-proxy-686bd9bf85-wbdcn\" (UID: \"c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad\") " pod="openstack/swift-proxy-686bd9bf85-wbdcn" Jan 26 16:01:31 crc kubenswrapper[4896]: I0126 16:01:31.544326 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad-combined-ca-bundle\") pod \"swift-proxy-686bd9bf85-wbdcn\" (UID: \"c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad\") " pod="openstack/swift-proxy-686bd9bf85-wbdcn" Jan 26 16:01:31 crc kubenswrapper[4896]: I0126 16:01:31.547156 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad-log-httpd\") pod \"swift-proxy-686bd9bf85-wbdcn\" (UID: \"c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad\") " pod="openstack/swift-proxy-686bd9bf85-wbdcn" Jan 26 16:01:31 crc kubenswrapper[4896]: I0126 16:01:31.551843 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" 
(UniqueName: \"kubernetes.io/projected/c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad-etc-swift\") pod \"swift-proxy-686bd9bf85-wbdcn\" (UID: \"c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad\") " pod="openstack/swift-proxy-686bd9bf85-wbdcn" Jan 26 16:01:31 crc kubenswrapper[4896]: I0126 16:01:31.553693 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad-combined-ca-bundle\") pod \"swift-proxy-686bd9bf85-wbdcn\" (UID: \"c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad\") " pod="openstack/swift-proxy-686bd9bf85-wbdcn" Jan 26 16:01:31 crc kubenswrapper[4896]: I0126 16:01:31.555268 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad-config-data\") pod \"swift-proxy-686bd9bf85-wbdcn\" (UID: \"c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad\") " pod="openstack/swift-proxy-686bd9bf85-wbdcn" Jan 26 16:01:31 crc kubenswrapper[4896]: I0126 16:01:31.561281 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad-public-tls-certs\") pod \"swift-proxy-686bd9bf85-wbdcn\" (UID: \"c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad\") " pod="openstack/swift-proxy-686bd9bf85-wbdcn" Jan 26 16:01:31 crc kubenswrapper[4896]: I0126 16:01:31.563324 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdzvx\" (UniqueName: \"kubernetes.io/projected/c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad-kube-api-access-mdzvx\") pod \"swift-proxy-686bd9bf85-wbdcn\" (UID: \"c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad\") " pod="openstack/swift-proxy-686bd9bf85-wbdcn" Jan 26 16:01:31 crc kubenswrapper[4896]: I0126 16:01:31.564112 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad-internal-tls-certs\") pod \"swift-proxy-686bd9bf85-wbdcn\" (UID: \"c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad\") " pod="openstack/swift-proxy-686bd9bf85-wbdcn" Jan 26 16:01:31 crc kubenswrapper[4896]: I0126 16:01:31.657279 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-686bd9bf85-wbdcn" Jan 26 16:01:31 crc kubenswrapper[4896]: I0126 16:01:31.813198 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="49801199-d283-4061-bc35-6f1be4984b64" containerName="cinder-scheduler" containerID="cri-o://f3b6acb35a7782688ba62d8a2815b1e68bcec9d750cf217a7aa2cbb4bc0e7f90" gracePeriod=30 Jan 26 16:01:31 crc kubenswrapper[4896]: I0126 16:01:31.814526 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="49801199-d283-4061-bc35-6f1be4984b64" containerName="probe" containerID="cri-o://8440e67ab95867c0d7df93ad7f9ec8896697f06c063a76b386b8ee6c2e356319" gracePeriod=30 Jan 26 16:01:32 crc kubenswrapper[4896]: I0126 16:01:32.406750 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-686bd9bf85-wbdcn"] Jan 26 16:01:32 crc kubenswrapper[4896]: I0126 16:01:32.834089 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-686bd9bf85-wbdcn" event={"ID":"c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad","Type":"ContainerStarted","Data":"e078392a60ef56fdd1b7a2420684c0f2b5c590847bd3f9ecd9d040702e63c915"} Jan 26 16:01:33 crc kubenswrapper[4896]: I0126 16:01:33.875657 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-686bd9bf85-wbdcn" event={"ID":"c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad","Type":"ContainerStarted","Data":"a7eb08963b5d46faf103bd3d30f3a12d55df434b0b766038308e540a3dbb3cce"} Jan 26 16:01:33 crc kubenswrapper[4896]: I0126 16:01:33.875924 4896 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/swift-proxy-686bd9bf85-wbdcn" event={"ID":"c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad","Type":"ContainerStarted","Data":"7b769e486dcd40c4227016d967cb04ece67f39dfab919614eba3dd96a254471e"} Jan 26 16:01:33 crc kubenswrapper[4896]: I0126 16:01:33.876491 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-686bd9bf85-wbdcn" Jan 26 16:01:33 crc kubenswrapper[4896]: I0126 16:01:33.888389 4896 generic.go:334] "Generic (PLEG): container finished" podID="49801199-d283-4061-bc35-6f1be4984b64" containerID="8440e67ab95867c0d7df93ad7f9ec8896697f06c063a76b386b8ee6c2e356319" exitCode=0 Jan 26 16:01:33 crc kubenswrapper[4896]: I0126 16:01:33.888713 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"49801199-d283-4061-bc35-6f1be4984b64","Type":"ContainerDied","Data":"8440e67ab95867c0d7df93ad7f9ec8896697f06c063a76b386b8ee6c2e356319"} Jan 26 16:01:33 crc kubenswrapper[4896]: I0126 16:01:33.922199 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-686bd9bf85-wbdcn" podStartSLOduration=2.922177525 podStartE2EDuration="2.922177525s" podCreationTimestamp="2026-01-26 16:01:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:01:33.906394954 +0000 UTC m=+1651.688275367" watchObservedRunningTime="2026-01-26 16:01:33.922177525 +0000 UTC m=+1651.704057918" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.084332 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-6f79bf9b96-md4vg"] Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.090789 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-6f79bf9b96-md4vg" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.100643 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-m6pjw" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.100839 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.101057 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.145417 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6f79bf9b96-md4vg"] Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.181213 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fslqv\" (UniqueName: \"kubernetes.io/projected/9be68e33-e343-492f-913b-098163a26f87-kube-api-access-fslqv\") pod \"heat-engine-6f79bf9b96-md4vg\" (UID: \"9be68e33-e343-492f-913b-098163a26f87\") " pod="openstack/heat-engine-6f79bf9b96-md4vg" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.181314 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9be68e33-e343-492f-913b-098163a26f87-config-data-custom\") pod \"heat-engine-6f79bf9b96-md4vg\" (UID: \"9be68e33-e343-492f-913b-098163a26f87\") " pod="openstack/heat-engine-6f79bf9b96-md4vg" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.181522 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9be68e33-e343-492f-913b-098163a26f87-config-data\") pod \"heat-engine-6f79bf9b96-md4vg\" (UID: \"9be68e33-e343-492f-913b-098163a26f87\") " pod="openstack/heat-engine-6f79bf9b96-md4vg" Jan 26 16:01:34 crc 
kubenswrapper[4896]: I0126 16:01:34.181671 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be68e33-e343-492f-913b-098163a26f87-combined-ca-bundle\") pod \"heat-engine-6f79bf9b96-md4vg\" (UID: \"9be68e33-e343-492f-913b-098163a26f87\") " pod="openstack/heat-engine-6f79bf9b96-md4vg" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.183524 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-ctvwg"] Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.185870 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-ctvwg" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.213440 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-ctvwg"] Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.319864 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-545f7c69fd-hm6nm"] Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.360738 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-545f7c69fd-hm6nm" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.361966 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9be68e33-e343-492f-913b-098163a26f87-config-data-custom\") pod \"heat-engine-6f79bf9b96-md4vg\" (UID: \"9be68e33-e343-492f-913b-098163a26f87\") " pod="openstack/heat-engine-6f79bf9b96-md4vg" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.362035 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9be68e33-e343-492f-913b-098163a26f87-config-data\") pod \"heat-engine-6f79bf9b96-md4vg\" (UID: \"9be68e33-e343-492f-913b-098163a26f87\") " pod="openstack/heat-engine-6f79bf9b96-md4vg" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.362082 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3378c1f-3999-417b-8b94-ba779f8b48c3-config\") pod \"dnsmasq-dns-7756b9d78c-ctvwg\" (UID: \"c3378c1f-3999-417b-8b94-ba779f8b48c3\") " pod="openstack/dnsmasq-dns-7756b9d78c-ctvwg" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.362105 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be68e33-e343-492f-913b-098163a26f87-combined-ca-bundle\") pod \"heat-engine-6f79bf9b96-md4vg\" (UID: \"9be68e33-e343-492f-913b-098163a26f87\") " pod="openstack/heat-engine-6f79bf9b96-md4vg" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.362409 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3378c1f-3999-417b-8b94-ba779f8b48c3-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-ctvwg\" (UID: \"c3378c1f-3999-417b-8b94-ba779f8b48c3\") " 
pod="openstack/dnsmasq-dns-7756b9d78c-ctvwg" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.362489 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6rpk\" (UniqueName: \"kubernetes.io/projected/c3378c1f-3999-417b-8b94-ba779f8b48c3-kube-api-access-b6rpk\") pod \"dnsmasq-dns-7756b9d78c-ctvwg\" (UID: \"c3378c1f-3999-417b-8b94-ba779f8b48c3\") " pod="openstack/dnsmasq-dns-7756b9d78c-ctvwg" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.362515 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3378c1f-3999-417b-8b94-ba779f8b48c3-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-ctvwg\" (UID: \"c3378c1f-3999-417b-8b94-ba779f8b48c3\") " pod="openstack/dnsmasq-dns-7756b9d78c-ctvwg" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.362547 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3378c1f-3999-417b-8b94-ba779f8b48c3-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-ctvwg\" (UID: \"c3378c1f-3999-417b-8b94-ba779f8b48c3\") " pod="openstack/dnsmasq-dns-7756b9d78c-ctvwg" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.362626 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3378c1f-3999-417b-8b94-ba779f8b48c3-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-ctvwg\" (UID: \"c3378c1f-3999-417b-8b94-ba779f8b48c3\") " pod="openstack/dnsmasq-dns-7756b9d78c-ctvwg" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.362879 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fslqv\" (UniqueName: \"kubernetes.io/projected/9be68e33-e343-492f-913b-098163a26f87-kube-api-access-fslqv\") pod \"heat-engine-6f79bf9b96-md4vg\" 
(UID: \"9be68e33-e343-492f-913b-098163a26f87\") " pod="openstack/heat-engine-6f79bf9b96-md4vg" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.364795 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.370412 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9be68e33-e343-492f-913b-098163a26f87-config-data\") pod \"heat-engine-6f79bf9b96-md4vg\" (UID: \"9be68e33-e343-492f-913b-098163a26f87\") " pod="openstack/heat-engine-6f79bf9b96-md4vg" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.384801 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be68e33-e343-492f-913b-098163a26f87-combined-ca-bundle\") pod \"heat-engine-6f79bf9b96-md4vg\" (UID: \"9be68e33-e343-492f-913b-098163a26f87\") " pod="openstack/heat-engine-6f79bf9b96-md4vg" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.400942 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-545f7c69fd-hm6nm"] Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.410843 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9be68e33-e343-492f-913b-098163a26f87-config-data-custom\") pod \"heat-engine-6f79bf9b96-md4vg\" (UID: \"9be68e33-e343-492f-913b-098163a26f87\") " pod="openstack/heat-engine-6f79bf9b96-md4vg" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.423342 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fslqv\" (UniqueName: \"kubernetes.io/projected/9be68e33-e343-492f-913b-098163a26f87-kube-api-access-fslqv\") pod \"heat-engine-6f79bf9b96-md4vg\" (UID: \"9be68e33-e343-492f-913b-098163a26f87\") " pod="openstack/heat-engine-6f79bf9b96-md4vg" Jan 26 16:01:34 crc 
kubenswrapper[4896]: I0126 16:01:34.426663 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-75c68767d8-c7w2z"] Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.428756 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-75c68767d8-c7w2z" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.439056 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.448728 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6f79bf9b96-md4vg" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.466168 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3378c1f-3999-417b-8b94-ba779f8b48c3-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-ctvwg\" (UID: \"c3378c1f-3999-417b-8b94-ba779f8b48c3\") " pod="openstack/dnsmasq-dns-7756b9d78c-ctvwg" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.466246 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6rpk\" (UniqueName: \"kubernetes.io/projected/c3378c1f-3999-417b-8b94-ba779f8b48c3-kube-api-access-b6rpk\") pod \"dnsmasq-dns-7756b9d78c-ctvwg\" (UID: \"c3378c1f-3999-417b-8b94-ba779f8b48c3\") " pod="openstack/dnsmasq-dns-7756b9d78c-ctvwg" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.466278 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3378c1f-3999-417b-8b94-ba779f8b48c3-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-ctvwg\" (UID: \"c3378c1f-3999-417b-8b94-ba779f8b48c3\") " pod="openstack/dnsmasq-dns-7756b9d78c-ctvwg" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.466309 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3378c1f-3999-417b-8b94-ba779f8b48c3-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-ctvwg\" (UID: \"c3378c1f-3999-417b-8b94-ba779f8b48c3\") " pod="openstack/dnsmasq-dns-7756b9d78c-ctvwg" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.466354 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv6tr\" (UniqueName: \"kubernetes.io/projected/1b2e158c-acf0-4642-b31b-4db17087c69c-kube-api-access-kv6tr\") pod \"heat-cfnapi-545f7c69fd-hm6nm\" (UID: \"1b2e158c-acf0-4642-b31b-4db17087c69c\") " pod="openstack/heat-cfnapi-545f7c69fd-hm6nm" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.466396 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3378c1f-3999-417b-8b94-ba779f8b48c3-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-ctvwg\" (UID: \"c3378c1f-3999-417b-8b94-ba779f8b48c3\") " pod="openstack/dnsmasq-dns-7756b9d78c-ctvwg" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.468702 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b2e158c-acf0-4642-b31b-4db17087c69c-config-data\") pod \"heat-cfnapi-545f7c69fd-hm6nm\" (UID: \"1b2e158c-acf0-4642-b31b-4db17087c69c\") " pod="openstack/heat-cfnapi-545f7c69fd-hm6nm" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.468872 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b2e158c-acf0-4642-b31b-4db17087c69c-config-data-custom\") pod \"heat-cfnapi-545f7c69fd-hm6nm\" (UID: \"1b2e158c-acf0-4642-b31b-4db17087c69c\") " pod="openstack/heat-cfnapi-545f7c69fd-hm6nm" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.468977 4896 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b2e158c-acf0-4642-b31b-4db17087c69c-combined-ca-bundle\") pod \"heat-cfnapi-545f7c69fd-hm6nm\" (UID: \"1b2e158c-acf0-4642-b31b-4db17087c69c\") " pod="openstack/heat-cfnapi-545f7c69fd-hm6nm" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.469021 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3378c1f-3999-417b-8b94-ba779f8b48c3-config\") pod \"dnsmasq-dns-7756b9d78c-ctvwg\" (UID: \"c3378c1f-3999-417b-8b94-ba779f8b48c3\") " pod="openstack/dnsmasq-dns-7756b9d78c-ctvwg" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.475062 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3378c1f-3999-417b-8b94-ba779f8b48c3-config\") pod \"dnsmasq-dns-7756b9d78c-ctvwg\" (UID: \"c3378c1f-3999-417b-8b94-ba779f8b48c3\") " pod="openstack/dnsmasq-dns-7756b9d78c-ctvwg" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.475113 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3378c1f-3999-417b-8b94-ba779f8b48c3-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-ctvwg\" (UID: \"c3378c1f-3999-417b-8b94-ba779f8b48c3\") " pod="openstack/dnsmasq-dns-7756b9d78c-ctvwg" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.475778 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3378c1f-3999-417b-8b94-ba779f8b48c3-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-ctvwg\" (UID: \"c3378c1f-3999-417b-8b94-ba779f8b48c3\") " pod="openstack/dnsmasq-dns-7756b9d78c-ctvwg" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.476387 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/c3378c1f-3999-417b-8b94-ba779f8b48c3-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-ctvwg\" (UID: \"c3378c1f-3999-417b-8b94-ba779f8b48c3\") " pod="openstack/dnsmasq-dns-7756b9d78c-ctvwg" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.477359 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3378c1f-3999-417b-8b94-ba779f8b48c3-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-ctvwg\" (UID: \"c3378c1f-3999-417b-8b94-ba779f8b48c3\") " pod="openstack/dnsmasq-dns-7756b9d78c-ctvwg" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.491277 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-75c68767d8-c7w2z"] Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.535614 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6rpk\" (UniqueName: \"kubernetes.io/projected/c3378c1f-3999-417b-8b94-ba779f8b48c3-kube-api-access-b6rpk\") pod \"dnsmasq-dns-7756b9d78c-ctvwg\" (UID: \"c3378c1f-3999-417b-8b94-ba779f8b48c3\") " pod="openstack/dnsmasq-dns-7756b9d78c-ctvwg" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.539850 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-ctvwg" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.571533 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38c3367b-9b2c-40a4-841d-7b815bbfd45a-combined-ca-bundle\") pod \"heat-api-75c68767d8-c7w2z\" (UID: \"38c3367b-9b2c-40a4-841d-7b815bbfd45a\") " pod="openstack/heat-api-75c68767d8-c7w2z" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.573330 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38c3367b-9b2c-40a4-841d-7b815bbfd45a-config-data\") pod \"heat-api-75c68767d8-c7w2z\" (UID: \"38c3367b-9b2c-40a4-841d-7b815bbfd45a\") " pod="openstack/heat-api-75c68767d8-c7w2z" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.573462 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b2e158c-acf0-4642-b31b-4db17087c69c-config-data\") pod \"heat-cfnapi-545f7c69fd-hm6nm\" (UID: \"1b2e158c-acf0-4642-b31b-4db17087c69c\") " pod="openstack/heat-cfnapi-545f7c69fd-hm6nm" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.573564 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b2e158c-acf0-4642-b31b-4db17087c69c-config-data-custom\") pod \"heat-cfnapi-545f7c69fd-hm6nm\" (UID: \"1b2e158c-acf0-4642-b31b-4db17087c69c\") " pod="openstack/heat-cfnapi-545f7c69fd-hm6nm" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.573648 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b2e158c-acf0-4642-b31b-4db17087c69c-combined-ca-bundle\") pod \"heat-cfnapi-545f7c69fd-hm6nm\" (UID: \"1b2e158c-acf0-4642-b31b-4db17087c69c\") " 
pod="openstack/heat-cfnapi-545f7c69fd-hm6nm" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.573675 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38c3367b-9b2c-40a4-841d-7b815bbfd45a-config-data-custom\") pod \"heat-api-75c68767d8-c7w2z\" (UID: \"38c3367b-9b2c-40a4-841d-7b815bbfd45a\") " pod="openstack/heat-api-75c68767d8-c7w2z" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.573723 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rtdv\" (UniqueName: \"kubernetes.io/projected/38c3367b-9b2c-40a4-841d-7b815bbfd45a-kube-api-access-4rtdv\") pod \"heat-api-75c68767d8-c7w2z\" (UID: \"38c3367b-9b2c-40a4-841d-7b815bbfd45a\") " pod="openstack/heat-api-75c68767d8-c7w2z" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.573784 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv6tr\" (UniqueName: \"kubernetes.io/projected/1b2e158c-acf0-4642-b31b-4db17087c69c-kube-api-access-kv6tr\") pod \"heat-cfnapi-545f7c69fd-hm6nm\" (UID: \"1b2e158c-acf0-4642-b31b-4db17087c69c\") " pod="openstack/heat-cfnapi-545f7c69fd-hm6nm" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.578781 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b2e158c-acf0-4642-b31b-4db17087c69c-config-data-custom\") pod \"heat-cfnapi-545f7c69fd-hm6nm\" (UID: \"1b2e158c-acf0-4642-b31b-4db17087c69c\") " pod="openstack/heat-cfnapi-545f7c69fd-hm6nm" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.581934 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b2e158c-acf0-4642-b31b-4db17087c69c-combined-ca-bundle\") pod \"heat-cfnapi-545f7c69fd-hm6nm\" (UID: \"1b2e158c-acf0-4642-b31b-4db17087c69c\") " 
pod="openstack/heat-cfnapi-545f7c69fd-hm6nm" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.604728 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b2e158c-acf0-4642-b31b-4db17087c69c-config-data\") pod \"heat-cfnapi-545f7c69fd-hm6nm\" (UID: \"1b2e158c-acf0-4642-b31b-4db17087c69c\") " pod="openstack/heat-cfnapi-545f7c69fd-hm6nm" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.614133 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv6tr\" (UniqueName: \"kubernetes.io/projected/1b2e158c-acf0-4642-b31b-4db17087c69c-kube-api-access-kv6tr\") pod \"heat-cfnapi-545f7c69fd-hm6nm\" (UID: \"1b2e158c-acf0-4642-b31b-4db17087c69c\") " pod="openstack/heat-cfnapi-545f7c69fd-hm6nm" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.678841 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38c3367b-9b2c-40a4-841d-7b815bbfd45a-combined-ca-bundle\") pod \"heat-api-75c68767d8-c7w2z\" (UID: \"38c3367b-9b2c-40a4-841d-7b815bbfd45a\") " pod="openstack/heat-api-75c68767d8-c7w2z" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.679329 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38c3367b-9b2c-40a4-841d-7b815bbfd45a-config-data\") pod \"heat-api-75c68767d8-c7w2z\" (UID: \"38c3367b-9b2c-40a4-841d-7b815bbfd45a\") " pod="openstack/heat-api-75c68767d8-c7w2z" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.679795 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38c3367b-9b2c-40a4-841d-7b815bbfd45a-config-data-custom\") pod \"heat-api-75c68767d8-c7w2z\" (UID: \"38c3367b-9b2c-40a4-841d-7b815bbfd45a\") " pod="openstack/heat-api-75c68767d8-c7w2z" Jan 26 16:01:34 crc 
kubenswrapper[4896]: I0126 16:01:34.679947 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rtdv\" (UniqueName: \"kubernetes.io/projected/38c3367b-9b2c-40a4-841d-7b815bbfd45a-kube-api-access-4rtdv\") pod \"heat-api-75c68767d8-c7w2z\" (UID: \"38c3367b-9b2c-40a4-841d-7b815bbfd45a\") " pod="openstack/heat-api-75c68767d8-c7w2z" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.689899 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38c3367b-9b2c-40a4-841d-7b815bbfd45a-combined-ca-bundle\") pod \"heat-api-75c68767d8-c7w2z\" (UID: \"38c3367b-9b2c-40a4-841d-7b815bbfd45a\") " pod="openstack/heat-api-75c68767d8-c7w2z" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.700119 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38c3367b-9b2c-40a4-841d-7b815bbfd45a-config-data-custom\") pod \"heat-api-75c68767d8-c7w2z\" (UID: \"38c3367b-9b2c-40a4-841d-7b815bbfd45a\") " pod="openstack/heat-api-75c68767d8-c7w2z" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.708602 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rtdv\" (UniqueName: \"kubernetes.io/projected/38c3367b-9b2c-40a4-841d-7b815bbfd45a-kube-api-access-4rtdv\") pod \"heat-api-75c68767d8-c7w2z\" (UID: \"38c3367b-9b2c-40a4-841d-7b815bbfd45a\") " pod="openstack/heat-api-75c68767d8-c7w2z" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.756715 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38c3367b-9b2c-40a4-841d-7b815bbfd45a-config-data\") pod \"heat-api-75c68767d8-c7w2z\" (UID: \"38c3367b-9b2c-40a4-841d-7b815bbfd45a\") " pod="openstack/heat-api-75c68767d8-c7w2z" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.900466 4896 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/heat-cfnapi-545f7c69fd-hm6nm" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.924210 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-75c68767d8-c7w2z" Jan 26 16:01:34 crc kubenswrapper[4896]: I0126 16:01:34.961724 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-686bd9bf85-wbdcn" Jan 26 16:01:35 crc kubenswrapper[4896]: I0126 16:01:35.143038 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-54d4db4449-vlmh7" Jan 26 16:01:35 crc kubenswrapper[4896]: I0126 16:01:35.237723 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7748db89d4-5m4rm"] Jan 26 16:01:35 crc kubenswrapper[4896]: I0126 16:01:35.238008 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7748db89d4-5m4rm" podUID="863656bc-e25d-45c3-9e3a-101cbcdcac9d" containerName="neutron-api" containerID="cri-o://13038f4b1fc861b0fb52f7a8a7fbca95f80bc8c3a2579a9f7d180e5953a06057" gracePeriod=30 Jan 26 16:01:35 crc kubenswrapper[4896]: I0126 16:01:35.238383 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7748db89d4-5m4rm" podUID="863656bc-e25d-45c3-9e3a-101cbcdcac9d" containerName="neutron-httpd" containerID="cri-o://98f0b276d69dd247dadc68513e3b0a32c265aaba50665acfcc9ddd1e793452fd" gracePeriod=30 Jan 26 16:01:35 crc kubenswrapper[4896]: I0126 16:01:35.595600 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6f79bf9b96-md4vg"] Jan 26 16:01:36 crc kubenswrapper[4896]: I0126 16:01:36.158608 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-ctvwg"] Jan 26 16:01:36 crc kubenswrapper[4896]: I0126 16:01:36.367083 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-75c68767d8-c7w2z"] Jan 26 16:01:36 crc 
kubenswrapper[4896]: W0126 16:01:36.455409 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3378c1f_3999_417b_8b94_ba779f8b48c3.slice/crio-d45dcc9c9ddc2d4ac40b674c531e3c62689d159cb2c0b73ebc83ec2bf8644e54 WatchSource:0}: Error finding container d45dcc9c9ddc2d4ac40b674c531e3c62689d159cb2c0b73ebc83ec2bf8644e54: Status 404 returned error can't find the container with id d45dcc9c9ddc2d4ac40b674c531e3c62689d159cb2c0b73ebc83ec2bf8644e54 Jan 26 16:01:36 crc kubenswrapper[4896]: I0126 16:01:36.490084 4896 generic.go:334] "Generic (PLEG): container finished" podID="863656bc-e25d-45c3-9e3a-101cbcdcac9d" containerID="98f0b276d69dd247dadc68513e3b0a32c265aaba50665acfcc9ddd1e793452fd" exitCode=0 Jan 26 16:01:36 crc kubenswrapper[4896]: I0126 16:01:36.490612 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7748db89d4-5m4rm" event={"ID":"863656bc-e25d-45c3-9e3a-101cbcdcac9d","Type":"ContainerDied","Data":"98f0b276d69dd247dadc68513e3b0a32c265aaba50665acfcc9ddd1e793452fd"} Jan 26 16:01:36 crc kubenswrapper[4896]: I0126 16:01:36.509920 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6f79bf9b96-md4vg" event={"ID":"9be68e33-e343-492f-913b-098163a26f87","Type":"ContainerStarted","Data":"6917f295999df01b05816faceb29cc83eb550aeb2e03abb93db349eb1eaf31bb"} Jan 26 16:01:37 crc kubenswrapper[4896]: I0126 16:01:37.075484 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:01:37 crc kubenswrapper[4896]: I0126 16:01:37.075893 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6b00d7b2-be29-49a2-8d7f-57511bac6549" containerName="ceilometer-central-agent" containerID="cri-o://485c0c4d9bcd5a65cc308af61f59b4bf181d1fb9aa2d4a63ff162ab39120b4aa" gracePeriod=30 Jan 26 16:01:37 crc kubenswrapper[4896]: I0126 16:01:37.076062 4896 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6b00d7b2-be29-49a2-8d7f-57511bac6549" containerName="proxy-httpd" containerID="cri-o://07f9f764fd990d60ea21df920d7812fcc38581d863721ab76ded81ba8d26f844" gracePeriod=30 Jan 26 16:01:37 crc kubenswrapper[4896]: I0126 16:01:37.076131 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6b00d7b2-be29-49a2-8d7f-57511bac6549" containerName="sg-core" containerID="cri-o://995cca58c8613959666dd8758e56e96ca669ad767aa9c8d0ae22d01ef26e737d" gracePeriod=30 Jan 26 16:01:37 crc kubenswrapper[4896]: I0126 16:01:37.076183 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6b00d7b2-be29-49a2-8d7f-57511bac6549" containerName="ceilometer-notification-agent" containerID="cri-o://b05d0c546a052d5e13b47eb07108a437fcf8c3fee17d385e693b889ba9703a49" gracePeriod=30 Jan 26 16:01:37 crc kubenswrapper[4896]: I0126 16:01:37.126675 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-545f7c69fd-hm6nm"] Jan 26 16:01:37 crc kubenswrapper[4896]: I0126 16:01:37.839261 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-75c68767d8-c7w2z" event={"ID":"38c3367b-9b2c-40a4-841d-7b815bbfd45a","Type":"ContainerStarted","Data":"663245645bec25efbc5c91a8acbfefee4b0984c7d1018da121e92d8472680457"} Jan 26 16:01:37 crc kubenswrapper[4896]: I0126 16:01:37.919355 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-ctvwg" event={"ID":"c3378c1f-3999-417b-8b94-ba779f8b48c3","Type":"ContainerStarted","Data":"d45dcc9c9ddc2d4ac40b674c531e3c62689d159cb2c0b73ebc83ec2bf8644e54"} Jan 26 16:01:37 crc kubenswrapper[4896]: I0126 16:01:37.923935 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-545f7c69fd-hm6nm" 
event={"ID":"1b2e158c-acf0-4642-b31b-4db17087c69c","Type":"ContainerStarted","Data":"6456233a6aec9f0ae195c0d47e3198982a6840eebd5ad6900c5b532536add0c4"} Jan 26 16:01:37 crc kubenswrapper[4896]: I0126 16:01:37.936431 4896 generic.go:334] "Generic (PLEG): container finished" podID="6b00d7b2-be29-49a2-8d7f-57511bac6549" containerID="995cca58c8613959666dd8758e56e96ca669ad767aa9c8d0ae22d01ef26e737d" exitCode=2 Jan 26 16:01:37 crc kubenswrapper[4896]: I0126 16:01:37.936504 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b00d7b2-be29-49a2-8d7f-57511bac6549","Type":"ContainerDied","Data":"995cca58c8613959666dd8758e56e96ca669ad767aa9c8d0ae22d01ef26e737d"} Jan 26 16:01:37 crc kubenswrapper[4896]: I0126 16:01:37.959972 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6f79bf9b96-md4vg" event={"ID":"9be68e33-e343-492f-913b-098163a26f87","Type":"ContainerStarted","Data":"66eeae89b1c85b83f59405699e8b43af4213e2d6b1aaa4676af29b96f8f57afe"} Jan 26 16:01:37 crc kubenswrapper[4896]: I0126 16:01:37.961530 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-6f79bf9b96-md4vg" Jan 26 16:01:38 crc kubenswrapper[4896]: I0126 16:01:38.005370 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-6f79bf9b96-md4vg" podStartSLOduration=4.005347411 podStartE2EDuration="4.005347411s" podCreationTimestamp="2026-01-26 16:01:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:01:37.982949082 +0000 UTC m=+1655.764829475" watchObservedRunningTime="2026-01-26 16:01:38.005347411 +0000 UTC m=+1655.787227804" Jan 26 16:01:38 crc kubenswrapper[4896]: I0126 16:01:38.604905 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:01:38 crc kubenswrapper[4896]: I0126 16:01:38.721507 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b00d7b2-be29-49a2-8d7f-57511bac6549-run-httpd\") pod \"6b00d7b2-be29-49a2-8d7f-57511bac6549\" (UID: \"6b00d7b2-be29-49a2-8d7f-57511bac6549\") " Jan 26 16:01:38 crc kubenswrapper[4896]: I0126 16:01:38.721953 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b00d7b2-be29-49a2-8d7f-57511bac6549-combined-ca-bundle\") pod \"6b00d7b2-be29-49a2-8d7f-57511bac6549\" (UID: \"6b00d7b2-be29-49a2-8d7f-57511bac6549\") " Jan 26 16:01:38 crc kubenswrapper[4896]: I0126 16:01:38.722135 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cqrc\" (UniqueName: \"kubernetes.io/projected/6b00d7b2-be29-49a2-8d7f-57511bac6549-kube-api-access-4cqrc\") pod \"6b00d7b2-be29-49a2-8d7f-57511bac6549\" (UID: \"6b00d7b2-be29-49a2-8d7f-57511bac6549\") " Jan 26 16:01:38 crc kubenswrapper[4896]: I0126 16:01:38.722189 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b00d7b2-be29-49a2-8d7f-57511bac6549-scripts\") pod \"6b00d7b2-be29-49a2-8d7f-57511bac6549\" (UID: \"6b00d7b2-be29-49a2-8d7f-57511bac6549\") " Jan 26 16:01:38 crc kubenswrapper[4896]: I0126 16:01:38.722123 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b00d7b2-be29-49a2-8d7f-57511bac6549-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6b00d7b2-be29-49a2-8d7f-57511bac6549" (UID: "6b00d7b2-be29-49a2-8d7f-57511bac6549"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:01:38 crc kubenswrapper[4896]: I0126 16:01:38.722245 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b00d7b2-be29-49a2-8d7f-57511bac6549-config-data\") pod \"6b00d7b2-be29-49a2-8d7f-57511bac6549\" (UID: \"6b00d7b2-be29-49a2-8d7f-57511bac6549\") " Jan 26 16:01:38 crc kubenswrapper[4896]: I0126 16:01:38.722327 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b00d7b2-be29-49a2-8d7f-57511bac6549-sg-core-conf-yaml\") pod \"6b00d7b2-be29-49a2-8d7f-57511bac6549\" (UID: \"6b00d7b2-be29-49a2-8d7f-57511bac6549\") " Jan 26 16:01:38 crc kubenswrapper[4896]: I0126 16:01:38.722412 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b00d7b2-be29-49a2-8d7f-57511bac6549-log-httpd\") pod \"6b00d7b2-be29-49a2-8d7f-57511bac6549\" (UID: \"6b00d7b2-be29-49a2-8d7f-57511bac6549\") " Jan 26 16:01:38 crc kubenswrapper[4896]: I0126 16:01:38.722872 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b00d7b2-be29-49a2-8d7f-57511bac6549-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6b00d7b2-be29-49a2-8d7f-57511bac6549" (UID: "6b00d7b2-be29-49a2-8d7f-57511bac6549"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:01:38 crc kubenswrapper[4896]: I0126 16:01:38.723636 4896 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b00d7b2-be29-49a2-8d7f-57511bac6549-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:38 crc kubenswrapper[4896]: I0126 16:01:38.723660 4896 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b00d7b2-be29-49a2-8d7f-57511bac6549-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:38 crc kubenswrapper[4896]: I0126 16:01:38.739897 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b00d7b2-be29-49a2-8d7f-57511bac6549-scripts" (OuterVolumeSpecName: "scripts") pod "6b00d7b2-be29-49a2-8d7f-57511bac6549" (UID: "6b00d7b2-be29-49a2-8d7f-57511bac6549"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:38 crc kubenswrapper[4896]: I0126 16:01:38.741813 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b00d7b2-be29-49a2-8d7f-57511bac6549-kube-api-access-4cqrc" (OuterVolumeSpecName: "kube-api-access-4cqrc") pod "6b00d7b2-be29-49a2-8d7f-57511bac6549" (UID: "6b00d7b2-be29-49a2-8d7f-57511bac6549"). InnerVolumeSpecName "kube-api-access-4cqrc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:01:38 crc kubenswrapper[4896]: I0126 16:01:38.832177 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cqrc\" (UniqueName: \"kubernetes.io/projected/6b00d7b2-be29-49a2-8d7f-57511bac6549-kube-api-access-4cqrc\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:38 crc kubenswrapper[4896]: I0126 16:01:38.840885 4896 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b00d7b2-be29-49a2-8d7f-57511bac6549-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:38 crc kubenswrapper[4896]: I0126 16:01:38.870684 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b00d7b2-be29-49a2-8d7f-57511bac6549-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6b00d7b2-be29-49a2-8d7f-57511bac6549" (UID: "6b00d7b2-be29-49a2-8d7f-57511bac6549"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:38 crc kubenswrapper[4896]: I0126 16:01:38.947067 4896 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b00d7b2-be29-49a2-8d7f-57511bac6549-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:38 crc kubenswrapper[4896]: I0126 16:01:38.972077 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b00d7b2-be29-49a2-8d7f-57511bac6549-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b00d7b2-be29-49a2-8d7f-57511bac6549" (UID: "6b00d7b2-be29-49a2-8d7f-57511bac6549"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.003265 4896 generic.go:334] "Generic (PLEG): container finished" podID="c3378c1f-3999-417b-8b94-ba779f8b48c3" containerID="90d02fa8d12a783c6d01be119835f0f6fca38a31787460613fa4e9cc898d600e" exitCode=0 Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.027921 4896 generic.go:334] "Generic (PLEG): container finished" podID="49801199-d283-4061-bc35-6f1be4984b64" containerID="f3b6acb35a7782688ba62d8a2815b1e68bcec9d750cf217a7aa2cbb4bc0e7f90" exitCode=0 Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.049515 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b00d7b2-be29-49a2-8d7f-57511bac6549-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.049825 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b00d7b2-be29-49a2-8d7f-57511bac6549-config-data" (OuterVolumeSpecName: "config-data") pod "6b00d7b2-be29-49a2-8d7f-57511bac6549" (UID: "6b00d7b2-be29-49a2-8d7f-57511bac6549"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.064280 4896 generic.go:334] "Generic (PLEG): container finished" podID="6b00d7b2-be29-49a2-8d7f-57511bac6549" containerID="07f9f764fd990d60ea21df920d7812fcc38581d863721ab76ded81ba8d26f844" exitCode=0 Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.064319 4896 generic.go:334] "Generic (PLEG): container finished" podID="6b00d7b2-be29-49a2-8d7f-57511bac6549" containerID="b05d0c546a052d5e13b47eb07108a437fcf8c3fee17d385e693b889ba9703a49" exitCode=0 Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.064329 4896 generic.go:334] "Generic (PLEG): container finished" podID="6b00d7b2-be29-49a2-8d7f-57511bac6549" containerID="485c0c4d9bcd5a65cc308af61f59b4bf181d1fb9aa2d4a63ff162ab39120b4aa" exitCode=0 Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.064494 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.111203 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-ctvwg" event={"ID":"c3378c1f-3999-417b-8b94-ba779f8b48c3","Type":"ContainerDied","Data":"90d02fa8d12a783c6d01be119835f0f6fca38a31787460613fa4e9cc898d600e"} Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.111543 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"49801199-d283-4061-bc35-6f1be4984b64","Type":"ContainerDied","Data":"f3b6acb35a7782688ba62d8a2815b1e68bcec9d750cf217a7aa2cbb4bc0e7f90"} Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.111569 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"49801199-d283-4061-bc35-6f1be4984b64","Type":"ContainerDied","Data":"1566ead345517904f08e4a7059485501854eccabce0d6b34c358437483ab83f0"} Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.111602 4896 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1566ead345517904f08e4a7059485501854eccabce0d6b34c358437483ab83f0" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.111615 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b00d7b2-be29-49a2-8d7f-57511bac6549","Type":"ContainerDied","Data":"07f9f764fd990d60ea21df920d7812fcc38581d863721ab76ded81ba8d26f844"} Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.111635 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b00d7b2-be29-49a2-8d7f-57511bac6549","Type":"ContainerDied","Data":"b05d0c546a052d5e13b47eb07108a437fcf8c3fee17d385e693b889ba9703a49"} Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.111647 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b00d7b2-be29-49a2-8d7f-57511bac6549","Type":"ContainerDied","Data":"485c0c4d9bcd5a65cc308af61f59b4bf181d1fb9aa2d4a63ff162ab39120b4aa"} Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.111657 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b00d7b2-be29-49a2-8d7f-57511bac6549","Type":"ContainerDied","Data":"9817c999d9a298536bf2e3bafae769206b16e77bf5cefe8e415e96f728743cf9"} Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.111681 4896 scope.go:117] "RemoveContainer" containerID="07f9f764fd990d60ea21df920d7812fcc38581d863721ab76ded81ba8d26f844" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.144455 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.152746 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b00d7b2-be29-49a2-8d7f-57511bac6549-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.166772 4896 scope.go:117] "RemoveContainer" containerID="995cca58c8613959666dd8758e56e96ca669ad767aa9c8d0ae22d01ef26e737d" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.192264 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.213629 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.220479 4896 scope.go:117] "RemoveContainer" containerID="b05d0c546a052d5e13b47eb07108a437fcf8c3fee17d385e693b889ba9703a49" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.253131 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:01:39 crc kubenswrapper[4896]: E0126 16:01:39.253928 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b00d7b2-be29-49a2-8d7f-57511bac6549" containerName="proxy-httpd" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.253947 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b00d7b2-be29-49a2-8d7f-57511bac6549" containerName="proxy-httpd" Jan 26 16:01:39 crc kubenswrapper[4896]: E0126 16:01:39.253968 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49801199-d283-4061-bc35-6f1be4984b64" containerName="cinder-scheduler" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.253976 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="49801199-d283-4061-bc35-6f1be4984b64" containerName="cinder-scheduler" Jan 26 16:01:39 crc kubenswrapper[4896]: E0126 16:01:39.253990 4896 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="49801199-d283-4061-bc35-6f1be4984b64" containerName="probe" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.253999 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="49801199-d283-4061-bc35-6f1be4984b64" containerName="probe" Jan 26 16:01:39 crc kubenswrapper[4896]: E0126 16:01:39.254019 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b00d7b2-be29-49a2-8d7f-57511bac6549" containerName="ceilometer-notification-agent" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.254026 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b00d7b2-be29-49a2-8d7f-57511bac6549" containerName="ceilometer-notification-agent" Jan 26 16:01:39 crc kubenswrapper[4896]: E0126 16:01:39.254039 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b00d7b2-be29-49a2-8d7f-57511bac6549" containerName="ceilometer-central-agent" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.254046 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b00d7b2-be29-49a2-8d7f-57511bac6549" containerName="ceilometer-central-agent" Jan 26 16:01:39 crc kubenswrapper[4896]: E0126 16:01:39.254107 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b00d7b2-be29-49a2-8d7f-57511bac6549" containerName="sg-core" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.254116 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b00d7b2-be29-49a2-8d7f-57511bac6549" containerName="sg-core" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.254389 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b00d7b2-be29-49a2-8d7f-57511bac6549" containerName="ceilometer-notification-agent" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.254417 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b00d7b2-be29-49a2-8d7f-57511bac6549" containerName="ceilometer-central-agent" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 
16:01:39.254446 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="49801199-d283-4061-bc35-6f1be4984b64" containerName="probe" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.254462 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b00d7b2-be29-49a2-8d7f-57511bac6549" containerName="sg-core" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.254489 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="49801199-d283-4061-bc35-6f1be4984b64" containerName="cinder-scheduler" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.254502 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b00d7b2-be29-49a2-8d7f-57511bac6549" containerName="proxy-httpd" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.254822 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49801199-d283-4061-bc35-6f1be4984b64-scripts\") pod \"49801199-d283-4061-bc35-6f1be4984b64\" (UID: \"49801199-d283-4061-bc35-6f1be4984b64\") " Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.254890 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49801199-d283-4061-bc35-6f1be4984b64-config-data-custom\") pod \"49801199-d283-4061-bc35-6f1be4984b64\" (UID: \"49801199-d283-4061-bc35-6f1be4984b64\") " Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.254991 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/49801199-d283-4061-bc35-6f1be4984b64-etc-machine-id\") pod \"49801199-d283-4061-bc35-6f1be4984b64\" (UID: \"49801199-d283-4061-bc35-6f1be4984b64\") " Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.255086 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62qdg\" (UniqueName: 
\"kubernetes.io/projected/49801199-d283-4061-bc35-6f1be4984b64-kube-api-access-62qdg\") pod \"49801199-d283-4061-bc35-6f1be4984b64\" (UID: \"49801199-d283-4061-bc35-6f1be4984b64\") " Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.255170 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49801199-d283-4061-bc35-6f1be4984b64-combined-ca-bundle\") pod \"49801199-d283-4061-bc35-6f1be4984b64\" (UID: \"49801199-d283-4061-bc35-6f1be4984b64\") " Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.255268 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49801199-d283-4061-bc35-6f1be4984b64-config-data\") pod \"49801199-d283-4061-bc35-6f1be4984b64\" (UID: \"49801199-d283-4061-bc35-6f1be4984b64\") " Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.255456 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49801199-d283-4061-bc35-6f1be4984b64-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "49801199-d283-4061-bc35-6f1be4984b64" (UID: "49801199-d283-4061-bc35-6f1be4984b64"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.256130 4896 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/49801199-d283-4061-bc35-6f1be4984b64-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.257429 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.262069 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.262475 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.286055 4896 scope.go:117] "RemoveContainer" containerID="485c0c4d9bcd5a65cc308af61f59b4bf181d1fb9aa2d4a63ff162ab39120b4aa" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.286623 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49801199-d283-4061-bc35-6f1be4984b64-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "49801199-d283-4061-bc35-6f1be4984b64" (UID: "49801199-d283-4061-bc35-6f1be4984b64"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.529812 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0479be0e-3e42-4009-8728-ed8607ac7eaf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0479be0e-3e42-4009-8728-ed8607ac7eaf\") " pod="openstack/ceilometer-0" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.529913 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0479be0e-3e42-4009-8728-ed8607ac7eaf-scripts\") pod \"ceilometer-0\" (UID: \"0479be0e-3e42-4009-8728-ed8607ac7eaf\") " pod="openstack/ceilometer-0" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.530092 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0479be0e-3e42-4009-8728-ed8607ac7eaf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0479be0e-3e42-4009-8728-ed8607ac7eaf\") " pod="openstack/ceilometer-0" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.530123 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0479be0e-3e42-4009-8728-ed8607ac7eaf-log-httpd\") pod \"ceilometer-0\" (UID: \"0479be0e-3e42-4009-8728-ed8607ac7eaf\") " pod="openstack/ceilometer-0" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.530143 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0479be0e-3e42-4009-8728-ed8607ac7eaf-config-data\") pod \"ceilometer-0\" (UID: \"0479be0e-3e42-4009-8728-ed8607ac7eaf\") " pod="openstack/ceilometer-0" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.530272 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0479be0e-3e42-4009-8728-ed8607ac7eaf-run-httpd\") pod \"ceilometer-0\" (UID: \"0479be0e-3e42-4009-8728-ed8607ac7eaf\") " pod="openstack/ceilometer-0" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.530325 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqkz4\" (UniqueName: \"kubernetes.io/projected/0479be0e-3e42-4009-8728-ed8607ac7eaf-kube-api-access-cqkz4\") pod \"ceilometer-0\" (UID: \"0479be0e-3e42-4009-8728-ed8607ac7eaf\") " pod="openstack/ceilometer-0" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.530431 4896 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49801199-d283-4061-bc35-6f1be4984b64-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 
16:01:39.560567 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49801199-d283-4061-bc35-6f1be4984b64-scripts" (OuterVolumeSpecName: "scripts") pod "49801199-d283-4061-bc35-6f1be4984b64" (UID: "49801199-d283-4061-bc35-6f1be4984b64"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.560779 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49801199-d283-4061-bc35-6f1be4984b64-kube-api-access-62qdg" (OuterVolumeSpecName: "kube-api-access-62qdg") pod "49801199-d283-4061-bc35-6f1be4984b64" (UID: "49801199-d283-4061-bc35-6f1be4984b64"). InnerVolumeSpecName "kube-api-access-62qdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.629853 4896 scope.go:117] "RemoveContainer" containerID="07f9f764fd990d60ea21df920d7812fcc38581d863721ab76ded81ba8d26f844" Jan 26 16:01:39 crc kubenswrapper[4896]: E0126 16:01:39.634719 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07f9f764fd990d60ea21df920d7812fcc38581d863721ab76ded81ba8d26f844\": container with ID starting with 07f9f764fd990d60ea21df920d7812fcc38581d863721ab76ded81ba8d26f844 not found: ID does not exist" containerID="07f9f764fd990d60ea21df920d7812fcc38581d863721ab76ded81ba8d26f844" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.635019 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07f9f764fd990d60ea21df920d7812fcc38581d863721ab76ded81ba8d26f844"} err="failed to get container status \"07f9f764fd990d60ea21df920d7812fcc38581d863721ab76ded81ba8d26f844\": rpc error: code = NotFound desc = could not find container \"07f9f764fd990d60ea21df920d7812fcc38581d863721ab76ded81ba8d26f844\": container with ID starting with 
07f9f764fd990d60ea21df920d7812fcc38581d863721ab76ded81ba8d26f844 not found: ID does not exist" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.635467 4896 scope.go:117] "RemoveContainer" containerID="995cca58c8613959666dd8758e56e96ca669ad767aa9c8d0ae22d01ef26e737d" Jan 26 16:01:39 crc kubenswrapper[4896]: E0126 16:01:39.637568 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"995cca58c8613959666dd8758e56e96ca669ad767aa9c8d0ae22d01ef26e737d\": container with ID starting with 995cca58c8613959666dd8758e56e96ca669ad767aa9c8d0ae22d01ef26e737d not found: ID does not exist" containerID="995cca58c8613959666dd8758e56e96ca669ad767aa9c8d0ae22d01ef26e737d" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.637699 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"995cca58c8613959666dd8758e56e96ca669ad767aa9c8d0ae22d01ef26e737d"} err="failed to get container status \"995cca58c8613959666dd8758e56e96ca669ad767aa9c8d0ae22d01ef26e737d\": rpc error: code = NotFound desc = could not find container \"995cca58c8613959666dd8758e56e96ca669ad767aa9c8d0ae22d01ef26e737d\": container with ID starting with 995cca58c8613959666dd8758e56e96ca669ad767aa9c8d0ae22d01ef26e737d not found: ID does not exist" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.639496 4896 scope.go:117] "RemoveContainer" containerID="b05d0c546a052d5e13b47eb07108a437fcf8c3fee17d385e693b889ba9703a49" Jan 26 16:01:39 crc kubenswrapper[4896]: E0126 16:01:39.640562 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b05d0c546a052d5e13b47eb07108a437fcf8c3fee17d385e693b889ba9703a49\": container with ID starting with b05d0c546a052d5e13b47eb07108a437fcf8c3fee17d385e693b889ba9703a49 not found: ID does not exist" containerID="b05d0c546a052d5e13b47eb07108a437fcf8c3fee17d385e693b889ba9703a49" Jan 26 16:01:39 crc 
kubenswrapper[4896]: I0126 16:01:39.640605 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b05d0c546a052d5e13b47eb07108a437fcf8c3fee17d385e693b889ba9703a49"} err="failed to get container status \"b05d0c546a052d5e13b47eb07108a437fcf8c3fee17d385e693b889ba9703a49\": rpc error: code = NotFound desc = could not find container \"b05d0c546a052d5e13b47eb07108a437fcf8c3fee17d385e693b889ba9703a49\": container with ID starting with b05d0c546a052d5e13b47eb07108a437fcf8c3fee17d385e693b889ba9703a49 not found: ID does not exist" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.640636 4896 scope.go:117] "RemoveContainer" containerID="485c0c4d9bcd5a65cc308af61f59b4bf181d1fb9aa2d4a63ff162ab39120b4aa" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.657219 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0479be0e-3e42-4009-8728-ed8607ac7eaf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0479be0e-3e42-4009-8728-ed8607ac7eaf\") " pod="openstack/ceilometer-0" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.657333 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0479be0e-3e42-4009-8728-ed8607ac7eaf-scripts\") pod \"ceilometer-0\" (UID: \"0479be0e-3e42-4009-8728-ed8607ac7eaf\") " pod="openstack/ceilometer-0" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.657482 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0479be0e-3e42-4009-8728-ed8607ac7eaf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0479be0e-3e42-4009-8728-ed8607ac7eaf\") " pod="openstack/ceilometer-0" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.657518 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/0479be0e-3e42-4009-8728-ed8607ac7eaf-log-httpd\") pod \"ceilometer-0\" (UID: \"0479be0e-3e42-4009-8728-ed8607ac7eaf\") " pod="openstack/ceilometer-0" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.657551 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0479be0e-3e42-4009-8728-ed8607ac7eaf-config-data\") pod \"ceilometer-0\" (UID: \"0479be0e-3e42-4009-8728-ed8607ac7eaf\") " pod="openstack/ceilometer-0" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.657712 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0479be0e-3e42-4009-8728-ed8607ac7eaf-run-httpd\") pod \"ceilometer-0\" (UID: \"0479be0e-3e42-4009-8728-ed8607ac7eaf\") " pod="openstack/ceilometer-0" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.657795 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqkz4\" (UniqueName: \"kubernetes.io/projected/0479be0e-3e42-4009-8728-ed8607ac7eaf-kube-api-access-cqkz4\") pod \"ceilometer-0\" (UID: \"0479be0e-3e42-4009-8728-ed8607ac7eaf\") " pod="openstack/ceilometer-0" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.658152 4896 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49801199-d283-4061-bc35-6f1be4984b64-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.658184 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62qdg\" (UniqueName: \"kubernetes.io/projected/49801199-d283-4061-bc35-6f1be4984b64-kube-api-access-62qdg\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:39 crc kubenswrapper[4896]: E0126 16:01:39.659105 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"485c0c4d9bcd5a65cc308af61f59b4bf181d1fb9aa2d4a63ff162ab39120b4aa\": container with ID starting with 485c0c4d9bcd5a65cc308af61f59b4bf181d1fb9aa2d4a63ff162ab39120b4aa not found: ID does not exist" containerID="485c0c4d9bcd5a65cc308af61f59b4bf181d1fb9aa2d4a63ff162ab39120b4aa" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.659154 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"485c0c4d9bcd5a65cc308af61f59b4bf181d1fb9aa2d4a63ff162ab39120b4aa"} err="failed to get container status \"485c0c4d9bcd5a65cc308af61f59b4bf181d1fb9aa2d4a63ff162ab39120b4aa\": rpc error: code = NotFound desc = could not find container \"485c0c4d9bcd5a65cc308af61f59b4bf181d1fb9aa2d4a63ff162ab39120b4aa\": container with ID starting with 485c0c4d9bcd5a65cc308af61f59b4bf181d1fb9aa2d4a63ff162ab39120b4aa not found: ID does not exist" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.659184 4896 scope.go:117] "RemoveContainer" containerID="07f9f764fd990d60ea21df920d7812fcc38581d863721ab76ded81ba8d26f844" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.659190 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0479be0e-3e42-4009-8728-ed8607ac7eaf-run-httpd\") pod \"ceilometer-0\" (UID: \"0479be0e-3e42-4009-8728-ed8607ac7eaf\") " pod="openstack/ceilometer-0" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.665192 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07f9f764fd990d60ea21df920d7812fcc38581d863721ab76ded81ba8d26f844"} err="failed to get container status \"07f9f764fd990d60ea21df920d7812fcc38581d863721ab76ded81ba8d26f844\": rpc error: code = NotFound desc = could not find container \"07f9f764fd990d60ea21df920d7812fcc38581d863721ab76ded81ba8d26f844\": container with ID starting with 07f9f764fd990d60ea21df920d7812fcc38581d863721ab76ded81ba8d26f844 not found: ID does not exist" Jan 26 16:01:39 crc 
kubenswrapper[4896]: I0126 16:01:39.665269 4896 scope.go:117] "RemoveContainer" containerID="995cca58c8613959666dd8758e56e96ca669ad767aa9c8d0ae22d01ef26e737d" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.671025 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0479be0e-3e42-4009-8728-ed8607ac7eaf-log-httpd\") pod \"ceilometer-0\" (UID: \"0479be0e-3e42-4009-8728-ed8607ac7eaf\") " pod="openstack/ceilometer-0" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.672532 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"995cca58c8613959666dd8758e56e96ca669ad767aa9c8d0ae22d01ef26e737d"} err="failed to get container status \"995cca58c8613959666dd8758e56e96ca669ad767aa9c8d0ae22d01ef26e737d\": rpc error: code = NotFound desc = could not find container \"995cca58c8613959666dd8758e56e96ca669ad767aa9c8d0ae22d01ef26e737d\": container with ID starting with 995cca58c8613959666dd8758e56e96ca669ad767aa9c8d0ae22d01ef26e737d not found: ID does not exist" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.672573 4896 scope.go:117] "RemoveContainer" containerID="b05d0c546a052d5e13b47eb07108a437fcf8c3fee17d385e693b889ba9703a49" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.676912 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.677191 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b05d0c546a052d5e13b47eb07108a437fcf8c3fee17d385e693b889ba9703a49"} err="failed to get container status \"b05d0c546a052d5e13b47eb07108a437fcf8c3fee17d385e693b889ba9703a49\": rpc error: code = NotFound desc = could not find container \"b05d0c546a052d5e13b47eb07108a437fcf8c3fee17d385e693b889ba9703a49\": container with ID starting with b05d0c546a052d5e13b47eb07108a437fcf8c3fee17d385e693b889ba9703a49 not found: ID does 
not exist" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.677241 4896 scope.go:117] "RemoveContainer" containerID="485c0c4d9bcd5a65cc308af61f59b4bf181d1fb9aa2d4a63ff162ab39120b4aa" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.677461 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0479be0e-3e42-4009-8728-ed8607ac7eaf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0479be0e-3e42-4009-8728-ed8607ac7eaf\") " pod="openstack/ceilometer-0" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.683821 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"485c0c4d9bcd5a65cc308af61f59b4bf181d1fb9aa2d4a63ff162ab39120b4aa"} err="failed to get container status \"485c0c4d9bcd5a65cc308af61f59b4bf181d1fb9aa2d4a63ff162ab39120b4aa\": rpc error: code = NotFound desc = could not find container \"485c0c4d9bcd5a65cc308af61f59b4bf181d1fb9aa2d4a63ff162ab39120b4aa\": container with ID starting with 485c0c4d9bcd5a65cc308af61f59b4bf181d1fb9aa2d4a63ff162ab39120b4aa not found: ID does not exist" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.683874 4896 scope.go:117] "RemoveContainer" containerID="07f9f764fd990d60ea21df920d7812fcc38581d863721ab76ded81ba8d26f844" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.688618 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0479be0e-3e42-4009-8728-ed8607ac7eaf-scripts\") pod \"ceilometer-0\" (UID: \"0479be0e-3e42-4009-8728-ed8607ac7eaf\") " pod="openstack/ceilometer-0" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.697327 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07f9f764fd990d60ea21df920d7812fcc38581d863721ab76ded81ba8d26f844"} err="failed to get container status \"07f9f764fd990d60ea21df920d7812fcc38581d863721ab76ded81ba8d26f844\": rpc error: code 
= NotFound desc = could not find container \"07f9f764fd990d60ea21df920d7812fcc38581d863721ab76ded81ba8d26f844\": container with ID starting with 07f9f764fd990d60ea21df920d7812fcc38581d863721ab76ded81ba8d26f844 not found: ID does not exist" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.697380 4896 scope.go:117] "RemoveContainer" containerID="995cca58c8613959666dd8758e56e96ca669ad767aa9c8d0ae22d01ef26e737d" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.698428 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"995cca58c8613959666dd8758e56e96ca669ad767aa9c8d0ae22d01ef26e737d"} err="failed to get container status \"995cca58c8613959666dd8758e56e96ca669ad767aa9c8d0ae22d01ef26e737d\": rpc error: code = NotFound desc = could not find container \"995cca58c8613959666dd8758e56e96ca669ad767aa9c8d0ae22d01ef26e737d\": container with ID starting with 995cca58c8613959666dd8758e56e96ca669ad767aa9c8d0ae22d01ef26e737d not found: ID does not exist" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.698445 4896 scope.go:117] "RemoveContainer" containerID="b05d0c546a052d5e13b47eb07108a437fcf8c3fee17d385e693b889ba9703a49" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.698734 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b05d0c546a052d5e13b47eb07108a437fcf8c3fee17d385e693b889ba9703a49"} err="failed to get container status \"b05d0c546a052d5e13b47eb07108a437fcf8c3fee17d385e693b889ba9703a49\": rpc error: code = NotFound desc = could not find container \"b05d0c546a052d5e13b47eb07108a437fcf8c3fee17d385e693b889ba9703a49\": container with ID starting with b05d0c546a052d5e13b47eb07108a437fcf8c3fee17d385e693b889ba9703a49 not found: ID does not exist" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.698749 4896 scope.go:117] "RemoveContainer" containerID="485c0c4d9bcd5a65cc308af61f59b4bf181d1fb9aa2d4a63ff162ab39120b4aa" Jan 26 16:01:39 crc 
kubenswrapper[4896]: I0126 16:01:39.703433 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"485c0c4d9bcd5a65cc308af61f59b4bf181d1fb9aa2d4a63ff162ab39120b4aa"} err="failed to get container status \"485c0c4d9bcd5a65cc308af61f59b4bf181d1fb9aa2d4a63ff162ab39120b4aa\": rpc error: code = NotFound desc = could not find container \"485c0c4d9bcd5a65cc308af61f59b4bf181d1fb9aa2d4a63ff162ab39120b4aa\": container with ID starting with 485c0c4d9bcd5a65cc308af61f59b4bf181d1fb9aa2d4a63ff162ab39120b4aa not found: ID does not exist" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.719411 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqkz4\" (UniqueName: \"kubernetes.io/projected/0479be0e-3e42-4009-8728-ed8607ac7eaf-kube-api-access-cqkz4\") pod \"ceilometer-0\" (UID: \"0479be0e-3e42-4009-8728-ed8607ac7eaf\") " pod="openstack/ceilometer-0" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.729146 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0479be0e-3e42-4009-8728-ed8607ac7eaf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0479be0e-3e42-4009-8728-ed8607ac7eaf\") " pod="openstack/ceilometer-0" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.820037 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0479be0e-3e42-4009-8728-ed8607ac7eaf-config-data\") pod \"ceilometer-0\" (UID: \"0479be0e-3e42-4009-8728-ed8607ac7eaf\") " pod="openstack/ceilometer-0" Jan 26 16:01:39 crc kubenswrapper[4896]: I0126 16:01:39.950197 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:01:40 crc kubenswrapper[4896]: I0126 16:01:40.042735 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49801199-d283-4061-bc35-6f1be4984b64-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "49801199-d283-4061-bc35-6f1be4984b64" (UID: "49801199-d283-4061-bc35-6f1be4984b64"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:40 crc kubenswrapper[4896]: I0126 16:01:40.218220 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49801199-d283-4061-bc35-6f1be4984b64-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:40 crc kubenswrapper[4896]: I0126 16:01:40.313471 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 16:01:40 crc kubenswrapper[4896]: I0126 16:01:40.365426 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49801199-d283-4061-bc35-6f1be4984b64-config-data" (OuterVolumeSpecName: "config-data") pod "49801199-d283-4061-bc35-6f1be4984b64" (UID: "49801199-d283-4061-bc35-6f1be4984b64"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:40 crc kubenswrapper[4896]: I0126 16:01:40.459887 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49801199-d283-4061-bc35-6f1be4984b64-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:40 crc kubenswrapper[4896]: I0126 16:01:40.677124 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 16:01:40 crc kubenswrapper[4896]: I0126 16:01:40.685213 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 16:01:40 crc kubenswrapper[4896]: I0126 16:01:40.707366 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 16:01:40 crc kubenswrapper[4896]: I0126 16:01:40.713382 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 16:01:40 crc kubenswrapper[4896]: I0126 16:01:40.718392 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 26 16:01:40 crc kubenswrapper[4896]: I0126 16:01:40.734210 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 16:01:40 crc kubenswrapper[4896]: I0126 16:01:40.766996 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgfgn\" (UniqueName: \"kubernetes.io/projected/d911d99d-a84d-4aa0-ae95-8a840d2822ce-kube-api-access-zgfgn\") pod \"cinder-scheduler-0\" (UID: \"d911d99d-a84d-4aa0-ae95-8a840d2822ce\") " pod="openstack/cinder-scheduler-0" Jan 26 16:01:40 crc kubenswrapper[4896]: I0126 16:01:40.767073 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d911d99d-a84d-4aa0-ae95-8a840d2822ce-scripts\") pod \"cinder-scheduler-0\" (UID: 
\"d911d99d-a84d-4aa0-ae95-8a840d2822ce\") " pod="openstack/cinder-scheduler-0" Jan 26 16:01:40 crc kubenswrapper[4896]: I0126 16:01:40.767151 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d911d99d-a84d-4aa0-ae95-8a840d2822ce-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d911d99d-a84d-4aa0-ae95-8a840d2822ce\") " pod="openstack/cinder-scheduler-0" Jan 26 16:01:40 crc kubenswrapper[4896]: I0126 16:01:40.767195 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d911d99d-a84d-4aa0-ae95-8a840d2822ce-config-data\") pod \"cinder-scheduler-0\" (UID: \"d911d99d-a84d-4aa0-ae95-8a840d2822ce\") " pod="openstack/cinder-scheduler-0" Jan 26 16:01:40 crc kubenswrapper[4896]: I0126 16:01:40.767317 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d911d99d-a84d-4aa0-ae95-8a840d2822ce-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d911d99d-a84d-4aa0-ae95-8a840d2822ce\") " pod="openstack/cinder-scheduler-0" Jan 26 16:01:40 crc kubenswrapper[4896]: I0126 16:01:40.767418 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d911d99d-a84d-4aa0-ae95-8a840d2822ce-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d911d99d-a84d-4aa0-ae95-8a840d2822ce\") " pod="openstack/cinder-scheduler-0" Jan 26 16:01:40 crc kubenswrapper[4896]: I0126 16:01:40.787774 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49801199-d283-4061-bc35-6f1be4984b64" path="/var/lib/kubelet/pods/49801199-d283-4061-bc35-6f1be4984b64/volumes" Jan 26 16:01:40 crc kubenswrapper[4896]: I0126 16:01:40.788704 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="6b00d7b2-be29-49a2-8d7f-57511bac6549" path="/var/lib/kubelet/pods/6b00d7b2-be29-49a2-8d7f-57511bac6549/volumes" Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.074061 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.076938 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d911d99d-a84d-4aa0-ae95-8a840d2822ce-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d911d99d-a84d-4aa0-ae95-8a840d2822ce\") " pod="openstack/cinder-scheduler-0" Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.077091 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d911d99d-a84d-4aa0-ae95-8a840d2822ce-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d911d99d-a84d-4aa0-ae95-8a840d2822ce\") " pod="openstack/cinder-scheduler-0" Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.077160 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgfgn\" (UniqueName: \"kubernetes.io/projected/d911d99d-a84d-4aa0-ae95-8a840d2822ce-kube-api-access-zgfgn\") pod \"cinder-scheduler-0\" (UID: \"d911d99d-a84d-4aa0-ae95-8a840d2822ce\") " pod="openstack/cinder-scheduler-0" Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.077289 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d911d99d-a84d-4aa0-ae95-8a840d2822ce-scripts\") pod \"cinder-scheduler-0\" (UID: \"d911d99d-a84d-4aa0-ae95-8a840d2822ce\") " pod="openstack/cinder-scheduler-0" Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.077449 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d911d99d-a84d-4aa0-ae95-8a840d2822ce-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d911d99d-a84d-4aa0-ae95-8a840d2822ce\") " pod="openstack/cinder-scheduler-0" Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.077531 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d911d99d-a84d-4aa0-ae95-8a840d2822ce-config-data\") pod \"cinder-scheduler-0\" (UID: \"d911d99d-a84d-4aa0-ae95-8a840d2822ce\") " pod="openstack/cinder-scheduler-0" Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.077754 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d911d99d-a84d-4aa0-ae95-8a840d2822ce-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d911d99d-a84d-4aa0-ae95-8a840d2822ce\") " pod="openstack/cinder-scheduler-0" Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.094427 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d911d99d-a84d-4aa0-ae95-8a840d2822ce-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d911d99d-a84d-4aa0-ae95-8a840d2822ce\") " pod="openstack/cinder-scheduler-0" Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.098922 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d911d99d-a84d-4aa0-ae95-8a840d2822ce-scripts\") pod \"cinder-scheduler-0\" (UID: \"d911d99d-a84d-4aa0-ae95-8a840d2822ce\") " pod="openstack/cinder-scheduler-0" Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.099251 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d911d99d-a84d-4aa0-ae95-8a840d2822ce-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d911d99d-a84d-4aa0-ae95-8a840d2822ce\") " 
pod="openstack/cinder-scheduler-0" Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.100786 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d911d99d-a84d-4aa0-ae95-8a840d2822ce-config-data\") pod \"cinder-scheduler-0\" (UID: \"d911d99d-a84d-4aa0-ae95-8a840d2822ce\") " pod="openstack/cinder-scheduler-0" Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.113433 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgfgn\" (UniqueName: \"kubernetes.io/projected/d911d99d-a84d-4aa0-ae95-8a840d2822ce-kube-api-access-zgfgn\") pod \"cinder-scheduler-0\" (UID: \"d911d99d-a84d-4aa0-ae95-8a840d2822ce\") " pod="openstack/cinder-scheduler-0" Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.143519 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.330399 4896 generic.go:334] "Generic (PLEG): container finished" podID="863656bc-e25d-45c3-9e3a-101cbcdcac9d" containerID="13038f4b1fc861b0fb52f7a8a7fbca95f80bc8c3a2579a9f7d180e5953a06057" exitCode=0 Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.330771 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7748db89d4-5m4rm" event={"ID":"863656bc-e25d-45c3-9e3a-101cbcdcac9d","Type":"ContainerDied","Data":"13038f4b1fc861b0fb52f7a8a7fbca95f80bc8c3a2579a9f7d180e5953a06057"} Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.337685 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-ctvwg" event={"ID":"c3378c1f-3999-417b-8b94-ba779f8b48c3","Type":"ContainerStarted","Data":"38bfd9c495695abf12c7d410784c262d39598c2300520f6a914feb0b6d339899"} Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.339137 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7756b9d78c-ctvwg" Jan 26 
16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.372370 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7756b9d78c-ctvwg" podStartSLOduration=7.372348448 podStartE2EDuration="7.372348448s" podCreationTimestamp="2026-01-26 16:01:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:01:41.357747146 +0000 UTC m=+1659.139627539" watchObservedRunningTime="2026-01-26 16:01:41.372348448 +0000 UTC m=+1659.154228841" Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.527657 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-665cc7757b-8rh2l"] Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.529766 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-665cc7757b-8rh2l" Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.550230 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="05468d2e-2ac7-45f3-973a-ff9c4559701e" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.213:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.586954 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-665cc7757b-8rh2l"] Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.609193 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68cc8d9e-4d5a-4d26-8985-2fcbefbfb839-combined-ca-bundle\") pod \"heat-engine-665cc7757b-8rh2l\" (UID: \"68cc8d9e-4d5a-4d26-8985-2fcbefbfb839\") " pod="openstack/heat-engine-665cc7757b-8rh2l" Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.609333 4896 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdgnf\" (UniqueName: \"kubernetes.io/projected/68cc8d9e-4d5a-4d26-8985-2fcbefbfb839-kube-api-access-hdgnf\") pod \"heat-engine-665cc7757b-8rh2l\" (UID: \"68cc8d9e-4d5a-4d26-8985-2fcbefbfb839\") " pod="openstack/heat-engine-665cc7757b-8rh2l" Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.609358 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68cc8d9e-4d5a-4d26-8985-2fcbefbfb839-config-data\") pod \"heat-engine-665cc7757b-8rh2l\" (UID: \"68cc8d9e-4d5a-4d26-8985-2fcbefbfb839\") " pod="openstack/heat-engine-665cc7757b-8rh2l" Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.609483 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68cc8d9e-4d5a-4d26-8985-2fcbefbfb839-config-data-custom\") pod \"heat-engine-665cc7757b-8rh2l\" (UID: \"68cc8d9e-4d5a-4d26-8985-2fcbefbfb839\") " pod="openstack/heat-engine-665cc7757b-8rh2l" Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.614786 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-5597f886c8-sbvdp"] Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.616419 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-5597f886c8-sbvdp" Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.650104 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5597f886c8-sbvdp"] Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.680704 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-57f84f676c-wb8k9"] Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.682213 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-686bd9bf85-wbdcn" Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.682764 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-686bd9bf85-wbdcn" Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.682863 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-57f84f676c-wb8k9" Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.715662 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-57f84f676c-wb8k9"] Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.938485 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29d61a7c-cfad-454b-a105-c5e589c34488-config-data-custom\") pod \"heat-api-57f84f676c-wb8k9\" (UID: \"29d61a7c-cfad-454b-a105-c5e589c34488\") " pod="openstack/heat-api-57f84f676c-wb8k9" Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.938806 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hspnm\" (UniqueName: \"kubernetes.io/projected/29d61a7c-cfad-454b-a105-c5e589c34488-kube-api-access-hspnm\") pod \"heat-api-57f84f676c-wb8k9\" (UID: \"29d61a7c-cfad-454b-a105-c5e589c34488\") " pod="openstack/heat-api-57f84f676c-wb8k9" Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.938880 4896 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29d61a7c-cfad-454b-a105-c5e589c34488-combined-ca-bundle\") pod \"heat-api-57f84f676c-wb8k9\" (UID: \"29d61a7c-cfad-454b-a105-c5e589c34488\") " pod="openstack/heat-api-57f84f676c-wb8k9" Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.939002 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab92f8c3-378a-41fe-97c0-533d45e1a4a5-config-data\") pod \"heat-cfnapi-5597f886c8-sbvdp\" (UID: \"ab92f8c3-378a-41fe-97c0-533d45e1a4a5\") " pod="openstack/heat-cfnapi-5597f886c8-sbvdp" Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.939064 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68cc8d9e-4d5a-4d26-8985-2fcbefbfb839-config-data-custom\") pod \"heat-engine-665cc7757b-8rh2l\" (UID: \"68cc8d9e-4d5a-4d26-8985-2fcbefbfb839\") " pod="openstack/heat-engine-665cc7757b-8rh2l" Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.939082 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29d61a7c-cfad-454b-a105-c5e589c34488-config-data\") pod \"heat-api-57f84f676c-wb8k9\" (UID: \"29d61a7c-cfad-454b-a105-c5e589c34488\") " pod="openstack/heat-api-57f84f676c-wb8k9" Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.939156 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab92f8c3-378a-41fe-97c0-533d45e1a4a5-config-data-custom\") pod \"heat-cfnapi-5597f886c8-sbvdp\" (UID: \"ab92f8c3-378a-41fe-97c0-533d45e1a4a5\") " pod="openstack/heat-cfnapi-5597f886c8-sbvdp" Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 
16:01:41.939186 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68cc8d9e-4d5a-4d26-8985-2fcbefbfb839-combined-ca-bundle\") pod \"heat-engine-665cc7757b-8rh2l\" (UID: \"68cc8d9e-4d5a-4d26-8985-2fcbefbfb839\") " pod="openstack/heat-engine-665cc7757b-8rh2l" Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.939305 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab92f8c3-378a-41fe-97c0-533d45e1a4a5-combined-ca-bundle\") pod \"heat-cfnapi-5597f886c8-sbvdp\" (UID: \"ab92f8c3-378a-41fe-97c0-533d45e1a4a5\") " pod="openstack/heat-cfnapi-5597f886c8-sbvdp" Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.939374 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l98kh\" (UniqueName: \"kubernetes.io/projected/ab92f8c3-378a-41fe-97c0-533d45e1a4a5-kube-api-access-l98kh\") pod \"heat-cfnapi-5597f886c8-sbvdp\" (UID: \"ab92f8c3-378a-41fe-97c0-533d45e1a4a5\") " pod="openstack/heat-cfnapi-5597f886c8-sbvdp" Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.939396 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdgnf\" (UniqueName: \"kubernetes.io/projected/68cc8d9e-4d5a-4d26-8985-2fcbefbfb839-kube-api-access-hdgnf\") pod \"heat-engine-665cc7757b-8rh2l\" (UID: \"68cc8d9e-4d5a-4d26-8985-2fcbefbfb839\") " pod="openstack/heat-engine-665cc7757b-8rh2l" Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.939413 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68cc8d9e-4d5a-4d26-8985-2fcbefbfb839-config-data\") pod \"heat-engine-665cc7757b-8rh2l\" (UID: \"68cc8d9e-4d5a-4d26-8985-2fcbefbfb839\") " pod="openstack/heat-engine-665cc7757b-8rh2l" Jan 26 16:01:41 crc 
kubenswrapper[4896]: I0126 16:01:41.946992 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68cc8d9e-4d5a-4d26-8985-2fcbefbfb839-config-data-custom\") pod \"heat-engine-665cc7757b-8rh2l\" (UID: \"68cc8d9e-4d5a-4d26-8985-2fcbefbfb839\") " pod="openstack/heat-engine-665cc7757b-8rh2l" Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.955660 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68cc8d9e-4d5a-4d26-8985-2fcbefbfb839-combined-ca-bundle\") pod \"heat-engine-665cc7757b-8rh2l\" (UID: \"68cc8d9e-4d5a-4d26-8985-2fcbefbfb839\") " pod="openstack/heat-engine-665cc7757b-8rh2l" Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.967173 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68cc8d9e-4d5a-4d26-8985-2fcbefbfb839-config-data\") pod \"heat-engine-665cc7757b-8rh2l\" (UID: \"68cc8d9e-4d5a-4d26-8985-2fcbefbfb839\") " pod="openstack/heat-engine-665cc7757b-8rh2l" Jan 26 16:01:41 crc kubenswrapper[4896]: I0126 16:01:41.972603 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdgnf\" (UniqueName: \"kubernetes.io/projected/68cc8d9e-4d5a-4d26-8985-2fcbefbfb839-kube-api-access-hdgnf\") pod \"heat-engine-665cc7757b-8rh2l\" (UID: \"68cc8d9e-4d5a-4d26-8985-2fcbefbfb839\") " pod="openstack/heat-engine-665cc7757b-8rh2l" Jan 26 16:01:42 crc kubenswrapper[4896]: I0126 16:01:42.046273 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hspnm\" (UniqueName: \"kubernetes.io/projected/29d61a7c-cfad-454b-a105-c5e589c34488-kube-api-access-hspnm\") pod \"heat-api-57f84f676c-wb8k9\" (UID: \"29d61a7c-cfad-454b-a105-c5e589c34488\") " pod="openstack/heat-api-57f84f676c-wb8k9" Jan 26 16:01:42 crc kubenswrapper[4896]: I0126 16:01:42.046383 4896 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29d61a7c-cfad-454b-a105-c5e589c34488-combined-ca-bundle\") pod \"heat-api-57f84f676c-wb8k9\" (UID: \"29d61a7c-cfad-454b-a105-c5e589c34488\") " pod="openstack/heat-api-57f84f676c-wb8k9" Jan 26 16:01:42 crc kubenswrapper[4896]: I0126 16:01:42.046505 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab92f8c3-378a-41fe-97c0-533d45e1a4a5-config-data\") pod \"heat-cfnapi-5597f886c8-sbvdp\" (UID: \"ab92f8c3-378a-41fe-97c0-533d45e1a4a5\") " pod="openstack/heat-cfnapi-5597f886c8-sbvdp" Jan 26 16:01:42 crc kubenswrapper[4896]: I0126 16:01:42.046546 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29d61a7c-cfad-454b-a105-c5e589c34488-config-data\") pod \"heat-api-57f84f676c-wb8k9\" (UID: \"29d61a7c-cfad-454b-a105-c5e589c34488\") " pod="openstack/heat-api-57f84f676c-wb8k9" Jan 26 16:01:42 crc kubenswrapper[4896]: I0126 16:01:42.046656 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab92f8c3-378a-41fe-97c0-533d45e1a4a5-config-data-custom\") pod \"heat-cfnapi-5597f886c8-sbvdp\" (UID: \"ab92f8c3-378a-41fe-97c0-533d45e1a4a5\") " pod="openstack/heat-cfnapi-5597f886c8-sbvdp" Jan 26 16:01:42 crc kubenswrapper[4896]: I0126 16:01:42.046785 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab92f8c3-378a-41fe-97c0-533d45e1a4a5-combined-ca-bundle\") pod \"heat-cfnapi-5597f886c8-sbvdp\" (UID: \"ab92f8c3-378a-41fe-97c0-533d45e1a4a5\") " pod="openstack/heat-cfnapi-5597f886c8-sbvdp" Jan 26 16:01:42 crc kubenswrapper[4896]: I0126 16:01:42.048632 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-l98kh\" (UniqueName: \"kubernetes.io/projected/ab92f8c3-378a-41fe-97c0-533d45e1a4a5-kube-api-access-l98kh\") pod \"heat-cfnapi-5597f886c8-sbvdp\" (UID: \"ab92f8c3-378a-41fe-97c0-533d45e1a4a5\") " pod="openstack/heat-cfnapi-5597f886c8-sbvdp" Jan 26 16:01:42 crc kubenswrapper[4896]: I0126 16:01:42.048734 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29d61a7c-cfad-454b-a105-c5e589c34488-config-data-custom\") pod \"heat-api-57f84f676c-wb8k9\" (UID: \"29d61a7c-cfad-454b-a105-c5e589c34488\") " pod="openstack/heat-api-57f84f676c-wb8k9" Jan 26 16:01:42 crc kubenswrapper[4896]: I0126 16:01:42.056002 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab92f8c3-378a-41fe-97c0-533d45e1a4a5-config-data-custom\") pod \"heat-cfnapi-5597f886c8-sbvdp\" (UID: \"ab92f8c3-378a-41fe-97c0-533d45e1a4a5\") " pod="openstack/heat-cfnapi-5597f886c8-sbvdp" Jan 26 16:01:42 crc kubenswrapper[4896]: I0126 16:01:42.059117 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29d61a7c-cfad-454b-a105-c5e589c34488-config-data\") pod \"heat-api-57f84f676c-wb8k9\" (UID: \"29d61a7c-cfad-454b-a105-c5e589c34488\") " pod="openstack/heat-api-57f84f676c-wb8k9" Jan 26 16:01:42 crc kubenswrapper[4896]: I0126 16:01:42.062369 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab92f8c3-378a-41fe-97c0-533d45e1a4a5-combined-ca-bundle\") pod \"heat-cfnapi-5597f886c8-sbvdp\" (UID: \"ab92f8c3-378a-41fe-97c0-533d45e1a4a5\") " pod="openstack/heat-cfnapi-5597f886c8-sbvdp" Jan 26 16:01:42 crc kubenswrapper[4896]: I0126 16:01:42.062369 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/29d61a7c-cfad-454b-a105-c5e589c34488-combined-ca-bundle\") pod \"heat-api-57f84f676c-wb8k9\" (UID: \"29d61a7c-cfad-454b-a105-c5e589c34488\") " pod="openstack/heat-api-57f84f676c-wb8k9" Jan 26 16:01:42 crc kubenswrapper[4896]: I0126 16:01:42.068178 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab92f8c3-378a-41fe-97c0-533d45e1a4a5-config-data\") pod \"heat-cfnapi-5597f886c8-sbvdp\" (UID: \"ab92f8c3-378a-41fe-97c0-533d45e1a4a5\") " pod="openstack/heat-cfnapi-5597f886c8-sbvdp" Jan 26 16:01:42 crc kubenswrapper[4896]: I0126 16:01:42.070486 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hspnm\" (UniqueName: \"kubernetes.io/projected/29d61a7c-cfad-454b-a105-c5e589c34488-kube-api-access-hspnm\") pod \"heat-api-57f84f676c-wb8k9\" (UID: \"29d61a7c-cfad-454b-a105-c5e589c34488\") " pod="openstack/heat-api-57f84f676c-wb8k9" Jan 26 16:01:42 crc kubenswrapper[4896]: I0126 16:01:42.075550 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29d61a7c-cfad-454b-a105-c5e589c34488-config-data-custom\") pod \"heat-api-57f84f676c-wb8k9\" (UID: \"29d61a7c-cfad-454b-a105-c5e589c34488\") " pod="openstack/heat-api-57f84f676c-wb8k9" Jan 26 16:01:42 crc kubenswrapper[4896]: I0126 16:01:42.076344 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l98kh\" (UniqueName: \"kubernetes.io/projected/ab92f8c3-378a-41fe-97c0-533d45e1a4a5-kube-api-access-l98kh\") pod \"heat-cfnapi-5597f886c8-sbvdp\" (UID: \"ab92f8c3-378a-41fe-97c0-533d45e1a4a5\") " pod="openstack/heat-cfnapi-5597f886c8-sbvdp" Jan 26 16:01:42 crc kubenswrapper[4896]: I0126 16:01:42.179993 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-665cc7757b-8rh2l" Jan 26 16:01:42 crc kubenswrapper[4896]: I0126 16:01:42.251766 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5597f886c8-sbvdp" Jan 26 16:01:42 crc kubenswrapper[4896]: I0126 16:01:42.307450 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-57f84f676c-wb8k9" Jan 26 16:01:44 crc kubenswrapper[4896]: I0126 16:01:44.622453 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0479be0e-3e42-4009-8728-ed8607ac7eaf","Type":"ContainerStarted","Data":"9c4357e08cecb6f596c2f68ed567b7e0ac7909a578b508c65ddd95e1fa9ccd9d"} Jan 26 16:01:44 crc kubenswrapper[4896]: I0126 16:01:44.631247 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7748db89d4-5m4rm" Jan 26 16:01:44 crc kubenswrapper[4896]: I0126 16:01:44.634195 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7748db89d4-5m4rm" event={"ID":"863656bc-e25d-45c3-9e3a-101cbcdcac9d","Type":"ContainerDied","Data":"e8c4460bcbab41706d5fbe10ba6a5409ce8a4625c46ca752ae53ba7b1368add0"} Jan 26 16:01:44 crc kubenswrapper[4896]: I0126 16:01:44.634270 4896 scope.go:117] "RemoveContainer" containerID="98f0b276d69dd247dadc68513e3b0a32c265aaba50665acfcc9ddd1e793452fd" Jan 26 16:01:44 crc kubenswrapper[4896]: I0126 16:01:44.679606 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/863656bc-e25d-45c3-9e3a-101cbcdcac9d-httpd-config\") pod \"863656bc-e25d-45c3-9e3a-101cbcdcac9d\" (UID: \"863656bc-e25d-45c3-9e3a-101cbcdcac9d\") " Jan 26 16:01:44 crc kubenswrapper[4896]: I0126 16:01:44.679762 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/863656bc-e25d-45c3-9e3a-101cbcdcac9d-ovndb-tls-certs\") pod \"863656bc-e25d-45c3-9e3a-101cbcdcac9d\" (UID: \"863656bc-e25d-45c3-9e3a-101cbcdcac9d\") " Jan 26 16:01:44 crc kubenswrapper[4896]: I0126 16:01:44.679851 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/863656bc-e25d-45c3-9e3a-101cbcdcac9d-combined-ca-bundle\") pod \"863656bc-e25d-45c3-9e3a-101cbcdcac9d\" (UID: \"863656bc-e25d-45c3-9e3a-101cbcdcac9d\") " Jan 26 16:01:44 crc kubenswrapper[4896]: I0126 16:01:44.680259 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wczr6\" (UniqueName: \"kubernetes.io/projected/863656bc-e25d-45c3-9e3a-101cbcdcac9d-kube-api-access-wczr6\") pod \"863656bc-e25d-45c3-9e3a-101cbcdcac9d\" (UID: \"863656bc-e25d-45c3-9e3a-101cbcdcac9d\") " Jan 26 16:01:44 crc kubenswrapper[4896]: I0126 16:01:44.680408 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/863656bc-e25d-45c3-9e3a-101cbcdcac9d-config\") pod \"863656bc-e25d-45c3-9e3a-101cbcdcac9d\" (UID: \"863656bc-e25d-45c3-9e3a-101cbcdcac9d\") " Jan 26 16:01:44 crc kubenswrapper[4896]: I0126 16:01:44.696003 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/863656bc-e25d-45c3-9e3a-101cbcdcac9d-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "863656bc-e25d-45c3-9e3a-101cbcdcac9d" (UID: "863656bc-e25d-45c3-9e3a-101cbcdcac9d"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:44 crc kubenswrapper[4896]: I0126 16:01:44.697365 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/863656bc-e25d-45c3-9e3a-101cbcdcac9d-kube-api-access-wczr6" (OuterVolumeSpecName: "kube-api-access-wczr6") pod "863656bc-e25d-45c3-9e3a-101cbcdcac9d" (UID: "863656bc-e25d-45c3-9e3a-101cbcdcac9d"). InnerVolumeSpecName "kube-api-access-wczr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:01:44 crc kubenswrapper[4896]: I0126 16:01:44.783245 4896 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/863656bc-e25d-45c3-9e3a-101cbcdcac9d-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:44 crc kubenswrapper[4896]: I0126 16:01:44.784222 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wczr6\" (UniqueName: \"kubernetes.io/projected/863656bc-e25d-45c3-9e3a-101cbcdcac9d-kube-api-access-wczr6\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:44 crc kubenswrapper[4896]: I0126 16:01:44.792302 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/863656bc-e25d-45c3-9e3a-101cbcdcac9d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "863656bc-e25d-45c3-9e3a-101cbcdcac9d" (UID: "863656bc-e25d-45c3-9e3a-101cbcdcac9d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:44 crc kubenswrapper[4896]: I0126 16:01:44.833966 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/863656bc-e25d-45c3-9e3a-101cbcdcac9d-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "863656bc-e25d-45c3-9e3a-101cbcdcac9d" (UID: "863656bc-e25d-45c3-9e3a-101cbcdcac9d"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:44 crc kubenswrapper[4896]: I0126 16:01:44.842221 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/863656bc-e25d-45c3-9e3a-101cbcdcac9d-config" (OuterVolumeSpecName: "config") pod "863656bc-e25d-45c3-9e3a-101cbcdcac9d" (UID: "863656bc-e25d-45c3-9e3a-101cbcdcac9d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:44 crc kubenswrapper[4896]: I0126 16:01:44.886079 4896 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/863656bc-e25d-45c3-9e3a-101cbcdcac9d-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:44 crc kubenswrapper[4896]: I0126 16:01:44.886109 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/863656bc-e25d-45c3-9e3a-101cbcdcac9d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:44 crc kubenswrapper[4896]: I0126 16:01:44.886120 4896 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/863656bc-e25d-45c3-9e3a-101cbcdcac9d-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:45 crc kubenswrapper[4896]: I0126 16:01:45.656410 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7748db89d4-5m4rm" Jan 26 16:01:45 crc kubenswrapper[4896]: I0126 16:01:45.717278 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7748db89d4-5m4rm"] Jan 26 16:01:45 crc kubenswrapper[4896]: I0126 16:01:45.737696 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7748db89d4-5m4rm"] Jan 26 16:01:45 crc kubenswrapper[4896]: I0126 16:01:45.856721 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-75c68767d8-c7w2z"] Jan 26 16:01:45 crc kubenswrapper[4896]: I0126 16:01:45.904281 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-545f7c69fd-hm6nm"] Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.042001 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-7756b77457-pgv8j"] Jan 26 16:01:46 crc kubenswrapper[4896]: E0126 16:01:46.061023 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="863656bc-e25d-45c3-9e3a-101cbcdcac9d" containerName="neutron-httpd" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.061077 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="863656bc-e25d-45c3-9e3a-101cbcdcac9d" containerName="neutron-httpd" Jan 26 16:01:46 crc kubenswrapper[4896]: E0126 16:01:46.061109 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="863656bc-e25d-45c3-9e3a-101cbcdcac9d" containerName="neutron-api" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.061117 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="863656bc-e25d-45c3-9e3a-101cbcdcac9d" containerName="neutron-api" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.061544 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="863656bc-e25d-45c3-9e3a-101cbcdcac9d" containerName="neutron-api" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.061567 4896 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="863656bc-e25d-45c3-9e3a-101cbcdcac9d" containerName="neutron-httpd" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.062813 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7756b77457-pgv8j" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.073772 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.081699 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6f77b8c468-mpb4b"] Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.082839 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.083469 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6f77b8c468-mpb4b" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.105923 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.106858 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.120450 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7756b77457-pgv8j"] Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.381727 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/19034db0-0d0e-4a65-b91d-890180a924f2-public-tls-certs\") pod \"heat-api-7756b77457-pgv8j\" (UID: \"19034db0-0d0e-4a65-b91d-890180a924f2\") " pod="openstack/heat-api-7756b77457-pgv8j" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.390847 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/heat-cfnapi-6f77b8c468-mpb4b"] Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.393784 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e3ce827-294a-4320-9545-c01e6aa46bbb-config-data\") pod \"heat-cfnapi-6f77b8c468-mpb4b\" (UID: \"8e3ce827-294a-4320-9545-c01e6aa46bbb\") " pod="openstack/heat-cfnapi-6f77b8c468-mpb4b" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.393894 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/19034db0-0d0e-4a65-b91d-890180a924f2-internal-tls-certs\") pod \"heat-api-7756b77457-pgv8j\" (UID: \"19034db0-0d0e-4a65-b91d-890180a924f2\") " pod="openstack/heat-api-7756b77457-pgv8j" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.394013 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e3ce827-294a-4320-9545-c01e6aa46bbb-combined-ca-bundle\") pod \"heat-cfnapi-6f77b8c468-mpb4b\" (UID: \"8e3ce827-294a-4320-9545-c01e6aa46bbb\") " pod="openstack/heat-cfnapi-6f77b8c468-mpb4b" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.394058 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e3ce827-294a-4320-9545-c01e6aa46bbb-internal-tls-certs\") pod \"heat-cfnapi-6f77b8c468-mpb4b\" (UID: \"8e3ce827-294a-4320-9545-c01e6aa46bbb\") " pod="openstack/heat-cfnapi-6f77b8c468-mpb4b" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.394144 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19034db0-0d0e-4a65-b91d-890180a924f2-combined-ca-bundle\") pod \"heat-api-7756b77457-pgv8j\" 
(UID: \"19034db0-0d0e-4a65-b91d-890180a924f2\") " pod="openstack/heat-api-7756b77457-pgv8j" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.394310 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19034db0-0d0e-4a65-b91d-890180a924f2-config-data\") pod \"heat-api-7756b77457-pgv8j\" (UID: \"19034db0-0d0e-4a65-b91d-890180a924f2\") " pod="openstack/heat-api-7756b77457-pgv8j" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.394434 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/19034db0-0d0e-4a65-b91d-890180a924f2-config-data-custom\") pod \"heat-api-7756b77457-pgv8j\" (UID: \"19034db0-0d0e-4a65-b91d-890180a924f2\") " pod="openstack/heat-api-7756b77457-pgv8j" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.394497 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e3ce827-294a-4320-9545-c01e6aa46bbb-public-tls-certs\") pod \"heat-cfnapi-6f77b8c468-mpb4b\" (UID: \"8e3ce827-294a-4320-9545-c01e6aa46bbb\") " pod="openstack/heat-cfnapi-6f77b8c468-mpb4b" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.394613 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e3ce827-294a-4320-9545-c01e6aa46bbb-config-data-custom\") pod \"heat-cfnapi-6f77b8c468-mpb4b\" (UID: \"8e3ce827-294a-4320-9545-c01e6aa46bbb\") " pod="openstack/heat-cfnapi-6f77b8c468-mpb4b" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.394713 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqtd9\" (UniqueName: 
\"kubernetes.io/projected/8e3ce827-294a-4320-9545-c01e6aa46bbb-kube-api-access-xqtd9\") pod \"heat-cfnapi-6f77b8c468-mpb4b\" (UID: \"8e3ce827-294a-4320-9545-c01e6aa46bbb\") " pod="openstack/heat-cfnapi-6f77b8c468-mpb4b" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.394831 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fgml\" (UniqueName: \"kubernetes.io/projected/19034db0-0d0e-4a65-b91d-890180a924f2-kube-api-access-4fgml\") pod \"heat-api-7756b77457-pgv8j\" (UID: \"19034db0-0d0e-4a65-b91d-890180a924f2\") " pod="openstack/heat-api-7756b77457-pgv8j" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.500412 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19034db0-0d0e-4a65-b91d-890180a924f2-config-data\") pod \"heat-api-7756b77457-pgv8j\" (UID: \"19034db0-0d0e-4a65-b91d-890180a924f2\") " pod="openstack/heat-api-7756b77457-pgv8j" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.500491 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/19034db0-0d0e-4a65-b91d-890180a924f2-config-data-custom\") pod \"heat-api-7756b77457-pgv8j\" (UID: \"19034db0-0d0e-4a65-b91d-890180a924f2\") " pod="openstack/heat-api-7756b77457-pgv8j" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.500527 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e3ce827-294a-4320-9545-c01e6aa46bbb-public-tls-certs\") pod \"heat-cfnapi-6f77b8c468-mpb4b\" (UID: \"8e3ce827-294a-4320-9545-c01e6aa46bbb\") " pod="openstack/heat-cfnapi-6f77b8c468-mpb4b" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.500562 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/8e3ce827-294a-4320-9545-c01e6aa46bbb-config-data-custom\") pod \"heat-cfnapi-6f77b8c468-mpb4b\" (UID: \"8e3ce827-294a-4320-9545-c01e6aa46bbb\") " pod="openstack/heat-cfnapi-6f77b8c468-mpb4b" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.500616 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqtd9\" (UniqueName: \"kubernetes.io/projected/8e3ce827-294a-4320-9545-c01e6aa46bbb-kube-api-access-xqtd9\") pod \"heat-cfnapi-6f77b8c468-mpb4b\" (UID: \"8e3ce827-294a-4320-9545-c01e6aa46bbb\") " pod="openstack/heat-cfnapi-6f77b8c468-mpb4b" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.500657 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fgml\" (UniqueName: \"kubernetes.io/projected/19034db0-0d0e-4a65-b91d-890180a924f2-kube-api-access-4fgml\") pod \"heat-api-7756b77457-pgv8j\" (UID: \"19034db0-0d0e-4a65-b91d-890180a924f2\") " pod="openstack/heat-api-7756b77457-pgv8j" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.500711 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/19034db0-0d0e-4a65-b91d-890180a924f2-public-tls-certs\") pod \"heat-api-7756b77457-pgv8j\" (UID: \"19034db0-0d0e-4a65-b91d-890180a924f2\") " pod="openstack/heat-api-7756b77457-pgv8j" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.500816 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e3ce827-294a-4320-9545-c01e6aa46bbb-config-data\") pod \"heat-cfnapi-6f77b8c468-mpb4b\" (UID: \"8e3ce827-294a-4320-9545-c01e6aa46bbb\") " pod="openstack/heat-cfnapi-6f77b8c468-mpb4b" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.500832 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/19034db0-0d0e-4a65-b91d-890180a924f2-internal-tls-certs\") pod \"heat-api-7756b77457-pgv8j\" (UID: \"19034db0-0d0e-4a65-b91d-890180a924f2\") " pod="openstack/heat-api-7756b77457-pgv8j" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.500865 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e3ce827-294a-4320-9545-c01e6aa46bbb-combined-ca-bundle\") pod \"heat-cfnapi-6f77b8c468-mpb4b\" (UID: \"8e3ce827-294a-4320-9545-c01e6aa46bbb\") " pod="openstack/heat-cfnapi-6f77b8c468-mpb4b" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.500883 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e3ce827-294a-4320-9545-c01e6aa46bbb-internal-tls-certs\") pod \"heat-cfnapi-6f77b8c468-mpb4b\" (UID: \"8e3ce827-294a-4320-9545-c01e6aa46bbb\") " pod="openstack/heat-cfnapi-6f77b8c468-mpb4b" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.500916 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19034db0-0d0e-4a65-b91d-890180a924f2-combined-ca-bundle\") pod \"heat-api-7756b77457-pgv8j\" (UID: \"19034db0-0d0e-4a65-b91d-890180a924f2\") " pod="openstack/heat-api-7756b77457-pgv8j" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.508035 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e3ce827-294a-4320-9545-c01e6aa46bbb-combined-ca-bundle\") pod \"heat-cfnapi-6f77b8c468-mpb4b\" (UID: \"8e3ce827-294a-4320-9545-c01e6aa46bbb\") " pod="openstack/heat-cfnapi-6f77b8c468-mpb4b" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.521596 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/19034db0-0d0e-4a65-b91d-890180a924f2-config-data\") pod \"heat-api-7756b77457-pgv8j\" (UID: \"19034db0-0d0e-4a65-b91d-890180a924f2\") " pod="openstack/heat-api-7756b77457-pgv8j" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.522847 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/19034db0-0d0e-4a65-b91d-890180a924f2-config-data-custom\") pod \"heat-api-7756b77457-pgv8j\" (UID: \"19034db0-0d0e-4a65-b91d-890180a924f2\") " pod="openstack/heat-api-7756b77457-pgv8j" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.524483 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e3ce827-294a-4320-9545-c01e6aa46bbb-config-data\") pod \"heat-cfnapi-6f77b8c468-mpb4b\" (UID: \"8e3ce827-294a-4320-9545-c01e6aa46bbb\") " pod="openstack/heat-cfnapi-6f77b8c468-mpb4b" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.526589 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e3ce827-294a-4320-9545-c01e6aa46bbb-public-tls-certs\") pod \"heat-cfnapi-6f77b8c468-mpb4b\" (UID: \"8e3ce827-294a-4320-9545-c01e6aa46bbb\") " pod="openstack/heat-cfnapi-6f77b8c468-mpb4b" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.535954 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqtd9\" (UniqueName: \"kubernetes.io/projected/8e3ce827-294a-4320-9545-c01e6aa46bbb-kube-api-access-xqtd9\") pod \"heat-cfnapi-6f77b8c468-mpb4b\" (UID: \"8e3ce827-294a-4320-9545-c01e6aa46bbb\") " pod="openstack/heat-cfnapi-6f77b8c468-mpb4b" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.536225 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/19034db0-0d0e-4a65-b91d-890180a924f2-public-tls-certs\") pod 
\"heat-api-7756b77457-pgv8j\" (UID: \"19034db0-0d0e-4a65-b91d-890180a924f2\") " pod="openstack/heat-api-7756b77457-pgv8j" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.536571 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e3ce827-294a-4320-9545-c01e6aa46bbb-internal-tls-certs\") pod \"heat-cfnapi-6f77b8c468-mpb4b\" (UID: \"8e3ce827-294a-4320-9545-c01e6aa46bbb\") " pod="openstack/heat-cfnapi-6f77b8c468-mpb4b" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.537622 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/19034db0-0d0e-4a65-b91d-890180a924f2-internal-tls-certs\") pod \"heat-api-7756b77457-pgv8j\" (UID: \"19034db0-0d0e-4a65-b91d-890180a924f2\") " pod="openstack/heat-api-7756b77457-pgv8j" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.539154 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fgml\" (UniqueName: \"kubernetes.io/projected/19034db0-0d0e-4a65-b91d-890180a924f2-kube-api-access-4fgml\") pod \"heat-api-7756b77457-pgv8j\" (UID: \"19034db0-0d0e-4a65-b91d-890180a924f2\") " pod="openstack/heat-api-7756b77457-pgv8j" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.542942 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19034db0-0d0e-4a65-b91d-890180a924f2-combined-ca-bundle\") pod \"heat-api-7756b77457-pgv8j\" (UID: \"19034db0-0d0e-4a65-b91d-890180a924f2\") " pod="openstack/heat-api-7756b77457-pgv8j" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.562573 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e3ce827-294a-4320-9545-c01e6aa46bbb-config-data-custom\") pod \"heat-cfnapi-6f77b8c468-mpb4b\" (UID: \"8e3ce827-294a-4320-9545-c01e6aa46bbb\") " 
pod="openstack/heat-cfnapi-6f77b8c468-mpb4b" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.595698 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="05468d2e-2ac7-45f3-973a-ff9c4559701e" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.213:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.708045 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7756b77457-pgv8j" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.749330 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6f77b8c468-mpb4b" Jan 26 16:01:46 crc kubenswrapper[4896]: I0126 16:01:46.775553 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="863656bc-e25d-45c3-9e3a-101cbcdcac9d" path="/var/lib/kubelet/pods/863656bc-e25d-45c3-9e3a-101cbcdcac9d/volumes" Jan 26 16:01:48 crc kubenswrapper[4896]: I0126 16:01:48.813914 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:01:48 crc kubenswrapper[4896]: I0126 16:01:48.814233 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:01:49 crc kubenswrapper[4896]: I0126 16:01:49.543432 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7756b9d78c-ctvwg" Jan 26 16:01:49 crc 
kubenswrapper[4896]: I0126 16:01:49.758849 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-5v2tk"] Jan 26 16:01:49 crc kubenswrapper[4896]: I0126 16:01:49.760178 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-5v2tk" podUID="08dd6673-2fbb-4bb0-ab7b-5b441d18684d" containerName="dnsmasq-dns" containerID="cri-o://920c46c1a87b39a70834c4fe0e1c2a403592a7ead2395483281a55fafdcdd729" gracePeriod=10 Jan 26 16:01:50 crc kubenswrapper[4896]: E0126 16:01:50.651387 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08dd6673_2fbb_4bb0_ab7b_5b441d18684d.slice/crio-920c46c1a87b39a70834c4fe0e1c2a403592a7ead2395483281a55fafdcdd729.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08dd6673_2fbb_4bb0_ab7b_5b441d18684d.slice/crio-conmon-920c46c1a87b39a70834c4fe0e1c2a403592a7ead2395483281a55fafdcdd729.scope\": RecentStats: unable to find data in memory cache]" Jan 26 16:01:50 crc kubenswrapper[4896]: I0126 16:01:50.889959 4896 generic.go:334] "Generic (PLEG): container finished" podID="08dd6673-2fbb-4bb0-ab7b-5b441d18684d" containerID="920c46c1a87b39a70834c4fe0e1c2a403592a7ead2395483281a55fafdcdd729" exitCode=0 Jan 26 16:01:50 crc kubenswrapper[4896]: I0126 16:01:50.890028 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-5v2tk" event={"ID":"08dd6673-2fbb-4bb0-ab7b-5b441d18684d","Type":"ContainerDied","Data":"920c46c1a87b39a70834c4fe0e1c2a403592a7ead2395483281a55fafdcdd729"} Jan 26 16:01:51 crc kubenswrapper[4896]: I0126 16:01:51.430018 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c9776ccc5-5v2tk" podUID="08dd6673-2fbb-4bb0-ab7b-5b441d18684d" containerName="dnsmasq-dns" probeResult="failure" 
output="dial tcp 10.217.0.212:5353: connect: connection refused" Jan 26 16:01:51 crc kubenswrapper[4896]: I0126 16:01:51.503109 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="05468d2e-2ac7-45f3-973a-ff9c4559701e" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.213:8776/healthcheck\": dial tcp 10.217.0.213:8776: connect: connection refused" Jan 26 16:01:51 crc kubenswrapper[4896]: I0126 16:01:51.908377 4896 generic.go:334] "Generic (PLEG): container finished" podID="05468d2e-2ac7-45f3-973a-ff9c4559701e" containerID="5d19857ed92b9b0e8b025b983d4f2307acff2a3fb76ae28611366e1eef231c2b" exitCode=137 Jan 26 16:01:51 crc kubenswrapper[4896]: I0126 16:01:51.908431 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"05468d2e-2ac7-45f3-973a-ff9c4559701e","Type":"ContainerDied","Data":"5d19857ed92b9b0e8b025b983d4f2307acff2a3fb76ae28611366e1eef231c2b"} Jan 26 16:01:53 crc kubenswrapper[4896]: I0126 16:01:53.986410 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:01:54 crc kubenswrapper[4896]: I0126 16:01:54.631251 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-6f79bf9b96-md4vg" Jan 26 16:01:55 crc kubenswrapper[4896]: I0126 16:01:55.046563 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 16:01:55 crc kubenswrapper[4896]: I0126 16:01:55.046838 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="de7d907a-484d-42e5-88d9-61b398fe83a5" containerName="glance-log" containerID="cri-o://7f20a2a1df2c49dd3758d5f4f320d69c630c4cfd7853c5df28da5848064e3be2" gracePeriod=30 Jan 26 16:01:55 crc kubenswrapper[4896]: I0126 16:01:55.047308 4896 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/glance-default-internal-api-0" podUID="de7d907a-484d-42e5-88d9-61b398fe83a5" containerName="glance-httpd" containerID="cri-o://f04950240259b2ed157dead4448203e2b879ce9069d371a418df7458269f8cd9" gracePeriod=30 Jan 26 16:01:55 crc kubenswrapper[4896]: E0126 16:01:55.857471 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Jan 26 16:01:55 crc kubenswrapper[4896]: E0126 16:01:55.858004 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nbchdh665hb4h5ddh668h5f7h654h599h5d6h597h667h684hd9h648h95h577h55dh665h94h6ch644h586hc4h5bdh9bh5b9h565h5ch595h5c9h695q,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_CA_CERT,Value:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bu
ndle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xgk9v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(5809e3c3-ef95-4db2-a2eb-16ca58b2f3e3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 16:01:55 crc kubenswrapper[4896]: E0126 16:01:55.859377 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="5809e3c3-ef95-4db2-a2eb-16ca58b2f3e3" Jan 26 16:01:56 crc kubenswrapper[4896]: I0126 16:01:56.007527 4896 generic.go:334] "Generic (PLEG): container finished" podID="de7d907a-484d-42e5-88d9-61b398fe83a5" containerID="7f20a2a1df2c49dd3758d5f4f320d69c630c4cfd7853c5df28da5848064e3be2" exitCode=143 Jan 26 16:01:56 crc kubenswrapper[4896]: I0126 16:01:56.008957 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"de7d907a-484d-42e5-88d9-61b398fe83a5","Type":"ContainerDied","Data":"7f20a2a1df2c49dd3758d5f4f320d69c630c4cfd7853c5df28da5848064e3be2"} Jan 26 16:01:56 crc kubenswrapper[4896]: E0126 16:01:56.014226 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="5809e3c3-ef95-4db2-a2eb-16ca58b2f3e3" Jan 26 16:01:56 crc kubenswrapper[4896]: I0126 16:01:56.166717 4896 scope.go:117] "RemoveContainer" containerID="13038f4b1fc861b0fb52f7a8a7fbca95f80bc8c3a2579a9f7d180e5953a06057" Jan 26 16:01:56 crc kubenswrapper[4896]: I0126 16:01:56.456755 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c9776ccc5-5v2tk" podUID="08dd6673-2fbb-4bb0-ab7b-5b441d18684d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.212:5353: connect: connection refused" Jan 26 16:01:56 crc kubenswrapper[4896]: I0126 16:01:56.502395 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="05468d2e-2ac7-45f3-973a-ff9c4559701e" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.213:8776/healthcheck\": dial tcp 10.217.0.213:8776: connect: connection refused" Jan 26 16:01:57 crc kubenswrapper[4896]: I0126 16:01:57.076243 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-545f7c69fd-hm6nm" event={"ID":"1b2e158c-acf0-4642-b31b-4db17087c69c","Type":"ContainerStarted","Data":"e3311969d3540617ba35ef73fe7be5873b45e60d962fdd7336c2a7513da28e96"} Jan 26 16:01:57 crc kubenswrapper[4896]: I0126 16:01:57.076843 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-545f7c69fd-hm6nm" Jan 26 16:01:57 crc kubenswrapper[4896]: I0126 16:01:57.076562 4896 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/heat-cfnapi-545f7c69fd-hm6nm" podUID="1b2e158c-acf0-4642-b31b-4db17087c69c" containerName="heat-cfnapi" containerID="cri-o://e3311969d3540617ba35ef73fe7be5873b45e60d962fdd7336c2a7513da28e96" gracePeriod=60 Jan 26 16:01:57 crc kubenswrapper[4896]: I0126 16:01:57.082722 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-5v2tk" event={"ID":"08dd6673-2fbb-4bb0-ab7b-5b441d18684d","Type":"ContainerDied","Data":"f6dd40597a4347b34a874d4119729db77505aab5d546ebc9b534026f93bdc820"} Jan 26 16:01:57 crc kubenswrapper[4896]: I0126 16:01:57.082770 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6dd40597a4347b34a874d4119729db77505aab5d546ebc9b534026f93bdc820" Jan 26 16:01:57 crc kubenswrapper[4896]: I0126 16:01:57.105130 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-545f7c69fd-hm6nm" podStartSLOduration=3.994177427 podStartE2EDuration="23.105096638s" podCreationTimestamp="2026-01-26 16:01:34 +0000 UTC" firstStartedPulling="2026-01-26 16:01:37.105531814 +0000 UTC m=+1654.887412207" lastFinishedPulling="2026-01-26 16:01:56.216451025 +0000 UTC m=+1673.998331418" observedRunningTime="2026-01-26 16:01:57.101657681 +0000 UTC m=+1674.883538074" watchObservedRunningTime="2026-01-26 16:01:57.105096638 +0000 UTC m=+1674.886977031" Jan 26 16:01:57 crc kubenswrapper[4896]: I0126 16:01:57.106868 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-75c68767d8-c7w2z" event={"ID":"38c3367b-9b2c-40a4-841d-7b815bbfd45a","Type":"ContainerStarted","Data":"e4aa0ca0ef5f297b565aa96482f05639744972ffb3541d600bc62528a3eb67d5"} Jan 26 16:01:57 crc kubenswrapper[4896]: I0126 16:01:57.107060 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-75c68767d8-c7w2z" podUID="38c3367b-9b2c-40a4-841d-7b815bbfd45a" containerName="heat-api" 
containerID="cri-o://e4aa0ca0ef5f297b565aa96482f05639744972ffb3541d600bc62528a3eb67d5" gracePeriod=60 Jan 26 16:01:57 crc kubenswrapper[4896]: I0126 16:01:57.107222 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-75c68767d8-c7w2z" Jan 26 16:01:57 crc kubenswrapper[4896]: I0126 16:01:57.153862 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-75c68767d8-c7w2z" podStartSLOduration=3.360227047 podStartE2EDuration="23.153838527s" podCreationTimestamp="2026-01-26 16:01:34 +0000 UTC" firstStartedPulling="2026-01-26 16:01:36.422117236 +0000 UTC m=+1654.203997629" lastFinishedPulling="2026-01-26 16:01:56.215728716 +0000 UTC m=+1673.997609109" observedRunningTime="2026-01-26 16:01:57.132972526 +0000 UTC m=+1674.914852919" watchObservedRunningTime="2026-01-26 16:01:57.153838527 +0000 UTC m=+1674.935718920" Jan 26 16:01:57 crc kubenswrapper[4896]: I0126 16:01:57.221592 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-5v2tk" Jan 26 16:01:57 crc kubenswrapper[4896]: I0126 16:01:57.259331 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08dd6673-2fbb-4bb0-ab7b-5b441d18684d-dns-svc\") pod \"08dd6673-2fbb-4bb0-ab7b-5b441d18684d\" (UID: \"08dd6673-2fbb-4bb0-ab7b-5b441d18684d\") " Jan 26 16:01:57 crc kubenswrapper[4896]: I0126 16:01:57.259413 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/08dd6673-2fbb-4bb0-ab7b-5b441d18684d-dns-swift-storage-0\") pod \"08dd6673-2fbb-4bb0-ab7b-5b441d18684d\" (UID: \"08dd6673-2fbb-4bb0-ab7b-5b441d18684d\") " Jan 26 16:01:57 crc kubenswrapper[4896]: I0126 16:01:57.259495 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08dd6673-2fbb-4bb0-ab7b-5b441d18684d-ovsdbserver-nb\") pod \"08dd6673-2fbb-4bb0-ab7b-5b441d18684d\" (UID: \"08dd6673-2fbb-4bb0-ab7b-5b441d18684d\") " Jan 26 16:01:57 crc kubenswrapper[4896]: I0126 16:01:57.259558 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08dd6673-2fbb-4bb0-ab7b-5b441d18684d-ovsdbserver-sb\") pod \"08dd6673-2fbb-4bb0-ab7b-5b441d18684d\" (UID: \"08dd6673-2fbb-4bb0-ab7b-5b441d18684d\") " Jan 26 16:01:57 crc kubenswrapper[4896]: I0126 16:01:57.259650 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08dd6673-2fbb-4bb0-ab7b-5b441d18684d-config\") pod \"08dd6673-2fbb-4bb0-ab7b-5b441d18684d\" (UID: \"08dd6673-2fbb-4bb0-ab7b-5b441d18684d\") " Jan 26 16:01:57 crc kubenswrapper[4896]: I0126 16:01:57.259829 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jvtj\" 
(UniqueName: \"kubernetes.io/projected/08dd6673-2fbb-4bb0-ab7b-5b441d18684d-kube-api-access-6jvtj\") pod \"08dd6673-2fbb-4bb0-ab7b-5b441d18684d\" (UID: \"08dd6673-2fbb-4bb0-ab7b-5b441d18684d\") " Jan 26 16:01:57 crc kubenswrapper[4896]: I0126 16:01:57.559572 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08dd6673-2fbb-4bb0-ab7b-5b441d18684d-kube-api-access-6jvtj" (OuterVolumeSpecName: "kube-api-access-6jvtj") pod "08dd6673-2fbb-4bb0-ab7b-5b441d18684d" (UID: "08dd6673-2fbb-4bb0-ab7b-5b441d18684d"). InnerVolumeSpecName "kube-api-access-6jvtj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:01:57 crc kubenswrapper[4896]: I0126 16:01:57.759142 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 16:01:57 crc kubenswrapper[4896]: I0126 16:01:57.771157 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jvtj\" (UniqueName: \"kubernetes.io/projected/08dd6673-2fbb-4bb0-ab7b-5b441d18684d-kube-api-access-6jvtj\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:57 crc kubenswrapper[4896]: I0126 16:01:57.877036 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcltp\" (UniqueName: \"kubernetes.io/projected/05468d2e-2ac7-45f3-973a-ff9c4559701e-kube-api-access-fcltp\") pod \"05468d2e-2ac7-45f3-973a-ff9c4559701e\" (UID: \"05468d2e-2ac7-45f3-973a-ff9c4559701e\") " Jan 26 16:01:57 crc kubenswrapper[4896]: I0126 16:01:57.877153 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05468d2e-2ac7-45f3-973a-ff9c4559701e-scripts\") pod \"05468d2e-2ac7-45f3-973a-ff9c4559701e\" (UID: \"05468d2e-2ac7-45f3-973a-ff9c4559701e\") " Jan 26 16:01:57 crc kubenswrapper[4896]: I0126 16:01:57.877238 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/05468d2e-2ac7-45f3-973a-ff9c4559701e-logs\") pod \"05468d2e-2ac7-45f3-973a-ff9c4559701e\" (UID: \"05468d2e-2ac7-45f3-973a-ff9c4559701e\") " Jan 26 16:01:57 crc kubenswrapper[4896]: I0126 16:01:57.877315 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05468d2e-2ac7-45f3-973a-ff9c4559701e-config-data\") pod \"05468d2e-2ac7-45f3-973a-ff9c4559701e\" (UID: \"05468d2e-2ac7-45f3-973a-ff9c4559701e\") " Jan 26 16:01:57 crc kubenswrapper[4896]: I0126 16:01:57.877364 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/05468d2e-2ac7-45f3-973a-ff9c4559701e-etc-machine-id\") pod \"05468d2e-2ac7-45f3-973a-ff9c4559701e\" (UID: \"05468d2e-2ac7-45f3-973a-ff9c4559701e\") " Jan 26 16:01:57 crc kubenswrapper[4896]: I0126 16:01:57.877396 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05468d2e-2ac7-45f3-973a-ff9c4559701e-config-data-custom\") pod \"05468d2e-2ac7-45f3-973a-ff9c4559701e\" (UID: \"05468d2e-2ac7-45f3-973a-ff9c4559701e\") " Jan 26 16:01:57 crc kubenswrapper[4896]: I0126 16:01:57.877430 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05468d2e-2ac7-45f3-973a-ff9c4559701e-combined-ca-bundle\") pod \"05468d2e-2ac7-45f3-973a-ff9c4559701e\" (UID: \"05468d2e-2ac7-45f3-973a-ff9c4559701e\") " Jan 26 16:01:57 crc kubenswrapper[4896]: I0126 16:01:57.879755 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08dd6673-2fbb-4bb0-ab7b-5b441d18684d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "08dd6673-2fbb-4bb0-ab7b-5b441d18684d" (UID: "08dd6673-2fbb-4bb0-ab7b-5b441d18684d"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:01:57 crc kubenswrapper[4896]: I0126 16:01:57.888954 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05468d2e-2ac7-45f3-973a-ff9c4559701e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "05468d2e-2ac7-45f3-973a-ff9c4559701e" (UID: "05468d2e-2ac7-45f3-973a-ff9c4559701e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 16:01:57 crc kubenswrapper[4896]: I0126 16:01:57.891337 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08dd6673-2fbb-4bb0-ab7b-5b441d18684d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "08dd6673-2fbb-4bb0-ab7b-5b441d18684d" (UID: "08dd6673-2fbb-4bb0-ab7b-5b441d18684d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:01:57 crc kubenswrapper[4896]: I0126 16:01:57.892448 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05468d2e-2ac7-45f3-973a-ff9c4559701e-logs" (OuterVolumeSpecName: "logs") pod "05468d2e-2ac7-45f3-973a-ff9c4559701e" (UID: "05468d2e-2ac7-45f3-973a-ff9c4559701e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:01:57 crc kubenswrapper[4896]: I0126 16:01:57.892844 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05468d2e-2ac7-45f3-973a-ff9c4559701e-kube-api-access-fcltp" (OuterVolumeSpecName: "kube-api-access-fcltp") pod "05468d2e-2ac7-45f3-973a-ff9c4559701e" (UID: "05468d2e-2ac7-45f3-973a-ff9c4559701e"). InnerVolumeSpecName "kube-api-access-fcltp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:01:57 crc kubenswrapper[4896]: I0126 16:01:57.893347 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05468d2e-2ac7-45f3-973a-ff9c4559701e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "05468d2e-2ac7-45f3-973a-ff9c4559701e" (UID: "05468d2e-2ac7-45f3-973a-ff9c4559701e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:57 crc kubenswrapper[4896]: I0126 16:01:57.944268 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08dd6673-2fbb-4bb0-ab7b-5b441d18684d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "08dd6673-2fbb-4bb0-ab7b-5b441d18684d" (UID: "08dd6673-2fbb-4bb0-ab7b-5b441d18684d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:01:57 crc kubenswrapper[4896]: I0126 16:01:57.986507 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05468d2e-2ac7-45f3-973a-ff9c4559701e-scripts" (OuterVolumeSpecName: "scripts") pod "05468d2e-2ac7-45f3-973a-ff9c4559701e" (UID: "05468d2e-2ac7-45f3-973a-ff9c4559701e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:57 crc kubenswrapper[4896]: I0126 16:01:57.992894 4896 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08dd6673-2fbb-4bb0-ab7b-5b441d18684d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:57 crc kubenswrapper[4896]: I0126 16:01:57.992941 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcltp\" (UniqueName: \"kubernetes.io/projected/05468d2e-2ac7-45f3-973a-ff9c4559701e-kube-api-access-fcltp\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:57 crc kubenswrapper[4896]: I0126 16:01:57.992958 4896 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05468d2e-2ac7-45f3-973a-ff9c4559701e-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:57 crc kubenswrapper[4896]: I0126 16:01:57.992974 4896 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05468d2e-2ac7-45f3-973a-ff9c4559701e-logs\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:57 crc kubenswrapper[4896]: I0126 16:01:57.992992 4896 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/05468d2e-2ac7-45f3-973a-ff9c4559701e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:57 crc kubenswrapper[4896]: I0126 16:01:57.993004 4896 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05468d2e-2ac7-45f3-973a-ff9c4559701e-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:57 crc kubenswrapper[4896]: I0126 16:01:57.993017 4896 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/08dd6673-2fbb-4bb0-ab7b-5b441d18684d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:57 crc kubenswrapper[4896]: I0126 16:01:57.993029 4896 
reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08dd6673-2fbb-4bb0-ab7b-5b441d18684d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:58 crc kubenswrapper[4896]: I0126 16:01:58.050195 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05468d2e-2ac7-45f3-973a-ff9c4559701e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05468d2e-2ac7-45f3-973a-ff9c4559701e" (UID: "05468d2e-2ac7-45f3-973a-ff9c4559701e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:58 crc kubenswrapper[4896]: I0126 16:01:58.076227 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08dd6673-2fbb-4bb0-ab7b-5b441d18684d-config" (OuterVolumeSpecName: "config") pod "08dd6673-2fbb-4bb0-ab7b-5b441d18684d" (UID: "08dd6673-2fbb-4bb0-ab7b-5b441d18684d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:01:58 crc kubenswrapper[4896]: I0126 16:01:58.089764 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05468d2e-2ac7-45f3-973a-ff9c4559701e-config-data" (OuterVolumeSpecName: "config-data") pod "05468d2e-2ac7-45f3-973a-ff9c4559701e" (UID: "05468d2e-2ac7-45f3-973a-ff9c4559701e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:58 crc kubenswrapper[4896]: I0126 16:01:58.093288 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08dd6673-2fbb-4bb0-ab7b-5b441d18684d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "08dd6673-2fbb-4bb0-ab7b-5b441d18684d" (UID: "08dd6673-2fbb-4bb0-ab7b-5b441d18684d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:01:58 crc kubenswrapper[4896]: I0126 16:01:58.096314 4896 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08dd6673-2fbb-4bb0-ab7b-5b441d18684d-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:58 crc kubenswrapper[4896]: I0126 16:01:58.096341 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05468d2e-2ac7-45f3-973a-ff9c4559701e-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:58 crc kubenswrapper[4896]: I0126 16:01:58.096353 4896 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08dd6673-2fbb-4bb0-ab7b-5b441d18684d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:58 crc kubenswrapper[4896]: I0126 16:01:58.096362 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05468d2e-2ac7-45f3-973a-ff9c4559701e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:58 crc kubenswrapper[4896]: I0126 16:01:58.459828 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5597f886c8-sbvdp"] Jan 26 16:01:58 crc kubenswrapper[4896]: I0126 16:01:58.485318 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7756b77457-pgv8j"] Jan 26 16:01:58 crc kubenswrapper[4896]: I0126 16:01:58.524445 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-57f84f676c-wb8k9" event={"ID":"29d61a7c-cfad-454b-a105-c5e589c34488","Type":"ContainerStarted","Data":"82bff0781e3b32f9e54b567dfcb7dd5307a36b60a6e51fe056e3ae9fac1a6e83"} Jan 26 16:01:58 crc kubenswrapper[4896]: I0126 16:01:58.543789 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-665cc7757b-8rh2l" 
event={"ID":"68cc8d9e-4d5a-4d26-8985-2fcbefbfb839","Type":"ContainerStarted","Data":"df06ef4b33d8a665d6eb1053ab3def221f28387c7990dc01363c510cf2a070c0"} Jan 26 16:01:58 crc kubenswrapper[4896]: I0126 16:01:58.554474 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"05468d2e-2ac7-45f3-973a-ff9c4559701e","Type":"ContainerDied","Data":"508c45d4cdb4691ab33794ebaab5148972cce5347cc27b933dce3e2f8fdf6540"} Jan 26 16:01:58 crc kubenswrapper[4896]: I0126 16:01:58.554527 4896 scope.go:117] "RemoveContainer" containerID="5d19857ed92b9b0e8b025b983d4f2307acff2a3fb76ae28611366e1eef231c2b" Jan 26 16:01:58 crc kubenswrapper[4896]: I0126 16:01:58.554761 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 16:01:58 crc kubenswrapper[4896]: I0126 16:01:58.571116 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-665cc7757b-8rh2l"] Jan 26 16:01:58 crc kubenswrapper[4896]: I0126 16:01:58.576639 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0479be0e-3e42-4009-8728-ed8607ac7eaf","Type":"ContainerStarted","Data":"d58c14114e65309815f8aa2f6c3b328b49488c43500d0e43430394a67f58cf8f"} Jan 26 16:01:58 crc kubenswrapper[4896]: I0126 16:01:58.580537 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7756b77457-pgv8j" event={"ID":"19034db0-0d0e-4a65-b91d-890180a924f2","Type":"ContainerStarted","Data":"4cb8a3ffb414880fb18c9c4659b87c32a9ef3ccbcbbfebcc67edf0e956808f4e"} Jan 26 16:01:58 crc kubenswrapper[4896]: I0126 16:01:58.584006 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-5v2tk" Jan 26 16:01:58 crc kubenswrapper[4896]: I0126 16:01:58.584088 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5597f886c8-sbvdp" event={"ID":"ab92f8c3-378a-41fe-97c0-533d45e1a4a5","Type":"ContainerStarted","Data":"eec8adc70e664532012e0f3bf05659d15e47aeced1afed00d3c13a1af0a69674"} Jan 26 16:01:58 crc kubenswrapper[4896]: I0126 16:01:58.641103 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-57f84f676c-wb8k9"] Jan 26 16:01:58 crc kubenswrapper[4896]: I0126 16:01:58.661398 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6f77b8c468-mpb4b"] Jan 26 16:01:58 crc kubenswrapper[4896]: I0126 16:01:58.674604 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 16:01:58 crc kubenswrapper[4896]: I0126 16:01:58.677777 4896 scope.go:117] "RemoveContainer" containerID="888feea7b66b78554c3bec85e17ccb1ef5b34e56fd003197d92558c89f4c3ec3" Jan 26 16:01:58 crc kubenswrapper[4896]: I0126 16:01:58.721902 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 26 16:01:58 crc kubenswrapper[4896]: I0126 16:01:58.741525 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 26 16:01:58 crc kubenswrapper[4896]: I0126 16:01:58.803059 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05468d2e-2ac7-45f3-973a-ff9c4559701e" path="/var/lib/kubelet/pods/05468d2e-2ac7-45f3-973a-ff9c4559701e/volumes" Jan 26 16:01:58 crc kubenswrapper[4896]: I0126 16:01:58.804014 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 26 16:01:58 crc kubenswrapper[4896]: E0126 16:01:58.804577 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08dd6673-2fbb-4bb0-ab7b-5b441d18684d" containerName="init" Jan 26 16:01:58 crc kubenswrapper[4896]: I0126 16:01:58.804631 4896 
state_mem.go:107] "Deleted CPUSet assignment" podUID="08dd6673-2fbb-4bb0-ab7b-5b441d18684d" containerName="init" Jan 26 16:01:58 crc kubenswrapper[4896]: E0126 16:01:58.804667 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05468d2e-2ac7-45f3-973a-ff9c4559701e" containerName="cinder-api-log" Jan 26 16:01:58 crc kubenswrapper[4896]: I0126 16:01:58.804678 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="05468d2e-2ac7-45f3-973a-ff9c4559701e" containerName="cinder-api-log" Jan 26 16:01:58 crc kubenswrapper[4896]: E0126 16:01:58.804689 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08dd6673-2fbb-4bb0-ab7b-5b441d18684d" containerName="dnsmasq-dns" Jan 26 16:01:58 crc kubenswrapper[4896]: I0126 16:01:58.804695 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="08dd6673-2fbb-4bb0-ab7b-5b441d18684d" containerName="dnsmasq-dns" Jan 26 16:01:58 crc kubenswrapper[4896]: E0126 16:01:58.804732 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05468d2e-2ac7-45f3-973a-ff9c4559701e" containerName="cinder-api" Jan 26 16:01:58 crc kubenswrapper[4896]: I0126 16:01:58.804740 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="05468d2e-2ac7-45f3-973a-ff9c4559701e" containerName="cinder-api" Jan 26 16:01:58 crc kubenswrapper[4896]: I0126 16:01:58.805045 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="08dd6673-2fbb-4bb0-ab7b-5b441d18684d" containerName="dnsmasq-dns" Jan 26 16:01:58 crc kubenswrapper[4896]: I0126 16:01:58.805085 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="05468d2e-2ac7-45f3-973a-ff9c4559701e" containerName="cinder-api-log" Jan 26 16:01:58 crc kubenswrapper[4896]: I0126 16:01:58.805096 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="05468d2e-2ac7-45f3-973a-ff9c4559701e" containerName="cinder-api" Jan 26 16:01:58 crc kubenswrapper[4896]: I0126 16:01:58.807453 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/cinder-api-0"] Jan 26 16:01:58 crc kubenswrapper[4896]: I0126 16:01:58.807489 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-5v2tk"] Jan 26 16:01:58 crc kubenswrapper[4896]: I0126 16:01:58.807739 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 16:01:58 crc kubenswrapper[4896]: I0126 16:01:58.824313 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 26 16:01:58 crc kubenswrapper[4896]: I0126 16:01:58.825754 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 26 16:01:58 crc kubenswrapper[4896]: I0126 16:01:58.826014 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 26 16:01:58 crc kubenswrapper[4896]: I0126 16:01:58.853416 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-5v2tk"] Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.155402 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8217b6eb-3002-43a0-a26e-55003835c995-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8217b6eb-3002-43a0-a26e-55003835c995\") " pod="openstack/cinder-api-0" Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.155467 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8nvj\" (UniqueName: \"kubernetes.io/projected/8217b6eb-3002-43a0-a26e-55003835c995-kube-api-access-p8nvj\") pod \"cinder-api-0\" (UID: \"8217b6eb-3002-43a0-a26e-55003835c995\") " pod="openstack/cinder-api-0" Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.155561 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/8217b6eb-3002-43a0-a26e-55003835c995-scripts\") pod \"cinder-api-0\" (UID: \"8217b6eb-3002-43a0-a26e-55003835c995\") " pod="openstack/cinder-api-0" Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.155656 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8217b6eb-3002-43a0-a26e-55003835c995-logs\") pod \"cinder-api-0\" (UID: \"8217b6eb-3002-43a0-a26e-55003835c995\") " pod="openstack/cinder-api-0" Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.155696 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8217b6eb-3002-43a0-a26e-55003835c995-config-data-custom\") pod \"cinder-api-0\" (UID: \"8217b6eb-3002-43a0-a26e-55003835c995\") " pod="openstack/cinder-api-0" Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.155718 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8217b6eb-3002-43a0-a26e-55003835c995-config-data\") pod \"cinder-api-0\" (UID: \"8217b6eb-3002-43a0-a26e-55003835c995\") " pod="openstack/cinder-api-0" Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.155787 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8217b6eb-3002-43a0-a26e-55003835c995-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"8217b6eb-3002-43a0-a26e-55003835c995\") " pod="openstack/cinder-api-0" Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.155809 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8217b6eb-3002-43a0-a26e-55003835c995-combined-ca-bundle\") pod \"cinder-api-0\" (UID: 
\"8217b6eb-3002-43a0-a26e-55003835c995\") " pod="openstack/cinder-api-0" Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.155852 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8217b6eb-3002-43a0-a26e-55003835c995-public-tls-certs\") pod \"cinder-api-0\" (UID: \"8217b6eb-3002-43a0-a26e-55003835c995\") " pod="openstack/cinder-api-0" Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.267194 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8217b6eb-3002-43a0-a26e-55003835c995-scripts\") pod \"cinder-api-0\" (UID: \"8217b6eb-3002-43a0-a26e-55003835c995\") " pod="openstack/cinder-api-0" Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.267303 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8217b6eb-3002-43a0-a26e-55003835c995-logs\") pod \"cinder-api-0\" (UID: \"8217b6eb-3002-43a0-a26e-55003835c995\") " pod="openstack/cinder-api-0" Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.267352 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8217b6eb-3002-43a0-a26e-55003835c995-config-data-custom\") pod \"cinder-api-0\" (UID: \"8217b6eb-3002-43a0-a26e-55003835c995\") " pod="openstack/cinder-api-0" Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.267391 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8217b6eb-3002-43a0-a26e-55003835c995-config-data\") pod \"cinder-api-0\" (UID: \"8217b6eb-3002-43a0-a26e-55003835c995\") " pod="openstack/cinder-api-0" Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.267461 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8217b6eb-3002-43a0-a26e-55003835c995-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"8217b6eb-3002-43a0-a26e-55003835c995\") " pod="openstack/cinder-api-0" Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.267490 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8217b6eb-3002-43a0-a26e-55003835c995-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8217b6eb-3002-43a0-a26e-55003835c995\") " pod="openstack/cinder-api-0" Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.267534 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8217b6eb-3002-43a0-a26e-55003835c995-public-tls-certs\") pod \"cinder-api-0\" (UID: \"8217b6eb-3002-43a0-a26e-55003835c995\") " pod="openstack/cinder-api-0" Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.267625 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8217b6eb-3002-43a0-a26e-55003835c995-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8217b6eb-3002-43a0-a26e-55003835c995\") " pod="openstack/cinder-api-0" Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.267671 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8nvj\" (UniqueName: \"kubernetes.io/projected/8217b6eb-3002-43a0-a26e-55003835c995-kube-api-access-p8nvj\") pod \"cinder-api-0\" (UID: \"8217b6eb-3002-43a0-a26e-55003835c995\") " pod="openstack/cinder-api-0" Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.268989 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8217b6eb-3002-43a0-a26e-55003835c995-logs\") pod \"cinder-api-0\" (UID: \"8217b6eb-3002-43a0-a26e-55003835c995\") " pod="openstack/cinder-api-0" Jan 
26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.269073 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8217b6eb-3002-43a0-a26e-55003835c995-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8217b6eb-3002-43a0-a26e-55003835c995\") " pod="openstack/cinder-api-0" Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.295899 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8217b6eb-3002-43a0-a26e-55003835c995-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"8217b6eb-3002-43a0-a26e-55003835c995\") " pod="openstack/cinder-api-0" Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.296255 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8217b6eb-3002-43a0-a26e-55003835c995-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8217b6eb-3002-43a0-a26e-55003835c995\") " pod="openstack/cinder-api-0" Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.297139 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8217b6eb-3002-43a0-a26e-55003835c995-scripts\") pod \"cinder-api-0\" (UID: \"8217b6eb-3002-43a0-a26e-55003835c995\") " pod="openstack/cinder-api-0" Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.305507 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8217b6eb-3002-43a0-a26e-55003835c995-config-data-custom\") pod \"cinder-api-0\" (UID: \"8217b6eb-3002-43a0-a26e-55003835c995\") " pod="openstack/cinder-api-0" Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.305520 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8217b6eb-3002-43a0-a26e-55003835c995-config-data\") pod \"cinder-api-0\" 
(UID: \"8217b6eb-3002-43a0-a26e-55003835c995\") " pod="openstack/cinder-api-0" Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.311234 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8217b6eb-3002-43a0-a26e-55003835c995-public-tls-certs\") pod \"cinder-api-0\" (UID: \"8217b6eb-3002-43a0-a26e-55003835c995\") " pod="openstack/cinder-api-0" Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.345178 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8nvj\" (UniqueName: \"kubernetes.io/projected/8217b6eb-3002-43a0-a26e-55003835c995-kube-api-access-p8nvj\") pod \"cinder-api-0\" (UID: \"8217b6eb-3002-43a0-a26e-55003835c995\") " pod="openstack/cinder-api-0" Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.465317 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.548236 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.679684 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de7d907a-484d-42e5-88d9-61b398fe83a5-config-data\") pod \"de7d907a-484d-42e5-88d9-61b398fe83a5\" (UID: \"de7d907a-484d-42e5-88d9-61b398fe83a5\") " Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.679752 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/de7d907a-484d-42e5-88d9-61b398fe83a5-internal-tls-certs\") pod \"de7d907a-484d-42e5-88d9-61b398fe83a5\" (UID: \"de7d907a-484d-42e5-88d9-61b398fe83a5\") " Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.679801 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/de7d907a-484d-42e5-88d9-61b398fe83a5-httpd-run\") pod \"de7d907a-484d-42e5-88d9-61b398fe83a5\" (UID: \"de7d907a-484d-42e5-88d9-61b398fe83a5\") " Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.679855 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de7d907a-484d-42e5-88d9-61b398fe83a5-combined-ca-bundle\") pod \"de7d907a-484d-42e5-88d9-61b398fe83a5\" (UID: \"de7d907a-484d-42e5-88d9-61b398fe83a5\") " Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.680254 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea\") pod \"de7d907a-484d-42e5-88d9-61b398fe83a5\" (UID: \"de7d907a-484d-42e5-88d9-61b398fe83a5\") " Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.680422 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-wt99g\" (UniqueName: \"kubernetes.io/projected/de7d907a-484d-42e5-88d9-61b398fe83a5-kube-api-access-wt99g\") pod \"de7d907a-484d-42e5-88d9-61b398fe83a5\" (UID: \"de7d907a-484d-42e5-88d9-61b398fe83a5\") " Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.680747 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de7d907a-484d-42e5-88d9-61b398fe83a5-scripts\") pod \"de7d907a-484d-42e5-88d9-61b398fe83a5\" (UID: \"de7d907a-484d-42e5-88d9-61b398fe83a5\") " Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.680794 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de7d907a-484d-42e5-88d9-61b398fe83a5-logs\") pod \"de7d907a-484d-42e5-88d9-61b398fe83a5\" (UID: \"de7d907a-484d-42e5-88d9-61b398fe83a5\") " Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.683246 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d911d99d-a84d-4aa0-ae95-8a840d2822ce","Type":"ContainerStarted","Data":"ac41420f96d6947bb6179b17540ae6c840a14adccf8b0539e0783231ea6e06ab"} Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.684947 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de7d907a-484d-42e5-88d9-61b398fe83a5-logs" (OuterVolumeSpecName: "logs") pod "de7d907a-484d-42e5-88d9-61b398fe83a5" (UID: "de7d907a-484d-42e5-88d9-61b398fe83a5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.685322 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de7d907a-484d-42e5-88d9-61b398fe83a5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "de7d907a-484d-42e5-88d9-61b398fe83a5" (UID: "de7d907a-484d-42e5-88d9-61b398fe83a5"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.693853 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de7d907a-484d-42e5-88d9-61b398fe83a5-kube-api-access-wt99g" (OuterVolumeSpecName: "kube-api-access-wt99g") pod "de7d907a-484d-42e5-88d9-61b398fe83a5" (UID: "de7d907a-484d-42e5-88d9-61b398fe83a5"). InnerVolumeSpecName "kube-api-access-wt99g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.714040 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5597f886c8-sbvdp" event={"ID":"ab92f8c3-378a-41fe-97c0-533d45e1a4a5","Type":"ContainerStarted","Data":"d11cf69cddc760340e33b565c89921497869e6305ac86d2bbf0363ca40f57b02"} Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.716340 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-5597f886c8-sbvdp" Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.719858 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de7d907a-484d-42e5-88d9-61b398fe83a5-scripts" (OuterVolumeSpecName: "scripts") pod "de7d907a-484d-42e5-88d9-61b398fe83a5" (UID: "de7d907a-484d-42e5-88d9-61b398fe83a5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.750349 4896 generic.go:334] "Generic (PLEG): container finished" podID="de7d907a-484d-42e5-88d9-61b398fe83a5" containerID="f04950240259b2ed157dead4448203e2b879ce9069d371a418df7458269f8cd9" exitCode=0 Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.750469 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"de7d907a-484d-42e5-88d9-61b398fe83a5","Type":"ContainerDied","Data":"f04950240259b2ed157dead4448203e2b879ce9069d371a418df7458269f8cd9"} Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.750503 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"de7d907a-484d-42e5-88d9-61b398fe83a5","Type":"ContainerDied","Data":"656d0795cbf34ec1608f2439afa8694a9964f65299cbf8ab4f9587163e6cbdd5"} Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.750524 4896 scope.go:117] "RemoveContainer" containerID="f04950240259b2ed157dead4448203e2b879ce9069d371a418df7458269f8cd9" Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.750712 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.767230 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-665cc7757b-8rh2l" event={"ID":"68cc8d9e-4d5a-4d26-8985-2fcbefbfb839","Type":"ContainerStarted","Data":"c373043052e99793deeeb10225193eb1c010444d8674c953e738e4e0326e22ab"} Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.767817 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-665cc7757b-8rh2l" Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.771712 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-5597f886c8-sbvdp" podStartSLOduration=18.771688675 podStartE2EDuration="18.771688675s" podCreationTimestamp="2026-01-26 16:01:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:01:59.76797624 +0000 UTC m=+1677.549856633" watchObservedRunningTime="2026-01-26 16:01:59.771688675 +0000 UTC m=+1677.553569078" Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.783579 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wt99g\" (UniqueName: \"kubernetes.io/projected/de7d907a-484d-42e5-88d9-61b398fe83a5-kube-api-access-wt99g\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.783613 4896 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de7d907a-484d-42e5-88d9-61b398fe83a5-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.783623 4896 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de7d907a-484d-42e5-88d9-61b398fe83a5-logs\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.783631 4896 
reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/de7d907a-484d-42e5-88d9-61b398fe83a5-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.832959 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-665cc7757b-8rh2l" podStartSLOduration=18.832935542 podStartE2EDuration="18.832935542s" podCreationTimestamp="2026-01-26 16:01:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:01:59.797594093 +0000 UTC m=+1677.579474486" watchObservedRunningTime="2026-01-26 16:01:59.832935542 +0000 UTC m=+1677.614815935"
Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.839798 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6f77b8c468-mpb4b" event={"ID":"8e3ce827-294a-4320-9545-c01e6aa46bbb","Type":"ContainerStarted","Data":"7cc60ad36b9e58b0eaaea234c69270176f5fb7da8a8f200aae7631dac5db3544"}
Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.885248 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea" (OuterVolumeSpecName: "glance") pod "de7d907a-484d-42e5-88d9-61b398fe83a5" (UID: "de7d907a-484d-42e5-88d9-61b398fe83a5"). InnerVolumeSpecName "pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 26 16:01:59 crc kubenswrapper[4896]: E0126 16:01:59.885818 4896 reconciler_common.go:156] "operationExecutor.UnmountVolume failed (controllerAttachDetachEnabled true) for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea\") pod \"de7d907a-484d-42e5-88d9-61b398fe83a5\" (UID: \"de7d907a-484d-42e5-88d9-61b398fe83a5\") : UnmountVolume.NewUnmounter failed for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea\") pod \"de7d907a-484d-42e5-88d9-61b398fe83a5\" (UID: \"de7d907a-484d-42e5-88d9-61b398fe83a5\") : kubernetes.io/csi: unmounter failed to load volume data file [/var/lib/kubelet/pods/de7d907a-484d-42e5-88d9-61b398fe83a5/volumes/kubernetes.io~csi/pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea/mount]: kubernetes.io/csi: failed to open volume data file [/var/lib/kubelet/pods/de7d907a-484d-42e5-88d9-61b398fe83a5/volumes/kubernetes.io~csi/pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea/vol_data.json]: open /var/lib/kubelet/pods/de7d907a-484d-42e5-88d9-61b398fe83a5/volumes/kubernetes.io~csi/pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea/vol_data.json: no such file or directory" err="UnmountVolume.NewUnmounter failed for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea\") pod \"de7d907a-484d-42e5-88d9-61b398fe83a5\" (UID: \"de7d907a-484d-42e5-88d9-61b398fe83a5\") : kubernetes.io/csi: unmounter failed to load volume data file [/var/lib/kubelet/pods/de7d907a-484d-42e5-88d9-61b398fe83a5/volumes/kubernetes.io~csi/pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea/mount]: kubernetes.io/csi: failed to open volume data file [/var/lib/kubelet/pods/de7d907a-484d-42e5-88d9-61b398fe83a5/volumes/kubernetes.io~csi/pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea/vol_data.json]: open /var/lib/kubelet/pods/de7d907a-484d-42e5-88d9-61b398fe83a5/volumes/kubernetes.io~csi/pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea/vol_data.json: no such file or directory"
Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.887568 4896 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea\") on node \"crc\" "
Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.891507 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de7d907a-484d-42e5-88d9-61b398fe83a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de7d907a-484d-42e5-88d9-61b398fe83a5" (UID: "de7d907a-484d-42e5-88d9-61b398fe83a5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:01:59 crc kubenswrapper[4896]: I0126 16:01:59.995101 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de7d907a-484d-42e5-88d9-61b398fe83a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 16:02:00 crc kubenswrapper[4896]: I0126 16:02:00.102088 4896 scope.go:117] "RemoveContainer" containerID="7f20a2a1df2c49dd3758d5f4f320d69c630c4cfd7853c5df28da5848064e3be2"
Jan 26 16:02:00 crc kubenswrapper[4896]: I0126 16:02:00.148207 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de7d907a-484d-42e5-88d9-61b398fe83a5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "de7d907a-484d-42e5-88d9-61b398fe83a5" (UID: "de7d907a-484d-42e5-88d9-61b398fe83a5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:02:00 crc kubenswrapper[4896]: I0126 16:02:00.211438 4896 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/de7d907a-484d-42e5-88d9-61b398fe83a5-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 26 16:02:00 crc kubenswrapper[4896]: I0126 16:02:00.386563 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 26 16:02:00 crc kubenswrapper[4896]: I0126 16:02:00.521497 4896 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Jan 26 16:02:00 crc kubenswrapper[4896]: I0126 16:02:00.521675 4896 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea") on node "crc"
Jan 26 16:02:00 crc kubenswrapper[4896]: I0126 16:02:00.524424 4896 reconciler_common.go:293] "Volume detached for volume \"pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea\") on node \"crc\" DevicePath \"\""
Jan 26 16:02:00 crc kubenswrapper[4896]: I0126 16:02:00.623536 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de7d907a-484d-42e5-88d9-61b398fe83a5-config-data" (OuterVolumeSpecName: "config-data") pod "de7d907a-484d-42e5-88d9-61b398fe83a5" (UID: "de7d907a-484d-42e5-88d9-61b398fe83a5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:02:00 crc kubenswrapper[4896]: I0126 16:02:00.626768 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de7d907a-484d-42e5-88d9-61b398fe83a5-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.069576 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08dd6673-2fbb-4bb0-ab7b-5b441d18684d" path="/var/lib/kubelet/pods/08dd6673-2fbb-4bb0-ab7b-5b441d18684d/volumes"
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.189907 4896 scope.go:117] "RemoveContainer" containerID="f04950240259b2ed157dead4448203e2b879ce9069d371a418df7458269f8cd9"
Jan 26 16:02:01 crc kubenswrapper[4896]: E0126 16:02:01.196109 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f04950240259b2ed157dead4448203e2b879ce9069d371a418df7458269f8cd9\": container with ID starting with f04950240259b2ed157dead4448203e2b879ce9069d371a418df7458269f8cd9 not found: ID does not exist" containerID="f04950240259b2ed157dead4448203e2b879ce9069d371a418df7458269f8cd9"
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.196163 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f04950240259b2ed157dead4448203e2b879ce9069d371a418df7458269f8cd9"} err="failed to get container status \"f04950240259b2ed157dead4448203e2b879ce9069d371a418df7458269f8cd9\": rpc error: code = NotFound desc = could not find container \"f04950240259b2ed157dead4448203e2b879ce9069d371a418df7458269f8cd9\": container with ID starting with f04950240259b2ed157dead4448203e2b879ce9069d371a418df7458269f8cd9 not found: ID does not exist"
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.196198 4896 scope.go:117] "RemoveContainer" containerID="7f20a2a1df2c49dd3758d5f4f320d69c630c4cfd7853c5df28da5848064e3be2"
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.196401 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7756b77457-pgv8j" event={"ID":"19034db0-0d0e-4a65-b91d-890180a924f2","Type":"ContainerStarted","Data":"68f2e4b14c6a8445b80489d249bbbdd906fc152724b924165dff4e66aaf13944"}
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.197881 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-7756b77457-pgv8j"
Jan 26 16:02:01 crc kubenswrapper[4896]: E0126 16:02:01.205866 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f20a2a1df2c49dd3758d5f4f320d69c630c4cfd7853c5df28da5848064e3be2\": container with ID starting with 7f20a2a1df2c49dd3758d5f4f320d69c630c4cfd7853c5df28da5848064e3be2 not found: ID does not exist" containerID="7f20a2a1df2c49dd3758d5f4f320d69c630c4cfd7853c5df28da5848064e3be2"
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.205911 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f20a2a1df2c49dd3758d5f4f320d69c630c4cfd7853c5df28da5848064e3be2"} err="failed to get container status \"7f20a2a1df2c49dd3758d5f4f320d69c630c4cfd7853c5df28da5848064e3be2\": rpc error: code = NotFound desc = could not find container \"7f20a2a1df2c49dd3758d5f4f320d69c630c4cfd7853c5df28da5848064e3be2\": container with ID starting with 7f20a2a1df2c49dd3758d5f4f320d69c630c4cfd7853c5df28da5848064e3be2 not found: ID does not exist"
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.206162 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.210985 4896 generic.go:334] "Generic (PLEG): container finished" podID="ab92f8c3-378a-41fe-97c0-533d45e1a4a5" containerID="d11cf69cddc760340e33b565c89921497869e6305ac86d2bbf0363ca40f57b02" exitCode=1
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.211077 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5597f886c8-sbvdp" event={"ID":"ab92f8c3-378a-41fe-97c0-533d45e1a4a5","Type":"ContainerDied","Data":"d11cf69cddc760340e33b565c89921497869e6305ac86d2bbf0363ca40f57b02"}
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.212169 4896 scope.go:117] "RemoveContainer" containerID="d11cf69cddc760340e33b565c89921497869e6305ac86d2bbf0363ca40f57b02"
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.216614 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.221828 4896 generic.go:334] "Generic (PLEG): container finished" podID="29d61a7c-cfad-454b-a105-c5e589c34488" containerID="a56a3e6bf2d074fe3269337ee90a7b060da7921acb13f8aaa8a6e355ca96854f" exitCode=1
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.221899 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-57f84f676c-wb8k9" event={"ID":"29d61a7c-cfad-454b-a105-c5e589c34488","Type":"ContainerDied","Data":"a56a3e6bf2d074fe3269337ee90a7b060da7921acb13f8aaa8a6e355ca96854f"}
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.222792 4896 scope.go:117] "RemoveContainer" containerID="a56a3e6bf2d074fe3269337ee90a7b060da7921acb13f8aaa8a6e355ca96854f"
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.246299 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8217b6eb-3002-43a0-a26e-55003835c995","Type":"ContainerStarted","Data":"36f0bc5d7d1a307548be1dc52c51b84db273965b0b8d3297f69fea3f2395b1be"}
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.252252 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-7756b77457-pgv8j" podStartSLOduration=16.2522228 podStartE2EDuration="16.2522228s" podCreationTimestamp="2026-01-26 16:01:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:02:01.228024055 +0000 UTC m=+1679.009904448" watchObservedRunningTime="2026-01-26 16:02:01.2522228 +0000 UTC m=+1679.034103193"
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.274044 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0479be0e-3e42-4009-8728-ed8607ac7eaf","Type":"ContainerStarted","Data":"bd6432bf9b73a1fe741ba740674f961655868cd10553f492a5291ec14e125f8b"}
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.294751 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 26 16:02:01 crc kubenswrapper[4896]: E0126 16:02:01.295367 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de7d907a-484d-42e5-88d9-61b398fe83a5" containerName="glance-log"
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.295383 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="de7d907a-484d-42e5-88d9-61b398fe83a5" containerName="glance-log"
Jan 26 16:02:01 crc kubenswrapper[4896]: E0126 16:02:01.295409 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de7d907a-484d-42e5-88d9-61b398fe83a5" containerName="glance-httpd"
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.295415 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="de7d907a-484d-42e5-88d9-61b398fe83a5" containerName="glance-httpd"
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.295786 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="de7d907a-484d-42e5-88d9-61b398fe83a5" containerName="glance-log"
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.295823 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="de7d907a-484d-42e5-88d9-61b398fe83a5" containerName="glance-httpd"
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.297472 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.302505 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.302766 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.314304 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.320971 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6f77b8c468-mpb4b" event={"ID":"8e3ce827-294a-4320-9545-c01e6aa46bbb","Type":"ContainerStarted","Data":"f236bb30a02f8b23734b4228c03d6b1b400dfa9431f3e94e0491aedcc5c83b9d"}
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.321103 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6f77b8c468-mpb4b"
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.389783 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6f77b8c468-mpb4b" podStartSLOduration=16.389762876 podStartE2EDuration="16.389762876s" podCreationTimestamp="2026-01-26 16:01:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:02:01.349374299 +0000 UTC m=+1679.131254702" watchObservedRunningTime="2026-01-26 16:02:01.389762876 +0000 UTC m=+1679.171643269"
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.495184 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08bc5795-ac12-43ba-9d58-d4a738f0c4ed-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"08bc5795-ac12-43ba-9d58-d4a738f0c4ed\") " pod="openstack/glance-default-internal-api-0"
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.495341 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08bc5795-ac12-43ba-9d58-d4a738f0c4ed-config-data\") pod \"glance-default-internal-api-0\" (UID: \"08bc5795-ac12-43ba-9d58-d4a738f0c4ed\") " pod="openstack/glance-default-internal-api-0"
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.495392 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h27lw\" (UniqueName: \"kubernetes.io/projected/08bc5795-ac12-43ba-9d58-d4a738f0c4ed-kube-api-access-h27lw\") pod \"glance-default-internal-api-0\" (UID: \"08bc5795-ac12-43ba-9d58-d4a738f0c4ed\") " pod="openstack/glance-default-internal-api-0"
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.495421 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08bc5795-ac12-43ba-9d58-d4a738f0c4ed-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"08bc5795-ac12-43ba-9d58-d4a738f0c4ed\") " pod="openstack/glance-default-internal-api-0"
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.495502 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08bc5795-ac12-43ba-9d58-d4a738f0c4ed-scripts\") pod \"glance-default-internal-api-0\" (UID: \"08bc5795-ac12-43ba-9d58-d4a738f0c4ed\") " pod="openstack/glance-default-internal-api-0"
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.495549 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea\") pod \"glance-default-internal-api-0\" (UID: \"08bc5795-ac12-43ba-9d58-d4a738f0c4ed\") " pod="openstack/glance-default-internal-api-0"
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.495577 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08bc5795-ac12-43ba-9d58-d4a738f0c4ed-logs\") pod \"glance-default-internal-api-0\" (UID: \"08bc5795-ac12-43ba-9d58-d4a738f0c4ed\") " pod="openstack/glance-default-internal-api-0"
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.495654 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/08bc5795-ac12-43ba-9d58-d4a738f0c4ed-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"08bc5795-ac12-43ba-9d58-d4a738f0c4ed\") " pod="openstack/glance-default-internal-api-0"
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.601714 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08bc5795-ac12-43ba-9d58-d4a738f0c4ed-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"08bc5795-ac12-43ba-9d58-d4a738f0c4ed\") " pod="openstack/glance-default-internal-api-0"
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.602134 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08bc5795-ac12-43ba-9d58-d4a738f0c4ed-config-data\") pod \"glance-default-internal-api-0\" (UID: \"08bc5795-ac12-43ba-9d58-d4a738f0c4ed\") " pod="openstack/glance-default-internal-api-0"
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.602207 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h27lw\" (UniqueName: \"kubernetes.io/projected/08bc5795-ac12-43ba-9d58-d4a738f0c4ed-kube-api-access-h27lw\") pod \"glance-default-internal-api-0\" (UID: \"08bc5795-ac12-43ba-9d58-d4a738f0c4ed\") " pod="openstack/glance-default-internal-api-0"
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.602249 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08bc5795-ac12-43ba-9d58-d4a738f0c4ed-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"08bc5795-ac12-43ba-9d58-d4a738f0c4ed\") " pod="openstack/glance-default-internal-api-0"
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.603696 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08bc5795-ac12-43ba-9d58-d4a738f0c4ed-scripts\") pod \"glance-default-internal-api-0\" (UID: \"08bc5795-ac12-43ba-9d58-d4a738f0c4ed\") " pod="openstack/glance-default-internal-api-0"
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.603773 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea\") pod \"glance-default-internal-api-0\" (UID: \"08bc5795-ac12-43ba-9d58-d4a738f0c4ed\") " pod="openstack/glance-default-internal-api-0"
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.603833 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08bc5795-ac12-43ba-9d58-d4a738f0c4ed-logs\") pod \"glance-default-internal-api-0\" (UID: \"08bc5795-ac12-43ba-9d58-d4a738f0c4ed\") " pod="openstack/glance-default-internal-api-0"
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.603941 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/08bc5795-ac12-43ba-9d58-d4a738f0c4ed-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"08bc5795-ac12-43ba-9d58-d4a738f0c4ed\") " pod="openstack/glance-default-internal-api-0"
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.605807 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/08bc5795-ac12-43ba-9d58-d4a738f0c4ed-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"08bc5795-ac12-43ba-9d58-d4a738f0c4ed\") " pod="openstack/glance-default-internal-api-0"
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.605852 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08bc5795-ac12-43ba-9d58-d4a738f0c4ed-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"08bc5795-ac12-43ba-9d58-d4a738f0c4ed\") " pod="openstack/glance-default-internal-api-0"
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.606412 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08bc5795-ac12-43ba-9d58-d4a738f0c4ed-logs\") pod \"glance-default-internal-api-0\" (UID: \"08bc5795-ac12-43ba-9d58-d4a738f0c4ed\") " pod="openstack/glance-default-internal-api-0"
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.609335 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08bc5795-ac12-43ba-9d58-d4a738f0c4ed-config-data\") pod \"glance-default-internal-api-0\" (UID: \"08bc5795-ac12-43ba-9d58-d4a738f0c4ed\") " pod="openstack/glance-default-internal-api-0"
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.613132 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08bc5795-ac12-43ba-9d58-d4a738f0c4ed-scripts\") pod \"glance-default-internal-api-0\" (UID: \"08bc5795-ac12-43ba-9d58-d4a738f0c4ed\") " pod="openstack/glance-default-internal-api-0"
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.620496 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08bc5795-ac12-43ba-9d58-d4a738f0c4ed-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"08bc5795-ac12-43ba-9d58-d4a738f0c4ed\") " pod="openstack/glance-default-internal-api-0"
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.623538 4896 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.623605 4896 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea\") pod \"glance-default-internal-api-0\" (UID: \"08bc5795-ac12-43ba-9d58-d4a738f0c4ed\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ca73410178097169f340b5a8c67a8781e7a4252415873019f27420073d85ffa1/globalmount\"" pod="openstack/glance-default-internal-api-0"
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.629438 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h27lw\" (UniqueName: \"kubernetes.io/projected/08bc5795-ac12-43ba-9d58-d4a738f0c4ed-kube-api-access-h27lw\") pod \"glance-default-internal-api-0\" (UID: \"08bc5795-ac12-43ba-9d58-d4a738f0c4ed\") " pod="openstack/glance-default-internal-api-0"
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.716684 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5068aa0-6316-400d-ad6b-1f3bdba48aea\") pod \"glance-default-internal-api-0\" (UID: \"08bc5795-ac12-43ba-9d58-d4a738f0c4ed\") " pod="openstack/glance-default-internal-api-0"
Jan 26 16:02:01 crc kubenswrapper[4896]: I0126 16:02:01.939984 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 26 16:02:02 crc kubenswrapper[4896]: I0126 16:02:02.253678 4896 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-5597f886c8-sbvdp"
Jan 26 16:02:02 crc kubenswrapper[4896]: I0126 16:02:02.313753 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-57f84f676c-wb8k9"
Jan 26 16:02:02 crc kubenswrapper[4896]: I0126 16:02:02.313828 4896 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-57f84f676c-wb8k9"
Jan 26 16:02:02 crc kubenswrapper[4896]: I0126 16:02:02.474657 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5597f886c8-sbvdp" event={"ID":"ab92f8c3-378a-41fe-97c0-533d45e1a4a5","Type":"ContainerStarted","Data":"ed56748b9e3d84a63b28cb7f116851f8eab2f9d1f05ed381fb2352fc2d4fc1d5"}
Jan 26 16:02:02 crc kubenswrapper[4896]: I0126 16:02:02.475130 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-5597f886c8-sbvdp"
Jan 26 16:02:02 crc kubenswrapper[4896]: I0126 16:02:02.535795 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-57f84f676c-wb8k9" event={"ID":"29d61a7c-cfad-454b-a105-c5e589c34488","Type":"ContainerStarted","Data":"86f7741e669959726d51a5840b60721194ef0974b799cef9cc531ad6034e881e"}
Jan 26 16:02:02 crc kubenswrapper[4896]: I0126 16:02:02.536116 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-57f84f676c-wb8k9"
Jan 26 16:02:02 crc kubenswrapper[4896]: I0126 16:02:02.632013 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8217b6eb-3002-43a0-a26e-55003835c995","Type":"ContainerStarted","Data":"e74dd6b8198979f7112472f86ea7bc3b33fd7684e09d83f3d28815484c3d6a85"}
Jan 26 16:02:02 crc kubenswrapper[4896]: I0126 16:02:02.636644 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-57f84f676c-wb8k9" podStartSLOduration=21.636621492 podStartE2EDuration="21.636621492s" podCreationTimestamp="2026-01-26 16:01:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:02:02.590177622 +0000 UTC m=+1680.372058015" watchObservedRunningTime="2026-01-26 16:02:02.636621492 +0000 UTC m=+1680.418501885"
Jan 26 16:02:02 crc kubenswrapper[4896]: I0126 16:02:02.649674 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0479be0e-3e42-4009-8728-ed8607ac7eaf","Type":"ContainerStarted","Data":"98edb0b5c8bb7ddd262dc77c659e3523889d84fca18cdb6fa8fb046094b800b9"}
Jan 26 16:02:02 crc kubenswrapper[4896]: I0126 16:02:02.669878 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d911d99d-a84d-4aa0-ae95-8a840d2822ce","Type":"ContainerStarted","Data":"f7baa0d7825721ca0e78401c6caf365d87b0c24d38dc8861eb7781abd6aef09b"}
Jan 26 16:02:03 crc kubenswrapper[4896]: I0126 16:02:03.028543 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de7d907a-484d-42e5-88d9-61b398fe83a5" path="/var/lib/kubelet/pods/de7d907a-484d-42e5-88d9-61b398fe83a5/volumes"
Jan 26 16:02:03 crc kubenswrapper[4896]: I0126 16:02:03.030920 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 26 16:02:03 crc kubenswrapper[4896]: I0126 16:02:03.707771 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"08bc5795-ac12-43ba-9d58-d4a738f0c4ed","Type":"ContainerStarted","Data":"266d6bacb64f83a5e82d3de7424e6a1c79b493fc914b0bdf96b86de96337a853"}
Jan 26 16:02:03 crc kubenswrapper[4896]: I0126 16:02:03.751900 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d911d99d-a84d-4aa0-ae95-8a840d2822ce","Type":"ContainerStarted","Data":"a807a5395d126ad475115aed4f0caf2f21db27070a86e743835ff0b322eb6421"}
Jan 26 16:02:03 crc kubenswrapper[4896]: I0126 16:02:03.790051 4896 generic.go:334] "Generic (PLEG): container finished" podID="ab92f8c3-378a-41fe-97c0-533d45e1a4a5" containerID="ed56748b9e3d84a63b28cb7f116851f8eab2f9d1f05ed381fb2352fc2d4fc1d5" exitCode=1
Jan 26 16:02:03 crc kubenswrapper[4896]: I0126 16:02:03.790147 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5597f886c8-sbvdp" event={"ID":"ab92f8c3-378a-41fe-97c0-533d45e1a4a5","Type":"ContainerDied","Data":"ed56748b9e3d84a63b28cb7f116851f8eab2f9d1f05ed381fb2352fc2d4fc1d5"}
Jan 26 16:02:03 crc kubenswrapper[4896]: I0126 16:02:03.797152 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=23.797129044 podStartE2EDuration="23.797129044s" podCreationTimestamp="2026-01-26 16:01:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:02:03.782273856 +0000 UTC m=+1681.564154269" watchObservedRunningTime="2026-01-26 16:02:03.797129044 +0000 UTC m=+1681.579009437"
Jan 26 16:02:03 crc kubenswrapper[4896]: I0126 16:02:03.797291 4896 scope.go:117] "RemoveContainer" containerID="d11cf69cddc760340e33b565c89921497869e6305ac86d2bbf0363ca40f57b02"
Jan 26 16:02:03 crc kubenswrapper[4896]: I0126 16:02:03.798170 4896 scope.go:117] "RemoveContainer" containerID="ed56748b9e3d84a63b28cb7f116851f8eab2f9d1f05ed381fb2352fc2d4fc1d5"
Jan 26 16:02:03 crc kubenswrapper[4896]: E0126 16:02:03.798458 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-5597f886c8-sbvdp_openstack(ab92f8c3-378a-41fe-97c0-533d45e1a4a5)\"" pod="openstack/heat-cfnapi-5597f886c8-sbvdp" podUID="ab92f8c3-378a-41fe-97c0-533d45e1a4a5"
Jan 26 16:02:03 crc kubenswrapper[4896]: I0126 16:02:03.798835 4896 generic.go:334] "Generic (PLEG): container finished" podID="29d61a7c-cfad-454b-a105-c5e589c34488" containerID="86f7741e669959726d51a5840b60721194ef0974b799cef9cc531ad6034e881e" exitCode=1
Jan 26 16:02:03 crc kubenswrapper[4896]: I0126 16:02:03.799116 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-57f84f676c-wb8k9" event={"ID":"29d61a7c-cfad-454b-a105-c5e589c34488","Type":"ContainerDied","Data":"86f7741e669959726d51a5840b60721194ef0974b799cef9cc531ad6034e881e"}
Jan 26 16:02:03 crc kubenswrapper[4896]: I0126 16:02:03.799632 4896 scope.go:117] "RemoveContainer" containerID="86f7741e669959726d51a5840b60721194ef0974b799cef9cc531ad6034e881e"
Jan 26 16:02:03 crc kubenswrapper[4896]: E0126 16:02:03.799993 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-57f84f676c-wb8k9_openstack(29d61a7c-cfad-454b-a105-c5e589c34488)\"" pod="openstack/heat-api-57f84f676c-wb8k9" podUID="29d61a7c-cfad-454b-a105-c5e589c34488"
Jan 26 16:02:03 crc kubenswrapper[4896]: I0126 16:02:03.947663 4896 scope.go:117] "RemoveContainer" containerID="a56a3e6bf2d074fe3269337ee90a7b060da7921acb13f8aaa8a6e355ca96854f"
Jan 26 16:02:04 crc kubenswrapper[4896]: I0126 16:02:04.835223 4896 scope.go:117] "RemoveContainer" containerID="ed56748b9e3d84a63b28cb7f116851f8eab2f9d1f05ed381fb2352fc2d4fc1d5"
Jan 26 16:02:04 crc kubenswrapper[4896]: E0126 16:02:04.835872 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-5597f886c8-sbvdp_openstack(ab92f8c3-378a-41fe-97c0-533d45e1a4a5)\"" pod="openstack/heat-cfnapi-5597f886c8-sbvdp" podUID="ab92f8c3-378a-41fe-97c0-533d45e1a4a5"
Jan 26 16:02:04 crc kubenswrapper[4896]: I0126 16:02:04.854070 4896 scope.go:117] "RemoveContainer" containerID="86f7741e669959726d51a5840b60721194ef0974b799cef9cc531ad6034e881e"
Jan 26 16:02:04 crc kubenswrapper[4896]: E0126 16:02:04.854417 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-57f84f676c-wb8k9_openstack(29d61a7c-cfad-454b-a105-c5e589c34488)\"" pod="openstack/heat-api-57f84f676c-wb8k9" podUID="29d61a7c-cfad-454b-a105-c5e589c34488"
Jan 26 16:02:04 crc kubenswrapper[4896]: I0126 16:02:04.880939 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8217b6eb-3002-43a0-a26e-55003835c995","Type":"ContainerStarted","Data":"908f0077467085312de978cefd180b21586c843c8eb6b96685ef47cccad50ad8"}
Jan 26 16:02:04 crc kubenswrapper[4896]: I0126 16:02:04.881201 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Jan 26 16:02:04 crc kubenswrapper[4896]: I0126 16:02:04.909487 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.909472027 podStartE2EDuration="6.909472027s" podCreationTimestamp="2026-01-26 16:01:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:02:04.906936167 +0000 UTC m=+1682.688816560" watchObservedRunningTime="2026-01-26 16:02:04.909472027 +0000 UTC m=+1682.691352420"
Jan 26 16:02:05 crc kubenswrapper[4896]: I0126 16:02:05.943374 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0479be0e-3e42-4009-8728-ed8607ac7eaf","Type":"ContainerStarted","Data":"b91f1d24bd41aa97dbd80ee8caa271299eade592da41bef18add8fd7fc65366d"}
Jan 26 16:02:05 crc kubenswrapper[4896]: I0126 16:02:05.944127 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0479be0e-3e42-4009-8728-ed8607ac7eaf" containerName="ceilometer-central-agent" containerID="cri-o://d58c14114e65309815f8aa2f6c3b328b49488c43500d0e43430394a67f58cf8f" gracePeriod=30
Jan 26 16:02:05 crc kubenswrapper[4896]: I0126 16:02:05.944606 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 26 16:02:05 crc kubenswrapper[4896]: I0126 16:02:05.945525 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0479be0e-3e42-4009-8728-ed8607ac7eaf" containerName="proxy-httpd" containerID="cri-o://b91f1d24bd41aa97dbd80ee8caa271299eade592da41bef18add8fd7fc65366d" gracePeriod=30
Jan 26 16:02:05 crc kubenswrapper[4896]: I0126 16:02:05.945717 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0479be0e-3e42-4009-8728-ed8607ac7eaf" containerName="sg-core" containerID="cri-o://98edb0b5c8bb7ddd262dc77c659e3523889d84fca18cdb6fa8fb046094b800b9" gracePeriod=30
Jan 26 16:02:05 crc kubenswrapper[4896]: I0126 16:02:05.945778 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0479be0e-3e42-4009-8728-ed8607ac7eaf" containerName="ceilometer-notification-agent" containerID="cri-o://bd6432bf9b73a1fe741ba740674f961655868cd10553f492a5291ec14e125f8b" gracePeriod=30
Jan 26 16:02:05 crc kubenswrapper[4896]: I0126 16:02:05.964572 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"08bc5795-ac12-43ba-9d58-d4a738f0c4ed","Type":"ContainerStarted","Data":"4c0b858c8b8a1ac0d5cca0eb81b43e99545f75ee40342f5e1e93e1a2a6a4125f"}
Jan 26 16:02:06 crc kubenswrapper[4896]: I0126 16:02:06.147282 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Jan 26 16:02:06 crc kubenswrapper[4896]: I0126 16:02:06.147714 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="d911d99d-a84d-4aa0-ae95-8a840d2822ce" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.223:8080/\": dial tcp 10.217.0.223:8080: connect: connection refused"
Jan 26 16:02:06 crc kubenswrapper[4896]: I0126 16:02:06.981790 4896 generic.go:334] "Generic (PLEG): container finished" podID="0479be0e-3e42-4009-8728-ed8607ac7eaf" containerID="b91f1d24bd41aa97dbd80ee8caa271299eade592da41bef18add8fd7fc65366d" exitCode=0
Jan 26 16:02:06 crc kubenswrapper[4896]: I0126 16:02:06.983132 4896 generic.go:334] "Generic (PLEG): container finished" podID="0479be0e-3e42-4009-8728-ed8607ac7eaf" containerID="98edb0b5c8bb7ddd262dc77c659e3523889d84fca18cdb6fa8fb046094b800b9" exitCode=2
Jan 26 16:02:06 crc kubenswrapper[4896]: I0126 16:02:06.981929 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0479be0e-3e42-4009-8728-ed8607ac7eaf","Type":"ContainerDied","Data":"b91f1d24bd41aa97dbd80ee8caa271299eade592da41bef18add8fd7fc65366d"}
Jan 26 16:02:06 crc kubenswrapper[4896]: I0126 16:02:06.983365 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0479be0e-3e42-4009-8728-ed8607ac7eaf","Type":"ContainerDied","Data":"98edb0b5c8bb7ddd262dc77c659e3523889d84fca18cdb6fa8fb046094b800b9"}
Jan 26 16:02:07 crc kubenswrapper[4896]: I0126 16:02:07.293965 4896 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-5597f886c8-sbvdp"
Jan 26 16:02:07 crc
kubenswrapper[4896]: I0126 16:02:07.309130 4896 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-57f84f676c-wb8k9" Jan 26 16:02:07 crc kubenswrapper[4896]: I0126 16:02:07.310674 4896 scope.go:117] "RemoveContainer" containerID="ed56748b9e3d84a63b28cb7f116851f8eab2f9d1f05ed381fb2352fc2d4fc1d5" Jan 26 16:02:07 crc kubenswrapper[4896]: E0126 16:02:07.311027 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-5597f886c8-sbvdp_openstack(ab92f8c3-378a-41fe-97c0-533d45e1a4a5)\"" pod="openstack/heat-cfnapi-5597f886c8-sbvdp" podUID="ab92f8c3-378a-41fe-97c0-533d45e1a4a5" Jan 26 16:02:07 crc kubenswrapper[4896]: I0126 16:02:07.314435 4896 scope.go:117] "RemoveContainer" containerID="86f7741e669959726d51a5840b60721194ef0974b799cef9cc531ad6034e881e" Jan 26 16:02:07 crc kubenswrapper[4896]: E0126 16:02:07.315146 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-57f84f676c-wb8k9_openstack(29d61a7c-cfad-454b-a105-c5e589c34488)\"" pod="openstack/heat-api-57f84f676c-wb8k9" podUID="29d61a7c-cfad-454b-a105-c5e589c34488" Jan 26 16:02:07 crc kubenswrapper[4896]: I0126 16:02:07.997555 4896 generic.go:334] "Generic (PLEG): container finished" podID="0479be0e-3e42-4009-8728-ed8607ac7eaf" containerID="bd6432bf9b73a1fe741ba740674f961655868cd10553f492a5291ec14e125f8b" exitCode=0 Jan 26 16:02:07 crc kubenswrapper[4896]: I0126 16:02:07.997619 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0479be0e-3e42-4009-8728-ed8607ac7eaf","Type":"ContainerDied","Data":"bd6432bf9b73a1fe741ba740674f961655868cd10553f492a5291ec14e125f8b"} Jan 26 16:02:08 crc kubenswrapper[4896]: I0126 16:02:08.202149 4896 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-75c68767d8-c7w2z" Jan 26 16:02:08 crc kubenswrapper[4896]: I0126 16:02:08.219563 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=21.032297081 podStartE2EDuration="29.219539766s" podCreationTimestamp="2026-01-26 16:01:39 +0000 UTC" firstStartedPulling="2026-01-26 16:01:56.031111864 +0000 UTC m=+1673.812992257" lastFinishedPulling="2026-01-26 16:02:04.218354549 +0000 UTC m=+1682.000234942" observedRunningTime="2026-01-26 16:02:06.120423611 +0000 UTC m=+1683.902303994" watchObservedRunningTime="2026-01-26 16:02:08.219539766 +0000 UTC m=+1686.001420159" Jan 26 16:02:08 crc kubenswrapper[4896]: I0126 16:02:08.508382 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-545f7c69fd-hm6nm" Jan 26 16:02:08 crc kubenswrapper[4896]: I0126 16:02:08.831182 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-7756b77457-pgv8j" Jan 26 16:02:08 crc kubenswrapper[4896]: I0126 16:02:08.968358 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-57f84f676c-wb8k9"] Jan 26 16:02:09 crc kubenswrapper[4896]: I0126 16:02:09.105184 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"08bc5795-ac12-43ba-9d58-d4a738f0c4ed","Type":"ContainerStarted","Data":"400a841d8a4e0970384ecd93f0f4a03e07f988fd954c6f3a92754d83f58a3d3d"} Jan 26 16:02:09 crc kubenswrapper[4896]: I0126 16:02:09.161133 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.161110138 podStartE2EDuration="8.161110138s" podCreationTimestamp="2026-01-26 16:02:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 
16:02:09.137612011 +0000 UTC m=+1686.919492404" watchObservedRunningTime="2026-01-26 16:02:09.161110138 +0000 UTC m=+1686.942990541" Jan 26 16:02:09 crc kubenswrapper[4896]: I0126 16:02:09.609446 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-6f77b8c468-mpb4b" Jan 26 16:02:09 crc kubenswrapper[4896]: I0126 16:02:09.692054 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-57f84f676c-wb8k9" Jan 26 16:02:09 crc kubenswrapper[4896]: I0126 16:02:09.698221 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5597f886c8-sbvdp"] Jan 26 16:02:09 crc kubenswrapper[4896]: I0126 16:02:09.800047 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29d61a7c-cfad-454b-a105-c5e589c34488-combined-ca-bundle\") pod \"29d61a7c-cfad-454b-a105-c5e589c34488\" (UID: \"29d61a7c-cfad-454b-a105-c5e589c34488\") " Jan 26 16:02:09 crc kubenswrapper[4896]: I0126 16:02:09.800550 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29d61a7c-cfad-454b-a105-c5e589c34488-config-data-custom\") pod \"29d61a7c-cfad-454b-a105-c5e589c34488\" (UID: \"29d61a7c-cfad-454b-a105-c5e589c34488\") " Jan 26 16:02:09 crc kubenswrapper[4896]: I0126 16:02:09.800666 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hspnm\" (UniqueName: \"kubernetes.io/projected/29d61a7c-cfad-454b-a105-c5e589c34488-kube-api-access-hspnm\") pod \"29d61a7c-cfad-454b-a105-c5e589c34488\" (UID: \"29d61a7c-cfad-454b-a105-c5e589c34488\") " Jan 26 16:02:09 crc kubenswrapper[4896]: I0126 16:02:09.800939 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/29d61a7c-cfad-454b-a105-c5e589c34488-config-data\") pod \"29d61a7c-cfad-454b-a105-c5e589c34488\" (UID: \"29d61a7c-cfad-454b-a105-c5e589c34488\") " Jan 26 16:02:09 crc kubenswrapper[4896]: I0126 16:02:09.808478 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29d61a7c-cfad-454b-a105-c5e589c34488-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "29d61a7c-cfad-454b-a105-c5e589c34488" (UID: "29d61a7c-cfad-454b-a105-c5e589c34488"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:02:09 crc kubenswrapper[4896]: I0126 16:02:09.811821 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29d61a7c-cfad-454b-a105-c5e589c34488-kube-api-access-hspnm" (OuterVolumeSpecName: "kube-api-access-hspnm") pod "29d61a7c-cfad-454b-a105-c5e589c34488" (UID: "29d61a7c-cfad-454b-a105-c5e589c34488"). InnerVolumeSpecName "kube-api-access-hspnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:02:09 crc kubenswrapper[4896]: I0126 16:02:09.868736 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29d61a7c-cfad-454b-a105-c5e589c34488-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29d61a7c-cfad-454b-a105-c5e589c34488" (UID: "29d61a7c-cfad-454b-a105-c5e589c34488"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:02:09 crc kubenswrapper[4896]: I0126 16:02:09.879969 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29d61a7c-cfad-454b-a105-c5e589c34488-config-data" (OuterVolumeSpecName: "config-data") pod "29d61a7c-cfad-454b-a105-c5e589c34488" (UID: "29d61a7c-cfad-454b-a105-c5e589c34488"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:02:09 crc kubenswrapper[4896]: I0126 16:02:09.903554 4896 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29d61a7c-cfad-454b-a105-c5e589c34488-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 16:02:09 crc kubenswrapper[4896]: I0126 16:02:09.903605 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hspnm\" (UniqueName: \"kubernetes.io/projected/29d61a7c-cfad-454b-a105-c5e589c34488-kube-api-access-hspnm\") on node \"crc\" DevicePath \"\"" Jan 26 16:02:09 crc kubenswrapper[4896]: I0126 16:02:09.903619 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29d61a7c-cfad-454b-a105-c5e589c34488-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:02:09 crc kubenswrapper[4896]: I0126 16:02:09.903628 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29d61a7c-cfad-454b-a105-c5e589c34488-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:02:10 crc kubenswrapper[4896]: I0126 16:02:10.079465 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5597f886c8-sbvdp" Jan 26 16:02:10 crc kubenswrapper[4896]: I0126 16:02:10.123434 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-57f84f676c-wb8k9" event={"ID":"29d61a7c-cfad-454b-a105-c5e589c34488","Type":"ContainerDied","Data":"82bff0781e3b32f9e54b567dfcb7dd5307a36b60a6e51fe056e3ae9fac1a6e83"} Jan 26 16:02:10 crc kubenswrapper[4896]: I0126 16:02:10.123514 4896 scope.go:117] "RemoveContainer" containerID="86f7741e669959726d51a5840b60721194ef0974b799cef9cc531ad6034e881e" Jan 26 16:02:10 crc kubenswrapper[4896]: I0126 16:02:10.124119 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-57f84f676c-wb8k9" Jan 26 16:02:10 crc kubenswrapper[4896]: I0126 16:02:10.135321 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5809e3c3-ef95-4db2-a2eb-16ca58b2f3e3","Type":"ContainerStarted","Data":"9d2899f34675ff970e271a7f1edd26387ac6e714ccee0fb0b93efd22028d078d"} Jan 26 16:02:10 crc kubenswrapper[4896]: I0126 16:02:10.141638 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5597f886c8-sbvdp" Jan 26 16:02:10 crc kubenswrapper[4896]: I0126 16:02:10.142151 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5597f886c8-sbvdp" event={"ID":"ab92f8c3-378a-41fe-97c0-533d45e1a4a5","Type":"ContainerDied","Data":"eec8adc70e664532012e0f3bf05659d15e47aeced1afed00d3c13a1af0a69674"} Jan 26 16:02:10 crc kubenswrapper[4896]: I0126 16:02:10.166681 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.513076479 podStartE2EDuration="48.166658256s" podCreationTimestamp="2026-01-26 16:01:22 +0000 UTC" firstStartedPulling="2026-01-26 16:01:24.665825141 +0000 UTC m=+1642.447705534" lastFinishedPulling="2026-01-26 16:02:09.319406918 +0000 UTC m=+1687.101287311" observedRunningTime="2026-01-26 16:02:10.153604872 +0000 UTC m=+1687.935485285" watchObservedRunningTime="2026-01-26 16:02:10.166658256 +0000 UTC m=+1687.948538649" Jan 26 16:02:10 crc kubenswrapper[4896]: I0126 16:02:10.187838 4896 scope.go:117] "RemoveContainer" containerID="ed56748b9e3d84a63b28cb7f116851f8eab2f9d1f05ed381fb2352fc2d4fc1d5" Jan 26 16:02:10 crc kubenswrapper[4896]: I0126 16:02:10.208963 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab92f8c3-378a-41fe-97c0-533d45e1a4a5-config-data-custom\") pod \"ab92f8c3-378a-41fe-97c0-533d45e1a4a5\" (UID: 
\"ab92f8c3-378a-41fe-97c0-533d45e1a4a5\") " Jan 26 16:02:10 crc kubenswrapper[4896]: I0126 16:02:10.209404 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l98kh\" (UniqueName: \"kubernetes.io/projected/ab92f8c3-378a-41fe-97c0-533d45e1a4a5-kube-api-access-l98kh\") pod \"ab92f8c3-378a-41fe-97c0-533d45e1a4a5\" (UID: \"ab92f8c3-378a-41fe-97c0-533d45e1a4a5\") " Jan 26 16:02:10 crc kubenswrapper[4896]: I0126 16:02:10.209601 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab92f8c3-378a-41fe-97c0-533d45e1a4a5-config-data\") pod \"ab92f8c3-378a-41fe-97c0-533d45e1a4a5\" (UID: \"ab92f8c3-378a-41fe-97c0-533d45e1a4a5\") " Jan 26 16:02:10 crc kubenswrapper[4896]: I0126 16:02:10.209793 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab92f8c3-378a-41fe-97c0-533d45e1a4a5-combined-ca-bundle\") pod \"ab92f8c3-378a-41fe-97c0-533d45e1a4a5\" (UID: \"ab92f8c3-378a-41fe-97c0-533d45e1a4a5\") " Jan 26 16:02:10 crc kubenswrapper[4896]: I0126 16:02:10.217175 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-57f84f676c-wb8k9"] Jan 26 16:02:10 crc kubenswrapper[4896]: I0126 16:02:10.221170 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab92f8c3-378a-41fe-97c0-533d45e1a4a5-kube-api-access-l98kh" (OuterVolumeSpecName: "kube-api-access-l98kh") pod "ab92f8c3-378a-41fe-97c0-533d45e1a4a5" (UID: "ab92f8c3-378a-41fe-97c0-533d45e1a4a5"). InnerVolumeSpecName "kube-api-access-l98kh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:02:10 crc kubenswrapper[4896]: I0126 16:02:10.221328 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab92f8c3-378a-41fe-97c0-533d45e1a4a5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ab92f8c3-378a-41fe-97c0-533d45e1a4a5" (UID: "ab92f8c3-378a-41fe-97c0-533d45e1a4a5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:02:10 crc kubenswrapper[4896]: I0126 16:02:10.227982 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-57f84f676c-wb8k9"] Jan 26 16:02:10 crc kubenswrapper[4896]: I0126 16:02:10.270705 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab92f8c3-378a-41fe-97c0-533d45e1a4a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab92f8c3-378a-41fe-97c0-533d45e1a4a5" (UID: "ab92f8c3-378a-41fe-97c0-533d45e1a4a5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:02:10 crc kubenswrapper[4896]: I0126 16:02:10.293757 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab92f8c3-378a-41fe-97c0-533d45e1a4a5-config-data" (OuterVolumeSpecName: "config-data") pod "ab92f8c3-378a-41fe-97c0-533d45e1a4a5" (UID: "ab92f8c3-378a-41fe-97c0-533d45e1a4a5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:02:10 crc kubenswrapper[4896]: I0126 16:02:10.313273 4896 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab92f8c3-378a-41fe-97c0-533d45e1a4a5-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 16:02:10 crc kubenswrapper[4896]: I0126 16:02:10.313317 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l98kh\" (UniqueName: \"kubernetes.io/projected/ab92f8c3-378a-41fe-97c0-533d45e1a4a5-kube-api-access-l98kh\") on node \"crc\" DevicePath \"\"" Jan 26 16:02:10 crc kubenswrapper[4896]: I0126 16:02:10.313332 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab92f8c3-378a-41fe-97c0-533d45e1a4a5-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:02:10 crc kubenswrapper[4896]: I0126 16:02:10.313344 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab92f8c3-378a-41fe-97c0-533d45e1a4a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:02:10 crc kubenswrapper[4896]: I0126 16:02:10.503820 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5597f886c8-sbvdp"] Jan 26 16:02:10 crc kubenswrapper[4896]: I0126 16:02:10.519413 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-5597f886c8-sbvdp"] Jan 26 16:02:10 crc kubenswrapper[4896]: I0126 16:02:10.772977 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29d61a7c-cfad-454b-a105-c5e589c34488" path="/var/lib/kubelet/pods/29d61a7c-cfad-454b-a105-c5e589c34488/volumes" Jan 26 16:02:10 crc kubenswrapper[4896]: I0126 16:02:10.774994 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab92f8c3-378a-41fe-97c0-533d45e1a4a5" path="/var/lib/kubelet/pods/ab92f8c3-378a-41fe-97c0-533d45e1a4a5/volumes" Jan 26 16:02:11 crc 
kubenswrapper[4896]: E0126 16:02:11.066180 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0479be0e_3e42_4009_8728_ed8607ac7eaf.slice/crio-d58c14114e65309815f8aa2f6c3b328b49488c43500d0e43430394a67f58cf8f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0479be0e_3e42_4009_8728_ed8607ac7eaf.slice/crio-conmon-d58c14114e65309815f8aa2f6c3b328b49488c43500d0e43430394a67f58cf8f.scope\": RecentStats: unable to find data in memory cache]" Jan 26 16:02:11 crc kubenswrapper[4896]: I0126 16:02:11.195845 4896 generic.go:334] "Generic (PLEG): container finished" podID="0479be0e-3e42-4009-8728-ed8607ac7eaf" containerID="d58c14114e65309815f8aa2f6c3b328b49488c43500d0e43430394a67f58cf8f" exitCode=0 Jan 26 16:02:11 crc kubenswrapper[4896]: I0126 16:02:11.195986 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0479be0e-3e42-4009-8728-ed8607ac7eaf","Type":"ContainerDied","Data":"d58c14114e65309815f8aa2f6c3b328b49488c43500d0e43430394a67f58cf8f"} Jan 26 16:02:11 crc kubenswrapper[4896]: I0126 16:02:11.473757 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:02:11 crc kubenswrapper[4896]: I0126 16:02:11.645690 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0479be0e-3e42-4009-8728-ed8607ac7eaf-scripts\") pod \"0479be0e-3e42-4009-8728-ed8607ac7eaf\" (UID: \"0479be0e-3e42-4009-8728-ed8607ac7eaf\") " Jan 26 16:02:11 crc kubenswrapper[4896]: I0126 16:02:11.645748 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0479be0e-3e42-4009-8728-ed8607ac7eaf-config-data\") pod \"0479be0e-3e42-4009-8728-ed8607ac7eaf\" (UID: \"0479be0e-3e42-4009-8728-ed8607ac7eaf\") " Jan 26 16:02:11 crc kubenswrapper[4896]: I0126 16:02:11.645998 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0479be0e-3e42-4009-8728-ed8607ac7eaf-log-httpd\") pod \"0479be0e-3e42-4009-8728-ed8607ac7eaf\" (UID: \"0479be0e-3e42-4009-8728-ed8607ac7eaf\") " Jan 26 16:02:11 crc kubenswrapper[4896]: I0126 16:02:11.646051 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0479be0e-3e42-4009-8728-ed8607ac7eaf-run-httpd\") pod \"0479be0e-3e42-4009-8728-ed8607ac7eaf\" (UID: \"0479be0e-3e42-4009-8728-ed8607ac7eaf\") " Jan 26 16:02:11 crc kubenswrapper[4896]: I0126 16:02:11.646108 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0479be0e-3e42-4009-8728-ed8607ac7eaf-combined-ca-bundle\") pod \"0479be0e-3e42-4009-8728-ed8607ac7eaf\" (UID: \"0479be0e-3e42-4009-8728-ed8607ac7eaf\") " Jan 26 16:02:11 crc kubenswrapper[4896]: I0126 16:02:11.646153 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/0479be0e-3e42-4009-8728-ed8607ac7eaf-sg-core-conf-yaml\") pod \"0479be0e-3e42-4009-8728-ed8607ac7eaf\" (UID: \"0479be0e-3e42-4009-8728-ed8607ac7eaf\") " Jan 26 16:02:11 crc kubenswrapper[4896]: I0126 16:02:11.646172 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqkz4\" (UniqueName: \"kubernetes.io/projected/0479be0e-3e42-4009-8728-ed8607ac7eaf-kube-api-access-cqkz4\") pod \"0479be0e-3e42-4009-8728-ed8607ac7eaf\" (UID: \"0479be0e-3e42-4009-8728-ed8607ac7eaf\") " Jan 26 16:02:11 crc kubenswrapper[4896]: I0126 16:02:11.647106 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0479be0e-3e42-4009-8728-ed8607ac7eaf-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0479be0e-3e42-4009-8728-ed8607ac7eaf" (UID: "0479be0e-3e42-4009-8728-ed8607ac7eaf"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:02:11 crc kubenswrapper[4896]: I0126 16:02:11.651057 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0479be0e-3e42-4009-8728-ed8607ac7eaf-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0479be0e-3e42-4009-8728-ed8607ac7eaf" (UID: "0479be0e-3e42-4009-8728-ed8607ac7eaf"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:02:11 crc kubenswrapper[4896]: I0126 16:02:11.665183 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0479be0e-3e42-4009-8728-ed8607ac7eaf-scripts" (OuterVolumeSpecName: "scripts") pod "0479be0e-3e42-4009-8728-ed8607ac7eaf" (UID: "0479be0e-3e42-4009-8728-ed8607ac7eaf"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:02:11 crc kubenswrapper[4896]: I0126 16:02:11.672800 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0479be0e-3e42-4009-8728-ed8607ac7eaf-kube-api-access-cqkz4" (OuterVolumeSpecName: "kube-api-access-cqkz4") pod "0479be0e-3e42-4009-8728-ed8607ac7eaf" (UID: "0479be0e-3e42-4009-8728-ed8607ac7eaf"). InnerVolumeSpecName "kube-api-access-cqkz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:02:11 crc kubenswrapper[4896]: I0126 16:02:11.698555 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0479be0e-3e42-4009-8728-ed8607ac7eaf-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0479be0e-3e42-4009-8728-ed8607ac7eaf" (UID: "0479be0e-3e42-4009-8728-ed8607ac7eaf"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:02:11 crc kubenswrapper[4896]: I0126 16:02:11.751607 4896 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0479be0e-3e42-4009-8728-ed8607ac7eaf-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:02:11 crc kubenswrapper[4896]: I0126 16:02:11.751655 4896 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0479be0e-3e42-4009-8728-ed8607ac7eaf-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 16:02:11 crc kubenswrapper[4896]: I0126 16:02:11.751667 4896 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0479be0e-3e42-4009-8728-ed8607ac7eaf-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 16:02:11 crc kubenswrapper[4896]: I0126 16:02:11.751687 4896 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0479be0e-3e42-4009-8728-ed8607ac7eaf-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" 
Jan 26 16:02:11 crc kubenswrapper[4896]: I0126 16:02:11.751700 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqkz4\" (UniqueName: \"kubernetes.io/projected/0479be0e-3e42-4009-8728-ed8607ac7eaf-kube-api-access-cqkz4\") on node \"crc\" DevicePath \"\"" Jan 26 16:02:11 crc kubenswrapper[4896]: I0126 16:02:11.774553 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 26 16:02:11 crc kubenswrapper[4896]: I0126 16:02:11.824461 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0479be0e-3e42-4009-8728-ed8607ac7eaf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0479be0e-3e42-4009-8728-ed8607ac7eaf" (UID: "0479be0e-3e42-4009-8728-ed8607ac7eaf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:02:11 crc kubenswrapper[4896]: I0126 16:02:11.855306 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0479be0e-3e42-4009-8728-ed8607ac7eaf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:02:11 crc kubenswrapper[4896]: I0126 16:02:11.892320 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0479be0e-3e42-4009-8728-ed8607ac7eaf-config-data" (OuterVolumeSpecName: "config-data") pod "0479be0e-3e42-4009-8728-ed8607ac7eaf" (UID: "0479be0e-3e42-4009-8728-ed8607ac7eaf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:02:11 crc kubenswrapper[4896]: I0126 16:02:11.941649 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 26 16:02:11 crc kubenswrapper[4896]: I0126 16:02:11.941696 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 26 16:02:11 crc kubenswrapper[4896]: I0126 16:02:11.958060 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0479be0e-3e42-4009-8728-ed8607ac7eaf-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.001241 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.004969 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.225506 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-665cc7757b-8rh2l"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.226141 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0479be0e-3e42-4009-8728-ed8607ac7eaf","Type":"ContainerDied","Data":"9c4357e08cecb6f596c2f68ed567b7e0ac7909a578b508c65ddd95e1fa9ccd9d"}
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.226197 4896 scope.go:117] "RemoveContainer" containerID="b91f1d24bd41aa97dbd80ee8caa271299eade592da41bef18add8fd7fc65366d"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.226560 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.228012 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.228045 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.328313 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-6f79bf9b96-md4vg"]
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.328636 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-6f79bf9b96-md4vg" podUID="9be68e33-e343-492f-913b-098163a26f87" containerName="heat-engine" containerID="cri-o://66eeae89b1c85b83f59405699e8b43af4213e2d6b1aaa4676af29b96f8f57afe" gracePeriod=60
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.343686 4896 scope.go:117] "RemoveContainer" containerID="98edb0b5c8bb7ddd262dc77c659e3523889d84fca18cdb6fa8fb046094b800b9"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.362305 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.372509 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.385616 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 26 16:02:12 crc kubenswrapper[4896]: E0126 16:02:12.388359 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab92f8c3-378a-41fe-97c0-533d45e1a4a5" containerName="heat-cfnapi"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.388391 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab92f8c3-378a-41fe-97c0-533d45e1a4a5" containerName="heat-cfnapi"
Jan 26 16:02:12 crc kubenswrapper[4896]: E0126 16:02:12.388404 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab92f8c3-378a-41fe-97c0-533d45e1a4a5" containerName="heat-cfnapi"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.388411 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab92f8c3-378a-41fe-97c0-533d45e1a4a5" containerName="heat-cfnapi"
Jan 26 16:02:12 crc kubenswrapper[4896]: E0126 16:02:12.388435 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0479be0e-3e42-4009-8728-ed8607ac7eaf" containerName="ceilometer-central-agent"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.388441 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="0479be0e-3e42-4009-8728-ed8607ac7eaf" containerName="ceilometer-central-agent"
Jan 26 16:02:12 crc kubenswrapper[4896]: E0126 16:02:12.388451 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0479be0e-3e42-4009-8728-ed8607ac7eaf" containerName="ceilometer-notification-agent"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.388458 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="0479be0e-3e42-4009-8728-ed8607ac7eaf" containerName="ceilometer-notification-agent"
Jan 26 16:02:12 crc kubenswrapper[4896]: E0126 16:02:12.388471 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0479be0e-3e42-4009-8728-ed8607ac7eaf" containerName="sg-core"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.388477 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="0479be0e-3e42-4009-8728-ed8607ac7eaf" containerName="sg-core"
Jan 26 16:02:12 crc kubenswrapper[4896]: E0126 16:02:12.388491 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29d61a7c-cfad-454b-a105-c5e589c34488" containerName="heat-api"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.388497 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="29d61a7c-cfad-454b-a105-c5e589c34488" containerName="heat-api"
Jan 26 16:02:12 crc kubenswrapper[4896]: E0126 16:02:12.388510 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29d61a7c-cfad-454b-a105-c5e589c34488" containerName="heat-api"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.388516 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="29d61a7c-cfad-454b-a105-c5e589c34488" containerName="heat-api"
Jan 26 16:02:12 crc kubenswrapper[4896]: E0126 16:02:12.388530 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0479be0e-3e42-4009-8728-ed8607ac7eaf" containerName="proxy-httpd"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.388535 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="0479be0e-3e42-4009-8728-ed8607ac7eaf" containerName="proxy-httpd"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.388807 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="0479be0e-3e42-4009-8728-ed8607ac7eaf" containerName="ceilometer-notification-agent"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.388826 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab92f8c3-378a-41fe-97c0-533d45e1a4a5" containerName="heat-cfnapi"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.388835 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab92f8c3-378a-41fe-97c0-533d45e1a4a5" containerName="heat-cfnapi"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.388845 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="0479be0e-3e42-4009-8728-ed8607ac7eaf" containerName="ceilometer-central-agent"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.388858 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="29d61a7c-cfad-454b-a105-c5e589c34488" containerName="heat-api"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.388869 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="0479be0e-3e42-4009-8728-ed8607ac7eaf" containerName="sg-core"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.388880 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="29d61a7c-cfad-454b-a105-c5e589c34488" containerName="heat-api"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.388888 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="0479be0e-3e42-4009-8728-ed8607ac7eaf" containerName="proxy-httpd"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.391036 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.392188 4896 scope.go:117] "RemoveContainer" containerID="bd6432bf9b73a1fe741ba740674f961655868cd10553f492a5291ec14e125f8b"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.393569 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.406241 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.413783 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.473473 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5fb76b00-4d78-4fe3-a01a-78b4622040e4-run-httpd\") pod \"ceilometer-0\" (UID: \"5fb76b00-4d78-4fe3-a01a-78b4622040e4\") " pod="openstack/ceilometer-0"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.473829 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5fb76b00-4d78-4fe3-a01a-78b4622040e4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5fb76b00-4d78-4fe3-a01a-78b4622040e4\") " pod="openstack/ceilometer-0"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.474076 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5fb76b00-4d78-4fe3-a01a-78b4622040e4-log-httpd\") pod \"ceilometer-0\" (UID: \"5fb76b00-4d78-4fe3-a01a-78b4622040e4\") " pod="openstack/ceilometer-0"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.474379 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fb76b00-4d78-4fe3-a01a-78b4622040e4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5fb76b00-4d78-4fe3-a01a-78b4622040e4\") " pod="openstack/ceilometer-0"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.474527 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fb76b00-4d78-4fe3-a01a-78b4622040e4-scripts\") pod \"ceilometer-0\" (UID: \"5fb76b00-4d78-4fe3-a01a-78b4622040e4\") " pod="openstack/ceilometer-0"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.474782 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fb76b00-4d78-4fe3-a01a-78b4622040e4-config-data\") pod \"ceilometer-0\" (UID: \"5fb76b00-4d78-4fe3-a01a-78b4622040e4\") " pod="openstack/ceilometer-0"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.474923 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcq65\" (UniqueName: \"kubernetes.io/projected/5fb76b00-4d78-4fe3-a01a-78b4622040e4-kube-api-access-tcq65\") pod \"ceilometer-0\" (UID: \"5fb76b00-4d78-4fe3-a01a-78b4622040e4\") " pod="openstack/ceilometer-0"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.500454 4896 scope.go:117] "RemoveContainer" containerID="d58c14114e65309815f8aa2f6c3b328b49488c43500d0e43430394a67f58cf8f"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.576706 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fb76b00-4d78-4fe3-a01a-78b4622040e4-config-data\") pod \"ceilometer-0\" (UID: \"5fb76b00-4d78-4fe3-a01a-78b4622040e4\") " pod="openstack/ceilometer-0"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.576756 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcq65\" (UniqueName: \"kubernetes.io/projected/5fb76b00-4d78-4fe3-a01a-78b4622040e4-kube-api-access-tcq65\") pod \"ceilometer-0\" (UID: \"5fb76b00-4d78-4fe3-a01a-78b4622040e4\") " pod="openstack/ceilometer-0"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.576811 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5fb76b00-4d78-4fe3-a01a-78b4622040e4-run-httpd\") pod \"ceilometer-0\" (UID: \"5fb76b00-4d78-4fe3-a01a-78b4622040e4\") " pod="openstack/ceilometer-0"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.576833 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5fb76b00-4d78-4fe3-a01a-78b4622040e4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5fb76b00-4d78-4fe3-a01a-78b4622040e4\") " pod="openstack/ceilometer-0"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.576911 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5fb76b00-4d78-4fe3-a01a-78b4622040e4-log-httpd\") pod \"ceilometer-0\" (UID: \"5fb76b00-4d78-4fe3-a01a-78b4622040e4\") " pod="openstack/ceilometer-0"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.577008 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fb76b00-4d78-4fe3-a01a-78b4622040e4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5fb76b00-4d78-4fe3-a01a-78b4622040e4\") " pod="openstack/ceilometer-0"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.577027 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fb76b00-4d78-4fe3-a01a-78b4622040e4-scripts\") pod \"ceilometer-0\" (UID: \"5fb76b00-4d78-4fe3-a01a-78b4622040e4\") " pod="openstack/ceilometer-0"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.577912 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5fb76b00-4d78-4fe3-a01a-78b4622040e4-log-httpd\") pod \"ceilometer-0\" (UID: \"5fb76b00-4d78-4fe3-a01a-78b4622040e4\") " pod="openstack/ceilometer-0"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.578075 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5fb76b00-4d78-4fe3-a01a-78b4622040e4-run-httpd\") pod \"ceilometer-0\" (UID: \"5fb76b00-4d78-4fe3-a01a-78b4622040e4\") " pod="openstack/ceilometer-0"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.583687 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fb76b00-4d78-4fe3-a01a-78b4622040e4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5fb76b00-4d78-4fe3-a01a-78b4622040e4\") " pod="openstack/ceilometer-0"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.584395 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fb76b00-4d78-4fe3-a01a-78b4622040e4-scripts\") pod \"ceilometer-0\" (UID: \"5fb76b00-4d78-4fe3-a01a-78b4622040e4\") " pod="openstack/ceilometer-0"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.585210 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5fb76b00-4d78-4fe3-a01a-78b4622040e4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5fb76b00-4d78-4fe3-a01a-78b4622040e4\") " pod="openstack/ceilometer-0"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.587431 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fb76b00-4d78-4fe3-a01a-78b4622040e4-config-data\") pod \"ceilometer-0\" (UID: \"5fb76b00-4d78-4fe3-a01a-78b4622040e4\") " pod="openstack/ceilometer-0"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.611201 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcq65\" (UniqueName: \"kubernetes.io/projected/5fb76b00-4d78-4fe3-a01a-78b4622040e4-kube-api-access-tcq65\") pod \"ceilometer-0\" (UID: \"5fb76b00-4d78-4fe3-a01a-78b4622040e4\") " pod="openstack/ceilometer-0"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.778778 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0479be0e-3e42-4009-8728-ed8607ac7eaf" path="/var/lib/kubelet/pods/0479be0e-3e42-4009-8728-ed8607ac7eaf/volumes"
Jan 26 16:02:12 crc kubenswrapper[4896]: I0126 16:02:12.801856 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 16:02:13 crc kubenswrapper[4896]: I0126 16:02:13.221412 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Jan 26 16:02:13 crc kubenswrapper[4896]: I0126 16:02:13.466953 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 26 16:02:13 crc kubenswrapper[4896]: I0126 16:02:13.467252 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d6ef4bfd-df0d-434c-b869-5890c7950600" containerName="glance-log" containerID="cri-o://b41bb5e603893cc3aa9c16e94a3c02abfb8c32fb59fca28fb1f3b05a94ec03b6" gracePeriod=30
Jan 26 16:02:13 crc kubenswrapper[4896]: I0126 16:02:13.467418 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d6ef4bfd-df0d-434c-b869-5890c7950600" containerName="glance-httpd" containerID="cri-o://54f84ac432b72822fa3bca3258b874d0b0748c5abd656e9d98520dff062c7ac3" gracePeriod=30
Jan 26 16:02:13 crc kubenswrapper[4896]: I0126 16:02:13.538840 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 16:02:14 crc kubenswrapper[4896]: I0126 16:02:14.317909 4896 generic.go:334] "Generic (PLEG): container finished" podID="d6ef4bfd-df0d-434c-b869-5890c7950600" containerID="b41bb5e603893cc3aa9c16e94a3c02abfb8c32fb59fca28fb1f3b05a94ec03b6" exitCode=143
Jan 26 16:02:14 crc kubenswrapper[4896]: I0126 16:02:14.317992 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d6ef4bfd-df0d-434c-b869-5890c7950600","Type":"ContainerDied","Data":"b41bb5e603893cc3aa9c16e94a3c02abfb8c32fb59fca28fb1f3b05a94ec03b6"}
Jan 26 16:02:14 crc kubenswrapper[4896]: I0126 16:02:14.320513 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5fb76b00-4d78-4fe3-a01a-78b4622040e4","Type":"ContainerStarted","Data":"bd209784af1624408418ecfd34f1bc7a28edc355c93b3bfee70f99f56e615c89"}
Jan 26 16:02:14 crc kubenswrapper[4896]: I0126 16:02:14.320553 4896 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 26 16:02:14 crc kubenswrapper[4896]: E0126 16:02:14.451677 4896 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="66eeae89b1c85b83f59405699e8b43af4213e2d6b1aaa4676af29b96f8f57afe" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Jan 26 16:02:14 crc kubenswrapper[4896]: E0126 16:02:14.455636 4896 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="66eeae89b1c85b83f59405699e8b43af4213e2d6b1aaa4676af29b96f8f57afe" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Jan 26 16:02:14 crc kubenswrapper[4896]: E0126 16:02:14.456910 4896 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="66eeae89b1c85b83f59405699e8b43af4213e2d6b1aaa4676af29b96f8f57afe" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Jan 26 16:02:14 crc kubenswrapper[4896]: E0126 16:02:14.456954 4896 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-6f79bf9b96-md4vg" podUID="9be68e33-e343-492f-913b-098163a26f87" containerName="heat-engine"
Jan 26 16:02:14 crc kubenswrapper[4896]: I0126 16:02:14.735196 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-bb9fc"]
Jan 26 16:02:14 crc kubenswrapper[4896]: I0126 16:02:14.737026 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-bb9fc"
Jan 26 16:02:14 crc kubenswrapper[4896]: I0126 16:02:14.791978 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-bb9fc"]
Jan 26 16:02:14 crc kubenswrapper[4896]: I0126 16:02:14.856705 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-zmxcl"]
Jan 26 16:02:14 crc kubenswrapper[4896]: I0126 16:02:14.859199 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-zmxcl"
Jan 26 16:02:14 crc kubenswrapper[4896]: I0126 16:02:14.867910 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-zmxcl"]
Jan 26 16:02:14 crc kubenswrapper[4896]: I0126 16:02:14.869662 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e6e5a4d-74d6-414a-ace8-b322f12e7e4c-operator-scripts\") pod \"nova-api-db-create-bb9fc\" (UID: \"6e6e5a4d-74d6-414a-ace8-b322f12e7e4c\") " pod="openstack/nova-api-db-create-bb9fc"
Jan 26 16:02:14 crc kubenswrapper[4896]: I0126 16:02:14.869714 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4qbd\" (UniqueName: \"kubernetes.io/projected/6e6e5a4d-74d6-414a-ace8-b322f12e7e4c-kube-api-access-x4qbd\") pod \"nova-api-db-create-bb9fc\" (UID: \"6e6e5a4d-74d6-414a-ace8-b322f12e7e4c\") " pod="openstack/nova-api-db-create-bb9fc"
Jan 26 16:02:14 crc kubenswrapper[4896]: I0126 16:02:14.957222 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-93c3-account-create-update-rpzrm"]
Jan 26 16:02:14 crc kubenswrapper[4896]: I0126 16:02:14.959299 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-93c3-account-create-update-rpzrm"
Jan 26 16:02:14 crc kubenswrapper[4896]: I0126 16:02:14.969002 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Jan 26 16:02:14 crc kubenswrapper[4896]: I0126 16:02:14.971751 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-93c3-account-create-update-rpzrm"]
Jan 26 16:02:14 crc kubenswrapper[4896]: I0126 16:02:14.971932 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb44j\" (UniqueName: \"kubernetes.io/projected/c385b05e-d56d-4b07-8d3a-f96399936528-kube-api-access-wb44j\") pod \"nova-cell0-db-create-zmxcl\" (UID: \"c385b05e-d56d-4b07-8d3a-f96399936528\") " pod="openstack/nova-cell0-db-create-zmxcl"
Jan 26 16:02:14 crc kubenswrapper[4896]: I0126 16:02:14.972222 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c385b05e-d56d-4b07-8d3a-f96399936528-operator-scripts\") pod \"nova-cell0-db-create-zmxcl\" (UID: \"c385b05e-d56d-4b07-8d3a-f96399936528\") " pod="openstack/nova-cell0-db-create-zmxcl"
Jan 26 16:02:14 crc kubenswrapper[4896]: I0126 16:02:14.972336 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e6e5a4d-74d6-414a-ace8-b322f12e7e4c-operator-scripts\") pod \"nova-api-db-create-bb9fc\" (UID: \"6e6e5a4d-74d6-414a-ace8-b322f12e7e4c\") " pod="openstack/nova-api-db-create-bb9fc"
Jan 26 16:02:14 crc kubenswrapper[4896]: I0126 16:02:14.972391 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4qbd\" (UniqueName: \"kubernetes.io/projected/6e6e5a4d-74d6-414a-ace8-b322f12e7e4c-kube-api-access-x4qbd\") pod \"nova-api-db-create-bb9fc\" (UID: \"6e6e5a4d-74d6-414a-ace8-b322f12e7e4c\") " pod="openstack/nova-api-db-create-bb9fc"
Jan 26 16:02:14 crc kubenswrapper[4896]: I0126 16:02:14.973564 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e6e5a4d-74d6-414a-ace8-b322f12e7e4c-operator-scripts\") pod \"nova-api-db-create-bb9fc\" (UID: \"6e6e5a4d-74d6-414a-ace8-b322f12e7e4c\") " pod="openstack/nova-api-db-create-bb9fc"
Jan 26 16:02:15 crc kubenswrapper[4896]: I0126 16:02:15.009358 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4qbd\" (UniqueName: \"kubernetes.io/projected/6e6e5a4d-74d6-414a-ace8-b322f12e7e4c-kube-api-access-x4qbd\") pod \"nova-api-db-create-bb9fc\" (UID: \"6e6e5a4d-74d6-414a-ace8-b322f12e7e4c\") " pod="openstack/nova-api-db-create-bb9fc"
Jan 26 16:02:15 crc kubenswrapper[4896]: I0126 16:02:15.185464 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-bb9fc"
Jan 26 16:02:15 crc kubenswrapper[4896]: I0126 16:02:15.187479 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb44j\" (UniqueName: \"kubernetes.io/projected/c385b05e-d56d-4b07-8d3a-f96399936528-kube-api-access-wb44j\") pod \"nova-cell0-db-create-zmxcl\" (UID: \"c385b05e-d56d-4b07-8d3a-f96399936528\") " pod="openstack/nova-cell0-db-create-zmxcl"
Jan 26 16:02:15 crc kubenswrapper[4896]: I0126 16:02:15.188447 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c385b05e-d56d-4b07-8d3a-f96399936528-operator-scripts\") pod \"nova-cell0-db-create-zmxcl\" (UID: \"c385b05e-d56d-4b07-8d3a-f96399936528\") " pod="openstack/nova-cell0-db-create-zmxcl"
Jan 26 16:02:15 crc kubenswrapper[4896]: I0126 16:02:15.203229 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-qw854"]
Jan 26 16:02:15 crc kubenswrapper[4896]: I0126 16:02:15.211253 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-qw854"
Jan 26 16:02:15 crc kubenswrapper[4896]: I0126 16:02:15.212748 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c385b05e-d56d-4b07-8d3a-f96399936528-operator-scripts\") pod \"nova-cell0-db-create-zmxcl\" (UID: \"c385b05e-d56d-4b07-8d3a-f96399936528\") " pod="openstack/nova-cell0-db-create-zmxcl"
Jan 26 16:02:15 crc kubenswrapper[4896]: I0126 16:02:15.247541 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-qw854"]
Jan 26 16:02:15 crc kubenswrapper[4896]: I0126 16:02:15.258476 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb44j\" (UniqueName: \"kubernetes.io/projected/c385b05e-d56d-4b07-8d3a-f96399936528-kube-api-access-wb44j\") pod \"nova-cell0-db-create-zmxcl\" (UID: \"c385b05e-d56d-4b07-8d3a-f96399936528\") " pod="openstack/nova-cell0-db-create-zmxcl"
Jan 26 16:02:15 crc kubenswrapper[4896]: I0126 16:02:15.306506 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwc7k\" (UniqueName: \"kubernetes.io/projected/0a5714bb-8439-4dbc-974b-bf04c6537695-kube-api-access-cwc7k\") pod \"nova-api-93c3-account-create-update-rpzrm\" (UID: \"0a5714bb-8439-4dbc-974b-bf04c6537695\") " pod="openstack/nova-api-93c3-account-create-update-rpzrm"
Jan 26 16:02:15 crc kubenswrapper[4896]: I0126 16:02:15.306906 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a5714bb-8439-4dbc-974b-bf04c6537695-operator-scripts\") pod \"nova-api-93c3-account-create-update-rpzrm\" (UID: \"0a5714bb-8439-4dbc-974b-bf04c6537695\") " pod="openstack/nova-api-93c3-account-create-update-rpzrm"
Jan 26 16:02:15 crc kubenswrapper[4896]: I0126 16:02:15.307074 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8531512a-bfdc-47ff-ae60-182536cad417-operator-scripts\") pod \"nova-cell1-db-create-qw854\" (UID: \"8531512a-bfdc-47ff-ae60-182536cad417\") " pod="openstack/nova-cell1-db-create-qw854"
Jan 26 16:02:15 crc kubenswrapper[4896]: I0126 16:02:15.307217 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76s9v\" (UniqueName: \"kubernetes.io/projected/8531512a-bfdc-47ff-ae60-182536cad417-kube-api-access-76s9v\") pod \"nova-cell1-db-create-qw854\" (UID: \"8531512a-bfdc-47ff-ae60-182536cad417\") " pod="openstack/nova-cell1-db-create-qw854"
Jan 26 16:02:15 crc kubenswrapper[4896]: I0126 16:02:15.384789 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5fb76b00-4d78-4fe3-a01a-78b4622040e4","Type":"ContainerStarted","Data":"97da7c66fb1cefd52174b393ae0e840fd46b93e14fbd5856a8ec788441018366"}
Jan 26 16:02:15 crc kubenswrapper[4896]: I0126 16:02:15.411607 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8531512a-bfdc-47ff-ae60-182536cad417-operator-scripts\") pod \"nova-cell1-db-create-qw854\" (UID: \"8531512a-bfdc-47ff-ae60-182536cad417\") " pod="openstack/nova-cell1-db-create-qw854"
Jan 26 16:02:15 crc kubenswrapper[4896]: I0126 16:02:15.411767 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76s9v\" (UniqueName: \"kubernetes.io/projected/8531512a-bfdc-47ff-ae60-182536cad417-kube-api-access-76s9v\") pod \"nova-cell1-db-create-qw854\" (UID: \"8531512a-bfdc-47ff-ae60-182536cad417\") " pod="openstack/nova-cell1-db-create-qw854"
Jan 26 16:02:15 crc kubenswrapper[4896]: I0126 16:02:15.411890 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwc7k\" (UniqueName: \"kubernetes.io/projected/0a5714bb-8439-4dbc-974b-bf04c6537695-kube-api-access-cwc7k\") pod \"nova-api-93c3-account-create-update-rpzrm\" (UID: \"0a5714bb-8439-4dbc-974b-bf04c6537695\") " pod="openstack/nova-api-93c3-account-create-update-rpzrm"
Jan 26 16:02:15 crc kubenswrapper[4896]: I0126 16:02:15.411983 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a5714bb-8439-4dbc-974b-bf04c6537695-operator-scripts\") pod \"nova-api-93c3-account-create-update-rpzrm\" (UID: \"0a5714bb-8439-4dbc-974b-bf04c6537695\") " pod="openstack/nova-api-93c3-account-create-update-rpzrm"
Jan 26 16:02:15 crc kubenswrapper[4896]: I0126 16:02:15.418376 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8531512a-bfdc-47ff-ae60-182536cad417-operator-scripts\") pod \"nova-cell1-db-create-qw854\" (UID: \"8531512a-bfdc-47ff-ae60-182536cad417\") " pod="openstack/nova-cell1-db-create-qw854"
Jan 26 16:02:15 crc kubenswrapper[4896]: I0126 16:02:15.458709 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76s9v\" (UniqueName: \"kubernetes.io/projected/8531512a-bfdc-47ff-ae60-182536cad417-kube-api-access-76s9v\") pod \"nova-cell1-db-create-qw854\" (UID: \"8531512a-bfdc-47ff-ae60-182536cad417\") " pod="openstack/nova-cell1-db-create-qw854"
Jan 26 16:02:15 crc kubenswrapper[4896]: I0126 16:02:15.481891 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwc7k\" (UniqueName: \"kubernetes.io/projected/0a5714bb-8439-4dbc-974b-bf04c6537695-kube-api-access-cwc7k\") pod \"nova-api-93c3-account-create-update-rpzrm\" (UID: \"0a5714bb-8439-4dbc-974b-bf04c6537695\") " pod="openstack/nova-api-93c3-account-create-update-rpzrm"
Jan 26 16:02:15 crc kubenswrapper[4896]: I0126 16:02:15.488221 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a5714bb-8439-4dbc-974b-bf04c6537695-operator-scripts\") pod \"nova-api-93c3-account-create-update-rpzrm\" (UID: \"0a5714bb-8439-4dbc-974b-bf04c6537695\") " pod="openstack/nova-api-93c3-account-create-update-rpzrm"
Jan 26 16:02:15 crc kubenswrapper[4896]: I0126 16:02:15.499929 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-zmxcl"
Jan 26 16:02:15 crc kubenswrapper[4896]: I0126 16:02:15.537912 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-qw854"
Jan 26 16:02:15 crc kubenswrapper[4896]: I0126 16:02:15.544565 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-df69-account-create-update-rns9n"]
Jan 26 16:02:15 crc kubenswrapper[4896]: I0126 16:02:15.547719 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-df69-account-create-update-rns9n"
Jan 26 16:02:15 crc kubenswrapper[4896]: I0126 16:02:15.556733 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Jan 26 16:02:15 crc kubenswrapper[4896]: I0126 16:02:15.607013 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-93c3-account-create-update-rpzrm"
Jan 26 16:02:15 crc kubenswrapper[4896]: I0126 16:02:15.630917 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-df69-account-create-update-rns9n"]
Jan 26 16:02:15 crc kubenswrapper[4896]: I0126 16:02:15.649292 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5925428c-5669-44dc-92f7-e182c113fb11-operator-scripts\") pod \"nova-cell0-df69-account-create-update-rns9n\" (UID: \"5925428c-5669-44dc-92f7-e182c113fb11\") " pod="openstack/nova-cell0-df69-account-create-update-rns9n"
Jan 26 16:02:15 crc kubenswrapper[4896]: I0126 16:02:15.649400 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsckr\" (UniqueName: \"kubernetes.io/projected/5925428c-5669-44dc-92f7-e182c113fb11-kube-api-access-gsckr\") pod \"nova-cell0-df69-account-create-update-rns9n\" (UID: \"5925428c-5669-44dc-92f7-e182c113fb11\") " pod="openstack/nova-cell0-df69-account-create-update-rns9n"
Jan 26 16:02:15 crc kubenswrapper[4896]: I0126 16:02:15.664710 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-0553-account-create-update-swsct"]
Jan 26 16:02:15 crc kubenswrapper[4896]: I0126 16:02:15.666937 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-0553-account-create-update-swsct"
Jan 26 16:02:15 crc kubenswrapper[4896]: I0126 16:02:15.669904 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Jan 26 16:02:15 crc kubenswrapper[4896]: I0126 16:02:15.690398 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-0553-account-create-update-swsct"]
Jan 26 16:02:15 crc kubenswrapper[4896]: I0126 16:02:15.752920 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhkkh\" (UniqueName: \"kubernetes.io/projected/3db41110-40d9-49ad-9ba5-a7e94e433693-kube-api-access-bhkkh\") pod \"nova-cell1-0553-account-create-update-swsct\" (UID: \"3db41110-40d9-49ad-9ba5-a7e94e433693\") " pod="openstack/nova-cell1-0553-account-create-update-swsct"
Jan 26 16:02:15 crc kubenswrapper[4896]: I0126 16:02:15.752995 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3db41110-40d9-49ad-9ba5-a7e94e433693-operator-scripts\") pod \"nova-cell1-0553-account-create-update-swsct\" (UID: \"3db41110-40d9-49ad-9ba5-a7e94e433693\") " pod="openstack/nova-cell1-0553-account-create-update-swsct"
Jan 26 16:02:15 crc kubenswrapper[4896]: I0126 16:02:15.753104 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5925428c-5669-44dc-92f7-e182c113fb11-operator-scripts\") pod \"nova-cell0-df69-account-create-update-rns9n\" (UID: \"5925428c-5669-44dc-92f7-e182c113fb11\") " pod="openstack/nova-cell0-df69-account-create-update-rns9n"
Jan 26 16:02:15 crc kubenswrapper[4896]: I0126 16:02:15.753181 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsckr\" (UniqueName: \"kubernetes.io/projected/5925428c-5669-44dc-92f7-e182c113fb11-kube-api-access-gsckr\") pod \"nova-cell0-df69-account-create-update-rns9n\" (UID: \"5925428c-5669-44dc-92f7-e182c113fb11\") " pod="openstack/nova-cell0-df69-account-create-update-rns9n"
Jan 26 16:02:15 crc kubenswrapper[4896]: I0126 16:02:15.754835 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5925428c-5669-44dc-92f7-e182c113fb11-operator-scripts\") pod \"nova-cell0-df69-account-create-update-rns9n\" (UID: \"5925428c-5669-44dc-92f7-e182c113fb11\") " pod="openstack/nova-cell0-df69-account-create-update-rns9n"
Jan 26 16:02:15 crc kubenswrapper[4896]: I0126 16:02:15.799546 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsckr\" (UniqueName: \"kubernetes.io/projected/5925428c-5669-44dc-92f7-e182c113fb11-kube-api-access-gsckr\") pod \"nova-cell0-df69-account-create-update-rns9n\" (UID: \"5925428c-5669-44dc-92f7-e182c113fb11\") " pod="openstack/nova-cell0-df69-account-create-update-rns9n"
Jan 26 16:02:16 crc kubenswrapper[4896]: I0126 16:02:16.071461 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-df69-account-create-update-rns9n"
Jan 26 16:02:16 crc kubenswrapper[4896]: I0126 16:02:16.088375 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhkkh\" (UniqueName: \"kubernetes.io/projected/3db41110-40d9-49ad-9ba5-a7e94e433693-kube-api-access-bhkkh\") pod \"nova-cell1-0553-account-create-update-swsct\" (UID: \"3db41110-40d9-49ad-9ba5-a7e94e433693\") " pod="openstack/nova-cell1-0553-account-create-update-swsct"
Jan 26 16:02:16 crc kubenswrapper[4896]: I0126 16:02:16.088464 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3db41110-40d9-49ad-9ba5-a7e94e433693-operator-scripts\") pod \"nova-cell1-0553-account-create-update-swsct\" (UID: \"3db41110-40d9-49ad-9ba5-a7e94e433693\") " pod="openstack/nova-cell1-0553-account-create-update-swsct"
Jan 26 16:02:16 crc kubenswrapper[4896]: I0126 16:02:16.090449 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3db41110-40d9-49ad-9ba5-a7e94e433693-operator-scripts\") pod \"nova-cell1-0553-account-create-update-swsct\" (UID: \"3db41110-40d9-49ad-9ba5-a7e94e433693\") " pod="openstack/nova-cell1-0553-account-create-update-swsct"
Jan 26 16:02:16 crc kubenswrapper[4896]: I0126 16:02:16.162162 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhkkh\" (UniqueName: \"kubernetes.io/projected/3db41110-40d9-49ad-9ba5-a7e94e433693-kube-api-access-bhkkh\") pod \"nova-cell1-0553-account-create-update-swsct\" (UID: \"3db41110-40d9-49ad-9ba5-a7e94e433693\") " pod="openstack/nova-cell1-0553-account-create-update-swsct"
Jan 26 16:02:16 crc kubenswrapper[4896]: I0126 16:02:16.282316 4896 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 16:02:16 crc kubenswrapper[4896]: I0126
16:02:16.283087 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-bb9fc"] Jan 26 16:02:16 crc kubenswrapper[4896]: I0126 16:02:16.325637 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-0553-account-create-update-swsct" Jan 26 16:02:16 crc kubenswrapper[4896]: I0126 16:02:16.447494 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5fb76b00-4d78-4fe3-a01a-78b4622040e4","Type":"ContainerStarted","Data":"a4ba90cf5a8bca5df0ca77c84774e8bf53c77765127b50d6c64486d1cbbbbbe9"} Jan 26 16:02:16 crc kubenswrapper[4896]: I0126 16:02:16.459082 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-bb9fc" event={"ID":"6e6e5a4d-74d6-414a-ace8-b322f12e7e4c","Type":"ContainerStarted","Data":"3fb57813a30c203f9e7f924f9677127e01b8efb473fc44c03d86e2ef070d2a3f"} Jan 26 16:02:16 crc kubenswrapper[4896]: I0126 16:02:16.477951 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="8217b6eb-3002-43a0-a26e-55003835c995" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.229:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 16:02:18 crc kubenswrapper[4896]: I0126 16:02:18.177339 4896 generic.go:334] "Generic (PLEG): container finished" podID="d6ef4bfd-df0d-434c-b869-5890c7950600" containerID="54f84ac432b72822fa3bca3258b874d0b0748c5abd656e9d98520dff062c7ac3" exitCode=0 Jan 26 16:02:18 crc kubenswrapper[4896]: I0126 16:02:18.177903 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d6ef4bfd-df0d-434c-b869-5890c7950600","Type":"ContainerDied","Data":"54f84ac432b72822fa3bca3258b874d0b0748c5abd656e9d98520dff062c7ac3"} Jan 26 16:02:18 crc kubenswrapper[4896]: I0126 16:02:18.667716 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-api-93c3-account-create-update-rpzrm"] Jan 26 16:02:18 crc kubenswrapper[4896]: I0126 16:02:18.679485 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-df69-account-create-update-rns9n"] Jan 26 16:02:18 crc kubenswrapper[4896]: I0126 16:02:18.689839 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-zmxcl"] Jan 26 16:02:18 crc kubenswrapper[4896]: W0126 16:02:18.765449 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc385b05e_d56d_4b07_8d3a_f96399936528.slice/crio-3b8de52ae4e7ee17f1c20ab450a3c4555539b51f2d07ba1f3e5a8c6419b531fb WatchSource:0}: Error finding container 3b8de52ae4e7ee17f1c20ab450a3c4555539b51f2d07ba1f3e5a8c6419b531fb: Status 404 returned error can't find the container with id 3b8de52ae4e7ee17f1c20ab450a3c4555539b51f2d07ba1f3e5a8c6419b531fb Jan 26 16:02:18 crc kubenswrapper[4896]: I0126 16:02:18.783786 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-qw854"] Jan 26 16:02:18 crc kubenswrapper[4896]: I0126 16:02:18.814841 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:02:18 crc kubenswrapper[4896]: I0126 16:02:18.814891 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:02:18 crc kubenswrapper[4896]: I0126 16:02:18.814934 4896 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" Jan 26 16:02:18 crc kubenswrapper[4896]: I0126 16:02:18.815483 4896 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eef508224f0cdcfb0579b0234e72c3c5503ce5cf1713a9bee24c9feccf4983cb"} pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 16:02:18 crc kubenswrapper[4896]: I0126 16:02:18.815540 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" containerID="cri-o://eef508224f0cdcfb0579b0234e72c3c5503ce5cf1713a9bee24c9feccf4983cb" gracePeriod=600 Jan 26 16:02:19 crc kubenswrapper[4896]: I0126 16:02:19.014194 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-0553-account-create-update-swsct"] Jan 26 16:02:19 crc kubenswrapper[4896]: E0126 16:02:19.141699 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:02:19 crc kubenswrapper[4896]: I0126 16:02:19.180784 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 16:02:19 crc kubenswrapper[4896]: I0126 16:02:19.224910 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5fb76b00-4d78-4fe3-a01a-78b4622040e4","Type":"ContainerStarted","Data":"1df237d1f89e97b4531d2cc854266c1a957d19fa7e42c3afddb28672df38fb57"} Jan 26 16:02:19 crc kubenswrapper[4896]: I0126 16:02:19.243175 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-qw854" event={"ID":"8531512a-bfdc-47ff-ae60-182536cad417","Type":"ContainerStarted","Data":"c338a32df502beb3dfd01b1b777b8acfa0c6773703a4d1bb8eb74906712a2486"} Jan 26 16:02:19 crc kubenswrapper[4896]: I0126 16:02:19.275627 4896 generic.go:334] "Generic (PLEG): container finished" podID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerID="eef508224f0cdcfb0579b0234e72c3c5503ce5cf1713a9bee24c9feccf4983cb" exitCode=0 Jan 26 16:02:19 crc kubenswrapper[4896]: I0126 16:02:19.276051 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerDied","Data":"eef508224f0cdcfb0579b0234e72c3c5503ce5cf1713a9bee24c9feccf4983cb"} Jan 26 16:02:19 crc kubenswrapper[4896]: I0126 16:02:19.276089 4896 scope.go:117] "RemoveContainer" containerID="31b4b66030421161e61a26b7176eb82897b0c0be510c967b21910fd56f2d129b" Jan 26 16:02:19 crc kubenswrapper[4896]: I0126 16:02:19.277115 4896 scope.go:117] "RemoveContainer" containerID="eef508224f0cdcfb0579b0234e72c3c5503ce5cf1713a9bee24c9feccf4983cb" Jan 26 16:02:19 crc kubenswrapper[4896]: E0126 16:02:19.283637 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:02:19 crc kubenswrapper[4896]: I0126 16:02:19.284602 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-93c3-account-create-update-rpzrm" event={"ID":"0a5714bb-8439-4dbc-974b-bf04c6537695","Type":"ContainerStarted","Data":"efa92aa9cb59c0b96182c24e26d35c9ae58c4f78b18da4159d6e6a3ec40e1966"} Jan 26 16:02:19 crc kubenswrapper[4896]: I0126 16:02:19.337108 4896 generic.go:334] "Generic (PLEG): container finished" podID="6e6e5a4d-74d6-414a-ace8-b322f12e7e4c" containerID="68e13b2d9bb3451f72cc0ec487ebe7fcb141cf8af1be009a85aad16281f1e20b" exitCode=0 Jan 26 16:02:19 crc kubenswrapper[4896]: I0126 16:02:19.337913 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-bb9fc" event={"ID":"6e6e5a4d-74d6-414a-ace8-b322f12e7e4c","Type":"ContainerDied","Data":"68e13b2d9bb3451f72cc0ec487ebe7fcb141cf8af1be009a85aad16281f1e20b"} Jan 26 16:02:19 crc kubenswrapper[4896]: I0126 16:02:19.346424 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6ef4bfd-df0d-434c-b869-5890c7950600-config-data\") pod \"d6ef4bfd-df0d-434c-b869-5890c7950600\" (UID: \"d6ef4bfd-df0d-434c-b869-5890c7950600\") " Jan 26 16:02:19 crc kubenswrapper[4896]: I0126 16:02:19.346715 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6ef4bfd-df0d-434c-b869-5890c7950600-public-tls-certs\") pod \"d6ef4bfd-df0d-434c-b869-5890c7950600\" (UID: \"d6ef4bfd-df0d-434c-b869-5890c7950600\") " Jan 26 16:02:19 crc kubenswrapper[4896]: I0126 16:02:19.346803 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d6ef4bfd-df0d-434c-b869-5890c7950600-combined-ca-bundle\") pod \"d6ef4bfd-df0d-434c-b869-5890c7950600\" (UID: \"d6ef4bfd-df0d-434c-b869-5890c7950600\") " Jan 26 16:02:19 crc kubenswrapper[4896]: I0126 16:02:19.347091 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d6ef4bfd-df0d-434c-b869-5890c7950600-httpd-run\") pod \"d6ef4bfd-df0d-434c-b869-5890c7950600\" (UID: \"d6ef4bfd-df0d-434c-b869-5890c7950600\") " Jan 26 16:02:19 crc kubenswrapper[4896]: I0126 16:02:19.347205 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6ef4bfd-df0d-434c-b869-5890c7950600-scripts\") pod \"d6ef4bfd-df0d-434c-b869-5890c7950600\" (UID: \"d6ef4bfd-df0d-434c-b869-5890c7950600\") " Jan 26 16:02:19 crc kubenswrapper[4896]: I0126 16:02:19.347358 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6ef4bfd-df0d-434c-b869-5890c7950600-logs\") pod \"d6ef4bfd-df0d-434c-b869-5890c7950600\" (UID: \"d6ef4bfd-df0d-434c-b869-5890c7950600\") " Jan 26 16:02:19 crc kubenswrapper[4896]: I0126 16:02:19.347459 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkbwq\" (UniqueName: \"kubernetes.io/projected/d6ef4bfd-df0d-434c-b869-5890c7950600-kube-api-access-qkbwq\") pod \"d6ef4bfd-df0d-434c-b869-5890c7950600\" (UID: \"d6ef4bfd-df0d-434c-b869-5890c7950600\") " Jan 26 16:02:19 crc kubenswrapper[4896]: I0126 16:02:19.349555 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-zmxcl" event={"ID":"c385b05e-d56d-4b07-8d3a-f96399936528","Type":"ContainerStarted","Data":"3b8de52ae4e7ee17f1c20ab450a3c4555539b51f2d07ba1f3e5a8c6419b531fb"} Jan 26 16:02:19 crc kubenswrapper[4896]: I0126 16:02:19.349918 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-032691af-a20f-4ded-a276-f85258d081f4\") pod \"d6ef4bfd-df0d-434c-b869-5890c7950600\" (UID: \"d6ef4bfd-df0d-434c-b869-5890c7950600\") " Jan 26 16:02:19 crc kubenswrapper[4896]: I0126 16:02:19.350118 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6ef4bfd-df0d-434c-b869-5890c7950600-logs" (OuterVolumeSpecName: "logs") pod "d6ef4bfd-df0d-434c-b869-5890c7950600" (UID: "d6ef4bfd-df0d-434c-b869-5890c7950600"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:02:19 crc kubenswrapper[4896]: I0126 16:02:19.350430 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6ef4bfd-df0d-434c-b869-5890c7950600-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d6ef4bfd-df0d-434c-b869-5890c7950600" (UID: "d6ef4bfd-df0d-434c-b869-5890c7950600"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:02:19 crc kubenswrapper[4896]: I0126 16:02:19.351294 4896 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d6ef4bfd-df0d-434c-b869-5890c7950600-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 16:02:19 crc kubenswrapper[4896]: I0126 16:02:19.351380 4896 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6ef4bfd-df0d-434c-b869-5890c7950600-logs\") on node \"crc\" DevicePath \"\"" Jan 26 16:02:19 crc kubenswrapper[4896]: I0126 16:02:19.375086 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 16:02:19 crc kubenswrapper[4896]: I0126 16:02:19.377819 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d6ef4bfd-df0d-434c-b869-5890c7950600","Type":"ContainerDied","Data":"3fd77197850c4a430f625fdf606735b91ab22c4f0b2cd06070c622166b4e5d52"} Jan 26 16:02:19 crc kubenswrapper[4896]: I0126 16:02:19.388187 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6ef4bfd-df0d-434c-b869-5890c7950600-scripts" (OuterVolumeSpecName: "scripts") pod "d6ef4bfd-df0d-434c-b869-5890c7950600" (UID: "d6ef4bfd-df0d-434c-b869-5890c7950600"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:02:19 crc kubenswrapper[4896]: I0126 16:02:19.400140 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6ef4bfd-df0d-434c-b869-5890c7950600-kube-api-access-qkbwq" (OuterVolumeSpecName: "kube-api-access-qkbwq") pod "d6ef4bfd-df0d-434c-b869-5890c7950600" (UID: "d6ef4bfd-df0d-434c-b869-5890c7950600"). InnerVolumeSpecName "kube-api-access-qkbwq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:02:19 crc kubenswrapper[4896]: I0126 16:02:19.475873 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="8217b6eb-3002-43a0-a26e-55003835c995" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.229:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 16:02:19 crc kubenswrapper[4896]: I0126 16:02:19.459748 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-df69-account-create-update-rns9n" event={"ID":"5925428c-5669-44dc-92f7-e182c113fb11","Type":"ContainerStarted","Data":"d0bf2227753b699ec58c9df45112a6235c27db2b12484b7ef82163d0d27eab65"} Jan 26 16:02:19 crc kubenswrapper[4896]: I0126 16:02:19.575202 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0553-account-create-update-swsct" event={"ID":"3db41110-40d9-49ad-9ba5-a7e94e433693","Type":"ContainerStarted","Data":"fa05e4e33d5c1a26e9a9b1f205d30f5707378b4fd108c191917cc28cc99934a7"} Jan 26 16:02:19 crc kubenswrapper[4896]: I0126 16:02:19.578525 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkbwq\" (UniqueName: \"kubernetes.io/projected/d6ef4bfd-df0d-434c-b869-5890c7950600-kube-api-access-qkbwq\") on node \"crc\" DevicePath \"\"" Jan 26 16:02:19 crc kubenswrapper[4896]: I0126 16:02:19.578551 4896 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6ef4bfd-df0d-434c-b869-5890c7950600-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:02:19 crc kubenswrapper[4896]: I0126 16:02:19.597825 4896 scope.go:117] "RemoveContainer" containerID="54f84ac432b72822fa3bca3258b874d0b0748c5abd656e9d98520dff062c7ac3" Jan 26 16:02:19 crc kubenswrapper[4896]: I0126 16:02:19.637093 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/d6ef4bfd-df0d-434c-b869-5890c7950600-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d6ef4bfd-df0d-434c-b869-5890c7950600" (UID: "d6ef4bfd-df0d-434c-b869-5890c7950600"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:02:19 crc kubenswrapper[4896]: I0126 16:02:19.676601 4896 scope.go:117] "RemoveContainer" containerID="b41bb5e603893cc3aa9c16e94a3c02abfb8c32fb59fca28fb1f3b05a94ec03b6" Jan 26 16:02:19 crc kubenswrapper[4896]: I0126 16:02:19.680123 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6ef4bfd-df0d-434c-b869-5890c7950600-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:02:19 crc kubenswrapper[4896]: I0126 16:02:19.817118 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 26 16:02:19 crc kubenswrapper[4896]: I0126 16:02:19.817737 4896 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 16:02:19 crc kubenswrapper[4896]: I0126 16:02:19.913714 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 26 16:02:19 crc kubenswrapper[4896]: I0126 16:02:19.986969 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-032691af-a20f-4ded-a276-f85258d081f4" (OuterVolumeSpecName: "glance") pod "d6ef4bfd-df0d-434c-b869-5890c7950600" (UID: "d6ef4bfd-df0d-434c-b869-5890c7950600"). InnerVolumeSpecName "pvc-032691af-a20f-4ded-a276-f85258d081f4". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 16:02:19 crc kubenswrapper[4896]: I0126 16:02:19.990413 4896 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-032691af-a20f-4ded-a276-f85258d081f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-032691af-a20f-4ded-a276-f85258d081f4\") on node \"crc\" " Jan 26 16:02:20 crc kubenswrapper[4896]: I0126 16:02:20.343424 4896 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 26 16:02:20 crc kubenswrapper[4896]: I0126 16:02:20.344395 4896 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-032691af-a20f-4ded-a276-f85258d081f4" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-032691af-a20f-4ded-a276-f85258d081f4") on node "crc" Jan 26 16:02:20 crc kubenswrapper[4896]: I0126 16:02:20.387329 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6ef4bfd-df0d-434c-b869-5890c7950600-config-data" (OuterVolumeSpecName: "config-data") pod "d6ef4bfd-df0d-434c-b869-5890c7950600" (UID: "d6ef4bfd-df0d-434c-b869-5890c7950600"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:02:20 crc kubenswrapper[4896]: I0126 16:02:20.410400 4896 reconciler_common.go:293] "Volume detached for volume \"pvc-032691af-a20f-4ded-a276-f85258d081f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-032691af-a20f-4ded-a276-f85258d081f4\") on node \"crc\" DevicePath \"\"" Jan 26 16:02:20 crc kubenswrapper[4896]: I0126 16:02:20.410441 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6ef4bfd-df0d-434c-b869-5890c7950600-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:02:20 crc kubenswrapper[4896]: I0126 16:02:20.416869 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6ef4bfd-df0d-434c-b869-5890c7950600-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d6ef4bfd-df0d-434c-b869-5890c7950600" (UID: "d6ef4bfd-df0d-434c-b869-5890c7950600"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:02:20 crc kubenswrapper[4896]: I0126 16:02:20.513041 4896 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6ef4bfd-df0d-434c-b869-5890c7950600-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 16:02:20 crc kubenswrapper[4896]: I0126 16:02:20.536071 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-zmxcl" event={"ID":"c385b05e-d56d-4b07-8d3a-f96399936528","Type":"ContainerStarted","Data":"a40c694d41194e0ff6ef2f4c4f88bd562f14e6c57b9ee7d7257e7bcd9e41c2bb"} Jan 26 16:02:20 crc kubenswrapper[4896]: I0126 16:02:20.538350 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-df69-account-create-update-rns9n" event={"ID":"5925428c-5669-44dc-92f7-e182c113fb11","Type":"ContainerStarted","Data":"f9a58465d086247e3ce30aac15fa075fdc17bf15810a29a7c425c83a2099413e"} Jan 26 16:02:20 crc kubenswrapper[4896]: I0126 16:02:20.540441 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-qw854" event={"ID":"8531512a-bfdc-47ff-ae60-182536cad417","Type":"ContainerStarted","Data":"b5a99a991f6040c9e0cac7f27ec3b92595a19e6aa8097a696c59e5370b3aeb5b"} Jan 26 16:02:20 crc kubenswrapper[4896]: I0126 16:02:20.546002 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-93c3-account-create-update-rpzrm" event={"ID":"0a5714bb-8439-4dbc-974b-bf04c6537695","Type":"ContainerStarted","Data":"5944fcedd2dd4c50c8358ea854e84f386c5952064b900a4576dd43bab0b3adc7"} Jan 26 16:02:20 crc kubenswrapper[4896]: I0126 16:02:20.561432 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-zmxcl" podStartSLOduration=6.561413794 podStartE2EDuration="6.561413794s" podCreationTimestamp="2026-01-26 16:02:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-01-26 16:02:20.55445057 +0000 UTC m=+1698.336330973" watchObservedRunningTime="2026-01-26 16:02:20.561413794 +0000 UTC m=+1698.343294187" Jan 26 16:02:20 crc kubenswrapper[4896]: I0126 16:02:20.607350 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-qw854" podStartSLOduration=6.607326576 podStartE2EDuration="6.607326576s" podCreationTimestamp="2026-01-26 16:02:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:02:20.58095159 +0000 UTC m=+1698.362831973" watchObservedRunningTime="2026-01-26 16:02:20.607326576 +0000 UTC m=+1698.389206969" Jan 26 16:02:20 crc kubenswrapper[4896]: I0126 16:02:20.625520 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-93c3-account-create-update-rpzrm" podStartSLOduration=6.625497254 podStartE2EDuration="6.625497254s" podCreationTimestamp="2026-01-26 16:02:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:02:20.614605569 +0000 UTC m=+1698.396485972" watchObservedRunningTime="2026-01-26 16:02:20.625497254 +0000 UTC m=+1698.407377647" Jan 26 16:02:20 crc kubenswrapper[4896]: I0126 16:02:20.678469 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-df69-account-create-update-rns9n" podStartSLOduration=5.678448503 podStartE2EDuration="5.678448503s" podCreationTimestamp="2026-01-26 16:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:02:20.630637818 +0000 UTC m=+1698.412518211" watchObservedRunningTime="2026-01-26 16:02:20.678448503 +0000 UTC m=+1698.460328896" Jan 26 16:02:20 crc kubenswrapper[4896]: I0126 16:02:20.729806 4896 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 16:02:20 crc kubenswrapper[4896]: I0126 16:02:20.802861 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 16:02:20 crc kubenswrapper[4896]: I0126 16:02:20.831677 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 16:02:20 crc kubenswrapper[4896]: E0126 16:02:20.832350 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6ef4bfd-df0d-434c-b869-5890c7950600" containerName="glance-httpd" Jan 26 16:02:20 crc kubenswrapper[4896]: I0126 16:02:20.832369 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6ef4bfd-df0d-434c-b869-5890c7950600" containerName="glance-httpd" Jan 26 16:02:20 crc kubenswrapper[4896]: E0126 16:02:20.832402 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6ef4bfd-df0d-434c-b869-5890c7950600" containerName="glance-log" Jan 26 16:02:20 crc kubenswrapper[4896]: I0126 16:02:20.832408 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6ef4bfd-df0d-434c-b869-5890c7950600" containerName="glance-log" Jan 26 16:02:20 crc kubenswrapper[4896]: I0126 16:02:20.832661 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6ef4bfd-df0d-434c-b869-5890c7950600" containerName="glance-httpd" Jan 26 16:02:20 crc kubenswrapper[4896]: I0126 16:02:20.832680 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6ef4bfd-df0d-434c-b869-5890c7950600" containerName="glance-log" Jan 26 16:02:20 crc kubenswrapper[4896]: I0126 16:02:20.834155 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 16:02:20 crc kubenswrapper[4896]: I0126 16:02:20.839793 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 26 16:02:20 crc kubenswrapper[4896]: I0126 16:02:20.839904 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 26 16:02:20 crc kubenswrapper[4896]: I0126 16:02:20.844762 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.027628 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c1c45d1-a81c-4b0d-b5ba-cac9e8704701-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4c1c45d1-a81c-4b0d-b5ba-cac9e8704701\") " pod="openstack/glance-default-external-api-0" Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.027741 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-032691af-a20f-4ded-a276-f85258d081f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-032691af-a20f-4ded-a276-f85258d081f4\") pod \"glance-default-external-api-0\" (UID: \"4c1c45d1-a81c-4b0d-b5ba-cac9e8704701\") " pod="openstack/glance-default-external-api-0" Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.027777 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4c1c45d1-a81c-4b0d-b5ba-cac9e8704701-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4c1c45d1-a81c-4b0d-b5ba-cac9e8704701\") " pod="openstack/glance-default-external-api-0" Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.027840 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c1c45d1-a81c-4b0d-b5ba-cac9e8704701-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4c1c45d1-a81c-4b0d-b5ba-cac9e8704701\") " pod="openstack/glance-default-external-api-0" Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.027866 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c1c45d1-a81c-4b0d-b5ba-cac9e8704701-config-data\") pod \"glance-default-external-api-0\" (UID: \"4c1c45d1-a81c-4b0d-b5ba-cac9e8704701\") " pod="openstack/glance-default-external-api-0" Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.027919 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9scjq\" (UniqueName: \"kubernetes.io/projected/4c1c45d1-a81c-4b0d-b5ba-cac9e8704701-kube-api-access-9scjq\") pod \"glance-default-external-api-0\" (UID: \"4c1c45d1-a81c-4b0d-b5ba-cac9e8704701\") " pod="openstack/glance-default-external-api-0" Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.027944 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c1c45d1-a81c-4b0d-b5ba-cac9e8704701-scripts\") pod \"glance-default-external-api-0\" (UID: \"4c1c45d1-a81c-4b0d-b5ba-cac9e8704701\") " pod="openstack/glance-default-external-api-0" Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.028021 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c1c45d1-a81c-4b0d-b5ba-cac9e8704701-logs\") pod \"glance-default-external-api-0\" (UID: \"4c1c45d1-a81c-4b0d-b5ba-cac9e8704701\") " pod="openstack/glance-default-external-api-0" Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.129765 4896 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c1c45d1-a81c-4b0d-b5ba-cac9e8704701-logs\") pod \"glance-default-external-api-0\" (UID: \"4c1c45d1-a81c-4b0d-b5ba-cac9e8704701\") " pod="openstack/glance-default-external-api-0" Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.129840 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c1c45d1-a81c-4b0d-b5ba-cac9e8704701-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4c1c45d1-a81c-4b0d-b5ba-cac9e8704701\") " pod="openstack/glance-default-external-api-0" Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.129916 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-032691af-a20f-4ded-a276-f85258d081f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-032691af-a20f-4ded-a276-f85258d081f4\") pod \"glance-default-external-api-0\" (UID: \"4c1c45d1-a81c-4b0d-b5ba-cac9e8704701\") " pod="openstack/glance-default-external-api-0" Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.129950 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4c1c45d1-a81c-4b0d-b5ba-cac9e8704701-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4c1c45d1-a81c-4b0d-b5ba-cac9e8704701\") " pod="openstack/glance-default-external-api-0" Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.130014 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c1c45d1-a81c-4b0d-b5ba-cac9e8704701-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4c1c45d1-a81c-4b0d-b5ba-cac9e8704701\") " pod="openstack/glance-default-external-api-0" Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.130042 4896 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c1c45d1-a81c-4b0d-b5ba-cac9e8704701-config-data\") pod \"glance-default-external-api-0\" (UID: \"4c1c45d1-a81c-4b0d-b5ba-cac9e8704701\") " pod="openstack/glance-default-external-api-0" Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.130095 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9scjq\" (UniqueName: \"kubernetes.io/projected/4c1c45d1-a81c-4b0d-b5ba-cac9e8704701-kube-api-access-9scjq\") pod \"glance-default-external-api-0\" (UID: \"4c1c45d1-a81c-4b0d-b5ba-cac9e8704701\") " pod="openstack/glance-default-external-api-0" Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.130119 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c1c45d1-a81c-4b0d-b5ba-cac9e8704701-scripts\") pod \"glance-default-external-api-0\" (UID: \"4c1c45d1-a81c-4b0d-b5ba-cac9e8704701\") " pod="openstack/glance-default-external-api-0" Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.131765 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4c1c45d1-a81c-4b0d-b5ba-cac9e8704701-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4c1c45d1-a81c-4b0d-b5ba-cac9e8704701\") " pod="openstack/glance-default-external-api-0" Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.132066 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c1c45d1-a81c-4b0d-b5ba-cac9e8704701-logs\") pod \"glance-default-external-api-0\" (UID: \"4c1c45d1-a81c-4b0d-b5ba-cac9e8704701\") " pod="openstack/glance-default-external-api-0" Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.146459 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4c1c45d1-a81c-4b0d-b5ba-cac9e8704701-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4c1c45d1-a81c-4b0d-b5ba-cac9e8704701\") " pod="openstack/glance-default-external-api-0" Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.148076 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c1c45d1-a81c-4b0d-b5ba-cac9e8704701-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4c1c45d1-a81c-4b0d-b5ba-cac9e8704701\") " pod="openstack/glance-default-external-api-0" Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.150437 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c1c45d1-a81c-4b0d-b5ba-cac9e8704701-config-data\") pod \"glance-default-external-api-0\" (UID: \"4c1c45d1-a81c-4b0d-b5ba-cac9e8704701\") " pod="openstack/glance-default-external-api-0" Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.150890 4896 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.150931 4896 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-032691af-a20f-4ded-a276-f85258d081f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-032691af-a20f-4ded-a276-f85258d081f4\") pod \"glance-default-external-api-0\" (UID: \"4c1c45d1-a81c-4b0d-b5ba-cac9e8704701\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c6ee5c5ace645a0437237edccec1152ed0b5c152b57bef8f765a8fb7bcea3897/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.158436 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c1c45d1-a81c-4b0d-b5ba-cac9e8704701-scripts\") pod \"glance-default-external-api-0\" (UID: \"4c1c45d1-a81c-4b0d-b5ba-cac9e8704701\") " pod="openstack/glance-default-external-api-0" Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.164664 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9scjq\" (UniqueName: \"kubernetes.io/projected/4c1c45d1-a81c-4b0d-b5ba-cac9e8704701-kube-api-access-9scjq\") pod \"glance-default-external-api-0\" (UID: \"4c1c45d1-a81c-4b0d-b5ba-cac9e8704701\") " pod="openstack/glance-default-external-api-0" Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.486788 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="8217b6eb-3002-43a0-a26e-55003835c995" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.229:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.575275 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-032691af-a20f-4ded-a276-f85258d081f4\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-032691af-a20f-4ded-a276-f85258d081f4\") pod \"glance-default-external-api-0\" (UID: \"4c1c45d1-a81c-4b0d-b5ba-cac9e8704701\") " pod="openstack/glance-default-external-api-0" Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.595239 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-bb9fc" event={"ID":"6e6e5a4d-74d6-414a-ace8-b322f12e7e4c","Type":"ContainerDied","Data":"3fb57813a30c203f9e7f924f9677127e01b8efb473fc44c03d86e2ef070d2a3f"} Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.595293 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fb57813a30c203f9e7f924f9677127e01b8efb473fc44c03d86e2ef070d2a3f" Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.612204 4896 generic.go:334] "Generic (PLEG): container finished" podID="c385b05e-d56d-4b07-8d3a-f96399936528" containerID="a40c694d41194e0ff6ef2f4c4f88bd562f14e6c57b9ee7d7257e7bcd9e41c2bb" exitCode=0 Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.612297 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-zmxcl" event={"ID":"c385b05e-d56d-4b07-8d3a-f96399936528","Type":"ContainerDied","Data":"a40c694d41194e0ff6ef2f4c4f88bd562f14e6c57b9ee7d7257e7bcd9e41c2bb"} Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.616437 4896 generic.go:334] "Generic (PLEG): container finished" podID="5925428c-5669-44dc-92f7-e182c113fb11" containerID="f9a58465d086247e3ce30aac15fa075fdc17bf15810a29a7c425c83a2099413e" exitCode=0 Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.616515 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-df69-account-create-update-rns9n" event={"ID":"5925428c-5669-44dc-92f7-e182c113fb11","Type":"ContainerDied","Data":"f9a58465d086247e3ce30aac15fa075fdc17bf15810a29a7c425c83a2099413e"} Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.628812 4896 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0553-account-create-update-swsct" event={"ID":"3db41110-40d9-49ad-9ba5-a7e94e433693","Type":"ContainerStarted","Data":"8b084230e4b5aa388cb0ae8cbf9e5ad60b3e9fed9dc4e0723faf2e96507b9172"} Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.647073 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5fb76b00-4d78-4fe3-a01a-78b4622040e4","Type":"ContainerStarted","Data":"e0ca641dcca7c18bda9e0a084fa4137604f8a7d74af6b3f26b5c751a9b44ff23"} Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.648152 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.666786 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-bb9fc" Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.666936 4896 generic.go:334] "Generic (PLEG): container finished" podID="8531512a-bfdc-47ff-ae60-182536cad417" containerID="b5a99a991f6040c9e0cac7f27ec3b92595a19e6aa8097a696c59e5370b3aeb5b" exitCode=0 Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.667016 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-qw854" event={"ID":"8531512a-bfdc-47ff-ae60-182536cad417","Type":"ContainerDied","Data":"b5a99a991f6040c9e0cac7f27ec3b92595a19e6aa8097a696c59e5370b3aeb5b"} Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.676810 4896 generic.go:334] "Generic (PLEG): container finished" podID="0a5714bb-8439-4dbc-974b-bf04c6537695" containerID="5944fcedd2dd4c50c8358ea854e84f386c5952064b900a4576dd43bab0b3adc7" exitCode=0 Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.676864 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-93c3-account-create-update-rpzrm" 
event={"ID":"0a5714bb-8439-4dbc-974b-bf04c6537695","Type":"ContainerDied","Data":"5944fcedd2dd4c50c8358ea854e84f386c5952064b900a4576dd43bab0b3adc7"} Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.743890 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-0553-account-create-update-swsct" podStartSLOduration=6.743868173 podStartE2EDuration="6.743868173s" podCreationTimestamp="2026-01-26 16:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:02:21.687638443 +0000 UTC m=+1699.469518826" watchObservedRunningTime="2026-01-26 16:02:21.743868173 +0000 UTC m=+1699.525748566" Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.754623 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4qbd\" (UniqueName: \"kubernetes.io/projected/6e6e5a4d-74d6-414a-ace8-b322f12e7e4c-kube-api-access-x4qbd\") pod \"6e6e5a4d-74d6-414a-ace8-b322f12e7e4c\" (UID: \"6e6e5a4d-74d6-414a-ace8-b322f12e7e4c\") " Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.754879 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e6e5a4d-74d6-414a-ace8-b322f12e7e4c-operator-scripts\") pod \"6e6e5a4d-74d6-414a-ace8-b322f12e7e4c\" (UID: \"6e6e5a4d-74d6-414a-ace8-b322f12e7e4c\") " Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.758471 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e6e5a4d-74d6-414a-ace8-b322f12e7e4c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6e6e5a4d-74d6-414a-ace8-b322f12e7e4c" (UID: "6e6e5a4d-74d6-414a-ace8-b322f12e7e4c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.768401 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.779134 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e6e5a4d-74d6-414a-ace8-b322f12e7e4c-kube-api-access-x4qbd" (OuterVolumeSpecName: "kube-api-access-x4qbd") pod "6e6e5a4d-74d6-414a-ace8-b322f12e7e4c" (UID: "6e6e5a4d-74d6-414a-ace8-b322f12e7e4c"). InnerVolumeSpecName "kube-api-access-x4qbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.843770 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.596595689 podStartE2EDuration="9.843746921s" podCreationTimestamp="2026-01-26 16:02:12 +0000 UTC" firstStartedPulling="2026-01-26 16:02:13.524664354 +0000 UTC m=+1691.306544747" lastFinishedPulling="2026-01-26 16:02:19.771815596 +0000 UTC m=+1697.553695979" observedRunningTime="2026-01-26 16:02:21.730481469 +0000 UTC m=+1699.512361862" watchObservedRunningTime="2026-01-26 16:02:21.843746921 +0000 UTC m=+1699.625627314" Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.875060 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4qbd\" (UniqueName: \"kubernetes.io/projected/6e6e5a4d-74d6-414a-ace8-b322f12e7e4c-kube-api-access-x4qbd\") on node \"crc\" DevicePath \"\"" Jan 26 16:02:21 crc kubenswrapper[4896]: I0126 16:02:21.875099 4896 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e6e5a4d-74d6-414a-ace8-b322f12e7e4c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:02:21 crc kubenswrapper[4896]: E0126 16:02:21.919361 4896 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5925428c_5669_44dc_92f7_e182c113fb11.slice/crio-f9a58465d086247e3ce30aac15fa075fdc17bf15810a29a7c425c83a2099413e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3db41110_40d9_49ad_9ba5_a7e94e433693.slice/crio-8b084230e4b5aa388cb0ae8cbf9e5ad60b3e9fed9dc4e0723faf2e96507b9172.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8531512a_bfdc_47ff_ae60_182536cad417.slice/crio-conmon-b5a99a991f6040c9e0cac7f27ec3b92595a19e6aa8097a696c59e5370b3aeb5b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3db41110_40d9_49ad_9ba5_a7e94e433693.slice/crio-conmon-8b084230e4b5aa388cb0ae8cbf9e5ad60b3e9fed9dc4e0723faf2e96507b9172.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a5714bb_8439_4dbc_974b_bf04c6537695.slice/crio-5944fcedd2dd4c50c8358ea854e84f386c5952064b900a4576dd43bab0b3adc7.scope\": RecentStats: unable to find data in memory cache]" Jan 26 16:02:22 crc kubenswrapper[4896]: I0126 16:02:22.723647 4896 generic.go:334] "Generic (PLEG): container finished" podID="3db41110-40d9-49ad-9ba5-a7e94e433693" containerID="8b084230e4b5aa388cb0ae8cbf9e5ad60b3e9fed9dc4e0723faf2e96507b9172" exitCode=0 Jan 26 16:02:22 crc kubenswrapper[4896]: I0126 16:02:22.724100 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0553-account-create-update-swsct" event={"ID":"3db41110-40d9-49ad-9ba5-a7e94e433693","Type":"ContainerDied","Data":"8b084230e4b5aa388cb0ae8cbf9e5ad60b3e9fed9dc4e0723faf2e96507b9172"} Jan 26 16:02:22 crc kubenswrapper[4896]: I0126 16:02:22.735375 4896 generic.go:334] "Generic (PLEG): container finished" 
podID="9be68e33-e343-492f-913b-098163a26f87" containerID="66eeae89b1c85b83f59405699e8b43af4213e2d6b1aaa4676af29b96f8f57afe" exitCode=0 Jan 26 16:02:22 crc kubenswrapper[4896]: I0126 16:02:22.736347 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6f79bf9b96-md4vg" event={"ID":"9be68e33-e343-492f-913b-098163a26f87","Type":"ContainerDied","Data":"66eeae89b1c85b83f59405699e8b43af4213e2d6b1aaa4676af29b96f8f57afe"} Jan 26 16:02:22 crc kubenswrapper[4896]: I0126 16:02:22.736651 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-bb9fc" Jan 26 16:02:22 crc kubenswrapper[4896]: I0126 16:02:22.819460 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6ef4bfd-df0d-434c-b869-5890c7950600" path="/var/lib/kubelet/pods/d6ef4bfd-df0d-434c-b869-5890c7950600/volumes" Jan 26 16:02:22 crc kubenswrapper[4896]: I0126 16:02:22.841847 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 16:02:22 crc kubenswrapper[4896]: W0126 16:02:22.862728 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c1c45d1_a81c_4b0d_b5ba_cac9e8704701.slice/crio-be5d881e225f307f1d6698f4fc84a77533d5798e0640efb56fe5b1735ea5b5fd WatchSource:0}: Error finding container be5d881e225f307f1d6698f4fc84a77533d5798e0640efb56fe5b1735ea5b5fd: Status 404 returned error can't find the container with id be5d881e225f307f1d6698f4fc84a77533d5798e0640efb56fe5b1735ea5b5fd Jan 26 16:02:23 crc kubenswrapper[4896]: I0126 16:02:23.538020 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-6f79bf9b96-md4vg" Jan 26 16:02:23 crc kubenswrapper[4896]: I0126 16:02:23.716495 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be68e33-e343-492f-913b-098163a26f87-combined-ca-bundle\") pod \"9be68e33-e343-492f-913b-098163a26f87\" (UID: \"9be68e33-e343-492f-913b-098163a26f87\") " Jan 26 16:02:23 crc kubenswrapper[4896]: I0126 16:02:23.716718 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fslqv\" (UniqueName: \"kubernetes.io/projected/9be68e33-e343-492f-913b-098163a26f87-kube-api-access-fslqv\") pod \"9be68e33-e343-492f-913b-098163a26f87\" (UID: \"9be68e33-e343-492f-913b-098163a26f87\") " Jan 26 16:02:23 crc kubenswrapper[4896]: I0126 16:02:23.716788 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9be68e33-e343-492f-913b-098163a26f87-config-data-custom\") pod \"9be68e33-e343-492f-913b-098163a26f87\" (UID: \"9be68e33-e343-492f-913b-098163a26f87\") " Jan 26 16:02:23 crc kubenswrapper[4896]: I0126 16:02:23.716871 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9be68e33-e343-492f-913b-098163a26f87-config-data\") pod \"9be68e33-e343-492f-913b-098163a26f87\" (UID: \"9be68e33-e343-492f-913b-098163a26f87\") " Jan 26 16:02:23 crc kubenswrapper[4896]: I0126 16:02:23.758953 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9be68e33-e343-492f-913b-098163a26f87-kube-api-access-fslqv" (OuterVolumeSpecName: "kube-api-access-fslqv") pod "9be68e33-e343-492f-913b-098163a26f87" (UID: "9be68e33-e343-492f-913b-098163a26f87"). InnerVolumeSpecName "kube-api-access-fslqv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:02:23 crc kubenswrapper[4896]: I0126 16:02:23.759295 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9be68e33-e343-492f-913b-098163a26f87-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9be68e33-e343-492f-913b-098163a26f87" (UID: "9be68e33-e343-492f-913b-098163a26f87"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:02:23 crc kubenswrapper[4896]: I0126 16:02:23.794413 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9be68e33-e343-492f-913b-098163a26f87-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9be68e33-e343-492f-913b-098163a26f87" (UID: "9be68e33-e343-492f-913b-098163a26f87"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:02:23 crc kubenswrapper[4896]: I0126 16:02:23.826202 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fslqv\" (UniqueName: \"kubernetes.io/projected/9be68e33-e343-492f-913b-098163a26f87-kube-api-access-fslqv\") on node \"crc\" DevicePath \"\"" Jan 26 16:02:23 crc kubenswrapper[4896]: I0126 16:02:23.826249 4896 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9be68e33-e343-492f-913b-098163a26f87-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 16:02:23 crc kubenswrapper[4896]: I0126 16:02:23.826258 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be68e33-e343-492f-913b-098163a26f87-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:02:23 crc kubenswrapper[4896]: I0126 16:02:23.839240 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6f79bf9b96-md4vg" 
event={"ID":"9be68e33-e343-492f-913b-098163a26f87","Type":"ContainerDied","Data":"6917f295999df01b05816faceb29cc83eb550aeb2e03abb93db349eb1eaf31bb"} Jan 26 16:02:23 crc kubenswrapper[4896]: I0126 16:02:23.839312 4896 scope.go:117] "RemoveContainer" containerID="66eeae89b1c85b83f59405699e8b43af4213e2d6b1aaa4676af29b96f8f57afe" Jan 26 16:02:23 crc kubenswrapper[4896]: I0126 16:02:23.839513 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6f79bf9b96-md4vg" Jan 26 16:02:23 crc kubenswrapper[4896]: I0126 16:02:23.859191 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4c1c45d1-a81c-4b0d-b5ba-cac9e8704701","Type":"ContainerStarted","Data":"be5d881e225f307f1d6698f4fc84a77533d5798e0640efb56fe5b1735ea5b5fd"} Jan 26 16:02:23 crc kubenswrapper[4896]: I0126 16:02:23.983068 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9be68e33-e343-492f-913b-098163a26f87-config-data" (OuterVolumeSpecName: "config-data") pod "9be68e33-e343-492f-913b-098163a26f87" (UID: "9be68e33-e343-492f-913b-098163a26f87"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:02:24 crc kubenswrapper[4896]: I0126 16:02:24.033791 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9be68e33-e343-492f-913b-098163a26f87-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:02:24 crc kubenswrapper[4896]: I0126 16:02:24.067233 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-93c3-account-create-update-rpzrm" Jan 26 16:02:24 crc kubenswrapper[4896]: I0126 16:02:24.148722 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a5714bb-8439-4dbc-974b-bf04c6537695-operator-scripts\") pod \"0a5714bb-8439-4dbc-974b-bf04c6537695\" (UID: \"0a5714bb-8439-4dbc-974b-bf04c6537695\") " Jan 26 16:02:24 crc kubenswrapper[4896]: I0126 16:02:24.148947 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwc7k\" (UniqueName: \"kubernetes.io/projected/0a5714bb-8439-4dbc-974b-bf04c6537695-kube-api-access-cwc7k\") pod \"0a5714bb-8439-4dbc-974b-bf04c6537695\" (UID: \"0a5714bb-8439-4dbc-974b-bf04c6537695\") " Jan 26 16:02:24 crc kubenswrapper[4896]: I0126 16:02:24.150341 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a5714bb-8439-4dbc-974b-bf04c6537695-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0a5714bb-8439-4dbc-974b-bf04c6537695" (UID: "0a5714bb-8439-4dbc-974b-bf04c6537695"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:02:24 crc kubenswrapper[4896]: I0126 16:02:24.174879 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a5714bb-8439-4dbc-974b-bf04c6537695-kube-api-access-cwc7k" (OuterVolumeSpecName: "kube-api-access-cwc7k") pod "0a5714bb-8439-4dbc-974b-bf04c6537695" (UID: "0a5714bb-8439-4dbc-974b-bf04c6537695"). InnerVolumeSpecName "kube-api-access-cwc7k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:02:24 crc kubenswrapper[4896]: I0126 16:02:24.252450 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwc7k\" (UniqueName: \"kubernetes.io/projected/0a5714bb-8439-4dbc-974b-bf04c6537695-kube-api-access-cwc7k\") on node \"crc\" DevicePath \"\"" Jan 26 16:02:24 crc kubenswrapper[4896]: I0126 16:02:24.252480 4896 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a5714bb-8439-4dbc-974b-bf04c6537695-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:02:24 crc kubenswrapper[4896]: I0126 16:02:24.271205 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-6f79bf9b96-md4vg"] Jan 26 16:02:24 crc kubenswrapper[4896]: I0126 16:02:24.300697 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-6f79bf9b96-md4vg"] Jan 26 16:02:24 crc kubenswrapper[4896]: I0126 16:02:24.324987 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-qw854" Jan 26 16:02:24 crc kubenswrapper[4896]: I0126 16:02:24.366356 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-zmxcl" Jan 26 16:02:24 crc kubenswrapper[4896]: I0126 16:02:24.391012 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-df69-account-create-update-rns9n" Jan 26 16:02:24 crc kubenswrapper[4896]: I0126 16:02:24.457435 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5925428c-5669-44dc-92f7-e182c113fb11-operator-scripts\") pod \"5925428c-5669-44dc-92f7-e182c113fb11\" (UID: \"5925428c-5669-44dc-92f7-e182c113fb11\") " Jan 26 16:02:24 crc kubenswrapper[4896]: I0126 16:02:24.457775 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8531512a-bfdc-47ff-ae60-182536cad417-operator-scripts\") pod \"8531512a-bfdc-47ff-ae60-182536cad417\" (UID: \"8531512a-bfdc-47ff-ae60-182536cad417\") " Jan 26 16:02:24 crc kubenswrapper[4896]: I0126 16:02:24.457819 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76s9v\" (UniqueName: \"kubernetes.io/projected/8531512a-bfdc-47ff-ae60-182536cad417-kube-api-access-76s9v\") pod \"8531512a-bfdc-47ff-ae60-182536cad417\" (UID: \"8531512a-bfdc-47ff-ae60-182536cad417\") " Jan 26 16:02:24 crc kubenswrapper[4896]: I0126 16:02:24.457888 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsckr\" (UniqueName: \"kubernetes.io/projected/5925428c-5669-44dc-92f7-e182c113fb11-kube-api-access-gsckr\") pod \"5925428c-5669-44dc-92f7-e182c113fb11\" (UID: \"5925428c-5669-44dc-92f7-e182c113fb11\") " Jan 26 16:02:24 crc kubenswrapper[4896]: I0126 16:02:24.457967 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wb44j\" (UniqueName: \"kubernetes.io/projected/c385b05e-d56d-4b07-8d3a-f96399936528-kube-api-access-wb44j\") pod \"c385b05e-d56d-4b07-8d3a-f96399936528\" (UID: \"c385b05e-d56d-4b07-8d3a-f96399936528\") " Jan 26 16:02:24 crc kubenswrapper[4896]: I0126 16:02:24.458009 4896 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c385b05e-d56d-4b07-8d3a-f96399936528-operator-scripts\") pod \"c385b05e-d56d-4b07-8d3a-f96399936528\" (UID: \"c385b05e-d56d-4b07-8d3a-f96399936528\") " Jan 26 16:02:24 crc kubenswrapper[4896]: I0126 16:02:24.461080 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5925428c-5669-44dc-92f7-e182c113fb11-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5925428c-5669-44dc-92f7-e182c113fb11" (UID: "5925428c-5669-44dc-92f7-e182c113fb11"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:02:24 crc kubenswrapper[4896]: I0126 16:02:24.461665 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8531512a-bfdc-47ff-ae60-182536cad417-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8531512a-bfdc-47ff-ae60-182536cad417" (UID: "8531512a-bfdc-47ff-ae60-182536cad417"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:02:24 crc kubenswrapper[4896]: I0126 16:02:24.466496 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c385b05e-d56d-4b07-8d3a-f96399936528-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c385b05e-d56d-4b07-8d3a-f96399936528" (UID: "c385b05e-d56d-4b07-8d3a-f96399936528"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:02:24 crc kubenswrapper[4896]: I0126 16:02:24.469890 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c385b05e-d56d-4b07-8d3a-f96399936528-kube-api-access-wb44j" (OuterVolumeSpecName: "kube-api-access-wb44j") pod "c385b05e-d56d-4b07-8d3a-f96399936528" (UID: "c385b05e-d56d-4b07-8d3a-f96399936528"). 
InnerVolumeSpecName "kube-api-access-wb44j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:02:24 crc kubenswrapper[4896]: I0126 16:02:24.470115 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8531512a-bfdc-47ff-ae60-182536cad417-kube-api-access-76s9v" (OuterVolumeSpecName: "kube-api-access-76s9v") pod "8531512a-bfdc-47ff-ae60-182536cad417" (UID: "8531512a-bfdc-47ff-ae60-182536cad417"). InnerVolumeSpecName "kube-api-access-76s9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:02:24 crc kubenswrapper[4896]: I0126 16:02:24.471009 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5925428c-5669-44dc-92f7-e182c113fb11-kube-api-access-gsckr" (OuterVolumeSpecName: "kube-api-access-gsckr") pod "5925428c-5669-44dc-92f7-e182c113fb11" (UID: "5925428c-5669-44dc-92f7-e182c113fb11"). InnerVolumeSpecName "kube-api-access-gsckr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:02:24 crc kubenswrapper[4896]: I0126 16:02:24.571104 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wb44j\" (UniqueName: \"kubernetes.io/projected/c385b05e-d56d-4b07-8d3a-f96399936528-kube-api-access-wb44j\") on node \"crc\" DevicePath \"\"" Jan 26 16:02:24 crc kubenswrapper[4896]: I0126 16:02:24.571142 4896 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c385b05e-d56d-4b07-8d3a-f96399936528-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:02:24 crc kubenswrapper[4896]: I0126 16:02:24.571156 4896 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5925428c-5669-44dc-92f7-e182c113fb11-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:02:24 crc kubenswrapper[4896]: I0126 16:02:24.571168 4896 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/8531512a-bfdc-47ff-ae60-182536cad417-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:02:24 crc kubenswrapper[4896]: I0126 16:02:24.571205 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76s9v\" (UniqueName: \"kubernetes.io/projected/8531512a-bfdc-47ff-ae60-182536cad417-kube-api-access-76s9v\") on node \"crc\" DevicePath \"\"" Jan 26 16:02:24 crc kubenswrapper[4896]: I0126 16:02:24.571218 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsckr\" (UniqueName: \"kubernetes.io/projected/5925428c-5669-44dc-92f7-e182c113fb11-kube-api-access-gsckr\") on node \"crc\" DevicePath \"\"" Jan 26 16:02:25 crc kubenswrapper[4896]: I0126 16:02:25.140245 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9be68e33-e343-492f-913b-098163a26f87" path="/var/lib/kubelet/pods/9be68e33-e343-492f-913b-098163a26f87/volumes" Jan 26 16:02:25 crc kubenswrapper[4896]: I0126 16:02:25.149105 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-93c3-account-create-update-rpzrm" Jan 26 16:02:25 crc kubenswrapper[4896]: I0126 16:02:25.165416 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-93c3-account-create-update-rpzrm" event={"ID":"0a5714bb-8439-4dbc-974b-bf04c6537695","Type":"ContainerDied","Data":"efa92aa9cb59c0b96182c24e26d35c9ae58c4f78b18da4159d6e6a3ec40e1966"} Jan 26 16:02:25 crc kubenswrapper[4896]: I0126 16:02:25.165461 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efa92aa9cb59c0b96182c24e26d35c9ae58c4f78b18da4159d6e6a3ec40e1966" Jan 26 16:02:25 crc kubenswrapper[4896]: I0126 16:02:25.194655 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4c1c45d1-a81c-4b0d-b5ba-cac9e8704701","Type":"ContainerStarted","Data":"6956b920f9f87ba7fd24529fad38bcbd0d902dc8ff2a3c67c8370ea60a47fd6f"} Jan 26 16:02:25 crc kubenswrapper[4896]: I0126 16:02:25.233988 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-zmxcl" event={"ID":"c385b05e-d56d-4b07-8d3a-f96399936528","Type":"ContainerDied","Data":"3b8de52ae4e7ee17f1c20ab450a3c4555539b51f2d07ba1f3e5a8c6419b531fb"} Jan 26 16:02:25 crc kubenswrapper[4896]: I0126 16:02:25.234039 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b8de52ae4e7ee17f1c20ab450a3c4555539b51f2d07ba1f3e5a8c6419b531fb" Jan 26 16:02:25 crc kubenswrapper[4896]: I0126 16:02:25.234127 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-zmxcl" Jan 26 16:02:25 crc kubenswrapper[4896]: I0126 16:02:25.318930 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-df69-account-create-update-rns9n" event={"ID":"5925428c-5669-44dc-92f7-e182c113fb11","Type":"ContainerDied","Data":"d0bf2227753b699ec58c9df45112a6235c27db2b12484b7ef82163d0d27eab65"} Jan 26 16:02:25 crc kubenswrapper[4896]: I0126 16:02:25.319263 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0bf2227753b699ec58c9df45112a6235c27db2b12484b7ef82163d0d27eab65" Jan 26 16:02:25 crc kubenswrapper[4896]: I0126 16:02:25.318952 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-df69-account-create-update-rns9n" Jan 26 16:02:25 crc kubenswrapper[4896]: I0126 16:02:25.337683 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-qw854" event={"ID":"8531512a-bfdc-47ff-ae60-182536cad417","Type":"ContainerDied","Data":"c338a32df502beb3dfd01b1b777b8acfa0c6773703a4d1bb8eb74906712a2486"} Jan 26 16:02:25 crc kubenswrapper[4896]: I0126 16:02:25.337757 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c338a32df502beb3dfd01b1b777b8acfa0c6773703a4d1bb8eb74906712a2486" Jan 26 16:02:25 crc kubenswrapper[4896]: I0126 16:02:25.337873 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-qw854" Jan 26 16:02:25 crc kubenswrapper[4896]: I0126 16:02:25.347156 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-0553-account-create-update-swsct" Jan 26 16:02:25 crc kubenswrapper[4896]: I0126 16:02:25.455416 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhkkh\" (UniqueName: \"kubernetes.io/projected/3db41110-40d9-49ad-9ba5-a7e94e433693-kube-api-access-bhkkh\") pod \"3db41110-40d9-49ad-9ba5-a7e94e433693\" (UID: \"3db41110-40d9-49ad-9ba5-a7e94e433693\") " Jan 26 16:02:25 crc kubenswrapper[4896]: I0126 16:02:25.455601 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3db41110-40d9-49ad-9ba5-a7e94e433693-operator-scripts\") pod \"3db41110-40d9-49ad-9ba5-a7e94e433693\" (UID: \"3db41110-40d9-49ad-9ba5-a7e94e433693\") " Jan 26 16:02:25 crc kubenswrapper[4896]: I0126 16:02:25.456995 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3db41110-40d9-49ad-9ba5-a7e94e433693-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3db41110-40d9-49ad-9ba5-a7e94e433693" (UID: "3db41110-40d9-49ad-9ba5-a7e94e433693"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:02:25 crc kubenswrapper[4896]: I0126 16:02:25.472398 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3db41110-40d9-49ad-9ba5-a7e94e433693-kube-api-access-bhkkh" (OuterVolumeSpecName: "kube-api-access-bhkkh") pod "3db41110-40d9-49ad-9ba5-a7e94e433693" (UID: "3db41110-40d9-49ad-9ba5-a7e94e433693"). InnerVolumeSpecName "kube-api-access-bhkkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:02:25 crc kubenswrapper[4896]: I0126 16:02:25.560560 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhkkh\" (UniqueName: \"kubernetes.io/projected/3db41110-40d9-49ad-9ba5-a7e94e433693-kube-api-access-bhkkh\") on node \"crc\" DevicePath \"\"" Jan 26 16:02:25 crc kubenswrapper[4896]: I0126 16:02:25.560619 4896 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3db41110-40d9-49ad-9ba5-a7e94e433693-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:02:25 crc kubenswrapper[4896]: I0126 16:02:25.631722 4896 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.61:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 16:02:25 crc kubenswrapper[4896]: I0126 16:02:25.631790 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="7efe8082-4b9b-49e6-a79c-0ca2e0f5bc24" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.61:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 16:02:26 crc kubenswrapper[4896]: I0126 16:02:26.457452 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4c1c45d1-a81c-4b0d-b5ba-cac9e8704701","Type":"ContainerStarted","Data":"718de155fcb8f88bd03dfe5becc80991399a4921e2b66a7747b13223c7a4b6e6"} Jan 26 16:02:26 crc kubenswrapper[4896]: I0126 16:02:26.467304 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0553-account-create-update-swsct" 
event={"ID":"3db41110-40d9-49ad-9ba5-a7e94e433693","Type":"ContainerDied","Data":"fa05e4e33d5c1a26e9a9b1f205d30f5707378b4fd108c191917cc28cc99934a7"} Jan 26 16:02:26 crc kubenswrapper[4896]: I0126 16:02:26.467349 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa05e4e33d5c1a26e9a9b1f205d30f5707378b4fd108c191917cc28cc99934a7" Jan 26 16:02:26 crc kubenswrapper[4896]: I0126 16:02:26.467415 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-0553-account-create-update-swsct" Jan 26 16:02:26 crc kubenswrapper[4896]: I0126 16:02:26.585208 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.585171299 podStartE2EDuration="6.585171299s" podCreationTimestamp="2026-01-26 16:02:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:02:26.483162005 +0000 UTC m=+1704.265042398" watchObservedRunningTime="2026-01-26 16:02:26.585171299 +0000 UTC m=+1704.367051702" Jan 26 16:02:30 crc kubenswrapper[4896]: I0126 16:02:30.727499 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-m5nn6"] Jan 26 16:02:30 crc kubenswrapper[4896]: E0126 16:02:30.728927 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8531512a-bfdc-47ff-ae60-182536cad417" containerName="mariadb-database-create" Jan 26 16:02:30 crc kubenswrapper[4896]: I0126 16:02:30.728945 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="8531512a-bfdc-47ff-ae60-182536cad417" containerName="mariadb-database-create" Jan 26 16:02:30 crc kubenswrapper[4896]: E0126 16:02:30.728956 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c385b05e-d56d-4b07-8d3a-f96399936528" containerName="mariadb-database-create" Jan 26 16:02:30 crc kubenswrapper[4896]: I0126 16:02:30.728964 
4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="c385b05e-d56d-4b07-8d3a-f96399936528" containerName="mariadb-database-create" Jan 26 16:02:30 crc kubenswrapper[4896]: E0126 16:02:30.728990 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5925428c-5669-44dc-92f7-e182c113fb11" containerName="mariadb-account-create-update" Jan 26 16:02:30 crc kubenswrapper[4896]: I0126 16:02:30.729000 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="5925428c-5669-44dc-92f7-e182c113fb11" containerName="mariadb-account-create-update" Jan 26 16:02:30 crc kubenswrapper[4896]: E0126 16:02:30.729034 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e6e5a4d-74d6-414a-ace8-b322f12e7e4c" containerName="mariadb-database-create" Jan 26 16:02:30 crc kubenswrapper[4896]: I0126 16:02:30.729042 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e6e5a4d-74d6-414a-ace8-b322f12e7e4c" containerName="mariadb-database-create" Jan 26 16:02:30 crc kubenswrapper[4896]: E0126 16:02:30.729054 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a5714bb-8439-4dbc-974b-bf04c6537695" containerName="mariadb-account-create-update" Jan 26 16:02:30 crc kubenswrapper[4896]: I0126 16:02:30.729062 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5714bb-8439-4dbc-974b-bf04c6537695" containerName="mariadb-account-create-update" Jan 26 16:02:30 crc kubenswrapper[4896]: E0126 16:02:30.729076 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9be68e33-e343-492f-913b-098163a26f87" containerName="heat-engine" Jan 26 16:02:30 crc kubenswrapper[4896]: I0126 16:02:30.729084 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="9be68e33-e343-492f-913b-098163a26f87" containerName="heat-engine" Jan 26 16:02:30 crc kubenswrapper[4896]: E0126 16:02:30.729102 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3db41110-40d9-49ad-9ba5-a7e94e433693" 
containerName="mariadb-account-create-update" Jan 26 16:02:30 crc kubenswrapper[4896]: I0126 16:02:30.729109 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="3db41110-40d9-49ad-9ba5-a7e94e433693" containerName="mariadb-account-create-update" Jan 26 16:02:30 crc kubenswrapper[4896]: I0126 16:02:30.729388 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="3db41110-40d9-49ad-9ba5-a7e94e433693" containerName="mariadb-account-create-update" Jan 26 16:02:30 crc kubenswrapper[4896]: I0126 16:02:30.729410 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="5925428c-5669-44dc-92f7-e182c113fb11" containerName="mariadb-account-create-update" Jan 26 16:02:30 crc kubenswrapper[4896]: I0126 16:02:30.729431 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a5714bb-8439-4dbc-974b-bf04c6537695" containerName="mariadb-account-create-update" Jan 26 16:02:30 crc kubenswrapper[4896]: I0126 16:02:30.729455 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="c385b05e-d56d-4b07-8d3a-f96399936528" containerName="mariadb-database-create" Jan 26 16:02:30 crc kubenswrapper[4896]: I0126 16:02:30.729466 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e6e5a4d-74d6-414a-ace8-b322f12e7e4c" containerName="mariadb-database-create" Jan 26 16:02:30 crc kubenswrapper[4896]: I0126 16:02:30.729478 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="8531512a-bfdc-47ff-ae60-182536cad417" containerName="mariadb-database-create" Jan 26 16:02:30 crc kubenswrapper[4896]: I0126 16:02:30.729497 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="9be68e33-e343-492f-913b-098163a26f87" containerName="heat-engine" Jan 26 16:02:30 crc kubenswrapper[4896]: I0126 16:02:30.731636 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-m5nn6" Jan 26 16:02:30 crc kubenswrapper[4896]: I0126 16:02:30.745245 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 26 16:02:30 crc kubenswrapper[4896]: I0126 16:02:30.766863 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 26 16:02:30 crc kubenswrapper[4896]: I0126 16:02:30.767463 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-fb2z7" Jan 26 16:02:30 crc kubenswrapper[4896]: I0126 16:02:30.770523 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aff0b31b-d778-4b0c-aa5a-c8b42d08e462-scripts\") pod \"nova-cell0-conductor-db-sync-m5nn6\" (UID: \"aff0b31b-d778-4b0c-aa5a-c8b42d08e462\") " pod="openstack/nova-cell0-conductor-db-sync-m5nn6" Jan 26 16:02:30 crc kubenswrapper[4896]: I0126 16:02:30.770676 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aff0b31b-d778-4b0c-aa5a-c8b42d08e462-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-m5nn6\" (UID: \"aff0b31b-d778-4b0c-aa5a-c8b42d08e462\") " pod="openstack/nova-cell0-conductor-db-sync-m5nn6" Jan 26 16:02:30 crc kubenswrapper[4896]: I0126 16:02:30.770777 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aff0b31b-d778-4b0c-aa5a-c8b42d08e462-config-data\") pod \"nova-cell0-conductor-db-sync-m5nn6\" (UID: \"aff0b31b-d778-4b0c-aa5a-c8b42d08e462\") " pod="openstack/nova-cell0-conductor-db-sync-m5nn6" Jan 26 16:02:30 crc kubenswrapper[4896]: I0126 16:02:30.770819 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-mkjb9\" (UniqueName: \"kubernetes.io/projected/aff0b31b-d778-4b0c-aa5a-c8b42d08e462-kube-api-access-mkjb9\") pod \"nova-cell0-conductor-db-sync-m5nn6\" (UID: \"aff0b31b-d778-4b0c-aa5a-c8b42d08e462\") " pod="openstack/nova-cell0-conductor-db-sync-m5nn6" Jan 26 16:02:30 crc kubenswrapper[4896]: I0126 16:02:30.807692 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-m5nn6"] Jan 26 16:02:30 crc kubenswrapper[4896]: I0126 16:02:30.871833 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aff0b31b-d778-4b0c-aa5a-c8b42d08e462-config-data\") pod \"nova-cell0-conductor-db-sync-m5nn6\" (UID: \"aff0b31b-d778-4b0c-aa5a-c8b42d08e462\") " pod="openstack/nova-cell0-conductor-db-sync-m5nn6" Jan 26 16:02:30 crc kubenswrapper[4896]: I0126 16:02:30.871894 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkjb9\" (UniqueName: \"kubernetes.io/projected/aff0b31b-d778-4b0c-aa5a-c8b42d08e462-kube-api-access-mkjb9\") pod \"nova-cell0-conductor-db-sync-m5nn6\" (UID: \"aff0b31b-d778-4b0c-aa5a-c8b42d08e462\") " pod="openstack/nova-cell0-conductor-db-sync-m5nn6" Jan 26 16:02:30 crc kubenswrapper[4896]: I0126 16:02:30.871982 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aff0b31b-d778-4b0c-aa5a-c8b42d08e462-scripts\") pod \"nova-cell0-conductor-db-sync-m5nn6\" (UID: \"aff0b31b-d778-4b0c-aa5a-c8b42d08e462\") " pod="openstack/nova-cell0-conductor-db-sync-m5nn6" Jan 26 16:02:30 crc kubenswrapper[4896]: I0126 16:02:30.872057 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aff0b31b-d778-4b0c-aa5a-c8b42d08e462-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-m5nn6\" (UID: \"aff0b31b-d778-4b0c-aa5a-c8b42d08e462\") " 
pod="openstack/nova-cell0-conductor-db-sync-m5nn6" Jan 26 16:02:30 crc kubenswrapper[4896]: I0126 16:02:30.877940 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aff0b31b-d778-4b0c-aa5a-c8b42d08e462-scripts\") pod \"nova-cell0-conductor-db-sync-m5nn6\" (UID: \"aff0b31b-d778-4b0c-aa5a-c8b42d08e462\") " pod="openstack/nova-cell0-conductor-db-sync-m5nn6" Jan 26 16:02:30 crc kubenswrapper[4896]: I0126 16:02:30.877952 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aff0b31b-d778-4b0c-aa5a-c8b42d08e462-config-data\") pod \"nova-cell0-conductor-db-sync-m5nn6\" (UID: \"aff0b31b-d778-4b0c-aa5a-c8b42d08e462\") " pod="openstack/nova-cell0-conductor-db-sync-m5nn6" Jan 26 16:02:30 crc kubenswrapper[4896]: I0126 16:02:30.881094 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aff0b31b-d778-4b0c-aa5a-c8b42d08e462-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-m5nn6\" (UID: \"aff0b31b-d778-4b0c-aa5a-c8b42d08e462\") " pod="openstack/nova-cell0-conductor-db-sync-m5nn6" Jan 26 16:02:30 crc kubenswrapper[4896]: I0126 16:02:30.894392 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkjb9\" (UniqueName: \"kubernetes.io/projected/aff0b31b-d778-4b0c-aa5a-c8b42d08e462-kube-api-access-mkjb9\") pod \"nova-cell0-conductor-db-sync-m5nn6\" (UID: \"aff0b31b-d778-4b0c-aa5a-c8b42d08e462\") " pod="openstack/nova-cell0-conductor-db-sync-m5nn6" Jan 26 16:02:31 crc kubenswrapper[4896]: I0126 16:02:31.065540 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-m5nn6" Jan 26 16:02:31 crc kubenswrapper[4896]: I0126 16:02:31.769263 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 26 16:02:31 crc kubenswrapper[4896]: I0126 16:02:31.769767 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 26 16:02:31 crc kubenswrapper[4896]: I0126 16:02:31.821109 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 16:02:31 crc kubenswrapper[4896]: I0126 16:02:31.830462 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 16:02:32 crc kubenswrapper[4896]: W0126 16:02:32.002664 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaff0b31b_d778_4b0c_aa5a_c8b42d08e462.slice/crio-221790a8be9484d0bcea077cf9e21e8b1cb8b632afc2bdfc4c3cd71630f92294 WatchSource:0}: Error finding container 221790a8be9484d0bcea077cf9e21e8b1cb8b632afc2bdfc4c3cd71630f92294: Status 404 returned error can't find the container with id 221790a8be9484d0bcea077cf9e21e8b1cb8b632afc2bdfc4c3cd71630f92294 Jan 26 16:02:32 crc kubenswrapper[4896]: I0126 16:02:32.008735 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-m5nn6"] Jan 26 16:02:32 crc kubenswrapper[4896]: I0126 16:02:32.564169 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-m5nn6" event={"ID":"aff0b31b-d778-4b0c-aa5a-c8b42d08e462","Type":"ContainerStarted","Data":"221790a8be9484d0bcea077cf9e21e8b1cb8b632afc2bdfc4c3cd71630f92294"} Jan 26 16:02:32 crc kubenswrapper[4896]: I0126 16:02:32.564559 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/glance-default-external-api-0" Jan 26 16:02:32 crc kubenswrapper[4896]: I0126 16:02:32.564592 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 16:02:33 crc kubenswrapper[4896]: I0126 16:02:33.764884 4896 scope.go:117] "RemoveContainer" containerID="eef508224f0cdcfb0579b0234e72c3c5503ce5cf1713a9bee24c9feccf4983cb" Jan 26 16:02:33 crc kubenswrapper[4896]: E0126 16:02:33.765675 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:02:35 crc kubenswrapper[4896]: I0126 16:02:35.135161 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 26 16:02:35 crc kubenswrapper[4896]: I0126 16:02:35.135281 4896 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 16:02:35 crc kubenswrapper[4896]: I0126 16:02:35.375288 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 26 16:02:41 crc kubenswrapper[4896]: I0126 16:02:41.817781 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:02:41 crc kubenswrapper[4896]: I0126 16:02:41.818704 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5fb76b00-4d78-4fe3-a01a-78b4622040e4" containerName="ceilometer-central-agent" containerID="cri-o://97da7c66fb1cefd52174b393ae0e840fd46b93e14fbd5856a8ec788441018366" gracePeriod=30 Jan 26 16:02:41 crc kubenswrapper[4896]: I0126 16:02:41.818873 4896 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5fb76b00-4d78-4fe3-a01a-78b4622040e4" containerName="proxy-httpd" containerID="cri-o://e0ca641dcca7c18bda9e0a084fa4137604f8a7d74af6b3f26b5c751a9b44ff23" gracePeriod=30 Jan 26 16:02:41 crc kubenswrapper[4896]: I0126 16:02:41.818922 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5fb76b00-4d78-4fe3-a01a-78b4622040e4" containerName="sg-core" containerID="cri-o://1df237d1f89e97b4531d2cc854266c1a957d19fa7e42c3afddb28672df38fb57" gracePeriod=30 Jan 26 16:02:41 crc kubenswrapper[4896]: I0126 16:02:41.818962 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5fb76b00-4d78-4fe3-a01a-78b4622040e4" containerName="ceilometer-notification-agent" containerID="cri-o://a4ba90cf5a8bca5df0ca77c84774e8bf53c77765127b50d6c64486d1cbbbbbe9" gracePeriod=30 Jan 26 16:02:41 crc kubenswrapper[4896]: I0126 16:02:41.923945 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="5fb76b00-4d78-4fe3-a01a-78b4622040e4" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.231:3000/\": read tcp 10.217.0.2:52232->10.217.0.231:3000: read: connection reset by peer" Jan 26 16:02:42 crc kubenswrapper[4896]: I0126 16:02:42.750916 4896 generic.go:334] "Generic (PLEG): container finished" podID="5fb76b00-4d78-4fe3-a01a-78b4622040e4" containerID="e0ca641dcca7c18bda9e0a084fa4137604f8a7d74af6b3f26b5c751a9b44ff23" exitCode=0 Jan 26 16:02:42 crc kubenswrapper[4896]: I0126 16:02:42.751179 4896 generic.go:334] "Generic (PLEG): container finished" podID="5fb76b00-4d78-4fe3-a01a-78b4622040e4" containerID="1df237d1f89e97b4531d2cc854266c1a957d19fa7e42c3afddb28672df38fb57" exitCode=2 Jan 26 16:02:42 crc kubenswrapper[4896]: I0126 16:02:42.751188 4896 generic.go:334] "Generic (PLEG): container finished" 
podID="5fb76b00-4d78-4fe3-a01a-78b4622040e4" containerID="97da7c66fb1cefd52174b393ae0e840fd46b93e14fbd5856a8ec788441018366" exitCode=0 Jan 26 16:02:42 crc kubenswrapper[4896]: I0126 16:02:42.751021 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5fb76b00-4d78-4fe3-a01a-78b4622040e4","Type":"ContainerDied","Data":"e0ca641dcca7c18bda9e0a084fa4137604f8a7d74af6b3f26b5c751a9b44ff23"} Jan 26 16:02:42 crc kubenswrapper[4896]: I0126 16:02:42.751221 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5fb76b00-4d78-4fe3-a01a-78b4622040e4","Type":"ContainerDied","Data":"1df237d1f89e97b4531d2cc854266c1a957d19fa7e42c3afddb28672df38fb57"} Jan 26 16:02:42 crc kubenswrapper[4896]: I0126 16:02:42.751234 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5fb76b00-4d78-4fe3-a01a-78b4622040e4","Type":"ContainerDied","Data":"97da7c66fb1cefd52174b393ae0e840fd46b93e14fbd5856a8ec788441018366"} Jan 26 16:02:42 crc kubenswrapper[4896]: I0126 16:02:42.802721 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="5fb76b00-4d78-4fe3-a01a-78b4622040e4" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.231:3000/\": dial tcp 10.217.0.231:3000: connect: connection refused" Jan 26 16:02:44 crc kubenswrapper[4896]: I0126 16:02:44.790856 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-m5nn6" event={"ID":"aff0b31b-d778-4b0c-aa5a-c8b42d08e462","Type":"ContainerStarted","Data":"85efda1bab1bcebe3dde1ed1984a3d6aaf3b8b090dadceebcc9561c50676df3c"} Jan 26 16:02:44 crc kubenswrapper[4896]: I0126 16:02:44.817310 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-m5nn6" podStartSLOduration=3.205358799 podStartE2EDuration="14.817294616s" podCreationTimestamp="2026-01-26 16:02:30 +0000 UTC" 
firstStartedPulling="2026-01-26 16:02:32.004697803 +0000 UTC m=+1709.786578196" lastFinishedPulling="2026-01-26 16:02:43.61663362 +0000 UTC m=+1721.398514013" observedRunningTime="2026-01-26 16:02:44.807607525 +0000 UTC m=+1722.589487918" watchObservedRunningTime="2026-01-26 16:02:44.817294616 +0000 UTC m=+1722.599175009" Jan 26 16:02:46 crc kubenswrapper[4896]: I0126 16:02:46.821407 4896 generic.go:334] "Generic (PLEG): container finished" podID="5fb76b00-4d78-4fe3-a01a-78b4622040e4" containerID="a4ba90cf5a8bca5df0ca77c84774e8bf53c77765127b50d6c64486d1cbbbbbe9" exitCode=0 Jan 26 16:02:46 crc kubenswrapper[4896]: I0126 16:02:46.821628 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5fb76b00-4d78-4fe3-a01a-78b4622040e4","Type":"ContainerDied","Data":"a4ba90cf5a8bca5df0ca77c84774e8bf53c77765127b50d6c64486d1cbbbbbe9"} Jan 26 16:02:46 crc kubenswrapper[4896]: I0126 16:02:46.821880 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5fb76b00-4d78-4fe3-a01a-78b4622040e4","Type":"ContainerDied","Data":"bd209784af1624408418ecfd34f1bc7a28edc355c93b3bfee70f99f56e615c89"} Jan 26 16:02:46 crc kubenswrapper[4896]: I0126 16:02:46.821901 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd209784af1624408418ecfd34f1bc7a28edc355c93b3bfee70f99f56e615c89" Jan 26 16:02:46 crc kubenswrapper[4896]: I0126 16:02:46.914342 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 16:02:47 crc kubenswrapper[4896]: I0126 16:02:47.017137 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcq65\" (UniqueName: \"kubernetes.io/projected/5fb76b00-4d78-4fe3-a01a-78b4622040e4-kube-api-access-tcq65\") pod \"5fb76b00-4d78-4fe3-a01a-78b4622040e4\" (UID: \"5fb76b00-4d78-4fe3-a01a-78b4622040e4\") "
Jan 26 16:02:47 crc kubenswrapper[4896]: I0126 16:02:47.017230 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5fb76b00-4d78-4fe3-a01a-78b4622040e4-log-httpd\") pod \"5fb76b00-4d78-4fe3-a01a-78b4622040e4\" (UID: \"5fb76b00-4d78-4fe3-a01a-78b4622040e4\") "
Jan 26 16:02:47 crc kubenswrapper[4896]: I0126 16:02:47.017334 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fb76b00-4d78-4fe3-a01a-78b4622040e4-config-data\") pod \"5fb76b00-4d78-4fe3-a01a-78b4622040e4\" (UID: \"5fb76b00-4d78-4fe3-a01a-78b4622040e4\") "
Jan 26 16:02:47 crc kubenswrapper[4896]: I0126 16:02:47.017496 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5fb76b00-4d78-4fe3-a01a-78b4622040e4-run-httpd\") pod \"5fb76b00-4d78-4fe3-a01a-78b4622040e4\" (UID: \"5fb76b00-4d78-4fe3-a01a-78b4622040e4\") "
Jan 26 16:02:47 crc kubenswrapper[4896]: I0126 16:02:47.017524 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fb76b00-4d78-4fe3-a01a-78b4622040e4-combined-ca-bundle\") pod \"5fb76b00-4d78-4fe3-a01a-78b4622040e4\" (UID: \"5fb76b00-4d78-4fe3-a01a-78b4622040e4\") "
Jan 26 16:02:47 crc kubenswrapper[4896]: I0126 16:02:47.017625 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fb76b00-4d78-4fe3-a01a-78b4622040e4-scripts\") pod \"5fb76b00-4d78-4fe3-a01a-78b4622040e4\" (UID: \"5fb76b00-4d78-4fe3-a01a-78b4622040e4\") "
Jan 26 16:02:47 crc kubenswrapper[4896]: I0126 16:02:47.017658 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5fb76b00-4d78-4fe3-a01a-78b4622040e4-sg-core-conf-yaml\") pod \"5fb76b00-4d78-4fe3-a01a-78b4622040e4\" (UID: \"5fb76b00-4d78-4fe3-a01a-78b4622040e4\") "
Jan 26 16:02:47 crc kubenswrapper[4896]: I0126 16:02:47.017980 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fb76b00-4d78-4fe3-a01a-78b4622040e4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5fb76b00-4d78-4fe3-a01a-78b4622040e4" (UID: "5fb76b00-4d78-4fe3-a01a-78b4622040e4"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 16:02:47 crc kubenswrapper[4896]: I0126 16:02:47.018364 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fb76b00-4d78-4fe3-a01a-78b4622040e4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5fb76b00-4d78-4fe3-a01a-78b4622040e4" (UID: "5fb76b00-4d78-4fe3-a01a-78b4622040e4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 16:02:47 crc kubenswrapper[4896]: I0126 16:02:47.018784 4896 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5fb76b00-4d78-4fe3-a01a-78b4622040e4-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 26 16:02:47 crc kubenswrapper[4896]: I0126 16:02:47.018810 4896 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5fb76b00-4d78-4fe3-a01a-78b4622040e4-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 26 16:02:47 crc kubenswrapper[4896]: I0126 16:02:47.022613 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fb76b00-4d78-4fe3-a01a-78b4622040e4-kube-api-access-tcq65" (OuterVolumeSpecName: "kube-api-access-tcq65") pod "5fb76b00-4d78-4fe3-a01a-78b4622040e4" (UID: "5fb76b00-4d78-4fe3-a01a-78b4622040e4"). InnerVolumeSpecName "kube-api-access-tcq65". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:02:47 crc kubenswrapper[4896]: I0126 16:02:47.023768 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fb76b00-4d78-4fe3-a01a-78b4622040e4-scripts" (OuterVolumeSpecName: "scripts") pod "5fb76b00-4d78-4fe3-a01a-78b4622040e4" (UID: "5fb76b00-4d78-4fe3-a01a-78b4622040e4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:02:47 crc kubenswrapper[4896]: I0126 16:02:47.051750 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fb76b00-4d78-4fe3-a01a-78b4622040e4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5fb76b00-4d78-4fe3-a01a-78b4622040e4" (UID: "5fb76b00-4d78-4fe3-a01a-78b4622040e4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:02:47 crc kubenswrapper[4896]: I0126 16:02:47.121499 4896 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fb76b00-4d78-4fe3-a01a-78b4622040e4-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 16:02:47 crc kubenswrapper[4896]: I0126 16:02:47.121537 4896 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5fb76b00-4d78-4fe3-a01a-78b4622040e4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 26 16:02:47 crc kubenswrapper[4896]: I0126 16:02:47.121554 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcq65\" (UniqueName: \"kubernetes.io/projected/5fb76b00-4d78-4fe3-a01a-78b4622040e4-kube-api-access-tcq65\") on node \"crc\" DevicePath \"\""
Jan 26 16:02:47 crc kubenswrapper[4896]: I0126 16:02:47.128901 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fb76b00-4d78-4fe3-a01a-78b4622040e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5fb76b00-4d78-4fe3-a01a-78b4622040e4" (UID: "5fb76b00-4d78-4fe3-a01a-78b4622040e4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:02:47 crc kubenswrapper[4896]: I0126 16:02:47.172466 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fb76b00-4d78-4fe3-a01a-78b4622040e4-config-data" (OuterVolumeSpecName: "config-data") pod "5fb76b00-4d78-4fe3-a01a-78b4622040e4" (UID: "5fb76b00-4d78-4fe3-a01a-78b4622040e4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:02:47 crc kubenswrapper[4896]: I0126 16:02:47.224076 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fb76b00-4d78-4fe3-a01a-78b4622040e4-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 16:02:47 crc kubenswrapper[4896]: I0126 16:02:47.224119 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fb76b00-4d78-4fe3-a01a-78b4622040e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 16:02:47 crc kubenswrapper[4896]: I0126 16:02:47.759389 4896 scope.go:117] "RemoveContainer" containerID="eef508224f0cdcfb0579b0234e72c3c5503ce5cf1713a9bee24c9feccf4983cb"
Jan 26 16:02:47 crc kubenswrapper[4896]: E0126 16:02:47.759799 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 16:02:47 crc kubenswrapper[4896]: I0126 16:02:47.831530 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 16:02:47 crc kubenswrapper[4896]: I0126 16:02:47.889870 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 16:02:47 crc kubenswrapper[4896]: I0126 16:02:47.905223 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 16:02:47 crc kubenswrapper[4896]: I0126 16:02:47.933176 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 26 16:02:47 crc kubenswrapper[4896]: E0126 16:02:47.934910 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fb76b00-4d78-4fe3-a01a-78b4622040e4" containerName="ceilometer-central-agent"
Jan 26 16:02:47 crc kubenswrapper[4896]: I0126 16:02:47.934933 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fb76b00-4d78-4fe3-a01a-78b4622040e4" containerName="ceilometer-central-agent"
Jan 26 16:02:47 crc kubenswrapper[4896]: E0126 16:02:47.934957 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fb76b00-4d78-4fe3-a01a-78b4622040e4" containerName="sg-core"
Jan 26 16:02:47 crc kubenswrapper[4896]: I0126 16:02:47.934965 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fb76b00-4d78-4fe3-a01a-78b4622040e4" containerName="sg-core"
Jan 26 16:02:47 crc kubenswrapper[4896]: E0126 16:02:47.934982 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fb76b00-4d78-4fe3-a01a-78b4622040e4" containerName="ceilometer-notification-agent"
Jan 26 16:02:47 crc kubenswrapper[4896]: I0126 16:02:47.934992 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fb76b00-4d78-4fe3-a01a-78b4622040e4" containerName="ceilometer-notification-agent"
Jan 26 16:02:47 crc kubenswrapper[4896]: E0126 16:02:47.935007 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fb76b00-4d78-4fe3-a01a-78b4622040e4" containerName="proxy-httpd"
Jan 26 16:02:47 crc kubenswrapper[4896]: I0126 16:02:47.935014 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fb76b00-4d78-4fe3-a01a-78b4622040e4" containerName="proxy-httpd"
Jan 26 16:02:47 crc kubenswrapper[4896]: I0126 16:02:47.935298 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fb76b00-4d78-4fe3-a01a-78b4622040e4" containerName="ceilometer-notification-agent"
Jan 26 16:02:47 crc kubenswrapper[4896]: I0126 16:02:47.935324 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fb76b00-4d78-4fe3-a01a-78b4622040e4" containerName="ceilometer-central-agent"
Jan 26 16:02:47 crc kubenswrapper[4896]: I0126 16:02:47.935338 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fb76b00-4d78-4fe3-a01a-78b4622040e4" containerName="sg-core"
Jan 26 16:02:47 crc kubenswrapper[4896]: I0126 16:02:47.935979 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fb76b00-4d78-4fe3-a01a-78b4622040e4" containerName="proxy-httpd"
Jan 26 16:02:47 crc kubenswrapper[4896]: I0126 16:02:47.940157 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 16:02:47 crc kubenswrapper[4896]: I0126 16:02:47.944600 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 16:02:47 crc kubenswrapper[4896]: I0126 16:02:47.945133 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 26 16:02:47 crc kubenswrapper[4896]: I0126 16:02:47.945389 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 26 16:02:48 crc kubenswrapper[4896]: I0126 16:02:48.040890 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d0aa4fb-f613-494e-9714-e42d73efd11f-run-httpd\") pod \"ceilometer-0\" (UID: \"9d0aa4fb-f613-494e-9714-e42d73efd11f\") " pod="openstack/ceilometer-0"
Jan 26 16:02:48 crc kubenswrapper[4896]: I0126 16:02:48.040963 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b5ff\" (UniqueName: \"kubernetes.io/projected/9d0aa4fb-f613-494e-9714-e42d73efd11f-kube-api-access-2b5ff\") pod \"ceilometer-0\" (UID: \"9d0aa4fb-f613-494e-9714-e42d73efd11f\") " pod="openstack/ceilometer-0"
Jan 26 16:02:48 crc kubenswrapper[4896]: I0126 16:02:48.041180 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d0aa4fb-f613-494e-9714-e42d73efd11f-log-httpd\") pod \"ceilometer-0\" (UID: \"9d0aa4fb-f613-494e-9714-e42d73efd11f\") " pod="openstack/ceilometer-0"
Jan 26 16:02:48 crc kubenswrapper[4896]: I0126 16:02:48.041431 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d0aa4fb-f613-494e-9714-e42d73efd11f-scripts\") pod \"ceilometer-0\" (UID: \"9d0aa4fb-f613-494e-9714-e42d73efd11f\") " pod="openstack/ceilometer-0"
Jan 26 16:02:48 crc kubenswrapper[4896]: I0126 16:02:48.041513 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d0aa4fb-f613-494e-9714-e42d73efd11f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9d0aa4fb-f613-494e-9714-e42d73efd11f\") " pod="openstack/ceilometer-0"
Jan 26 16:02:48 crc kubenswrapper[4896]: I0126 16:02:48.041553 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9d0aa4fb-f613-494e-9714-e42d73efd11f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9d0aa4fb-f613-494e-9714-e42d73efd11f\") " pod="openstack/ceilometer-0"
Jan 26 16:02:48 crc kubenswrapper[4896]: I0126 16:02:48.041673 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d0aa4fb-f613-494e-9714-e42d73efd11f-config-data\") pod \"ceilometer-0\" (UID: \"9d0aa4fb-f613-494e-9714-e42d73efd11f\") " pod="openstack/ceilometer-0"
Jan 26 16:02:48 crc kubenswrapper[4896]: I0126 16:02:48.145740 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d0aa4fb-f613-494e-9714-e42d73efd11f-run-httpd\") pod \"ceilometer-0\" (UID: \"9d0aa4fb-f613-494e-9714-e42d73efd11f\") " pod="openstack/ceilometer-0"
Jan 26 16:02:48 crc kubenswrapper[4896]: I0126 16:02:48.145796 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b5ff\" (UniqueName: \"kubernetes.io/projected/9d0aa4fb-f613-494e-9714-e42d73efd11f-kube-api-access-2b5ff\") pod \"ceilometer-0\" (UID: \"9d0aa4fb-f613-494e-9714-e42d73efd11f\") " pod="openstack/ceilometer-0"
Jan 26 16:02:48 crc kubenswrapper[4896]: I0126 16:02:48.145854 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d0aa4fb-f613-494e-9714-e42d73efd11f-log-httpd\") pod \"ceilometer-0\" (UID: \"9d0aa4fb-f613-494e-9714-e42d73efd11f\") " pod="openstack/ceilometer-0"
Jan 26 16:02:48 crc kubenswrapper[4896]: I0126 16:02:48.145945 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d0aa4fb-f613-494e-9714-e42d73efd11f-scripts\") pod \"ceilometer-0\" (UID: \"9d0aa4fb-f613-494e-9714-e42d73efd11f\") " pod="openstack/ceilometer-0"
Jan 26 16:02:48 crc kubenswrapper[4896]: I0126 16:02:48.145991 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d0aa4fb-f613-494e-9714-e42d73efd11f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9d0aa4fb-f613-494e-9714-e42d73efd11f\") " pod="openstack/ceilometer-0"
Jan 26 16:02:48 crc kubenswrapper[4896]: I0126 16:02:48.146015 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9d0aa4fb-f613-494e-9714-e42d73efd11f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9d0aa4fb-f613-494e-9714-e42d73efd11f\") " pod="openstack/ceilometer-0"
Jan 26 16:02:48 crc kubenswrapper[4896]: I0126 16:02:48.146061 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d0aa4fb-f613-494e-9714-e42d73efd11f-config-data\") pod \"ceilometer-0\" (UID: \"9d0aa4fb-f613-494e-9714-e42d73efd11f\") " pod="openstack/ceilometer-0"
Jan 26 16:02:48 crc kubenswrapper[4896]: I0126 16:02:48.146270 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d0aa4fb-f613-494e-9714-e42d73efd11f-run-httpd\") pod \"ceilometer-0\" (UID: \"9d0aa4fb-f613-494e-9714-e42d73efd11f\") " pod="openstack/ceilometer-0"
Jan 26 16:02:48 crc kubenswrapper[4896]: I0126 16:02:48.146817 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d0aa4fb-f613-494e-9714-e42d73efd11f-log-httpd\") pod \"ceilometer-0\" (UID: \"9d0aa4fb-f613-494e-9714-e42d73efd11f\") " pod="openstack/ceilometer-0"
Jan 26 16:02:48 crc kubenswrapper[4896]: I0126 16:02:48.151558 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d0aa4fb-f613-494e-9714-e42d73efd11f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9d0aa4fb-f613-494e-9714-e42d73efd11f\") " pod="openstack/ceilometer-0"
Jan 26 16:02:48 crc kubenswrapper[4896]: I0126 16:02:48.153332 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d0aa4fb-f613-494e-9714-e42d73efd11f-config-data\") pod \"ceilometer-0\" (UID: \"9d0aa4fb-f613-494e-9714-e42d73efd11f\") " pod="openstack/ceilometer-0"
Jan 26 16:02:48 crc kubenswrapper[4896]: I0126 16:02:48.156143 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9d0aa4fb-f613-494e-9714-e42d73efd11f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9d0aa4fb-f613-494e-9714-e42d73efd11f\") " pod="openstack/ceilometer-0"
Jan 26 16:02:48 crc kubenswrapper[4896]: I0126 16:02:48.161272 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d0aa4fb-f613-494e-9714-e42d73efd11f-scripts\") pod \"ceilometer-0\" (UID: \"9d0aa4fb-f613-494e-9714-e42d73efd11f\") " pod="openstack/ceilometer-0"
Jan 26 16:02:48 crc kubenswrapper[4896]: I0126 16:02:48.177676 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b5ff\" (UniqueName: \"kubernetes.io/projected/9d0aa4fb-f613-494e-9714-e42d73efd11f-kube-api-access-2b5ff\") pod \"ceilometer-0\" (UID: \"9d0aa4fb-f613-494e-9714-e42d73efd11f\") " pod="openstack/ceilometer-0"
Jan 26 16:02:48 crc kubenswrapper[4896]: I0126 16:02:48.267711 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 16:02:49 crc kubenswrapper[4896]: I0126 16:02:48.775532 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fb76b00-4d78-4fe3-a01a-78b4622040e4" path="/var/lib/kubelet/pods/5fb76b00-4d78-4fe3-a01a-78b4622040e4/volumes"
Jan 26 16:02:49 crc kubenswrapper[4896]: I0126 16:02:48.885749 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 16:02:49 crc kubenswrapper[4896]: W0126 16:02:48.885943 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d0aa4fb_f613_494e_9714_e42d73efd11f.slice/crio-239c53f207354cbc72122787da2af18b8c5ac865fb74e5fd43944905388a6863 WatchSource:0}: Error finding container 239c53f207354cbc72122787da2af18b8c5ac865fb74e5fd43944905388a6863: Status 404 returned error can't find the container with id 239c53f207354cbc72122787da2af18b8c5ac865fb74e5fd43944905388a6863
Jan 26 16:02:49 crc kubenswrapper[4896]: I0126 16:02:49.876250 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d0aa4fb-f613-494e-9714-e42d73efd11f","Type":"ContainerStarted","Data":"9d10eee317ef3d076adbbd8999d07bc3a49fc19384c085a915e4bc62535b3cd7"}
Jan 26 16:02:49 crc kubenswrapper[4896]: I0126 16:02:49.876860 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d0aa4fb-f613-494e-9714-e42d73efd11f","Type":"ContainerStarted","Data":"239c53f207354cbc72122787da2af18b8c5ac865fb74e5fd43944905388a6863"}
Jan 26 16:02:50 crc kubenswrapper[4896]: I0126 16:02:50.891407 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d0aa4fb-f613-494e-9714-e42d73efd11f","Type":"ContainerStarted","Data":"0367c8c48b5c5501d31486b209e64060971173fe82aa3dc7b44b7761f7ae87a0"}
Jan 26 16:02:51 crc kubenswrapper[4896]: I0126 16:02:51.906524 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d0aa4fb-f613-494e-9714-e42d73efd11f","Type":"ContainerStarted","Data":"9db8c6e1cfb7a0a9028db1e0a3288cdb2696eec501d323418412a6d4a974fcfe"}
Jan 26 16:02:53 crc kubenswrapper[4896]: I0126 16:02:53.930558 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d0aa4fb-f613-494e-9714-e42d73efd11f","Type":"ContainerStarted","Data":"c60ebdeb9974d5dc54c38a3fa957ab0c26e5489c2372756e998e7c646ea7f4b2"}
Jan 26 16:02:53 crc kubenswrapper[4896]: I0126 16:02:53.931190 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 26 16:02:53 crc kubenswrapper[4896]: I0126 16:02:53.973137 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.790020332 podStartE2EDuration="6.973110658s" podCreationTimestamp="2026-01-26 16:02:47 +0000 UTC" firstStartedPulling="2026-01-26 16:02:48.903794485 +0000 UTC m=+1726.685674878" lastFinishedPulling="2026-01-26 16:02:53.086884811 +0000 UTC m=+1730.868765204" observedRunningTime="2026-01-26 16:02:53.955778164 +0000 UTC m=+1731.737658557" watchObservedRunningTime="2026-01-26 16:02:53.973110658 +0000 UTC m=+1731.754991061"
Jan 26 16:02:57 crc kubenswrapper[4896]: I0126 16:02:57.865047 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-75c68767d8-c7w2z"
Jan 26 16:02:57 crc kubenswrapper[4896]: I0126 16:02:57.877487 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-545f7c69fd-hm6nm"
Jan 26 16:02:57 crc kubenswrapper[4896]: I0126 16:02:57.990890 4896 generic.go:334] "Generic (PLEG): container finished" podID="1b2e158c-acf0-4642-b31b-4db17087c69c" containerID="e3311969d3540617ba35ef73fe7be5873b45e60d962fdd7336c2a7513da28e96" exitCode=137
Jan 26 16:02:57 crc kubenswrapper[4896]: I0126 16:02:57.991064 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-545f7c69fd-hm6nm"
Jan 26 16:02:57 crc kubenswrapper[4896]: I0126 16:02:57.992236 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-545f7c69fd-hm6nm" event={"ID":"1b2e158c-acf0-4642-b31b-4db17087c69c","Type":"ContainerDied","Data":"e3311969d3540617ba35ef73fe7be5873b45e60d962fdd7336c2a7513da28e96"}
Jan 26 16:02:57 crc kubenswrapper[4896]: I0126 16:02:57.992306 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-545f7c69fd-hm6nm" event={"ID":"1b2e158c-acf0-4642-b31b-4db17087c69c","Type":"ContainerDied","Data":"6456233a6aec9f0ae195c0d47e3198982a6840eebd5ad6900c5b532536add0c4"}
Jan 26 16:02:57 crc kubenswrapper[4896]: I0126 16:02:57.992333 4896 scope.go:117] "RemoveContainer" containerID="e3311969d3540617ba35ef73fe7be5873b45e60d962fdd7336c2a7513da28e96"
Jan 26 16:02:57 crc kubenswrapper[4896]: I0126 16:02:57.995933 4896 generic.go:334] "Generic (PLEG): container finished" podID="aff0b31b-d778-4b0c-aa5a-c8b42d08e462" containerID="85efda1bab1bcebe3dde1ed1984a3d6aaf3b8b090dadceebcc9561c50676df3c" exitCode=0
Jan 26 16:02:57 crc kubenswrapper[4896]: I0126 16:02:57.996054 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-m5nn6" event={"ID":"aff0b31b-d778-4b0c-aa5a-c8b42d08e462","Type":"ContainerDied","Data":"85efda1bab1bcebe3dde1ed1984a3d6aaf3b8b090dadceebcc9561c50676df3c"}
Jan 26 16:02:58 crc kubenswrapper[4896]: I0126 16:02:58.000886 4896 generic.go:334] "Generic (PLEG): container finished" podID="38c3367b-9b2c-40a4-841d-7b815bbfd45a" containerID="e4aa0ca0ef5f297b565aa96482f05639744972ffb3541d600bc62528a3eb67d5" exitCode=137
Jan 26 16:02:58 crc kubenswrapper[4896]: I0126 16:02:58.000939 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-75c68767d8-c7w2z" event={"ID":"38c3367b-9b2c-40a4-841d-7b815bbfd45a","Type":"ContainerDied","Data":"e4aa0ca0ef5f297b565aa96482f05639744972ffb3541d600bc62528a3eb67d5"}
Jan 26 16:02:58 crc kubenswrapper[4896]: I0126 16:02:58.000967 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-75c68767d8-c7w2z" event={"ID":"38c3367b-9b2c-40a4-841d-7b815bbfd45a","Type":"ContainerDied","Data":"663245645bec25efbc5c91a8acbfefee4b0984c7d1018da121e92d8472680457"}
Jan 26 16:02:58 crc kubenswrapper[4896]: I0126 16:02:58.001016 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-75c68767d8-c7w2z"
Jan 26 16:02:58 crc kubenswrapper[4896]: I0126 16:02:58.025903 4896 scope.go:117] "RemoveContainer" containerID="e3311969d3540617ba35ef73fe7be5873b45e60d962fdd7336c2a7513da28e96"
Jan 26 16:02:58 crc kubenswrapper[4896]: E0126 16:02:58.026439 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3311969d3540617ba35ef73fe7be5873b45e60d962fdd7336c2a7513da28e96\": container with ID starting with e3311969d3540617ba35ef73fe7be5873b45e60d962fdd7336c2a7513da28e96 not found: ID does not exist" containerID="e3311969d3540617ba35ef73fe7be5873b45e60d962fdd7336c2a7513da28e96"
Jan 26 16:02:58 crc kubenswrapper[4896]: I0126 16:02:58.026483 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3311969d3540617ba35ef73fe7be5873b45e60d962fdd7336c2a7513da28e96"} err="failed to get container status \"e3311969d3540617ba35ef73fe7be5873b45e60d962fdd7336c2a7513da28e96\": rpc error: code = NotFound desc = could not find container \"e3311969d3540617ba35ef73fe7be5873b45e60d962fdd7336c2a7513da28e96\": container with ID starting with e3311969d3540617ba35ef73fe7be5873b45e60d962fdd7336c2a7513da28e96 not found: ID does not exist"
Jan 26 16:02:58 crc kubenswrapper[4896]: I0126 16:02:58.026508 4896 scope.go:117] "RemoveContainer" containerID="e4aa0ca0ef5f297b565aa96482f05639744972ffb3541d600bc62528a3eb67d5"
Jan 26 16:02:58 crc kubenswrapper[4896]: I0126 16:02:58.043121 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38c3367b-9b2c-40a4-841d-7b815bbfd45a-config-data-custom\") pod \"38c3367b-9b2c-40a4-841d-7b815bbfd45a\" (UID: \"38c3367b-9b2c-40a4-841d-7b815bbfd45a\") "
Jan 26 16:02:58 crc kubenswrapper[4896]: I0126 16:02:58.043207 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kv6tr\" (UniqueName: \"kubernetes.io/projected/1b2e158c-acf0-4642-b31b-4db17087c69c-kube-api-access-kv6tr\") pod \"1b2e158c-acf0-4642-b31b-4db17087c69c\" (UID: \"1b2e158c-acf0-4642-b31b-4db17087c69c\") "
Jan 26 16:02:58 crc kubenswrapper[4896]: I0126 16:02:58.043238 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38c3367b-9b2c-40a4-841d-7b815bbfd45a-config-data\") pod \"38c3367b-9b2c-40a4-841d-7b815bbfd45a\" (UID: \"38c3367b-9b2c-40a4-841d-7b815bbfd45a\") "
Jan 26 16:02:58 crc kubenswrapper[4896]: I0126 16:02:58.043446 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rtdv\" (UniqueName: \"kubernetes.io/projected/38c3367b-9b2c-40a4-841d-7b815bbfd45a-kube-api-access-4rtdv\") pod \"38c3367b-9b2c-40a4-841d-7b815bbfd45a\" (UID: \"38c3367b-9b2c-40a4-841d-7b815bbfd45a\") "
Jan 26 16:02:58 crc kubenswrapper[4896]: I0126 16:02:58.043536 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b2e158c-acf0-4642-b31b-4db17087c69c-config-data\") pod \"1b2e158c-acf0-4642-b31b-4db17087c69c\" (UID: \"1b2e158c-acf0-4642-b31b-4db17087c69c\") "
Jan 26 16:02:58 crc kubenswrapper[4896]: I0126 16:02:58.043738 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b2e158c-acf0-4642-b31b-4db17087c69c-combined-ca-bundle\") pod \"1b2e158c-acf0-4642-b31b-4db17087c69c\" (UID: \"1b2e158c-acf0-4642-b31b-4db17087c69c\") "
Jan 26 16:02:58 crc kubenswrapper[4896]: I0126 16:02:58.043810 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b2e158c-acf0-4642-b31b-4db17087c69c-config-data-custom\") pod \"1b2e158c-acf0-4642-b31b-4db17087c69c\" (UID: \"1b2e158c-acf0-4642-b31b-4db17087c69c\") "
Jan 26 16:02:58 crc kubenswrapper[4896]: I0126 16:02:58.043857 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38c3367b-9b2c-40a4-841d-7b815bbfd45a-combined-ca-bundle\") pod \"38c3367b-9b2c-40a4-841d-7b815bbfd45a\" (UID: \"38c3367b-9b2c-40a4-841d-7b815bbfd45a\") "
Jan 26 16:02:58 crc kubenswrapper[4896]: I0126 16:02:58.049374 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38c3367b-9b2c-40a4-841d-7b815bbfd45a-kube-api-access-4rtdv" (OuterVolumeSpecName: "kube-api-access-4rtdv") pod "38c3367b-9b2c-40a4-841d-7b815bbfd45a" (UID: "38c3367b-9b2c-40a4-841d-7b815bbfd45a"). InnerVolumeSpecName "kube-api-access-4rtdv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:02:58 crc kubenswrapper[4896]: I0126 16:02:58.051212 4896 scope.go:117] "RemoveContainer" containerID="e4aa0ca0ef5f297b565aa96482f05639744972ffb3541d600bc62528a3eb67d5"
Jan 26 16:02:58 crc kubenswrapper[4896]: E0126 16:02:58.051787 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4aa0ca0ef5f297b565aa96482f05639744972ffb3541d600bc62528a3eb67d5\": container with ID starting with e4aa0ca0ef5f297b565aa96482f05639744972ffb3541d600bc62528a3eb67d5 not found: ID does not exist" containerID="e4aa0ca0ef5f297b565aa96482f05639744972ffb3541d600bc62528a3eb67d5"
Jan 26 16:02:58 crc kubenswrapper[4896]: I0126 16:02:58.051836 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4aa0ca0ef5f297b565aa96482f05639744972ffb3541d600bc62528a3eb67d5"} err="failed to get container status \"e4aa0ca0ef5f297b565aa96482f05639744972ffb3541d600bc62528a3eb67d5\": rpc error: code = NotFound desc = could not find container \"e4aa0ca0ef5f297b565aa96482f05639744972ffb3541d600bc62528a3eb67d5\": container with ID starting with e4aa0ca0ef5f297b565aa96482f05639744972ffb3541d600bc62528a3eb67d5 not found: ID does not exist"
Jan 26 16:02:58 crc kubenswrapper[4896]: I0126 16:02:58.052806 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38c3367b-9b2c-40a4-841d-7b815bbfd45a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "38c3367b-9b2c-40a4-841d-7b815bbfd45a" (UID: "38c3367b-9b2c-40a4-841d-7b815bbfd45a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:02:58 crc kubenswrapper[4896]: I0126 16:02:58.056418 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b2e158c-acf0-4642-b31b-4db17087c69c-kube-api-access-kv6tr" (OuterVolumeSpecName: "kube-api-access-kv6tr") pod "1b2e158c-acf0-4642-b31b-4db17087c69c" (UID: "1b2e158c-acf0-4642-b31b-4db17087c69c"). InnerVolumeSpecName "kube-api-access-kv6tr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:02:58 crc kubenswrapper[4896]: I0126 16:02:58.057353 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b2e158c-acf0-4642-b31b-4db17087c69c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1b2e158c-acf0-4642-b31b-4db17087c69c" (UID: "1b2e158c-acf0-4642-b31b-4db17087c69c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:02:58 crc kubenswrapper[4896]: I0126 16:02:58.085709 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b2e158c-acf0-4642-b31b-4db17087c69c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b2e158c-acf0-4642-b31b-4db17087c69c" (UID: "1b2e158c-acf0-4642-b31b-4db17087c69c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:02:58 crc kubenswrapper[4896]: I0126 16:02:58.087130 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38c3367b-9b2c-40a4-841d-7b815bbfd45a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38c3367b-9b2c-40a4-841d-7b815bbfd45a" (UID: "38c3367b-9b2c-40a4-841d-7b815bbfd45a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:02:58 crc kubenswrapper[4896]: I0126 16:02:58.106247 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38c3367b-9b2c-40a4-841d-7b815bbfd45a-config-data" (OuterVolumeSpecName: "config-data") pod "38c3367b-9b2c-40a4-841d-7b815bbfd45a" (UID: "38c3367b-9b2c-40a4-841d-7b815bbfd45a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:02:58 crc kubenswrapper[4896]: I0126 16:02:58.122874 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b2e158c-acf0-4642-b31b-4db17087c69c-config-data" (OuterVolumeSpecName: "config-data") pod "1b2e158c-acf0-4642-b31b-4db17087c69c" (UID: "1b2e158c-acf0-4642-b31b-4db17087c69c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:02:58 crc kubenswrapper[4896]: I0126 16:02:58.147236 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rtdv\" (UniqueName: \"kubernetes.io/projected/38c3367b-9b2c-40a4-841d-7b815bbfd45a-kube-api-access-4rtdv\") on node \"crc\" DevicePath \"\""
Jan 26 16:02:58 crc kubenswrapper[4896]: I0126 16:02:58.147288 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b2e158c-acf0-4642-b31b-4db17087c69c-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 16:02:58 crc kubenswrapper[4896]: I0126 16:02:58.147301 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b2e158c-acf0-4642-b31b-4db17087c69c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 16:02:58 crc kubenswrapper[4896]: I0126 16:02:58.147312 4896 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b2e158c-acf0-4642-b31b-4db17087c69c-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 26 16:02:58 crc kubenswrapper[4896]: I0126 16:02:58.147323 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38c3367b-9b2c-40a4-841d-7b815bbfd45a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 16:02:58 crc kubenswrapper[4896]: I0126 16:02:58.147335 4896 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38c3367b-9b2c-40a4-841d-7b815bbfd45a-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 26 16:02:58 crc kubenswrapper[4896]: I0126 16:02:58.147346 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kv6tr\" (UniqueName: \"kubernetes.io/projected/1b2e158c-acf0-4642-b31b-4db17087c69c-kube-api-access-kv6tr\") on node \"crc\" DevicePath \"\""
Jan 26 16:02:58 crc kubenswrapper[4896]: I0126 16:02:58.147357 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38c3367b-9b2c-40a4-841d-7b815bbfd45a-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 16:02:58 crc kubenswrapper[4896]: I0126 16:02:58.406753 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-545f7c69fd-hm6nm"]
Jan 26 16:02:58 crc kubenswrapper[4896]: I0126 16:02:58.423674 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-545f7c69fd-hm6nm"]
Jan 26 16:02:58 crc kubenswrapper[4896]: I0126 16:02:58.435501 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-75c68767d8-c7w2z"]
Jan 26 16:02:58 crc kubenswrapper[4896]: I0126 16:02:58.449031 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-75c68767d8-c7w2z"]
Jan 26 16:02:58 crc kubenswrapper[4896]: I0126 16:02:58.777073 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b2e158c-acf0-4642-b31b-4db17087c69c" path="/var/lib/kubelet/pods/1b2e158c-acf0-4642-b31b-4db17087c69c/volumes"
Jan 26 16:02:58
crc kubenswrapper[4896]: I0126 16:02:58.779055 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38c3367b-9b2c-40a4-841d-7b815bbfd45a" path="/var/lib/kubelet/pods/38c3367b-9b2c-40a4-841d-7b815bbfd45a/volumes" Jan 26 16:02:59 crc kubenswrapper[4896]: I0126 16:02:59.456851 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-m5nn6" Jan 26 16:02:59 crc kubenswrapper[4896]: I0126 16:02:59.580709 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkjb9\" (UniqueName: \"kubernetes.io/projected/aff0b31b-d778-4b0c-aa5a-c8b42d08e462-kube-api-access-mkjb9\") pod \"aff0b31b-d778-4b0c-aa5a-c8b42d08e462\" (UID: \"aff0b31b-d778-4b0c-aa5a-c8b42d08e462\") " Jan 26 16:02:59 crc kubenswrapper[4896]: I0126 16:02:59.581072 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aff0b31b-d778-4b0c-aa5a-c8b42d08e462-config-data\") pod \"aff0b31b-d778-4b0c-aa5a-c8b42d08e462\" (UID: \"aff0b31b-d778-4b0c-aa5a-c8b42d08e462\") " Jan 26 16:02:59 crc kubenswrapper[4896]: I0126 16:02:59.581144 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aff0b31b-d778-4b0c-aa5a-c8b42d08e462-combined-ca-bundle\") pod \"aff0b31b-d778-4b0c-aa5a-c8b42d08e462\" (UID: \"aff0b31b-d778-4b0c-aa5a-c8b42d08e462\") " Jan 26 16:02:59 crc kubenswrapper[4896]: I0126 16:02:59.581177 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aff0b31b-d778-4b0c-aa5a-c8b42d08e462-scripts\") pod \"aff0b31b-d778-4b0c-aa5a-c8b42d08e462\" (UID: \"aff0b31b-d778-4b0c-aa5a-c8b42d08e462\") " Jan 26 16:02:59 crc kubenswrapper[4896]: I0126 16:02:59.588324 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/aff0b31b-d778-4b0c-aa5a-c8b42d08e462-kube-api-access-mkjb9" (OuterVolumeSpecName: "kube-api-access-mkjb9") pod "aff0b31b-d778-4b0c-aa5a-c8b42d08e462" (UID: "aff0b31b-d778-4b0c-aa5a-c8b42d08e462"). InnerVolumeSpecName "kube-api-access-mkjb9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:02:59 crc kubenswrapper[4896]: I0126 16:02:59.589469 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aff0b31b-d778-4b0c-aa5a-c8b42d08e462-scripts" (OuterVolumeSpecName: "scripts") pod "aff0b31b-d778-4b0c-aa5a-c8b42d08e462" (UID: "aff0b31b-d778-4b0c-aa5a-c8b42d08e462"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:02:59 crc kubenswrapper[4896]: I0126 16:02:59.613083 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aff0b31b-d778-4b0c-aa5a-c8b42d08e462-config-data" (OuterVolumeSpecName: "config-data") pod "aff0b31b-d778-4b0c-aa5a-c8b42d08e462" (UID: "aff0b31b-d778-4b0c-aa5a-c8b42d08e462"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:02:59 crc kubenswrapper[4896]: I0126 16:02:59.648140 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aff0b31b-d778-4b0c-aa5a-c8b42d08e462-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aff0b31b-d778-4b0c-aa5a-c8b42d08e462" (UID: "aff0b31b-d778-4b0c-aa5a-c8b42d08e462"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:02:59 crc kubenswrapper[4896]: I0126 16:02:59.687959 4896 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aff0b31b-d778-4b0c-aa5a-c8b42d08e462-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:02:59 crc kubenswrapper[4896]: I0126 16:02:59.688005 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkjb9\" (UniqueName: \"kubernetes.io/projected/aff0b31b-d778-4b0c-aa5a-c8b42d08e462-kube-api-access-mkjb9\") on node \"crc\" DevicePath \"\"" Jan 26 16:02:59 crc kubenswrapper[4896]: I0126 16:02:59.688021 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aff0b31b-d778-4b0c-aa5a-c8b42d08e462-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:02:59 crc kubenswrapper[4896]: I0126 16:02:59.688035 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aff0b31b-d778-4b0c-aa5a-c8b42d08e462-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:02:59 crc kubenswrapper[4896]: I0126 16:02:59.759599 4896 scope.go:117] "RemoveContainer" containerID="eef508224f0cdcfb0579b0234e72c3c5503ce5cf1713a9bee24c9feccf4983cb" Jan 26 16:02:59 crc kubenswrapper[4896]: E0126 16:02:59.760143 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:03:00 crc kubenswrapper[4896]: I0126 16:03:00.070362 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-m5nn6" 
event={"ID":"aff0b31b-d778-4b0c-aa5a-c8b42d08e462","Type":"ContainerDied","Data":"221790a8be9484d0bcea077cf9e21e8b1cb8b632afc2bdfc4c3cd71630f92294"} Jan 26 16:03:00 crc kubenswrapper[4896]: I0126 16:03:00.071019 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="221790a8be9484d0bcea077cf9e21e8b1cb8b632afc2bdfc4c3cd71630f92294" Jan 26 16:03:00 crc kubenswrapper[4896]: I0126 16:03:00.070509 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-m5nn6" Jan 26 16:03:00 crc kubenswrapper[4896]: I0126 16:03:00.127965 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 16:03:00 crc kubenswrapper[4896]: E0126 16:03:00.128533 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b2e158c-acf0-4642-b31b-4db17087c69c" containerName="heat-cfnapi" Jan 26 16:03:00 crc kubenswrapper[4896]: I0126 16:03:00.128551 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b2e158c-acf0-4642-b31b-4db17087c69c" containerName="heat-cfnapi" Jan 26 16:03:00 crc kubenswrapper[4896]: E0126 16:03:00.128635 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38c3367b-9b2c-40a4-841d-7b815bbfd45a" containerName="heat-api" Jan 26 16:03:00 crc kubenswrapper[4896]: I0126 16:03:00.128645 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="38c3367b-9b2c-40a4-841d-7b815bbfd45a" containerName="heat-api" Jan 26 16:03:00 crc kubenswrapper[4896]: E0126 16:03:00.128658 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aff0b31b-d778-4b0c-aa5a-c8b42d08e462" containerName="nova-cell0-conductor-db-sync" Jan 26 16:03:00 crc kubenswrapper[4896]: I0126 16:03:00.128664 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="aff0b31b-d778-4b0c-aa5a-c8b42d08e462" containerName="nova-cell0-conductor-db-sync" Jan 26 16:03:00 crc kubenswrapper[4896]: I0126 16:03:00.128876 4896 
memory_manager.go:354] "RemoveStaleState removing state" podUID="aff0b31b-d778-4b0c-aa5a-c8b42d08e462" containerName="nova-cell0-conductor-db-sync" Jan 26 16:03:00 crc kubenswrapper[4896]: I0126 16:03:00.128902 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b2e158c-acf0-4642-b31b-4db17087c69c" containerName="heat-cfnapi" Jan 26 16:03:00 crc kubenswrapper[4896]: I0126 16:03:00.128938 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="38c3367b-9b2c-40a4-841d-7b815bbfd45a" containerName="heat-api" Jan 26 16:03:00 crc kubenswrapper[4896]: I0126 16:03:00.129814 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 26 16:03:00 crc kubenswrapper[4896]: I0126 16:03:00.136498 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 26 16:03:00 crc kubenswrapper[4896]: I0126 16:03:00.138630 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-fb2z7" Jan 26 16:03:00 crc kubenswrapper[4896]: I0126 16:03:00.163007 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 16:03:00 crc kubenswrapper[4896]: I0126 16:03:00.302647 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2973b9b-99d5-4b8e-890e-8eb577ac52b8-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d2973b9b-99d5-4b8e-890e-8eb577ac52b8\") " pod="openstack/nova-cell0-conductor-0" Jan 26 16:03:00 crc kubenswrapper[4896]: I0126 16:03:00.303623 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2973b9b-99d5-4b8e-890e-8eb577ac52b8-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d2973b9b-99d5-4b8e-890e-8eb577ac52b8\") " 
pod="openstack/nova-cell0-conductor-0" Jan 26 16:03:00 crc kubenswrapper[4896]: I0126 16:03:00.303934 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgkwk\" (UniqueName: \"kubernetes.io/projected/d2973b9b-99d5-4b8e-890e-8eb577ac52b8-kube-api-access-xgkwk\") pod \"nova-cell0-conductor-0\" (UID: \"d2973b9b-99d5-4b8e-890e-8eb577ac52b8\") " pod="openstack/nova-cell0-conductor-0" Jan 26 16:03:00 crc kubenswrapper[4896]: I0126 16:03:00.406088 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2973b9b-99d5-4b8e-890e-8eb577ac52b8-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d2973b9b-99d5-4b8e-890e-8eb577ac52b8\") " pod="openstack/nova-cell0-conductor-0" Jan 26 16:03:00 crc kubenswrapper[4896]: I0126 16:03:00.406185 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgkwk\" (UniqueName: \"kubernetes.io/projected/d2973b9b-99d5-4b8e-890e-8eb577ac52b8-kube-api-access-xgkwk\") pod \"nova-cell0-conductor-0\" (UID: \"d2973b9b-99d5-4b8e-890e-8eb577ac52b8\") " pod="openstack/nova-cell0-conductor-0" Jan 26 16:03:00 crc kubenswrapper[4896]: I0126 16:03:00.406279 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2973b9b-99d5-4b8e-890e-8eb577ac52b8-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d2973b9b-99d5-4b8e-890e-8eb577ac52b8\") " pod="openstack/nova-cell0-conductor-0" Jan 26 16:03:00 crc kubenswrapper[4896]: I0126 16:03:00.410251 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2973b9b-99d5-4b8e-890e-8eb577ac52b8-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d2973b9b-99d5-4b8e-890e-8eb577ac52b8\") " pod="openstack/nova-cell0-conductor-0" Jan 26 16:03:00 crc 
kubenswrapper[4896]: I0126 16:03:00.411199 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2973b9b-99d5-4b8e-890e-8eb577ac52b8-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d2973b9b-99d5-4b8e-890e-8eb577ac52b8\") " pod="openstack/nova-cell0-conductor-0" Jan 26 16:03:00 crc kubenswrapper[4896]: I0126 16:03:00.426176 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgkwk\" (UniqueName: \"kubernetes.io/projected/d2973b9b-99d5-4b8e-890e-8eb577ac52b8-kube-api-access-xgkwk\") pod \"nova-cell0-conductor-0\" (UID: \"d2973b9b-99d5-4b8e-890e-8eb577ac52b8\") " pod="openstack/nova-cell0-conductor-0" Jan 26 16:03:00 crc kubenswrapper[4896]: I0126 16:03:00.456512 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 26 16:03:00 crc kubenswrapper[4896]: I0126 16:03:00.923252 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 16:03:00 crc kubenswrapper[4896]: W0126 16:03:00.925452 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2973b9b_99d5_4b8e_890e_8eb577ac52b8.slice/crio-890b882b333780254148faf7deea3313abc47ca059f4ea3e0896d86756eab969 WatchSource:0}: Error finding container 890b882b333780254148faf7deea3313abc47ca059f4ea3e0896d86756eab969: Status 404 returned error can't find the container with id 890b882b333780254148faf7deea3313abc47ca059f4ea3e0896d86756eab969 Jan 26 16:03:01 crc kubenswrapper[4896]: I0126 16:03:01.082640 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"d2973b9b-99d5-4b8e-890e-8eb577ac52b8","Type":"ContainerStarted","Data":"890b882b333780254148faf7deea3313abc47ca059f4ea3e0896d86756eab969"} Jan 26 16:03:02 crc kubenswrapper[4896]: I0126 16:03:02.098489 4896 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"d2973b9b-99d5-4b8e-890e-8eb577ac52b8","Type":"ContainerStarted","Data":"b25c9f3b0289edc2bb3fdab86f22508ac895d4e3c8b697a920b2d8702f695a95"} Jan 26 16:03:02 crc kubenswrapper[4896]: I0126 16:03:02.100180 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 26 16:03:02 crc kubenswrapper[4896]: I0126 16:03:02.123524 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.123497436 podStartE2EDuration="2.123497436s" podCreationTimestamp="2026-01-26 16:03:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:03:02.118416945 +0000 UTC m=+1739.900297338" watchObservedRunningTime="2026-01-26 16:03:02.123497436 +0000 UTC m=+1739.905377839" Jan 26 16:03:05 crc kubenswrapper[4896]: I0126 16:03:05.155542 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:03:05 crc kubenswrapper[4896]: I0126 16:03:05.157331 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9d0aa4fb-f613-494e-9714-e42d73efd11f" containerName="ceilometer-central-agent" containerID="cri-o://9d10eee317ef3d076adbbd8999d07bc3a49fc19384c085a915e4bc62535b3cd7" gracePeriod=30 Jan 26 16:03:05 crc kubenswrapper[4896]: I0126 16:03:05.157400 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9d0aa4fb-f613-494e-9714-e42d73efd11f" containerName="proxy-httpd" containerID="cri-o://c60ebdeb9974d5dc54c38a3fa957ab0c26e5489c2372756e998e7c646ea7f4b2" gracePeriod=30 Jan 26 16:03:05 crc kubenswrapper[4896]: I0126 16:03:05.157427 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="9d0aa4fb-f613-494e-9714-e42d73efd11f" containerName="sg-core" containerID="cri-o://9db8c6e1cfb7a0a9028db1e0a3288cdb2696eec501d323418412a6d4a974fcfe" gracePeriod=30 Jan 26 16:03:05 crc kubenswrapper[4896]: I0126 16:03:05.157427 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9d0aa4fb-f613-494e-9714-e42d73efd11f" containerName="ceilometer-notification-agent" containerID="cri-o://0367c8c48b5c5501d31486b209e64060971173fe82aa3dc7b44b7761f7ae87a0" gracePeriod=30 Jan 26 16:03:05 crc kubenswrapper[4896]: I0126 16:03:05.163005 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="9d0aa4fb-f613-494e-9714-e42d73efd11f" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.240:3000/\": EOF" Jan 26 16:03:06 crc kubenswrapper[4896]: I0126 16:03:06.145750 4896 generic.go:334] "Generic (PLEG): container finished" podID="9d0aa4fb-f613-494e-9714-e42d73efd11f" containerID="c60ebdeb9974d5dc54c38a3fa957ab0c26e5489c2372756e998e7c646ea7f4b2" exitCode=0 Jan 26 16:03:06 crc kubenswrapper[4896]: I0126 16:03:06.146048 4896 generic.go:334] "Generic (PLEG): container finished" podID="9d0aa4fb-f613-494e-9714-e42d73efd11f" containerID="9db8c6e1cfb7a0a9028db1e0a3288cdb2696eec501d323418412a6d4a974fcfe" exitCode=2 Jan 26 16:03:06 crc kubenswrapper[4896]: I0126 16:03:06.146059 4896 generic.go:334] "Generic (PLEG): container finished" podID="9d0aa4fb-f613-494e-9714-e42d73efd11f" containerID="9d10eee317ef3d076adbbd8999d07bc3a49fc19384c085a915e4bc62535b3cd7" exitCode=0 Jan 26 16:03:06 crc kubenswrapper[4896]: I0126 16:03:06.146081 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d0aa4fb-f613-494e-9714-e42d73efd11f","Type":"ContainerDied","Data":"c60ebdeb9974d5dc54c38a3fa957ab0c26e5489c2372756e998e7c646ea7f4b2"} Jan 26 16:03:06 crc kubenswrapper[4896]: I0126 16:03:06.146108 4896 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ceilometer-0" event={"ID":"9d0aa4fb-f613-494e-9714-e42d73efd11f","Type":"ContainerDied","Data":"9db8c6e1cfb7a0a9028db1e0a3288cdb2696eec501d323418412a6d4a974fcfe"} Jan 26 16:03:06 crc kubenswrapper[4896]: I0126 16:03:06.146118 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d0aa4fb-f613-494e-9714-e42d73efd11f","Type":"ContainerDied","Data":"9d10eee317ef3d076adbbd8999d07bc3a49fc19384c085a915e4bc62535b3cd7"} Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.110440 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.171628 4896 generic.go:334] "Generic (PLEG): container finished" podID="9d0aa4fb-f613-494e-9714-e42d73efd11f" containerID="0367c8c48b5c5501d31486b209e64060971173fe82aa3dc7b44b7761f7ae87a0" exitCode=0 Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.171670 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d0aa4fb-f613-494e-9714-e42d73efd11f","Type":"ContainerDied","Data":"0367c8c48b5c5501d31486b209e64060971173fe82aa3dc7b44b7761f7ae87a0"} Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.171696 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d0aa4fb-f613-494e-9714-e42d73efd11f","Type":"ContainerDied","Data":"239c53f207354cbc72122787da2af18b8c5ac865fb74e5fd43944905388a6863"} Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.171713 4896 scope.go:117] "RemoveContainer" containerID="c60ebdeb9974d5dc54c38a3fa957ab0c26e5489c2372756e998e7c646ea7f4b2" Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.171863 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.193047 4896 scope.go:117] "RemoveContainer" containerID="9db8c6e1cfb7a0a9028db1e0a3288cdb2696eec501d323418412a6d4a974fcfe" Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.217178 4896 scope.go:117] "RemoveContainer" containerID="0367c8c48b5c5501d31486b209e64060971173fe82aa3dc7b44b7761f7ae87a0" Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.225557 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d0aa4fb-f613-494e-9714-e42d73efd11f-run-httpd\") pod \"9d0aa4fb-f613-494e-9714-e42d73efd11f\" (UID: \"9d0aa4fb-f613-494e-9714-e42d73efd11f\") " Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.225632 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d0aa4fb-f613-494e-9714-e42d73efd11f-config-data\") pod \"9d0aa4fb-f613-494e-9714-e42d73efd11f\" (UID: \"9d0aa4fb-f613-494e-9714-e42d73efd11f\") " Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.225668 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9d0aa4fb-f613-494e-9714-e42d73efd11f-sg-core-conf-yaml\") pod \"9d0aa4fb-f613-494e-9714-e42d73efd11f\" (UID: \"9d0aa4fb-f613-494e-9714-e42d73efd11f\") " Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.225735 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d0aa4fb-f613-494e-9714-e42d73efd11f-log-httpd\") pod \"9d0aa4fb-f613-494e-9714-e42d73efd11f\" (UID: \"9d0aa4fb-f613-494e-9714-e42d73efd11f\") " Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.225770 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/9d0aa4fb-f613-494e-9714-e42d73efd11f-scripts\") pod \"9d0aa4fb-f613-494e-9714-e42d73efd11f\" (UID: \"9d0aa4fb-f613-494e-9714-e42d73efd11f\") " Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.225840 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d0aa4fb-f613-494e-9714-e42d73efd11f-combined-ca-bundle\") pod \"9d0aa4fb-f613-494e-9714-e42d73efd11f\" (UID: \"9d0aa4fb-f613-494e-9714-e42d73efd11f\") " Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.226015 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2b5ff\" (UniqueName: \"kubernetes.io/projected/9d0aa4fb-f613-494e-9714-e42d73efd11f-kube-api-access-2b5ff\") pod \"9d0aa4fb-f613-494e-9714-e42d73efd11f\" (UID: \"9d0aa4fb-f613-494e-9714-e42d73efd11f\") " Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.226111 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d0aa4fb-f613-494e-9714-e42d73efd11f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9d0aa4fb-f613-494e-9714-e42d73efd11f" (UID: "9d0aa4fb-f613-494e-9714-e42d73efd11f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.226179 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d0aa4fb-f613-494e-9714-e42d73efd11f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9d0aa4fb-f613-494e-9714-e42d73efd11f" (UID: "9d0aa4fb-f613-494e-9714-e42d73efd11f"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.226604 4896 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d0aa4fb-f613-494e-9714-e42d73efd11f-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.226618 4896 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d0aa4fb-f613-494e-9714-e42d73efd11f-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.230965 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d0aa4fb-f613-494e-9714-e42d73efd11f-scripts" (OuterVolumeSpecName: "scripts") pod "9d0aa4fb-f613-494e-9714-e42d73efd11f" (UID: "9d0aa4fb-f613-494e-9714-e42d73efd11f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.231902 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d0aa4fb-f613-494e-9714-e42d73efd11f-kube-api-access-2b5ff" (OuterVolumeSpecName: "kube-api-access-2b5ff") pod "9d0aa4fb-f613-494e-9714-e42d73efd11f" (UID: "9d0aa4fb-f613-494e-9714-e42d73efd11f"). InnerVolumeSpecName "kube-api-access-2b5ff". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.254987 4896 scope.go:117] "RemoveContainer" containerID="9d10eee317ef3d076adbbd8999d07bc3a49fc19384c085a915e4bc62535b3cd7" Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.270213 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d0aa4fb-f613-494e-9714-e42d73efd11f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9d0aa4fb-f613-494e-9714-e42d73efd11f" (UID: "9d0aa4fb-f613-494e-9714-e42d73efd11f"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.329239 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2b5ff\" (UniqueName: \"kubernetes.io/projected/9d0aa4fb-f613-494e-9714-e42d73efd11f-kube-api-access-2b5ff\") on node \"crc\" DevicePath \"\"" Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.329291 4896 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9d0aa4fb-f613-494e-9714-e42d73efd11f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.329310 4896 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d0aa4fb-f613-494e-9714-e42d73efd11f-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.347859 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d0aa4fb-f613-494e-9714-e42d73efd11f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9d0aa4fb-f613-494e-9714-e42d73efd11f" (UID: "9d0aa4fb-f613-494e-9714-e42d73efd11f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.351244 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d0aa4fb-f613-494e-9714-e42d73efd11f-config-data" (OuterVolumeSpecName: "config-data") pod "9d0aa4fb-f613-494e-9714-e42d73efd11f" (UID: "9d0aa4fb-f613-494e-9714-e42d73efd11f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.382151 4896 scope.go:117] "RemoveContainer" containerID="c60ebdeb9974d5dc54c38a3fa957ab0c26e5489c2372756e998e7c646ea7f4b2"
Jan 26 16:03:08 crc kubenswrapper[4896]: E0126 16:03:08.383991 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c60ebdeb9974d5dc54c38a3fa957ab0c26e5489c2372756e998e7c646ea7f4b2\": container with ID starting with c60ebdeb9974d5dc54c38a3fa957ab0c26e5489c2372756e998e7c646ea7f4b2 not found: ID does not exist" containerID="c60ebdeb9974d5dc54c38a3fa957ab0c26e5489c2372756e998e7c646ea7f4b2"
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.384118 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c60ebdeb9974d5dc54c38a3fa957ab0c26e5489c2372756e998e7c646ea7f4b2"} err="failed to get container status \"c60ebdeb9974d5dc54c38a3fa957ab0c26e5489c2372756e998e7c646ea7f4b2\": rpc error: code = NotFound desc = could not find container \"c60ebdeb9974d5dc54c38a3fa957ab0c26e5489c2372756e998e7c646ea7f4b2\": container with ID starting with c60ebdeb9974d5dc54c38a3fa957ab0c26e5489c2372756e998e7c646ea7f4b2 not found: ID does not exist"
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.384219 4896 scope.go:117] "RemoveContainer" containerID="9db8c6e1cfb7a0a9028db1e0a3288cdb2696eec501d323418412a6d4a974fcfe"
Jan 26 16:03:08 crc kubenswrapper[4896]: E0126 16:03:08.386367 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9db8c6e1cfb7a0a9028db1e0a3288cdb2696eec501d323418412a6d4a974fcfe\": container with ID starting with 9db8c6e1cfb7a0a9028db1e0a3288cdb2696eec501d323418412a6d4a974fcfe not found: ID does not exist" containerID="9db8c6e1cfb7a0a9028db1e0a3288cdb2696eec501d323418412a6d4a974fcfe"
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.386403 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9db8c6e1cfb7a0a9028db1e0a3288cdb2696eec501d323418412a6d4a974fcfe"} err="failed to get container status \"9db8c6e1cfb7a0a9028db1e0a3288cdb2696eec501d323418412a6d4a974fcfe\": rpc error: code = NotFound desc = could not find container \"9db8c6e1cfb7a0a9028db1e0a3288cdb2696eec501d323418412a6d4a974fcfe\": container with ID starting with 9db8c6e1cfb7a0a9028db1e0a3288cdb2696eec501d323418412a6d4a974fcfe not found: ID does not exist"
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.386423 4896 scope.go:117] "RemoveContainer" containerID="0367c8c48b5c5501d31486b209e64060971173fe82aa3dc7b44b7761f7ae87a0"
Jan 26 16:03:08 crc kubenswrapper[4896]: E0126 16:03:08.387671 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0367c8c48b5c5501d31486b209e64060971173fe82aa3dc7b44b7761f7ae87a0\": container with ID starting with 0367c8c48b5c5501d31486b209e64060971173fe82aa3dc7b44b7761f7ae87a0 not found: ID does not exist" containerID="0367c8c48b5c5501d31486b209e64060971173fe82aa3dc7b44b7761f7ae87a0"
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.387759 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0367c8c48b5c5501d31486b209e64060971173fe82aa3dc7b44b7761f7ae87a0"} err="failed to get container status \"0367c8c48b5c5501d31486b209e64060971173fe82aa3dc7b44b7761f7ae87a0\": rpc error: code = NotFound desc = could not find container \"0367c8c48b5c5501d31486b209e64060971173fe82aa3dc7b44b7761f7ae87a0\": container with ID starting with 0367c8c48b5c5501d31486b209e64060971173fe82aa3dc7b44b7761f7ae87a0 not found: ID does not exist"
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.387835 4896 scope.go:117] "RemoveContainer" containerID="9d10eee317ef3d076adbbd8999d07bc3a49fc19384c085a915e4bc62535b3cd7"
Jan 26 16:03:08 crc kubenswrapper[4896]: E0126 16:03:08.388807 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d10eee317ef3d076adbbd8999d07bc3a49fc19384c085a915e4bc62535b3cd7\": container with ID starting with 9d10eee317ef3d076adbbd8999d07bc3a49fc19384c085a915e4bc62535b3cd7 not found: ID does not exist" containerID="9d10eee317ef3d076adbbd8999d07bc3a49fc19384c085a915e4bc62535b3cd7"
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.388890 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d10eee317ef3d076adbbd8999d07bc3a49fc19384c085a915e4bc62535b3cd7"} err="failed to get container status \"9d10eee317ef3d076adbbd8999d07bc3a49fc19384c085a915e4bc62535b3cd7\": rpc error: code = NotFound desc = could not find container \"9d10eee317ef3d076adbbd8999d07bc3a49fc19384c085a915e4bc62535b3cd7\": container with ID starting with 9d10eee317ef3d076adbbd8999d07bc3a49fc19384c085a915e4bc62535b3cd7 not found: ID does not exist"
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.431990 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d0aa4fb-f613-494e-9714-e42d73efd11f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.432039 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d0aa4fb-f613-494e-9714-e42d73efd11f-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.513994 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.531951 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.548714 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 26 16:03:08 crc kubenswrapper[4896]: E0126 16:03:08.549270 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d0aa4fb-f613-494e-9714-e42d73efd11f" containerName="ceilometer-central-agent"
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.549296 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d0aa4fb-f613-494e-9714-e42d73efd11f" containerName="ceilometer-central-agent"
Jan 26 16:03:08 crc kubenswrapper[4896]: E0126 16:03:08.549344 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d0aa4fb-f613-494e-9714-e42d73efd11f" containerName="proxy-httpd"
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.549356 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d0aa4fb-f613-494e-9714-e42d73efd11f" containerName="proxy-httpd"
Jan 26 16:03:08 crc kubenswrapper[4896]: E0126 16:03:08.549377 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d0aa4fb-f613-494e-9714-e42d73efd11f" containerName="sg-core"
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.549386 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d0aa4fb-f613-494e-9714-e42d73efd11f" containerName="sg-core"
Jan 26 16:03:08 crc kubenswrapper[4896]: E0126 16:03:08.549414 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d0aa4fb-f613-494e-9714-e42d73efd11f" containerName="ceilometer-notification-agent"
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.549423 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d0aa4fb-f613-494e-9714-e42d73efd11f" containerName="ceilometer-notification-agent"
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.549715 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d0aa4fb-f613-494e-9714-e42d73efd11f" containerName="sg-core"
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.549746 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d0aa4fb-f613-494e-9714-e42d73efd11f" containerName="ceilometer-central-agent"
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.549755 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d0aa4fb-f613-494e-9714-e42d73efd11f" containerName="proxy-httpd"
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.549777 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d0aa4fb-f613-494e-9714-e42d73efd11f" containerName="ceilometer-notification-agent"
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.553129 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.555334 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.555425 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.569143 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.638215 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3d27f14-359d-4ed0-91f5-107202d86bbb-config-data\") pod \"ceilometer-0\" (UID: \"b3d27f14-359d-4ed0-91f5-107202d86bbb\") " pod="openstack/ceilometer-0"
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.638521 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3d27f14-359d-4ed0-91f5-107202d86bbb-run-httpd\") pod \"ceilometer-0\" (UID: \"b3d27f14-359d-4ed0-91f5-107202d86bbb\") " pod="openstack/ceilometer-0"
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.638680 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3d27f14-359d-4ed0-91f5-107202d86bbb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b3d27f14-359d-4ed0-91f5-107202d86bbb\") " pod="openstack/ceilometer-0"
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.638816 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3d27f14-359d-4ed0-91f5-107202d86bbb-scripts\") pod \"ceilometer-0\" (UID: \"b3d27f14-359d-4ed0-91f5-107202d86bbb\") " pod="openstack/ceilometer-0"
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.638961 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vgf7\" (UniqueName: \"kubernetes.io/projected/b3d27f14-359d-4ed0-91f5-107202d86bbb-kube-api-access-8vgf7\") pod \"ceilometer-0\" (UID: \"b3d27f14-359d-4ed0-91f5-107202d86bbb\") " pod="openstack/ceilometer-0"
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.639056 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3d27f14-359d-4ed0-91f5-107202d86bbb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b3d27f14-359d-4ed0-91f5-107202d86bbb\") " pod="openstack/ceilometer-0"
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.639199 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3d27f14-359d-4ed0-91f5-107202d86bbb-log-httpd\") pod \"ceilometer-0\" (UID: \"b3d27f14-359d-4ed0-91f5-107202d86bbb\") " pod="openstack/ceilometer-0"
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.742273 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vgf7\" (UniqueName: \"kubernetes.io/projected/b3d27f14-359d-4ed0-91f5-107202d86bbb-kube-api-access-8vgf7\") pod \"ceilometer-0\" (UID: \"b3d27f14-359d-4ed0-91f5-107202d86bbb\") " pod="openstack/ceilometer-0"
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.742732 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3d27f14-359d-4ed0-91f5-107202d86bbb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b3d27f14-359d-4ed0-91f5-107202d86bbb\") " pod="openstack/ceilometer-0"
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.742832 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3d27f14-359d-4ed0-91f5-107202d86bbb-log-httpd\") pod \"ceilometer-0\" (UID: \"b3d27f14-359d-4ed0-91f5-107202d86bbb\") " pod="openstack/ceilometer-0"
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.743054 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3d27f14-359d-4ed0-91f5-107202d86bbb-config-data\") pod \"ceilometer-0\" (UID: \"b3d27f14-359d-4ed0-91f5-107202d86bbb\") " pod="openstack/ceilometer-0"
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.743132 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3d27f14-359d-4ed0-91f5-107202d86bbb-run-httpd\") pod \"ceilometer-0\" (UID: \"b3d27f14-359d-4ed0-91f5-107202d86bbb\") " pod="openstack/ceilometer-0"
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.743273 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3d27f14-359d-4ed0-91f5-107202d86bbb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b3d27f14-359d-4ed0-91f5-107202d86bbb\") " pod="openstack/ceilometer-0"
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.743407 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3d27f14-359d-4ed0-91f5-107202d86bbb-scripts\") pod \"ceilometer-0\" (UID: \"b3d27f14-359d-4ed0-91f5-107202d86bbb\") " pod="openstack/ceilometer-0"
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.743481 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3d27f14-359d-4ed0-91f5-107202d86bbb-log-httpd\") pod \"ceilometer-0\" (UID: \"b3d27f14-359d-4ed0-91f5-107202d86bbb\") " pod="openstack/ceilometer-0"
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.743714 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3d27f14-359d-4ed0-91f5-107202d86bbb-run-httpd\") pod \"ceilometer-0\" (UID: \"b3d27f14-359d-4ed0-91f5-107202d86bbb\") " pod="openstack/ceilometer-0"
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.749358 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3d27f14-359d-4ed0-91f5-107202d86bbb-scripts\") pod \"ceilometer-0\" (UID: \"b3d27f14-359d-4ed0-91f5-107202d86bbb\") " pod="openstack/ceilometer-0"
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.749975 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3d27f14-359d-4ed0-91f5-107202d86bbb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b3d27f14-359d-4ed0-91f5-107202d86bbb\") " pod="openstack/ceilometer-0"
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.751079 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3d27f14-359d-4ed0-91f5-107202d86bbb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b3d27f14-359d-4ed0-91f5-107202d86bbb\") " pod="openstack/ceilometer-0"
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.753085 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3d27f14-359d-4ed0-91f5-107202d86bbb-config-data\") pod \"ceilometer-0\" (UID: \"b3d27f14-359d-4ed0-91f5-107202d86bbb\") " pod="openstack/ceilometer-0"
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.766453 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vgf7\" (UniqueName: \"kubernetes.io/projected/b3d27f14-359d-4ed0-91f5-107202d86bbb-kube-api-access-8vgf7\") pod \"ceilometer-0\" (UID: \"b3d27f14-359d-4ed0-91f5-107202d86bbb\") " pod="openstack/ceilometer-0"
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.787824 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d0aa4fb-f613-494e-9714-e42d73efd11f" path="/var/lib/kubelet/pods/9d0aa4fb-f613-494e-9714-e42d73efd11f/volumes"
Jan 26 16:03:08 crc kubenswrapper[4896]: I0126 16:03:08.877076 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 16:03:09 crc kubenswrapper[4896]: I0126 16:03:09.353781 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 16:03:10 crc kubenswrapper[4896]: I0126 16:03:10.200215 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3d27f14-359d-4ed0-91f5-107202d86bbb","Type":"ContainerStarted","Data":"3da17c3182eecec9563d4692adb1f813c7b6135c1166b7717e7a6be20c9a7bc3"}
Jan 26 16:03:10 crc kubenswrapper[4896]: I0126 16:03:10.200274 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3d27f14-359d-4ed0-91f5-107202d86bbb","Type":"ContainerStarted","Data":"3b44c84595632eb5e0b60ca35ca75fc50c91255b75be9fec20b5ed1ced0781ef"}
Jan 26 16:03:10 crc kubenswrapper[4896]: I0126 16:03:10.523904 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Jan 26 16:03:10 crc kubenswrapper[4896]: I0126 16:03:10.760933 4896 scope.go:117] "RemoveContainer" containerID="eef508224f0cdcfb0579b0234e72c3c5503ce5cf1713a9bee24c9feccf4983cb"
Jan 26 16:03:10 crc kubenswrapper[4896]: E0126 16:03:10.761354 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.088810 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-g22d4"]
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.091425 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-g22d4"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.099815 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.099815 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.101689 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-g22d4"]
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.210197 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab180a60-67ff-4295-8120-a9abca520ee8-scripts\") pod \"nova-cell0-cell-mapping-g22d4\" (UID: \"ab180a60-67ff-4295-8120-a9abca520ee8\") " pod="openstack/nova-cell0-cell-mapping-g22d4"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.210276 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab180a60-67ff-4295-8120-a9abca520ee8-config-data\") pod \"nova-cell0-cell-mapping-g22d4\" (UID: \"ab180a60-67ff-4295-8120-a9abca520ee8\") " pod="openstack/nova-cell0-cell-mapping-g22d4"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.210360 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab180a60-67ff-4295-8120-a9abca520ee8-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-g22d4\" (UID: \"ab180a60-67ff-4295-8120-a9abca520ee8\") " pod="openstack/nova-cell0-cell-mapping-g22d4"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.210434 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbb84\" (UniqueName: \"kubernetes.io/projected/ab180a60-67ff-4295-8120-a9abca520ee8-kube-api-access-vbb84\") pod \"nova-cell0-cell-mapping-g22d4\" (UID: \"ab180a60-67ff-4295-8120-a9abca520ee8\") " pod="openstack/nova-cell0-cell-mapping-g22d4"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.223764 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3d27f14-359d-4ed0-91f5-107202d86bbb","Type":"ContainerStarted","Data":"a80b44a735ffa6a0eb4a2adc9959ce3ccc190cbb73dd4090c0867ede8d15379d"}
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.282388 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.284275 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.293567 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.314956 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbb84\" (UniqueName: \"kubernetes.io/projected/ab180a60-67ff-4295-8120-a9abca520ee8-kube-api-access-vbb84\") pod \"nova-cell0-cell-mapping-g22d4\" (UID: \"ab180a60-67ff-4295-8120-a9abca520ee8\") " pod="openstack/nova-cell0-cell-mapping-g22d4"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.315087 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4002ca1-76f0-4c36-bd5b-441b4d16013d-config-data\") pod \"nova-api-0\" (UID: \"e4002ca1-76f0-4c36-bd5b-441b4d16013d\") " pod="openstack/nova-api-0"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.315146 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab180a60-67ff-4295-8120-a9abca520ee8-scripts\") pod \"nova-cell0-cell-mapping-g22d4\" (UID: \"ab180a60-67ff-4295-8120-a9abca520ee8\") " pod="openstack/nova-cell0-cell-mapping-g22d4"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.315162 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vphsf\" (UniqueName: \"kubernetes.io/projected/e4002ca1-76f0-4c36-bd5b-441b4d16013d-kube-api-access-vphsf\") pod \"nova-api-0\" (UID: \"e4002ca1-76f0-4c36-bd5b-441b4d16013d\") " pod="openstack/nova-api-0"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.315190 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab180a60-67ff-4295-8120-a9abca520ee8-config-data\") pod \"nova-cell0-cell-mapping-g22d4\" (UID: \"ab180a60-67ff-4295-8120-a9abca520ee8\") " pod="openstack/nova-cell0-cell-mapping-g22d4"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.315228 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4002ca1-76f0-4c36-bd5b-441b4d16013d-logs\") pod \"nova-api-0\" (UID: \"e4002ca1-76f0-4c36-bd5b-441b4d16013d\") " pod="openstack/nova-api-0"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.315278 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4002ca1-76f0-4c36-bd5b-441b4d16013d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e4002ca1-76f0-4c36-bd5b-441b4d16013d\") " pod="openstack/nova-api-0"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.315298 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab180a60-67ff-4295-8120-a9abca520ee8-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-g22d4\" (UID: \"ab180a60-67ff-4295-8120-a9abca520ee8\") " pod="openstack/nova-cell0-cell-mapping-g22d4"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.316976 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.329850 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab180a60-67ff-4295-8120-a9abca520ee8-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-g22d4\" (UID: \"ab180a60-67ff-4295-8120-a9abca520ee8\") " pod="openstack/nova-cell0-cell-mapping-g22d4"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.331361 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab180a60-67ff-4295-8120-a9abca520ee8-config-data\") pod \"nova-cell0-cell-mapping-g22d4\" (UID: \"ab180a60-67ff-4295-8120-a9abca520ee8\") " pod="openstack/nova-cell0-cell-mapping-g22d4"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.341184 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab180a60-67ff-4295-8120-a9abca520ee8-scripts\") pod \"nova-cell0-cell-mapping-g22d4\" (UID: \"ab180a60-67ff-4295-8120-a9abca520ee8\") " pod="openstack/nova-cell0-cell-mapping-g22d4"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.355319 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbb84\" (UniqueName: \"kubernetes.io/projected/ab180a60-67ff-4295-8120-a9abca520ee8-kube-api-access-vbb84\") pod \"nova-cell0-cell-mapping-g22d4\" (UID: \"ab180a60-67ff-4295-8120-a9abca520ee8\") " pod="openstack/nova-cell0-cell-mapping-g22d4"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.359645 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-gf8rp"]
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.361431 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-gf8rp"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.405741 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-gf8rp"]
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.414768 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-g22d4"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.418118 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vphsf\" (UniqueName: \"kubernetes.io/projected/e4002ca1-76f0-4c36-bd5b-441b4d16013d-kube-api-access-vphsf\") pod \"nova-api-0\" (UID: \"e4002ca1-76f0-4c36-bd5b-441b4d16013d\") " pod="openstack/nova-api-0"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.418323 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4002ca1-76f0-4c36-bd5b-441b4d16013d-logs\") pod \"nova-api-0\" (UID: \"e4002ca1-76f0-4c36-bd5b-441b4d16013d\") " pod="openstack/nova-api-0"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.418429 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ba1a60c-8108-4ef6-a04c-c30d77f58d51-operator-scripts\") pod \"aodh-db-create-gf8rp\" (UID: \"2ba1a60c-8108-4ef6-a04c-c30d77f58d51\") " pod="openstack/aodh-db-create-gf8rp"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.418594 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwwjw\" (UniqueName: \"kubernetes.io/projected/2ba1a60c-8108-4ef6-a04c-c30d77f58d51-kube-api-access-hwwjw\") pod \"aodh-db-create-gf8rp\" (UID: \"2ba1a60c-8108-4ef6-a04c-c30d77f58d51\") " pod="openstack/aodh-db-create-gf8rp"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.419156 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4002ca1-76f0-4c36-bd5b-441b4d16013d-logs\") pod \"nova-api-0\" (UID: \"e4002ca1-76f0-4c36-bd5b-441b4d16013d\") " pod="openstack/nova-api-0"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.425219 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4002ca1-76f0-4c36-bd5b-441b4d16013d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e4002ca1-76f0-4c36-bd5b-441b4d16013d\") " pod="openstack/nova-api-0"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.427237 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4002ca1-76f0-4c36-bd5b-441b4d16013d-config-data\") pod \"nova-api-0\" (UID: \"e4002ca1-76f0-4c36-bd5b-441b4d16013d\") " pod="openstack/nova-api-0"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.429338 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4002ca1-76f0-4c36-bd5b-441b4d16013d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e4002ca1-76f0-4c36-bd5b-441b4d16013d\") " pod="openstack/nova-api-0"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.435135 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4002ca1-76f0-4c36-bd5b-441b4d16013d-config-data\") pod \"nova-api-0\" (UID: \"e4002ca1-76f0-4c36-bd5b-441b4d16013d\") " pod="openstack/nova-api-0"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.468181 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vphsf\" (UniqueName: \"kubernetes.io/projected/e4002ca1-76f0-4c36-bd5b-441b4d16013d-kube-api-access-vphsf\") pod \"nova-api-0\" (UID: \"e4002ca1-76f0-4c36-bd5b-441b4d16013d\") " pod="openstack/nova-api-0"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.529730 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ba1a60c-8108-4ef6-a04c-c30d77f58d51-operator-scripts\") pod \"aodh-db-create-gf8rp\" (UID: \"2ba1a60c-8108-4ef6-a04c-c30d77f58d51\") " pod="openstack/aodh-db-create-gf8rp"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.531206 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwwjw\" (UniqueName: \"kubernetes.io/projected/2ba1a60c-8108-4ef6-a04c-c30d77f58d51-kube-api-access-hwwjw\") pod \"aodh-db-create-gf8rp\" (UID: \"2ba1a60c-8108-4ef6-a04c-c30d77f58d51\") " pod="openstack/aodh-db-create-gf8rp"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.536887 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ba1a60c-8108-4ef6-a04c-c30d77f58d51-operator-scripts\") pod \"aodh-db-create-gf8rp\" (UID: \"2ba1a60c-8108-4ef6-a04c-c30d77f58d51\") " pod="openstack/aodh-db-create-gf8rp"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.559032 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.560973 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.580864 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.595426 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwwjw\" (UniqueName: \"kubernetes.io/projected/2ba1a60c-8108-4ef6-a04c-c30d77f58d51-kube-api-access-hwwjw\") pod \"aodh-db-create-gf8rp\" (UID: \"2ba1a60c-8108-4ef6-a04c-c30d77f58d51\") " pod="openstack/aodh-db-create-gf8rp"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.615568 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.636236 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.644704 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvhjz\" (UniqueName: \"kubernetes.io/projected/12011422-6209-4261-8a15-6d7033a9c33e-kube-api-access-fvhjz\") pod \"nova-scheduler-0\" (UID: \"12011422-6209-4261-8a15-6d7033a9c33e\") " pod="openstack/nova-scheduler-0"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.644767 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12011422-6209-4261-8a15-6d7033a9c33e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"12011422-6209-4261-8a15-6d7033a9c33e\") " pod="openstack/nova-scheduler-0"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.644805 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12011422-6209-4261-8a15-6d7033a9c33e-config-data\") pod \"nova-scheduler-0\" (UID: \"12011422-6209-4261-8a15-6d7033a9c33e\") " pod="openstack/nova-scheduler-0"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.656707 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.660730 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.666334 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.688846 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.690311 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.703771 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.712962 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.746821 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a607952c-53e6-4fa2-95e3-213bb1699cdb-logs\") pod \"nova-metadata-0\" (UID: \"a607952c-53e6-4fa2-95e3-213bb1699cdb\") " pod="openstack/nova-metadata-0"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.746902 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7zg6\" (UniqueName: \"kubernetes.io/projected/ff4590bd-1807-4ea4-8bc6-303844d873f1-kube-api-access-p7zg6\") pod \"nova-cell1-novncproxy-0\" (UID: \"ff4590bd-1807-4ea4-8bc6-303844d873f1\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.746975 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff4590bd-1807-4ea4-8bc6-303844d873f1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ff4590bd-1807-4ea4-8bc6-303844d873f1\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.747019 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a607952c-53e6-4fa2-95e3-213bb1699cdb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a607952c-53e6-4fa2-95e3-213bb1699cdb\") " pod="openstack/nova-metadata-0"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.747035 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff4590bd-1807-4ea4-8bc6-303844d873f1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ff4590bd-1807-4ea4-8bc6-303844d873f1\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.747112 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvhjz\" (UniqueName: \"kubernetes.io/projected/12011422-6209-4261-8a15-6d7033a9c33e-kube-api-access-fvhjz\") pod \"nova-scheduler-0\" (UID: \"12011422-6209-4261-8a15-6d7033a9c33e\") " pod="openstack/nova-scheduler-0"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.747136 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a607952c-53e6-4fa2-95e3-213bb1699cdb-config-data\") pod \"nova-metadata-0\" (UID: \"a607952c-53e6-4fa2-95e3-213bb1699cdb\") " pod="openstack/nova-metadata-0"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.747164 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12011422-6209-4261-8a15-6d7033a9c33e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"12011422-6209-4261-8a15-6d7033a9c33e\") " pod="openstack/nova-scheduler-0"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.747196 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qscbp\" (UniqueName: \"kubernetes.io/projected/a607952c-53e6-4fa2-95e3-213bb1699cdb-kube-api-access-qscbp\") pod \"nova-metadata-0\" (UID: \"a607952c-53e6-4fa2-95e3-213bb1699cdb\") " pod="openstack/nova-metadata-0"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.747215 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12011422-6209-4261-8a15-6d7033a9c33e-config-data\") pod \"nova-scheduler-0\" (UID: \"12011422-6209-4261-8a15-6d7033a9c33e\") " pod="openstack/nova-scheduler-0"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.756546 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12011422-6209-4261-8a15-6d7033a9c33e-config-data\") pod \"nova-scheduler-0\" (UID: \"12011422-6209-4261-8a15-6d7033a9c33e\") " pod="openstack/nova-scheduler-0"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.757948 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12011422-6209-4261-8a15-6d7033a9c33e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"12011422-6209-4261-8a15-6d7033a9c33e\") " pod="openstack/nova-scheduler-0"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.804664 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvhjz\" (UniqueName: \"kubernetes.io/projected/12011422-6209-4261-8a15-6d7033a9c33e-kube-api-access-fvhjz\") pod \"nova-scheduler-0\" (UID: \"12011422-6209-4261-8a15-6d7033a9c33e\") " pod="openstack/nova-scheduler-0"
Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.810630 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-30c4-account-create-update-s2cd2"]
Jan 26
16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.812303 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-30c4-account-create-update-s2cd2" Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.826628 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.851360 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff4590bd-1807-4ea4-8bc6-303844d873f1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ff4590bd-1807-4ea4-8bc6-303844d873f1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.851438 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a607952c-53e6-4fa2-95e3-213bb1699cdb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a607952c-53e6-4fa2-95e3-213bb1699cdb\") " pod="openstack/nova-metadata-0" Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.851457 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff4590bd-1807-4ea4-8bc6-303844d873f1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ff4590bd-1807-4ea4-8bc6-303844d873f1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.851513 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft5v8\" (UniqueName: \"kubernetes.io/projected/f9ce9a3a-5be5-4fa9-85ce-530c8f8cb801-kube-api-access-ft5v8\") pod \"aodh-30c4-account-create-update-s2cd2\" (UID: \"f9ce9a3a-5be5-4fa9-85ce-530c8f8cb801\") " pod="openstack/aodh-30c4-account-create-update-s2cd2" Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.851566 4896 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a607952c-53e6-4fa2-95e3-213bb1699cdb-config-data\") pod \"nova-metadata-0\" (UID: \"a607952c-53e6-4fa2-95e3-213bb1699cdb\") " pod="openstack/nova-metadata-0" Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.851627 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qscbp\" (UniqueName: \"kubernetes.io/projected/a607952c-53e6-4fa2-95e3-213bb1699cdb-kube-api-access-qscbp\") pod \"nova-metadata-0\" (UID: \"a607952c-53e6-4fa2-95e3-213bb1699cdb\") " pod="openstack/nova-metadata-0" Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.851679 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a607952c-53e6-4fa2-95e3-213bb1699cdb-logs\") pod \"nova-metadata-0\" (UID: \"a607952c-53e6-4fa2-95e3-213bb1699cdb\") " pod="openstack/nova-metadata-0" Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.851732 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7zg6\" (UniqueName: \"kubernetes.io/projected/ff4590bd-1807-4ea4-8bc6-303844d873f1-kube-api-access-p7zg6\") pod \"nova-cell1-novncproxy-0\" (UID: \"ff4590bd-1807-4ea4-8bc6-303844d873f1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.851785 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9ce9a3a-5be5-4fa9-85ce-530c8f8cb801-operator-scripts\") pod \"aodh-30c4-account-create-update-s2cd2\" (UID: \"f9ce9a3a-5be5-4fa9-85ce-530c8f8cb801\") " pod="openstack/aodh-30c4-account-create-update-s2cd2" Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.856057 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/a607952c-53e6-4fa2-95e3-213bb1699cdb-logs\") pod \"nova-metadata-0\" (UID: \"a607952c-53e6-4fa2-95e3-213bb1699cdb\") " pod="openstack/nova-metadata-0" Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.860127 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a607952c-53e6-4fa2-95e3-213bb1699cdb-config-data\") pod \"nova-metadata-0\" (UID: \"a607952c-53e6-4fa2-95e3-213bb1699cdb\") " pod="openstack/nova-metadata-0" Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.863235 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a607952c-53e6-4fa2-95e3-213bb1699cdb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a607952c-53e6-4fa2-95e3-213bb1699cdb\") " pod="openstack/nova-metadata-0" Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.865420 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-gf8rp" Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.867859 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff4590bd-1807-4ea4-8bc6-303844d873f1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ff4590bd-1807-4ea4-8bc6-303844d873f1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.872255 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff4590bd-1807-4ea4-8bc6-303844d873f1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ff4590bd-1807-4ea4-8bc6-303844d873f1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.872340 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 16:03:11 crc kubenswrapper[4896]: 
I0126 16:03:11.890273 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qscbp\" (UniqueName: \"kubernetes.io/projected/a607952c-53e6-4fa2-95e3-213bb1699cdb-kube-api-access-qscbp\") pod \"nova-metadata-0\" (UID: \"a607952c-53e6-4fa2-95e3-213bb1699cdb\") " pod="openstack/nova-metadata-0" Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.917515 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.918064 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7zg6\" (UniqueName: \"kubernetes.io/projected/ff4590bd-1807-4ea4-8bc6-303844d873f1-kube-api-access-p7zg6\") pod \"nova-cell1-novncproxy-0\" (UID: \"ff4590bd-1807-4ea4-8bc6-303844d873f1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.930451 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-30c4-account-create-update-s2cd2"] Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.953506 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9ce9a3a-5be5-4fa9-85ce-530c8f8cb801-operator-scripts\") pod \"aodh-30c4-account-create-update-s2cd2\" (UID: \"f9ce9a3a-5be5-4fa9-85ce-530c8f8cb801\") " pod="openstack/aodh-30c4-account-create-update-s2cd2" Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.953650 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft5v8\" (UniqueName: \"kubernetes.io/projected/f9ce9a3a-5be5-4fa9-85ce-530c8f8cb801-kube-api-access-ft5v8\") pod \"aodh-30c4-account-create-update-s2cd2\" (UID: \"f9ce9a3a-5be5-4fa9-85ce-530c8f8cb801\") " pod="openstack/aodh-30c4-account-create-update-s2cd2" Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.955345 4896 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9ce9a3a-5be5-4fa9-85ce-530c8f8cb801-operator-scripts\") pod \"aodh-30c4-account-create-update-s2cd2\" (UID: \"f9ce9a3a-5be5-4fa9-85ce-530c8f8cb801\") " pod="openstack/aodh-30c4-account-create-update-s2cd2" Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.956927 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-xkmjm"] Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.966732 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-xkmjm"] Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.966833 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-xkmjm" Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.986214 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft5v8\" (UniqueName: \"kubernetes.io/projected/f9ce9a3a-5be5-4fa9-85ce-530c8f8cb801-kube-api-access-ft5v8\") pod \"aodh-30c4-account-create-update-s2cd2\" (UID: \"f9ce9a3a-5be5-4fa9-85ce-530c8f8cb801\") " pod="openstack/aodh-30c4-account-create-update-s2cd2" Jan 26 16:03:11 crc kubenswrapper[4896]: I0126 16:03:11.993125 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 16:03:12 crc kubenswrapper[4896]: I0126 16:03:12.049388 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:03:12 crc kubenswrapper[4896]: I0126 16:03:12.051790 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-30c4-account-create-update-s2cd2" Jan 26 16:03:12 crc kubenswrapper[4896]: I0126 16:03:12.058720 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f64215e-328e-46a2-b3ee-4518c095ba5f-config\") pod \"dnsmasq-dns-9b86998b5-xkmjm\" (UID: \"9f64215e-328e-46a2-b3ee-4518c095ba5f\") " pod="openstack/dnsmasq-dns-9b86998b5-xkmjm" Jan 26 16:03:12 crc kubenswrapper[4896]: I0126 16:03:12.058953 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f64215e-328e-46a2-b3ee-4518c095ba5f-dns-svc\") pod \"dnsmasq-dns-9b86998b5-xkmjm\" (UID: \"9f64215e-328e-46a2-b3ee-4518c095ba5f\") " pod="openstack/dnsmasq-dns-9b86998b5-xkmjm" Jan 26 16:03:12 crc kubenswrapper[4896]: I0126 16:03:12.059118 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f64215e-328e-46a2-b3ee-4518c095ba5f-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-xkmjm\" (UID: \"9f64215e-328e-46a2-b3ee-4518c095ba5f\") " pod="openstack/dnsmasq-dns-9b86998b5-xkmjm" Jan 26 16:03:12 crc kubenswrapper[4896]: I0126 16:03:12.059406 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzj7l\" (UniqueName: \"kubernetes.io/projected/9f64215e-328e-46a2-b3ee-4518c095ba5f-kube-api-access-zzj7l\") pod \"dnsmasq-dns-9b86998b5-xkmjm\" (UID: \"9f64215e-328e-46a2-b3ee-4518c095ba5f\") " pod="openstack/dnsmasq-dns-9b86998b5-xkmjm" Jan 26 16:03:12 crc kubenswrapper[4896]: I0126 16:03:12.059693 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f64215e-328e-46a2-b3ee-4518c095ba5f-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-xkmjm\" 
(UID: \"9f64215e-328e-46a2-b3ee-4518c095ba5f\") " pod="openstack/dnsmasq-dns-9b86998b5-xkmjm" Jan 26 16:03:12 crc kubenswrapper[4896]: I0126 16:03:12.059851 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9f64215e-328e-46a2-b3ee-4518c095ba5f-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-xkmjm\" (UID: \"9f64215e-328e-46a2-b3ee-4518c095ba5f\") " pod="openstack/dnsmasq-dns-9b86998b5-xkmjm" Jan 26 16:03:12 crc kubenswrapper[4896]: I0126 16:03:12.169923 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f64215e-328e-46a2-b3ee-4518c095ba5f-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-xkmjm\" (UID: \"9f64215e-328e-46a2-b3ee-4518c095ba5f\") " pod="openstack/dnsmasq-dns-9b86998b5-xkmjm" Jan 26 16:03:12 crc kubenswrapper[4896]: I0126 16:03:12.170324 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9f64215e-328e-46a2-b3ee-4518c095ba5f-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-xkmjm\" (UID: \"9f64215e-328e-46a2-b3ee-4518c095ba5f\") " pod="openstack/dnsmasq-dns-9b86998b5-xkmjm" Jan 26 16:03:12 crc kubenswrapper[4896]: I0126 16:03:12.170566 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f64215e-328e-46a2-b3ee-4518c095ba5f-config\") pod \"dnsmasq-dns-9b86998b5-xkmjm\" (UID: \"9f64215e-328e-46a2-b3ee-4518c095ba5f\") " pod="openstack/dnsmasq-dns-9b86998b5-xkmjm" Jan 26 16:03:12 crc kubenswrapper[4896]: I0126 16:03:12.170673 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f64215e-328e-46a2-b3ee-4518c095ba5f-dns-svc\") pod \"dnsmasq-dns-9b86998b5-xkmjm\" (UID: \"9f64215e-328e-46a2-b3ee-4518c095ba5f\") " 
pod="openstack/dnsmasq-dns-9b86998b5-xkmjm" Jan 26 16:03:12 crc kubenswrapper[4896]: I0126 16:03:12.170758 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f64215e-328e-46a2-b3ee-4518c095ba5f-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-xkmjm\" (UID: \"9f64215e-328e-46a2-b3ee-4518c095ba5f\") " pod="openstack/dnsmasq-dns-9b86998b5-xkmjm" Jan 26 16:03:12 crc kubenswrapper[4896]: I0126 16:03:12.170787 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzj7l\" (UniqueName: \"kubernetes.io/projected/9f64215e-328e-46a2-b3ee-4518c095ba5f-kube-api-access-zzj7l\") pod \"dnsmasq-dns-9b86998b5-xkmjm\" (UID: \"9f64215e-328e-46a2-b3ee-4518c095ba5f\") " pod="openstack/dnsmasq-dns-9b86998b5-xkmjm" Jan 26 16:03:12 crc kubenswrapper[4896]: I0126 16:03:12.171070 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f64215e-328e-46a2-b3ee-4518c095ba5f-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-xkmjm\" (UID: \"9f64215e-328e-46a2-b3ee-4518c095ba5f\") " pod="openstack/dnsmasq-dns-9b86998b5-xkmjm" Jan 26 16:03:12 crc kubenswrapper[4896]: I0126 16:03:12.171870 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f64215e-328e-46a2-b3ee-4518c095ba5f-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-xkmjm\" (UID: \"9f64215e-328e-46a2-b3ee-4518c095ba5f\") " pod="openstack/dnsmasq-dns-9b86998b5-xkmjm" Jan 26 16:03:12 crc kubenswrapper[4896]: I0126 16:03:12.171959 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f64215e-328e-46a2-b3ee-4518c095ba5f-dns-svc\") pod \"dnsmasq-dns-9b86998b5-xkmjm\" (UID: \"9f64215e-328e-46a2-b3ee-4518c095ba5f\") " pod="openstack/dnsmasq-dns-9b86998b5-xkmjm" Jan 26 16:03:12 crc kubenswrapper[4896]: 
I0126 16:03:12.172060 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9f64215e-328e-46a2-b3ee-4518c095ba5f-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-xkmjm\" (UID: \"9f64215e-328e-46a2-b3ee-4518c095ba5f\") " pod="openstack/dnsmasq-dns-9b86998b5-xkmjm" Jan 26 16:03:12 crc kubenswrapper[4896]: I0126 16:03:12.174937 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f64215e-328e-46a2-b3ee-4518c095ba5f-config\") pod \"dnsmasq-dns-9b86998b5-xkmjm\" (UID: \"9f64215e-328e-46a2-b3ee-4518c095ba5f\") " pod="openstack/dnsmasq-dns-9b86998b5-xkmjm" Jan 26 16:03:12 crc kubenswrapper[4896]: I0126 16:03:12.210918 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzj7l\" (UniqueName: \"kubernetes.io/projected/9f64215e-328e-46a2-b3ee-4518c095ba5f-kube-api-access-zzj7l\") pod \"dnsmasq-dns-9b86998b5-xkmjm\" (UID: \"9f64215e-328e-46a2-b3ee-4518c095ba5f\") " pod="openstack/dnsmasq-dns-9b86998b5-xkmjm" Jan 26 16:03:12 crc kubenswrapper[4896]: I0126 16:03:12.396692 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-xkmjm" Jan 26 16:03:12 crc kubenswrapper[4896]: I0126 16:03:12.410927 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-g22d4"] Jan 26 16:03:12 crc kubenswrapper[4896]: I0126 16:03:12.446854 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3d27f14-359d-4ed0-91f5-107202d86bbb","Type":"ContainerStarted","Data":"eeef585fe7cddd7e7e1189401afbe548a704658d05100b0216fa9fdd2213772a"} Jan 26 16:03:12 crc kubenswrapper[4896]: I0126 16:03:12.577860 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 16:03:12 crc kubenswrapper[4896]: W0126 16:03:12.580182 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4002ca1_76f0_4c36_bd5b_441b4d16013d.slice/crio-68f9852cca7c395a0bac6c85eccd27f619159056d9407c4118b544109bbf1799 WatchSource:0}: Error finding container 68f9852cca7c395a0bac6c85eccd27f619159056d9407c4118b544109bbf1799: Status 404 returned error can't find the container with id 68f9852cca7c395a0bac6c85eccd27f619159056d9407c4118b544109bbf1799 Jan 26 16:03:12 crc kubenswrapper[4896]: I0126 16:03:12.850950 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-gf8rp"] Jan 26 16:03:12 crc kubenswrapper[4896]: I0126 16:03:12.851364 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4r45x"] Jan 26 16:03:12 crc kubenswrapper[4896]: I0126 16:03:12.853220 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4r45x" Jan 26 16:03:12 crc kubenswrapper[4896]: I0126 16:03:12.855797 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 26 16:03:12 crc kubenswrapper[4896]: I0126 16:03:12.855930 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 26 16:03:12 crc kubenswrapper[4896]: I0126 16:03:12.860496 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4r45x"] Jan 26 16:03:12 crc kubenswrapper[4896]: I0126 16:03:12.874315 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 16:03:12 crc kubenswrapper[4896]: I0126 16:03:12.882342 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 16:03:13 crc kubenswrapper[4896]: I0126 16:03:13.051060 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68a4442a-a1a0-4827-bba5-7c8a3ea1e80a-config-data\") pod \"nova-cell1-conductor-db-sync-4r45x\" (UID: \"68a4442a-a1a0-4827-bba5-7c8a3ea1e80a\") " pod="openstack/nova-cell1-conductor-db-sync-4r45x" Jan 26 16:03:13 crc kubenswrapper[4896]: I0126 16:03:13.051141 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68a4442a-a1a0-4827-bba5-7c8a3ea1e80a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-4r45x\" (UID: \"68a4442a-a1a0-4827-bba5-7c8a3ea1e80a\") " pod="openstack/nova-cell1-conductor-db-sync-4r45x" Jan 26 16:03:13 crc kubenswrapper[4896]: I0126 16:03:13.051179 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68a4442a-a1a0-4827-bba5-7c8a3ea1e80a-scripts\") pod 
\"nova-cell1-conductor-db-sync-4r45x\" (UID: \"68a4442a-a1a0-4827-bba5-7c8a3ea1e80a\") " pod="openstack/nova-cell1-conductor-db-sync-4r45x" Jan 26 16:03:13 crc kubenswrapper[4896]: I0126 16:03:13.051347 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgj2q\" (UniqueName: \"kubernetes.io/projected/68a4442a-a1a0-4827-bba5-7c8a3ea1e80a-kube-api-access-wgj2q\") pod \"nova-cell1-conductor-db-sync-4r45x\" (UID: \"68a4442a-a1a0-4827-bba5-7c8a3ea1e80a\") " pod="openstack/nova-cell1-conductor-db-sync-4r45x" Jan 26 16:03:13 crc kubenswrapper[4896]: I0126 16:03:13.154554 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68a4442a-a1a0-4827-bba5-7c8a3ea1e80a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-4r45x\" (UID: \"68a4442a-a1a0-4827-bba5-7c8a3ea1e80a\") " pod="openstack/nova-cell1-conductor-db-sync-4r45x" Jan 26 16:03:13 crc kubenswrapper[4896]: I0126 16:03:13.154826 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68a4442a-a1a0-4827-bba5-7c8a3ea1e80a-scripts\") pod \"nova-cell1-conductor-db-sync-4r45x\" (UID: \"68a4442a-a1a0-4827-bba5-7c8a3ea1e80a\") " pod="openstack/nova-cell1-conductor-db-sync-4r45x" Jan 26 16:03:13 crc kubenswrapper[4896]: I0126 16:03:13.155047 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgj2q\" (UniqueName: \"kubernetes.io/projected/68a4442a-a1a0-4827-bba5-7c8a3ea1e80a-kube-api-access-wgj2q\") pod \"nova-cell1-conductor-db-sync-4r45x\" (UID: \"68a4442a-a1a0-4827-bba5-7c8a3ea1e80a\") " pod="openstack/nova-cell1-conductor-db-sync-4r45x" Jan 26 16:03:13 crc kubenswrapper[4896]: I0126 16:03:13.156089 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/68a4442a-a1a0-4827-bba5-7c8a3ea1e80a-config-data\") pod \"nova-cell1-conductor-db-sync-4r45x\" (UID: \"68a4442a-a1a0-4827-bba5-7c8a3ea1e80a\") " pod="openstack/nova-cell1-conductor-db-sync-4r45x" Jan 26 16:03:13 crc kubenswrapper[4896]: I0126 16:03:13.166191 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68a4442a-a1a0-4827-bba5-7c8a3ea1e80a-scripts\") pod \"nova-cell1-conductor-db-sync-4r45x\" (UID: \"68a4442a-a1a0-4827-bba5-7c8a3ea1e80a\") " pod="openstack/nova-cell1-conductor-db-sync-4r45x" Jan 26 16:03:13 crc kubenswrapper[4896]: I0126 16:03:13.172538 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68a4442a-a1a0-4827-bba5-7c8a3ea1e80a-config-data\") pod \"nova-cell1-conductor-db-sync-4r45x\" (UID: \"68a4442a-a1a0-4827-bba5-7c8a3ea1e80a\") " pod="openstack/nova-cell1-conductor-db-sync-4r45x" Jan 26 16:03:13 crc kubenswrapper[4896]: I0126 16:03:13.173359 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68a4442a-a1a0-4827-bba5-7c8a3ea1e80a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-4r45x\" (UID: \"68a4442a-a1a0-4827-bba5-7c8a3ea1e80a\") " pod="openstack/nova-cell1-conductor-db-sync-4r45x" Jan 26 16:03:13 crc kubenswrapper[4896]: I0126 16:03:13.178451 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgj2q\" (UniqueName: \"kubernetes.io/projected/68a4442a-a1a0-4827-bba5-7c8a3ea1e80a-kube-api-access-wgj2q\") pod \"nova-cell1-conductor-db-sync-4r45x\" (UID: \"68a4442a-a1a0-4827-bba5-7c8a3ea1e80a\") " pod="openstack/nova-cell1-conductor-db-sync-4r45x" Jan 26 16:03:13 crc kubenswrapper[4896]: I0126 16:03:13.339053 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4r45x" Jan 26 16:03:13 crc kubenswrapper[4896]: I0126 16:03:13.351842 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 16:03:13 crc kubenswrapper[4896]: I0126 16:03:13.384762 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-30c4-account-create-update-s2cd2"] Jan 26 16:03:13 crc kubenswrapper[4896]: W0126 16:03:13.391360 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff4590bd_1807_4ea4_8bc6_303844d873f1.slice/crio-cba1399f379e9b2c42ac80beed515a24288764325d00bfd038e97f8c49cd4c78 WatchSource:0}: Error finding container cba1399f379e9b2c42ac80beed515a24288764325d00bfd038e97f8c49cd4c78: Status 404 returned error can't find the container with id cba1399f379e9b2c42ac80beed515a24288764325d00bfd038e97f8c49cd4c78 Jan 26 16:03:13 crc kubenswrapper[4896]: I0126 16:03:13.532001 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e4002ca1-76f0-4c36-bd5b-441b4d16013d","Type":"ContainerStarted","Data":"68f9852cca7c395a0bac6c85eccd27f619159056d9407c4118b544109bbf1799"} Jan 26 16:03:13 crc kubenswrapper[4896]: I0126 16:03:13.539874 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-g22d4" event={"ID":"ab180a60-67ff-4295-8120-a9abca520ee8","Type":"ContainerStarted","Data":"044689026bb29f35af87ff22dcb3b205ef9ab1cd408d4aa40301a391b4d6aa16"} Jan 26 16:03:13 crc kubenswrapper[4896]: I0126 16:03:13.540216 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-g22d4" event={"ID":"ab180a60-67ff-4295-8120-a9abca520ee8","Type":"ContainerStarted","Data":"e6262c9b76bdd47aefd84d4e7010fdac05660347e7ab49243bbb6a8ad6467556"} Jan 26 16:03:13 crc kubenswrapper[4896]: I0126 16:03:13.567809 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-scheduler-0" event={"ID":"12011422-6209-4261-8a15-6d7033a9c33e","Type":"ContainerStarted","Data":"d19150ab231024b7ed3f0e403cb2c6ac991897cb617a8cb7441fb32dc298b7b3"} Jan 26 16:03:13 crc kubenswrapper[4896]: I0126 16:03:13.578931 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a607952c-53e6-4fa2-95e3-213bb1699cdb","Type":"ContainerStarted","Data":"aa27e10810b475cbf9ff80a1de74cb2d0eaa2142302aa19e4516dd3044cb82a6"} Jan 26 16:03:13 crc kubenswrapper[4896]: I0126 16:03:13.582343 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ff4590bd-1807-4ea4-8bc6-303844d873f1","Type":"ContainerStarted","Data":"cba1399f379e9b2c42ac80beed515a24288764325d00bfd038e97f8c49cd4c78"} Jan 26 16:03:13 crc kubenswrapper[4896]: I0126 16:03:13.597401 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-xkmjm"] Jan 26 16:03:13 crc kubenswrapper[4896]: I0126 16:03:13.603770 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-g22d4" podStartSLOduration=2.603741994 podStartE2EDuration="2.603741994s" podCreationTimestamp="2026-01-26 16:03:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:03:13.580098982 +0000 UTC m=+1751.361979365" watchObservedRunningTime="2026-01-26 16:03:13.603741994 +0000 UTC m=+1751.385622397" Jan 26 16:03:13 crc kubenswrapper[4896]: I0126 16:03:13.609975 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-gf8rp" event={"ID":"2ba1a60c-8108-4ef6-a04c-c30d77f58d51","Type":"ContainerStarted","Data":"d5de8a82fdf3faccd6e468c25242c16e1dbc155fd5c98b21006b7d646a92cd38"} Jan 26 16:03:13 crc kubenswrapper[4896]: I0126 16:03:13.610021 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-gf8rp" 
event={"ID":"2ba1a60c-8108-4ef6-a04c-c30d77f58d51","Type":"ContainerStarted","Data":"df7ed8c65b76a48db2ac66a8cd678f75390dd8cbdf2ae6fdc64cb018d753e0b1"} Jan 26 16:03:13 crc kubenswrapper[4896]: I0126 16:03:13.618941 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-30c4-account-create-update-s2cd2" event={"ID":"f9ce9a3a-5be5-4fa9-85ce-530c8f8cb801","Type":"ContainerStarted","Data":"c9331ffef73a672825be099d2fc52f3a5ad363023870069e6c1301fe9a24db22"} Jan 26 16:03:14 crc kubenswrapper[4896]: I0126 16:03:14.114686 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4r45x"] Jan 26 16:03:14 crc kubenswrapper[4896]: I0126 16:03:14.640141 4896 generic.go:334] "Generic (PLEG): container finished" podID="f9ce9a3a-5be5-4fa9-85ce-530c8f8cb801" containerID="2a965a7e2ded6a062041eef6de0aee5fd40607e569ed099149be39e866fc5dff" exitCode=0 Jan 26 16:03:14 crc kubenswrapper[4896]: I0126 16:03:14.640384 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-30c4-account-create-update-s2cd2" event={"ID":"f9ce9a3a-5be5-4fa9-85ce-530c8f8cb801","Type":"ContainerDied","Data":"2a965a7e2ded6a062041eef6de0aee5fd40607e569ed099149be39e866fc5dff"} Jan 26 16:03:14 crc kubenswrapper[4896]: I0126 16:03:14.643541 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4r45x" event={"ID":"68a4442a-a1a0-4827-bba5-7c8a3ea1e80a","Type":"ContainerStarted","Data":"d279cec2a24c6a27a6ffce68842a16292aca894fba94d1b4c4ede8df09a3898b"} Jan 26 16:03:14 crc kubenswrapper[4896]: I0126 16:03:14.653381 4896 generic.go:334] "Generic (PLEG): container finished" podID="9f64215e-328e-46a2-b3ee-4518c095ba5f" containerID="733e488dbf7f268fceb01b29ba039a5694d26bd1283e479b61276edb7d770c46" exitCode=0 Jan 26 16:03:14 crc kubenswrapper[4896]: I0126 16:03:14.653451 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-xkmjm" 
event={"ID":"9f64215e-328e-46a2-b3ee-4518c095ba5f","Type":"ContainerDied","Data":"733e488dbf7f268fceb01b29ba039a5694d26bd1283e479b61276edb7d770c46"} Jan 26 16:03:14 crc kubenswrapper[4896]: I0126 16:03:14.653478 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-xkmjm" event={"ID":"9f64215e-328e-46a2-b3ee-4518c095ba5f","Type":"ContainerStarted","Data":"04c5cfa93d31978aaff485b4c0892f0d6797947e43a91a2c1e61a7d3d26b51d4"} Jan 26 16:03:14 crc kubenswrapper[4896]: I0126 16:03:14.693507 4896 generic.go:334] "Generic (PLEG): container finished" podID="2ba1a60c-8108-4ef6-a04c-c30d77f58d51" containerID="d5de8a82fdf3faccd6e468c25242c16e1dbc155fd5c98b21006b7d646a92cd38" exitCode=0 Jan 26 16:03:14 crc kubenswrapper[4896]: I0126 16:03:14.693615 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-gf8rp" event={"ID":"2ba1a60c-8108-4ef6-a04c-c30d77f58d51","Type":"ContainerDied","Data":"d5de8a82fdf3faccd6e468c25242c16e1dbc155fd5c98b21006b7d646a92cd38"} Jan 26 16:03:14 crc kubenswrapper[4896]: I0126 16:03:14.738652 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3d27f14-359d-4ed0-91f5-107202d86bbb","Type":"ContainerStarted","Data":"91f3fa0a290a6d6950decef310d7f5046bd0be5f9652bb5c63479ed6b8f3ac97"} Jan 26 16:03:14 crc kubenswrapper[4896]: I0126 16:03:14.738725 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 16:03:15 crc kubenswrapper[4896]: I0126 16:03:15.302814 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-gf8rp" Jan 26 16:03:15 crc kubenswrapper[4896]: I0126 16:03:15.329451 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.76228316 podStartE2EDuration="7.329429834s" podCreationTimestamp="2026-01-26 16:03:08 +0000 UTC" firstStartedPulling="2026-01-26 16:03:09.361977157 +0000 UTC m=+1747.143857560" lastFinishedPulling="2026-01-26 16:03:13.929123841 +0000 UTC m=+1751.711004234" observedRunningTime="2026-01-26 16:03:14.817975028 +0000 UTC m=+1752.599855421" watchObservedRunningTime="2026-01-26 16:03:15.329429834 +0000 UTC m=+1753.111310217" Jan 26 16:03:15 crc kubenswrapper[4896]: I0126 16:03:15.420674 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 16:03:15 crc kubenswrapper[4896]: I0126 16:03:15.434419 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 16:03:15 crc kubenswrapper[4896]: I0126 16:03:15.455994 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwwjw\" (UniqueName: \"kubernetes.io/projected/2ba1a60c-8108-4ef6-a04c-c30d77f58d51-kube-api-access-hwwjw\") pod \"2ba1a60c-8108-4ef6-a04c-c30d77f58d51\" (UID: \"2ba1a60c-8108-4ef6-a04c-c30d77f58d51\") " Jan 26 16:03:15 crc kubenswrapper[4896]: I0126 16:03:15.456094 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ba1a60c-8108-4ef6-a04c-c30d77f58d51-operator-scripts\") pod \"2ba1a60c-8108-4ef6-a04c-c30d77f58d51\" (UID: \"2ba1a60c-8108-4ef6-a04c-c30d77f58d51\") " Jan 26 16:03:15 crc kubenswrapper[4896]: I0126 16:03:15.457244 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ba1a60c-8108-4ef6-a04c-c30d77f58d51-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"2ba1a60c-8108-4ef6-a04c-c30d77f58d51" (UID: "2ba1a60c-8108-4ef6-a04c-c30d77f58d51"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:03:15 crc kubenswrapper[4896]: I0126 16:03:15.461804 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ba1a60c-8108-4ef6-a04c-c30d77f58d51-kube-api-access-hwwjw" (OuterVolumeSpecName: "kube-api-access-hwwjw") pod "2ba1a60c-8108-4ef6-a04c-c30d77f58d51" (UID: "2ba1a60c-8108-4ef6-a04c-c30d77f58d51"). InnerVolumeSpecName "kube-api-access-hwwjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:03:15 crc kubenswrapper[4896]: I0126 16:03:15.565544 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwwjw\" (UniqueName: \"kubernetes.io/projected/2ba1a60c-8108-4ef6-a04c-c30d77f58d51-kube-api-access-hwwjw\") on node \"crc\" DevicePath \"\"" Jan 26 16:03:15 crc kubenswrapper[4896]: I0126 16:03:15.565608 4896 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ba1a60c-8108-4ef6-a04c-c30d77f58d51-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:03:15 crc kubenswrapper[4896]: I0126 16:03:15.754248 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4r45x" event={"ID":"68a4442a-a1a0-4827-bba5-7c8a3ea1e80a","Type":"ContainerStarted","Data":"02260fe602edd838d509d51a7a0433b3c4ad3f1cc6d647aaa436ca5960419349"} Jan 26 16:03:15 crc kubenswrapper[4896]: I0126 16:03:15.757258 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-xkmjm" event={"ID":"9f64215e-328e-46a2-b3ee-4518c095ba5f","Type":"ContainerStarted","Data":"edd3597a03e34145d5d65bb5f8d2e869a43e55a95cda96b2805733767ecca95b"} Jan 26 16:03:15 crc kubenswrapper[4896]: I0126 16:03:15.757468 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-9b86998b5-xkmjm" Jan 26 16:03:15 crc kubenswrapper[4896]: I0126 16:03:15.759960 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-gf8rp" event={"ID":"2ba1a60c-8108-4ef6-a04c-c30d77f58d51","Type":"ContainerDied","Data":"df7ed8c65b76a48db2ac66a8cd678f75390dd8cbdf2ae6fdc64cb018d753e0b1"} Jan 26 16:03:15 crc kubenswrapper[4896]: I0126 16:03:15.760123 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df7ed8c65b76a48db2ac66a8cd678f75390dd8cbdf2ae6fdc64cb018d753e0b1" Jan 26 16:03:15 crc kubenswrapper[4896]: I0126 16:03:15.760267 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-gf8rp" Jan 26 16:03:15 crc kubenswrapper[4896]: I0126 16:03:15.776417 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-4r45x" podStartSLOduration=3.7763971229999997 podStartE2EDuration="3.776397123s" podCreationTimestamp="2026-01-26 16:03:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:03:15.776177567 +0000 UTC m=+1753.558057960" watchObservedRunningTime="2026-01-26 16:03:15.776397123 +0000 UTC m=+1753.558277516" Jan 26 16:03:15 crc kubenswrapper[4896]: I0126 16:03:15.813618 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9b86998b5-xkmjm" podStartSLOduration=4.813570588 podStartE2EDuration="4.813570588s" podCreationTimestamp="2026-01-26 16:03:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:03:15.806503293 +0000 UTC m=+1753.588383686" watchObservedRunningTime="2026-01-26 16:03:15.813570588 +0000 UTC m=+1753.595450981" Jan 26 16:03:17 crc kubenswrapper[4896]: I0126 16:03:17.412756 4896 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/aodh-30c4-account-create-update-s2cd2" Jan 26 16:03:17 crc kubenswrapper[4896]: I0126 16:03:17.530454 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ft5v8\" (UniqueName: \"kubernetes.io/projected/f9ce9a3a-5be5-4fa9-85ce-530c8f8cb801-kube-api-access-ft5v8\") pod \"f9ce9a3a-5be5-4fa9-85ce-530c8f8cb801\" (UID: \"f9ce9a3a-5be5-4fa9-85ce-530c8f8cb801\") " Jan 26 16:03:17 crc kubenswrapper[4896]: I0126 16:03:17.530594 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9ce9a3a-5be5-4fa9-85ce-530c8f8cb801-operator-scripts\") pod \"f9ce9a3a-5be5-4fa9-85ce-530c8f8cb801\" (UID: \"f9ce9a3a-5be5-4fa9-85ce-530c8f8cb801\") " Jan 26 16:03:17 crc kubenswrapper[4896]: I0126 16:03:17.531420 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9ce9a3a-5be5-4fa9-85ce-530c8f8cb801-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f9ce9a3a-5be5-4fa9-85ce-530c8f8cb801" (UID: "f9ce9a3a-5be5-4fa9-85ce-530c8f8cb801"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:03:17 crc kubenswrapper[4896]: I0126 16:03:17.536228 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9ce9a3a-5be5-4fa9-85ce-530c8f8cb801-kube-api-access-ft5v8" (OuterVolumeSpecName: "kube-api-access-ft5v8") pod "f9ce9a3a-5be5-4fa9-85ce-530c8f8cb801" (UID: "f9ce9a3a-5be5-4fa9-85ce-530c8f8cb801"). InnerVolumeSpecName "kube-api-access-ft5v8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:03:17 crc kubenswrapper[4896]: I0126 16:03:17.634408 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ft5v8\" (UniqueName: \"kubernetes.io/projected/f9ce9a3a-5be5-4fa9-85ce-530c8f8cb801-kube-api-access-ft5v8\") on node \"crc\" DevicePath \"\"" Jan 26 16:03:17 crc kubenswrapper[4896]: I0126 16:03:17.634457 4896 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9ce9a3a-5be5-4fa9-85ce-530c8f8cb801-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:03:17 crc kubenswrapper[4896]: I0126 16:03:17.795121 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-30c4-account-create-update-s2cd2" event={"ID":"f9ce9a3a-5be5-4fa9-85ce-530c8f8cb801","Type":"ContainerDied","Data":"c9331ffef73a672825be099d2fc52f3a5ad363023870069e6c1301fe9a24db22"} Jan 26 16:03:17 crc kubenswrapper[4896]: I0126 16:03:17.795182 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9331ffef73a672825be099d2fc52f3a5ad363023870069e6c1301fe9a24db22" Jan 26 16:03:17 crc kubenswrapper[4896]: I0126 16:03:17.795368 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-30c4-account-create-update-s2cd2" Jan 26 16:03:17 crc kubenswrapper[4896]: I0126 16:03:17.806644 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e4002ca1-76f0-4c36-bd5b-441b4d16013d","Type":"ContainerStarted","Data":"2a572e150dc4585a2371cb14da31b88632b0ddb6d8f6b8781c1e36a17f8e85c6"} Jan 26 16:03:17 crc kubenswrapper[4896]: I0126 16:03:17.810455 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"12011422-6209-4261-8a15-6d7033a9c33e","Type":"ContainerStarted","Data":"f814226073e32c98a9e1416e0a618f138655c3eefc88ba780db064f83cb3d7ba"} Jan 26 16:03:17 crc kubenswrapper[4896]: I0126 16:03:17.815159 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a607952c-53e6-4fa2-95e3-213bb1699cdb","Type":"ContainerStarted","Data":"da50ce91237aad8ded237b33e33fe05bec485680e5ca03cbdd037d9351adfee7"} Jan 26 16:03:17 crc kubenswrapper[4896]: I0126 16:03:17.816343 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ff4590bd-1807-4ea4-8bc6-303844d873f1","Type":"ContainerStarted","Data":"abd55c23df36d63a95bc59f0abfd1c9b6b29da5b1d334037ff5c084ef0b06768"} Jan 26 16:03:17 crc kubenswrapper[4896]: I0126 16:03:17.816464 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="ff4590bd-1807-4ea4-8bc6-303844d873f1" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://abd55c23df36d63a95bc59f0abfd1c9b6b29da5b1d334037ff5c084ef0b06768" gracePeriod=30 Jan 26 16:03:17 crc kubenswrapper[4896]: I0126 16:03:17.831396 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.46505444 podStartE2EDuration="6.831375849s" podCreationTimestamp="2026-01-26 16:03:11 +0000 UTC" firstStartedPulling="2026-01-26 
16:03:12.952040782 +0000 UTC m=+1750.733921175" lastFinishedPulling="2026-01-26 16:03:17.318362191 +0000 UTC m=+1755.100242584" observedRunningTime="2026-01-26 16:03:17.829383685 +0000 UTC m=+1755.611264088" watchObservedRunningTime="2026-01-26 16:03:17.831375849 +0000 UTC m=+1755.613256242" Jan 26 16:03:17 crc kubenswrapper[4896]: I0126 16:03:17.867161 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.968828904 podStartE2EDuration="6.867127635s" podCreationTimestamp="2026-01-26 16:03:11 +0000 UTC" firstStartedPulling="2026-01-26 16:03:13.418674072 +0000 UTC m=+1751.200554465" lastFinishedPulling="2026-01-26 16:03:17.316972803 +0000 UTC m=+1755.098853196" observedRunningTime="2026-01-26 16:03:17.853400437 +0000 UTC m=+1755.635280850" watchObservedRunningTime="2026-01-26 16:03:17.867127635 +0000 UTC m=+1755.649008028" Jan 26 16:03:18 crc kubenswrapper[4896]: I0126 16:03:18.833614 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a607952c-53e6-4fa2-95e3-213bb1699cdb","Type":"ContainerStarted","Data":"31dbf411e207566601e7a71078abc7cf0dd179aba4742cd19ce82e05f301b980"} Jan 26 16:03:18 crc kubenswrapper[4896]: I0126 16:03:18.833984 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a607952c-53e6-4fa2-95e3-213bb1699cdb" containerName="nova-metadata-log" containerID="cri-o://da50ce91237aad8ded237b33e33fe05bec485680e5ca03cbdd037d9351adfee7" gracePeriod=30 Jan 26 16:03:18 crc kubenswrapper[4896]: I0126 16:03:18.834315 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a607952c-53e6-4fa2-95e3-213bb1699cdb" containerName="nova-metadata-metadata" containerID="cri-o://31dbf411e207566601e7a71078abc7cf0dd179aba4742cd19ce82e05f301b980" gracePeriod=30 Jan 26 16:03:18 crc kubenswrapper[4896]: I0126 16:03:18.838750 4896 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e4002ca1-76f0-4c36-bd5b-441b4d16013d","Type":"ContainerStarted","Data":"c5ca317b32280669f54f6baa6395fac2d604ff8390d2bfd63f8489a700e1eccc"} Jan 26 16:03:18 crc kubenswrapper[4896]: I0126 16:03:18.865379 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.488152629 podStartE2EDuration="7.865359457s" podCreationTimestamp="2026-01-26 16:03:11 +0000 UTC" firstStartedPulling="2026-01-26 16:03:12.951572379 +0000 UTC m=+1750.733452772" lastFinishedPulling="2026-01-26 16:03:17.328779207 +0000 UTC m=+1755.110659600" observedRunningTime="2026-01-26 16:03:18.862343603 +0000 UTC m=+1756.644223996" watchObservedRunningTime="2026-01-26 16:03:18.865359457 +0000 UTC m=+1756.647239850" Jan 26 16:03:19 crc kubenswrapper[4896]: I0126 16:03:19.699861 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 16:03:19 crc kubenswrapper[4896]: I0126 16:03:19.701002 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qscbp\" (UniqueName: \"kubernetes.io/projected/a607952c-53e6-4fa2-95e3-213bb1699cdb-kube-api-access-qscbp\") pod \"a607952c-53e6-4fa2-95e3-213bb1699cdb\" (UID: \"a607952c-53e6-4fa2-95e3-213bb1699cdb\") " Jan 26 16:03:19 crc kubenswrapper[4896]: I0126 16:03:19.701271 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a607952c-53e6-4fa2-95e3-213bb1699cdb-logs\") pod \"a607952c-53e6-4fa2-95e3-213bb1699cdb\" (UID: \"a607952c-53e6-4fa2-95e3-213bb1699cdb\") " Jan 26 16:03:19 crc kubenswrapper[4896]: I0126 16:03:19.701694 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a607952c-53e6-4fa2-95e3-213bb1699cdb-logs" (OuterVolumeSpecName: "logs") pod 
"a607952c-53e6-4fa2-95e3-213bb1699cdb" (UID: "a607952c-53e6-4fa2-95e3-213bb1699cdb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:03:19 crc kubenswrapper[4896]: I0126 16:03:19.705062 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a607952c-53e6-4fa2-95e3-213bb1699cdb-config-data\") pod \"a607952c-53e6-4fa2-95e3-213bb1699cdb\" (UID: \"a607952c-53e6-4fa2-95e3-213bb1699cdb\") " Jan 26 16:03:19 crc kubenswrapper[4896]: I0126 16:03:19.705187 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a607952c-53e6-4fa2-95e3-213bb1699cdb-combined-ca-bundle\") pod \"a607952c-53e6-4fa2-95e3-213bb1699cdb\" (UID: \"a607952c-53e6-4fa2-95e3-213bb1699cdb\") " Jan 26 16:03:19 crc kubenswrapper[4896]: I0126 16:03:19.706350 4896 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a607952c-53e6-4fa2-95e3-213bb1699cdb-logs\") on node \"crc\" DevicePath \"\"" Jan 26 16:03:19 crc kubenswrapper[4896]: I0126 16:03:19.708848 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a607952c-53e6-4fa2-95e3-213bb1699cdb-kube-api-access-qscbp" (OuterVolumeSpecName: "kube-api-access-qscbp") pod "a607952c-53e6-4fa2-95e3-213bb1699cdb" (UID: "a607952c-53e6-4fa2-95e3-213bb1699cdb"). InnerVolumeSpecName "kube-api-access-qscbp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:03:19 crc kubenswrapper[4896]: I0126 16:03:19.729814 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.020022798 podStartE2EDuration="8.729796781s" podCreationTimestamp="2026-01-26 16:03:11 +0000 UTC" firstStartedPulling="2026-01-26 16:03:12.608240107 +0000 UTC m=+1750.390120500" lastFinishedPulling="2026-01-26 16:03:17.31801409 +0000 UTC m=+1755.099894483" observedRunningTime="2026-01-26 16:03:18.888469534 +0000 UTC m=+1756.670349947" watchObservedRunningTime="2026-01-26 16:03:19.729796781 +0000 UTC m=+1757.511677174" Jan 26 16:03:19 crc kubenswrapper[4896]: I0126 16:03:19.754920 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a607952c-53e6-4fa2-95e3-213bb1699cdb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a607952c-53e6-4fa2-95e3-213bb1699cdb" (UID: "a607952c-53e6-4fa2-95e3-213bb1699cdb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:03:19 crc kubenswrapper[4896]: I0126 16:03:19.761145 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a607952c-53e6-4fa2-95e3-213bb1699cdb-config-data" (OuterVolumeSpecName: "config-data") pod "a607952c-53e6-4fa2-95e3-213bb1699cdb" (UID: "a607952c-53e6-4fa2-95e3-213bb1699cdb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:03:19 crc kubenswrapper[4896]: I0126 16:03:19.809993 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qscbp\" (UniqueName: \"kubernetes.io/projected/a607952c-53e6-4fa2-95e3-213bb1699cdb-kube-api-access-qscbp\") on node \"crc\" DevicePath \"\"" Jan 26 16:03:19 crc kubenswrapper[4896]: I0126 16:03:19.810038 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a607952c-53e6-4fa2-95e3-213bb1699cdb-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:03:19 crc kubenswrapper[4896]: I0126 16:03:19.810051 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a607952c-53e6-4fa2-95e3-213bb1699cdb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:03:19 crc kubenswrapper[4896]: I0126 16:03:19.855543 4896 generic.go:334] "Generic (PLEG): container finished" podID="a607952c-53e6-4fa2-95e3-213bb1699cdb" containerID="31dbf411e207566601e7a71078abc7cf0dd179aba4742cd19ce82e05f301b980" exitCode=0 Jan 26 16:03:19 crc kubenswrapper[4896]: I0126 16:03:19.855683 4896 generic.go:334] "Generic (PLEG): container finished" podID="a607952c-53e6-4fa2-95e3-213bb1699cdb" containerID="da50ce91237aad8ded237b33e33fe05bec485680e5ca03cbdd037d9351adfee7" exitCode=143 Jan 26 16:03:19 crc kubenswrapper[4896]: I0126 16:03:19.856935 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 16:03:19 crc kubenswrapper[4896]: I0126 16:03:19.860223 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a607952c-53e6-4fa2-95e3-213bb1699cdb","Type":"ContainerDied","Data":"31dbf411e207566601e7a71078abc7cf0dd179aba4742cd19ce82e05f301b980"} Jan 26 16:03:19 crc kubenswrapper[4896]: I0126 16:03:19.860299 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a607952c-53e6-4fa2-95e3-213bb1699cdb","Type":"ContainerDied","Data":"da50ce91237aad8ded237b33e33fe05bec485680e5ca03cbdd037d9351adfee7"} Jan 26 16:03:19 crc kubenswrapper[4896]: I0126 16:03:19.860318 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a607952c-53e6-4fa2-95e3-213bb1699cdb","Type":"ContainerDied","Data":"aa27e10810b475cbf9ff80a1de74cb2d0eaa2142302aa19e4516dd3044cb82a6"} Jan 26 16:03:19 crc kubenswrapper[4896]: I0126 16:03:19.860341 4896 scope.go:117] "RemoveContainer" containerID="31dbf411e207566601e7a71078abc7cf0dd179aba4742cd19ce82e05f301b980" Jan 26 16:03:19 crc kubenswrapper[4896]: I0126 16:03:19.908745 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 16:03:19 crc kubenswrapper[4896]: I0126 16:03:19.932182 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 16:03:19 crc kubenswrapper[4896]: I0126 16:03:19.934423 4896 scope.go:117] "RemoveContainer" containerID="da50ce91237aad8ded237b33e33fe05bec485680e5ca03cbdd037d9351adfee7" Jan 26 16:03:19 crc kubenswrapper[4896]: I0126 16:03:19.945676 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 26 16:03:19 crc kubenswrapper[4896]: E0126 16:03:19.946363 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a607952c-53e6-4fa2-95e3-213bb1699cdb" containerName="nova-metadata-log" Jan 26 16:03:19 crc 
kubenswrapper[4896]: I0126 16:03:19.946392 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="a607952c-53e6-4fa2-95e3-213bb1699cdb" containerName="nova-metadata-log" Jan 26 16:03:19 crc kubenswrapper[4896]: E0126 16:03:19.946426 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9ce9a3a-5be5-4fa9-85ce-530c8f8cb801" containerName="mariadb-account-create-update" Jan 26 16:03:19 crc kubenswrapper[4896]: I0126 16:03:19.946436 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9ce9a3a-5be5-4fa9-85ce-530c8f8cb801" containerName="mariadb-account-create-update" Jan 26 16:03:19 crc kubenswrapper[4896]: E0126 16:03:19.946462 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ba1a60c-8108-4ef6-a04c-c30d77f58d51" containerName="mariadb-database-create" Jan 26 16:03:19 crc kubenswrapper[4896]: I0126 16:03:19.946471 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ba1a60c-8108-4ef6-a04c-c30d77f58d51" containerName="mariadb-database-create" Jan 26 16:03:19 crc kubenswrapper[4896]: E0126 16:03:19.946498 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a607952c-53e6-4fa2-95e3-213bb1699cdb" containerName="nova-metadata-metadata" Jan 26 16:03:19 crc kubenswrapper[4896]: I0126 16:03:19.946506 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="a607952c-53e6-4fa2-95e3-213bb1699cdb" containerName="nova-metadata-metadata" Jan 26 16:03:19 crc kubenswrapper[4896]: I0126 16:03:19.946793 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9ce9a3a-5be5-4fa9-85ce-530c8f8cb801" containerName="mariadb-account-create-update" Jan 26 16:03:19 crc kubenswrapper[4896]: I0126 16:03:19.946818 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="a607952c-53e6-4fa2-95e3-213bb1699cdb" containerName="nova-metadata-metadata" Jan 26 16:03:19 crc kubenswrapper[4896]: I0126 16:03:19.946825 4896 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2ba1a60c-8108-4ef6-a04c-c30d77f58d51" containerName="mariadb-database-create" Jan 26 16:03:19 crc kubenswrapper[4896]: I0126 16:03:19.946842 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="a607952c-53e6-4fa2-95e3-213bb1699cdb" containerName="nova-metadata-log" Jan 26 16:03:19 crc kubenswrapper[4896]: I0126 16:03:19.948227 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 16:03:19 crc kubenswrapper[4896]: I0126 16:03:19.952203 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 26 16:03:19 crc kubenswrapper[4896]: I0126 16:03:19.952637 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 16:03:19 crc kubenswrapper[4896]: I0126 16:03:19.961346 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 16:03:20 crc kubenswrapper[4896]: I0126 16:03:20.002976 4896 scope.go:117] "RemoveContainer" containerID="31dbf411e207566601e7a71078abc7cf0dd179aba4742cd19ce82e05f301b980" Jan 26 16:03:20 crc kubenswrapper[4896]: E0126 16:03:20.003523 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31dbf411e207566601e7a71078abc7cf0dd179aba4742cd19ce82e05f301b980\": container with ID starting with 31dbf411e207566601e7a71078abc7cf0dd179aba4742cd19ce82e05f301b980 not found: ID does not exist" containerID="31dbf411e207566601e7a71078abc7cf0dd179aba4742cd19ce82e05f301b980" Jan 26 16:03:20 crc kubenswrapper[4896]: I0126 16:03:20.003559 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31dbf411e207566601e7a71078abc7cf0dd179aba4742cd19ce82e05f301b980"} err="failed to get container status \"31dbf411e207566601e7a71078abc7cf0dd179aba4742cd19ce82e05f301b980\": rpc error: code = NotFound desc = could not find container 
\"31dbf411e207566601e7a71078abc7cf0dd179aba4742cd19ce82e05f301b980\": container with ID starting with 31dbf411e207566601e7a71078abc7cf0dd179aba4742cd19ce82e05f301b980 not found: ID does not exist" Jan 26 16:03:20 crc kubenswrapper[4896]: I0126 16:03:20.003603 4896 scope.go:117] "RemoveContainer" containerID="da50ce91237aad8ded237b33e33fe05bec485680e5ca03cbdd037d9351adfee7" Jan 26 16:03:20 crc kubenswrapper[4896]: E0126 16:03:20.003952 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da50ce91237aad8ded237b33e33fe05bec485680e5ca03cbdd037d9351adfee7\": container with ID starting with da50ce91237aad8ded237b33e33fe05bec485680e5ca03cbdd037d9351adfee7 not found: ID does not exist" containerID="da50ce91237aad8ded237b33e33fe05bec485680e5ca03cbdd037d9351adfee7" Jan 26 16:03:20 crc kubenswrapper[4896]: I0126 16:03:20.004011 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da50ce91237aad8ded237b33e33fe05bec485680e5ca03cbdd037d9351adfee7"} err="failed to get container status \"da50ce91237aad8ded237b33e33fe05bec485680e5ca03cbdd037d9351adfee7\": rpc error: code = NotFound desc = could not find container \"da50ce91237aad8ded237b33e33fe05bec485680e5ca03cbdd037d9351adfee7\": container with ID starting with da50ce91237aad8ded237b33e33fe05bec485680e5ca03cbdd037d9351adfee7 not found: ID does not exist" Jan 26 16:03:20 crc kubenswrapper[4896]: I0126 16:03:20.004038 4896 scope.go:117] "RemoveContainer" containerID="31dbf411e207566601e7a71078abc7cf0dd179aba4742cd19ce82e05f301b980" Jan 26 16:03:20 crc kubenswrapper[4896]: I0126 16:03:20.004533 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31dbf411e207566601e7a71078abc7cf0dd179aba4742cd19ce82e05f301b980"} err="failed to get container status \"31dbf411e207566601e7a71078abc7cf0dd179aba4742cd19ce82e05f301b980\": rpc error: code = NotFound desc = could not find 
container \"31dbf411e207566601e7a71078abc7cf0dd179aba4742cd19ce82e05f301b980\": container with ID starting with 31dbf411e207566601e7a71078abc7cf0dd179aba4742cd19ce82e05f301b980 not found: ID does not exist" Jan 26 16:03:20 crc kubenswrapper[4896]: I0126 16:03:20.004618 4896 scope.go:117] "RemoveContainer" containerID="da50ce91237aad8ded237b33e33fe05bec485680e5ca03cbdd037d9351adfee7" Jan 26 16:03:20 crc kubenswrapper[4896]: I0126 16:03:20.004911 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da50ce91237aad8ded237b33e33fe05bec485680e5ca03cbdd037d9351adfee7"} err="failed to get container status \"da50ce91237aad8ded237b33e33fe05bec485680e5ca03cbdd037d9351adfee7\": rpc error: code = NotFound desc = could not find container \"da50ce91237aad8ded237b33e33fe05bec485680e5ca03cbdd037d9351adfee7\": container with ID starting with da50ce91237aad8ded237b33e33fe05bec485680e5ca03cbdd037d9351adfee7 not found: ID does not exist" Jan 26 16:03:20 crc kubenswrapper[4896]: I0126 16:03:20.026875 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a30b4404-78c2-4946-8ff1-dc066f855875-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a30b4404-78c2-4946-8ff1-dc066f855875\") " pod="openstack/nova-metadata-0" Jan 26 16:03:20 crc kubenswrapper[4896]: I0126 16:03:20.026949 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtpmn\" (UniqueName: \"kubernetes.io/projected/a30b4404-78c2-4946-8ff1-dc066f855875-kube-api-access-xtpmn\") pod \"nova-metadata-0\" (UID: \"a30b4404-78c2-4946-8ff1-dc066f855875\") " pod="openstack/nova-metadata-0" Jan 26 16:03:20 crc kubenswrapper[4896]: I0126 16:03:20.027048 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/a30b4404-78c2-4946-8ff1-dc066f855875-logs\") pod \"nova-metadata-0\" (UID: \"a30b4404-78c2-4946-8ff1-dc066f855875\") " pod="openstack/nova-metadata-0" Jan 26 16:03:20 crc kubenswrapper[4896]: I0126 16:03:20.027478 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a30b4404-78c2-4946-8ff1-dc066f855875-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a30b4404-78c2-4946-8ff1-dc066f855875\") " pod="openstack/nova-metadata-0" Jan 26 16:03:20 crc kubenswrapper[4896]: I0126 16:03:20.027617 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a30b4404-78c2-4946-8ff1-dc066f855875-config-data\") pod \"nova-metadata-0\" (UID: \"a30b4404-78c2-4946-8ff1-dc066f855875\") " pod="openstack/nova-metadata-0" Jan 26 16:03:20 crc kubenswrapper[4896]: I0126 16:03:20.130072 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a30b4404-78c2-4946-8ff1-dc066f855875-logs\") pod \"nova-metadata-0\" (UID: \"a30b4404-78c2-4946-8ff1-dc066f855875\") " pod="openstack/nova-metadata-0" Jan 26 16:03:20 crc kubenswrapper[4896]: I0126 16:03:20.130271 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a30b4404-78c2-4946-8ff1-dc066f855875-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a30b4404-78c2-4946-8ff1-dc066f855875\") " pod="openstack/nova-metadata-0" Jan 26 16:03:20 crc kubenswrapper[4896]: I0126 16:03:20.130302 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a30b4404-78c2-4946-8ff1-dc066f855875-config-data\") pod \"nova-metadata-0\" (UID: \"a30b4404-78c2-4946-8ff1-dc066f855875\") " 
pod="openstack/nova-metadata-0" Jan 26 16:03:20 crc kubenswrapper[4896]: I0126 16:03:20.130368 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a30b4404-78c2-4946-8ff1-dc066f855875-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a30b4404-78c2-4946-8ff1-dc066f855875\") " pod="openstack/nova-metadata-0" Jan 26 16:03:20 crc kubenswrapper[4896]: I0126 16:03:20.130418 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtpmn\" (UniqueName: \"kubernetes.io/projected/a30b4404-78c2-4946-8ff1-dc066f855875-kube-api-access-xtpmn\") pod \"nova-metadata-0\" (UID: \"a30b4404-78c2-4946-8ff1-dc066f855875\") " pod="openstack/nova-metadata-0" Jan 26 16:03:20 crc kubenswrapper[4896]: I0126 16:03:20.131153 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a30b4404-78c2-4946-8ff1-dc066f855875-logs\") pod \"nova-metadata-0\" (UID: \"a30b4404-78c2-4946-8ff1-dc066f855875\") " pod="openstack/nova-metadata-0" Jan 26 16:03:20 crc kubenswrapper[4896]: I0126 16:03:20.135953 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a30b4404-78c2-4946-8ff1-dc066f855875-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a30b4404-78c2-4946-8ff1-dc066f855875\") " pod="openstack/nova-metadata-0" Jan 26 16:03:20 crc kubenswrapper[4896]: I0126 16:03:20.136055 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a30b4404-78c2-4946-8ff1-dc066f855875-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a30b4404-78c2-4946-8ff1-dc066f855875\") " pod="openstack/nova-metadata-0" Jan 26 16:03:20 crc kubenswrapper[4896]: I0126 16:03:20.146358 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/a30b4404-78c2-4946-8ff1-dc066f855875-config-data\") pod \"nova-metadata-0\" (UID: \"a30b4404-78c2-4946-8ff1-dc066f855875\") " pod="openstack/nova-metadata-0" Jan 26 16:03:20 crc kubenswrapper[4896]: I0126 16:03:20.149413 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtpmn\" (UniqueName: \"kubernetes.io/projected/a30b4404-78c2-4946-8ff1-dc066f855875-kube-api-access-xtpmn\") pod \"nova-metadata-0\" (UID: \"a30b4404-78c2-4946-8ff1-dc066f855875\") " pod="openstack/nova-metadata-0" Jan 26 16:03:20 crc kubenswrapper[4896]: I0126 16:03:20.276787 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 16:03:20 crc kubenswrapper[4896]: I0126 16:03:20.778514 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a607952c-53e6-4fa2-95e3-213bb1699cdb" path="/var/lib/kubelet/pods/a607952c-53e6-4fa2-95e3-213bb1699cdb/volumes" Jan 26 16:03:20 crc kubenswrapper[4896]: I0126 16:03:20.811057 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 16:03:20 crc kubenswrapper[4896]: W0126 16:03:20.818085 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda30b4404_78c2_4946_8ff1_dc066f855875.slice/crio-7b3f7cd3d6dd2239b515f34240a52d1611f0ad67d4a4c8938d95499d3615b201 WatchSource:0}: Error finding container 7b3f7cd3d6dd2239b515f34240a52d1611f0ad67d4a4c8938d95499d3615b201: Status 404 returned error can't find the container with id 7b3f7cd3d6dd2239b515f34240a52d1611f0ad67d4a4c8938d95499d3615b201 Jan 26 16:03:20 crc kubenswrapper[4896]: I0126 16:03:20.869555 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a30b4404-78c2-4946-8ff1-dc066f855875","Type":"ContainerStarted","Data":"7b3f7cd3d6dd2239b515f34240a52d1611f0ad67d4a4c8938d95499d3615b201"} Jan 26 16:03:21 crc 
kubenswrapper[4896]: I0126 16:03:21.616855 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 16:03:21 crc kubenswrapper[4896]: I0126 16:03:21.617155 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 16:03:21 crc kubenswrapper[4896]: I0126 16:03:21.886397 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a30b4404-78c2-4946-8ff1-dc066f855875","Type":"ContainerStarted","Data":"c799aa2106c0a955945430398ef53c6646bf084caa19bf6d22b8560479ace1d6"} Jan 26 16:03:21 crc kubenswrapper[4896]: I0126 16:03:21.886793 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a30b4404-78c2-4946-8ff1-dc066f855875","Type":"ContainerStarted","Data":"43f5cdf48c15d7ee13839b02e38c19dfdc3527a253cf64adee7a12598aa0b837"} Jan 26 16:03:21 crc kubenswrapper[4896]: I0126 16:03:21.912541 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.912519758 podStartE2EDuration="2.912519758s" podCreationTimestamp="2026-01-26 16:03:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:03:21.905654659 +0000 UTC m=+1759.687535052" watchObservedRunningTime="2026-01-26 16:03:21.912519758 +0000 UTC m=+1759.694400151" Jan 26 16:03:21 crc kubenswrapper[4896]: I0126 16:03:21.918673 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 26 16:03:21 crc kubenswrapper[4896]: I0126 16:03:21.918712 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 26 16:03:21 crc kubenswrapper[4896]: I0126 16:03:21.965739 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-68g8k"] Jan 26 16:03:21 crc 
kubenswrapper[4896]: I0126 16:03:21.967719 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-68g8k" Jan 26 16:03:21 crc kubenswrapper[4896]: I0126 16:03:21.969466 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 26 16:03:21 crc kubenswrapper[4896]: I0126 16:03:21.973518 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-b2ntx" Jan 26 16:03:21 crc kubenswrapper[4896]: I0126 16:03:21.973788 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 26 16:03:21 crc kubenswrapper[4896]: I0126 16:03:21.976546 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 26 16:03:21 crc kubenswrapper[4896]: I0126 16:03:21.978454 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-68g8k"] Jan 26 16:03:21 crc kubenswrapper[4896]: I0126 16:03:21.978784 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 26 16:03:22 crc kubenswrapper[4896]: I0126 16:03:22.056775 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:03:22 crc kubenswrapper[4896]: I0126 16:03:22.081093 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce438fc7-3483-42e1-9230-ff161324b2a8-combined-ca-bundle\") pod \"aodh-db-sync-68g8k\" (UID: \"ce438fc7-3483-42e1-9230-ff161324b2a8\") " pod="openstack/aodh-db-sync-68g8k" Jan 26 16:03:22 crc kubenswrapper[4896]: I0126 16:03:22.081428 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn7mp\" (UniqueName: \"kubernetes.io/projected/ce438fc7-3483-42e1-9230-ff161324b2a8-kube-api-access-sn7mp\") pod 
\"aodh-db-sync-68g8k\" (UID: \"ce438fc7-3483-42e1-9230-ff161324b2a8\") " pod="openstack/aodh-db-sync-68g8k" Jan 26 16:03:22 crc kubenswrapper[4896]: I0126 16:03:22.081894 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce438fc7-3483-42e1-9230-ff161324b2a8-config-data\") pod \"aodh-db-sync-68g8k\" (UID: \"ce438fc7-3483-42e1-9230-ff161324b2a8\") " pod="openstack/aodh-db-sync-68g8k" Jan 26 16:03:22 crc kubenswrapper[4896]: I0126 16:03:22.082054 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce438fc7-3483-42e1-9230-ff161324b2a8-scripts\") pod \"aodh-db-sync-68g8k\" (UID: \"ce438fc7-3483-42e1-9230-ff161324b2a8\") " pod="openstack/aodh-db-sync-68g8k" Jan 26 16:03:22 crc kubenswrapper[4896]: I0126 16:03:22.184213 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce438fc7-3483-42e1-9230-ff161324b2a8-config-data\") pod \"aodh-db-sync-68g8k\" (UID: \"ce438fc7-3483-42e1-9230-ff161324b2a8\") " pod="openstack/aodh-db-sync-68g8k" Jan 26 16:03:22 crc kubenswrapper[4896]: I0126 16:03:22.184278 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce438fc7-3483-42e1-9230-ff161324b2a8-scripts\") pod \"aodh-db-sync-68g8k\" (UID: \"ce438fc7-3483-42e1-9230-ff161324b2a8\") " pod="openstack/aodh-db-sync-68g8k" Jan 26 16:03:22 crc kubenswrapper[4896]: I0126 16:03:22.184472 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce438fc7-3483-42e1-9230-ff161324b2a8-combined-ca-bundle\") pod \"aodh-db-sync-68g8k\" (UID: \"ce438fc7-3483-42e1-9230-ff161324b2a8\") " pod="openstack/aodh-db-sync-68g8k" Jan 26 16:03:22 crc kubenswrapper[4896]: I0126 
16:03:22.184532 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn7mp\" (UniqueName: \"kubernetes.io/projected/ce438fc7-3483-42e1-9230-ff161324b2a8-kube-api-access-sn7mp\") pod \"aodh-db-sync-68g8k\" (UID: \"ce438fc7-3483-42e1-9230-ff161324b2a8\") " pod="openstack/aodh-db-sync-68g8k" Jan 26 16:03:22 crc kubenswrapper[4896]: I0126 16:03:22.191184 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce438fc7-3483-42e1-9230-ff161324b2a8-combined-ca-bundle\") pod \"aodh-db-sync-68g8k\" (UID: \"ce438fc7-3483-42e1-9230-ff161324b2a8\") " pod="openstack/aodh-db-sync-68g8k" Jan 26 16:03:22 crc kubenswrapper[4896]: I0126 16:03:22.191799 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce438fc7-3483-42e1-9230-ff161324b2a8-config-data\") pod \"aodh-db-sync-68g8k\" (UID: \"ce438fc7-3483-42e1-9230-ff161324b2a8\") " pod="openstack/aodh-db-sync-68g8k" Jan 26 16:03:22 crc kubenswrapper[4896]: I0126 16:03:22.195745 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce438fc7-3483-42e1-9230-ff161324b2a8-scripts\") pod \"aodh-db-sync-68g8k\" (UID: \"ce438fc7-3483-42e1-9230-ff161324b2a8\") " pod="openstack/aodh-db-sync-68g8k" Jan 26 16:03:22 crc kubenswrapper[4896]: I0126 16:03:22.212245 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn7mp\" (UniqueName: \"kubernetes.io/projected/ce438fc7-3483-42e1-9230-ff161324b2a8-kube-api-access-sn7mp\") pod \"aodh-db-sync-68g8k\" (UID: \"ce438fc7-3483-42e1-9230-ff161324b2a8\") " pod="openstack/aodh-db-sync-68g8k" Jan 26 16:03:22 crc kubenswrapper[4896]: I0126 16:03:22.320168 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-68g8k" Jan 26 16:03:22 crc kubenswrapper[4896]: I0126 16:03:22.398892 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9b86998b5-xkmjm" Jan 26 16:03:22 crc kubenswrapper[4896]: I0126 16:03:22.494369 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-ctvwg"] Jan 26 16:03:22 crc kubenswrapper[4896]: I0126 16:03:22.494707 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7756b9d78c-ctvwg" podUID="c3378c1f-3999-417b-8b94-ba779f8b48c3" containerName="dnsmasq-dns" containerID="cri-o://38bfd9c495695abf12c7d410784c262d39598c2300520f6a914feb0b6d339899" gracePeriod=10 Jan 26 16:03:22 crc kubenswrapper[4896]: I0126 16:03:22.702740 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e4002ca1-76f0-4c36-bd5b-441b4d16013d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.244:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 16:03:22 crc kubenswrapper[4896]: I0126 16:03:22.703079 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e4002ca1-76f0-4c36-bd5b-441b4d16013d" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.244:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 16:03:22 crc kubenswrapper[4896]: I0126 16:03:22.901705 4896 generic.go:334] "Generic (PLEG): container finished" podID="c3378c1f-3999-417b-8b94-ba779f8b48c3" containerID="38bfd9c495695abf12c7d410784c262d39598c2300520f6a914feb0b6d339899" exitCode=0 Jan 26 16:03:22 crc kubenswrapper[4896]: I0126 16:03:22.901782 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-ctvwg" 
event={"ID":"c3378c1f-3999-417b-8b94-ba779f8b48c3","Type":"ContainerDied","Data":"38bfd9c495695abf12c7d410784c262d39598c2300520f6a914feb0b6d339899"} Jan 26 16:03:22 crc kubenswrapper[4896]: I0126 16:03:22.907639 4896 generic.go:334] "Generic (PLEG): container finished" podID="ab180a60-67ff-4295-8120-a9abca520ee8" containerID="044689026bb29f35af87ff22dcb3b205ef9ab1cd408d4aa40301a391b4d6aa16" exitCode=0 Jan 26 16:03:22 crc kubenswrapper[4896]: I0126 16:03:22.907898 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-g22d4" event={"ID":"ab180a60-67ff-4295-8120-a9abca520ee8","Type":"ContainerDied","Data":"044689026bb29f35af87ff22dcb3b205ef9ab1cd408d4aa40301a391b4d6aa16"} Jan 26 16:03:22 crc kubenswrapper[4896]: W0126 16:03:22.961521 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce438fc7_3483_42e1_9230_ff161324b2a8.slice/crio-eb37fec064234fec14d4a770a3a28dd44f9d4b1c3a174c3092e91bed4be695d2 WatchSource:0}: Error finding container eb37fec064234fec14d4a770a3a28dd44f9d4b1c3a174c3092e91bed4be695d2: Status 404 returned error can't find the container with id eb37fec064234fec14d4a770a3a28dd44f9d4b1c3a174c3092e91bed4be695d2 Jan 26 16:03:22 crc kubenswrapper[4896]: I0126 16:03:22.966399 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-68g8k"] Jan 26 16:03:22 crc kubenswrapper[4896]: I0126 16:03:22.990094 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 26 16:03:23 crc kubenswrapper[4896]: I0126 16:03:23.256217 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-ctvwg" Jan 26 16:03:23 crc kubenswrapper[4896]: I0126 16:03:23.340542 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3378c1f-3999-417b-8b94-ba779f8b48c3-config\") pod \"c3378c1f-3999-417b-8b94-ba779f8b48c3\" (UID: \"c3378c1f-3999-417b-8b94-ba779f8b48c3\") " Jan 26 16:03:23 crc kubenswrapper[4896]: I0126 16:03:23.341209 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6rpk\" (UniqueName: \"kubernetes.io/projected/c3378c1f-3999-417b-8b94-ba779f8b48c3-kube-api-access-b6rpk\") pod \"c3378c1f-3999-417b-8b94-ba779f8b48c3\" (UID: \"c3378c1f-3999-417b-8b94-ba779f8b48c3\") " Jan 26 16:03:23 crc kubenswrapper[4896]: I0126 16:03:23.341325 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3378c1f-3999-417b-8b94-ba779f8b48c3-ovsdbserver-nb\") pod \"c3378c1f-3999-417b-8b94-ba779f8b48c3\" (UID: \"c3378c1f-3999-417b-8b94-ba779f8b48c3\") " Jan 26 16:03:23 crc kubenswrapper[4896]: I0126 16:03:23.341537 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3378c1f-3999-417b-8b94-ba779f8b48c3-ovsdbserver-sb\") pod \"c3378c1f-3999-417b-8b94-ba779f8b48c3\" (UID: \"c3378c1f-3999-417b-8b94-ba779f8b48c3\") " Jan 26 16:03:23 crc kubenswrapper[4896]: I0126 16:03:23.341709 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3378c1f-3999-417b-8b94-ba779f8b48c3-dns-svc\") pod \"c3378c1f-3999-417b-8b94-ba779f8b48c3\" (UID: \"c3378c1f-3999-417b-8b94-ba779f8b48c3\") " Jan 26 16:03:23 crc kubenswrapper[4896]: I0126 16:03:23.341879 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/c3378c1f-3999-417b-8b94-ba779f8b48c3-dns-swift-storage-0\") pod \"c3378c1f-3999-417b-8b94-ba779f8b48c3\" (UID: \"c3378c1f-3999-417b-8b94-ba779f8b48c3\") " Jan 26 16:03:23 crc kubenswrapper[4896]: I0126 16:03:23.352437 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3378c1f-3999-417b-8b94-ba779f8b48c3-kube-api-access-b6rpk" (OuterVolumeSpecName: "kube-api-access-b6rpk") pod "c3378c1f-3999-417b-8b94-ba779f8b48c3" (UID: "c3378c1f-3999-417b-8b94-ba779f8b48c3"). InnerVolumeSpecName "kube-api-access-b6rpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:03:23 crc kubenswrapper[4896]: I0126 16:03:23.439951 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3378c1f-3999-417b-8b94-ba779f8b48c3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c3378c1f-3999-417b-8b94-ba779f8b48c3" (UID: "c3378c1f-3999-417b-8b94-ba779f8b48c3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:03:23 crc kubenswrapper[4896]: I0126 16:03:23.444632 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6rpk\" (UniqueName: \"kubernetes.io/projected/c3378c1f-3999-417b-8b94-ba779f8b48c3-kube-api-access-b6rpk\") on node \"crc\" DevicePath \"\"" Jan 26 16:03:23 crc kubenswrapper[4896]: I0126 16:03:23.444658 4896 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3378c1f-3999-417b-8b94-ba779f8b48c3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 16:03:23 crc kubenswrapper[4896]: I0126 16:03:23.445190 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3378c1f-3999-417b-8b94-ba779f8b48c3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c3378c1f-3999-417b-8b94-ba779f8b48c3" (UID: "c3378c1f-3999-417b-8b94-ba779f8b48c3"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:03:23 crc kubenswrapper[4896]: I0126 16:03:23.459851 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3378c1f-3999-417b-8b94-ba779f8b48c3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c3378c1f-3999-417b-8b94-ba779f8b48c3" (UID: "c3378c1f-3999-417b-8b94-ba779f8b48c3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:03:23 crc kubenswrapper[4896]: I0126 16:03:23.462265 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3378c1f-3999-417b-8b94-ba779f8b48c3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c3378c1f-3999-417b-8b94-ba779f8b48c3" (UID: "c3378c1f-3999-417b-8b94-ba779f8b48c3"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:03:23 crc kubenswrapper[4896]: I0126 16:03:23.489169 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3378c1f-3999-417b-8b94-ba779f8b48c3-config" (OuterVolumeSpecName: "config") pod "c3378c1f-3999-417b-8b94-ba779f8b48c3" (UID: "c3378c1f-3999-417b-8b94-ba779f8b48c3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:03:23 crc kubenswrapper[4896]: I0126 16:03:23.546863 4896 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3378c1f-3999-417b-8b94-ba779f8b48c3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 16:03:23 crc kubenswrapper[4896]: I0126 16:03:23.546917 4896 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3378c1f-3999-417b-8b94-ba779f8b48c3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 16:03:23 crc kubenswrapper[4896]: I0126 16:03:23.546928 4896 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3378c1f-3999-417b-8b94-ba779f8b48c3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:03:23 crc kubenswrapper[4896]: I0126 16:03:23.546940 4896 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3378c1f-3999-417b-8b94-ba779f8b48c3-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:03:23 crc kubenswrapper[4896]: I0126 16:03:23.927482 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-ctvwg" event={"ID":"c3378c1f-3999-417b-8b94-ba779f8b48c3","Type":"ContainerDied","Data":"d45dcc9c9ddc2d4ac40b674c531e3c62689d159cb2c0b73ebc83ec2bf8644e54"} Jan 26 16:03:23 crc kubenswrapper[4896]: I0126 16:03:23.927846 4896 scope.go:117] "RemoveContainer" containerID="38bfd9c495695abf12c7d410784c262d39598c2300520f6a914feb0b6d339899" Jan 26 16:03:23 crc kubenswrapper[4896]: I0126 16:03:23.927514 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-ctvwg" Jan 26 16:03:23 crc kubenswrapper[4896]: I0126 16:03:23.929567 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-68g8k" event={"ID":"ce438fc7-3483-42e1-9230-ff161324b2a8","Type":"ContainerStarted","Data":"eb37fec064234fec14d4a770a3a28dd44f9d4b1c3a174c3092e91bed4be695d2"} Jan 26 16:03:23 crc kubenswrapper[4896]: I0126 16:03:23.971943 4896 scope.go:117] "RemoveContainer" containerID="90d02fa8d12a783c6d01be119835f0f6fca38a31787460613fa4e9cc898d600e" Jan 26 16:03:24 crc kubenswrapper[4896]: I0126 16:03:24.017495 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-ctvwg"] Jan 26 16:03:24 crc kubenswrapper[4896]: I0126 16:03:24.029994 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-ctvwg"] Jan 26 16:03:24 crc kubenswrapper[4896]: I0126 16:03:24.461637 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-g22d4" Jan 26 16:03:24 crc kubenswrapper[4896]: I0126 16:03:24.642898 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab180a60-67ff-4295-8120-a9abca520ee8-config-data\") pod \"ab180a60-67ff-4295-8120-a9abca520ee8\" (UID: \"ab180a60-67ff-4295-8120-a9abca520ee8\") " Jan 26 16:03:24 crc kubenswrapper[4896]: I0126 16:03:24.642968 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbb84\" (UniqueName: \"kubernetes.io/projected/ab180a60-67ff-4295-8120-a9abca520ee8-kube-api-access-vbb84\") pod \"ab180a60-67ff-4295-8120-a9abca520ee8\" (UID: \"ab180a60-67ff-4295-8120-a9abca520ee8\") " Jan 26 16:03:24 crc kubenswrapper[4896]: I0126 16:03:24.643145 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab180a60-67ff-4295-8120-a9abca520ee8-scripts\") pod \"ab180a60-67ff-4295-8120-a9abca520ee8\" (UID: \"ab180a60-67ff-4295-8120-a9abca520ee8\") " Jan 26 16:03:24 crc kubenswrapper[4896]: I0126 16:03:24.643207 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab180a60-67ff-4295-8120-a9abca520ee8-combined-ca-bundle\") pod \"ab180a60-67ff-4295-8120-a9abca520ee8\" (UID: \"ab180a60-67ff-4295-8120-a9abca520ee8\") " Jan 26 16:03:24 crc kubenswrapper[4896]: I0126 16:03:24.673940 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab180a60-67ff-4295-8120-a9abca520ee8-kube-api-access-vbb84" (OuterVolumeSpecName: "kube-api-access-vbb84") pod "ab180a60-67ff-4295-8120-a9abca520ee8" (UID: "ab180a60-67ff-4295-8120-a9abca520ee8"). InnerVolumeSpecName "kube-api-access-vbb84". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:03:24 crc kubenswrapper[4896]: I0126 16:03:24.674459 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab180a60-67ff-4295-8120-a9abca520ee8-scripts" (OuterVolumeSpecName: "scripts") pod "ab180a60-67ff-4295-8120-a9abca520ee8" (UID: "ab180a60-67ff-4295-8120-a9abca520ee8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:03:24 crc kubenswrapper[4896]: I0126 16:03:24.690663 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab180a60-67ff-4295-8120-a9abca520ee8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab180a60-67ff-4295-8120-a9abca520ee8" (UID: "ab180a60-67ff-4295-8120-a9abca520ee8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:03:24 crc kubenswrapper[4896]: I0126 16:03:24.718309 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab180a60-67ff-4295-8120-a9abca520ee8-config-data" (OuterVolumeSpecName: "config-data") pod "ab180a60-67ff-4295-8120-a9abca520ee8" (UID: "ab180a60-67ff-4295-8120-a9abca520ee8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:03:24 crc kubenswrapper[4896]: I0126 16:03:24.746074 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab180a60-67ff-4295-8120-a9abca520ee8-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:03:24 crc kubenswrapper[4896]: I0126 16:03:24.746112 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbb84\" (UniqueName: \"kubernetes.io/projected/ab180a60-67ff-4295-8120-a9abca520ee8-kube-api-access-vbb84\") on node \"crc\" DevicePath \"\"" Jan 26 16:03:24 crc kubenswrapper[4896]: I0126 16:03:24.746123 4896 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab180a60-67ff-4295-8120-a9abca520ee8-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:03:24 crc kubenswrapper[4896]: I0126 16:03:24.746131 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab180a60-67ff-4295-8120-a9abca520ee8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:03:24 crc kubenswrapper[4896]: I0126 16:03:24.776285 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3378c1f-3999-417b-8b94-ba779f8b48c3" path="/var/lib/kubelet/pods/c3378c1f-3999-417b-8b94-ba779f8b48c3/volumes" Jan 26 16:03:24 crc kubenswrapper[4896]: I0126 16:03:24.954319 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-g22d4" event={"ID":"ab180a60-67ff-4295-8120-a9abca520ee8","Type":"ContainerDied","Data":"e6262c9b76bdd47aefd84d4e7010fdac05660347e7ab49243bbb6a8ad6467556"} Jan 26 16:03:24 crc kubenswrapper[4896]: I0126 16:03:24.954660 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6262c9b76bdd47aefd84d4e7010fdac05660347e7ab49243bbb6a8ad6467556" Jan 26 16:03:24 crc kubenswrapper[4896]: I0126 16:03:24.954781 4896 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-g22d4" Jan 26 16:03:24 crc kubenswrapper[4896]: I0126 16:03:24.961320 4896 generic.go:334] "Generic (PLEG): container finished" podID="68a4442a-a1a0-4827-bba5-7c8a3ea1e80a" containerID="02260fe602edd838d509d51a7a0433b3c4ad3f1cc6d647aaa436ca5960419349" exitCode=0 Jan 26 16:03:24 crc kubenswrapper[4896]: I0126 16:03:24.961401 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4r45x" event={"ID":"68a4442a-a1a0-4827-bba5-7c8a3ea1e80a","Type":"ContainerDied","Data":"02260fe602edd838d509d51a7a0433b3c4ad3f1cc6d647aaa436ca5960419349"} Jan 26 16:03:25 crc kubenswrapper[4896]: I0126 16:03:25.118990 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 16:03:25 crc kubenswrapper[4896]: I0126 16:03:25.119287 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e4002ca1-76f0-4c36-bd5b-441b4d16013d" containerName="nova-api-log" containerID="cri-o://2a572e150dc4585a2371cb14da31b88632b0ddb6d8f6b8781c1e36a17f8e85c6" gracePeriod=30 Jan 26 16:03:25 crc kubenswrapper[4896]: I0126 16:03:25.119888 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e4002ca1-76f0-4c36-bd5b-441b4d16013d" containerName="nova-api-api" containerID="cri-o://c5ca317b32280669f54f6baa6395fac2d604ff8390d2bfd63f8489a700e1eccc" gracePeriod=30 Jan 26 16:03:25 crc kubenswrapper[4896]: I0126 16:03:25.132159 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 16:03:25 crc kubenswrapper[4896]: I0126 16:03:25.133095 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="12011422-6209-4261-8a15-6d7033a9c33e" containerName="nova-scheduler-scheduler" containerID="cri-o://f814226073e32c98a9e1416e0a618f138655c3eefc88ba780db064f83cb3d7ba" 
gracePeriod=30 Jan 26 16:03:25 crc kubenswrapper[4896]: I0126 16:03:25.158436 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 16:03:25 crc kubenswrapper[4896]: I0126 16:03:25.158713 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a30b4404-78c2-4946-8ff1-dc066f855875" containerName="nova-metadata-log" containerID="cri-o://43f5cdf48c15d7ee13839b02e38c19dfdc3527a253cf64adee7a12598aa0b837" gracePeriod=30 Jan 26 16:03:25 crc kubenswrapper[4896]: I0126 16:03:25.159004 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a30b4404-78c2-4946-8ff1-dc066f855875" containerName="nova-metadata-metadata" containerID="cri-o://c799aa2106c0a955945430398ef53c6646bf084caa19bf6d22b8560479ace1d6" gracePeriod=30 Jan 26 16:03:25 crc kubenswrapper[4896]: I0126 16:03:25.277543 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 16:03:25 crc kubenswrapper[4896]: I0126 16:03:25.277627 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 16:03:25 crc kubenswrapper[4896]: I0126 16:03:25.759455 4896 scope.go:117] "RemoveContainer" containerID="eef508224f0cdcfb0579b0234e72c3c5503ce5cf1713a9bee24c9feccf4983cb" Jan 26 16:03:25 crc kubenswrapper[4896]: E0126 16:03:25.760024 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:03:25 crc kubenswrapper[4896]: I0126 16:03:25.934182 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 16:03:25 crc kubenswrapper[4896]: I0126 16:03:25.980209 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a30b4404-78c2-4946-8ff1-dc066f855875-nova-metadata-tls-certs\") pod \"a30b4404-78c2-4946-8ff1-dc066f855875\" (UID: \"a30b4404-78c2-4946-8ff1-dc066f855875\") " Jan 26 16:03:25 crc kubenswrapper[4896]: I0126 16:03:25.980250 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a30b4404-78c2-4946-8ff1-dc066f855875-logs\") pod \"a30b4404-78c2-4946-8ff1-dc066f855875\" (UID: \"a30b4404-78c2-4946-8ff1-dc066f855875\") " Jan 26 16:03:25 crc kubenswrapper[4896]: I0126 16:03:25.980301 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a30b4404-78c2-4946-8ff1-dc066f855875-config-data\") pod \"a30b4404-78c2-4946-8ff1-dc066f855875\" (UID: \"a30b4404-78c2-4946-8ff1-dc066f855875\") " Jan 26 16:03:25 crc kubenswrapper[4896]: I0126 16:03:25.980407 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a30b4404-78c2-4946-8ff1-dc066f855875-combined-ca-bundle\") pod \"a30b4404-78c2-4946-8ff1-dc066f855875\" (UID: \"a30b4404-78c2-4946-8ff1-dc066f855875\") " Jan 26 16:03:25 crc kubenswrapper[4896]: I0126 16:03:25.980519 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtpmn\" (UniqueName: \"kubernetes.io/projected/a30b4404-78c2-4946-8ff1-dc066f855875-kube-api-access-xtpmn\") pod \"a30b4404-78c2-4946-8ff1-dc066f855875\" (UID: \"a30b4404-78c2-4946-8ff1-dc066f855875\") " Jan 26 16:03:25 crc kubenswrapper[4896]: I0126 16:03:25.987729 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/a30b4404-78c2-4946-8ff1-dc066f855875-logs" (OuterVolumeSpecName: "logs") pod "a30b4404-78c2-4946-8ff1-dc066f855875" (UID: "a30b4404-78c2-4946-8ff1-dc066f855875"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.004317 4896 generic.go:334] "Generic (PLEG): container finished" podID="a30b4404-78c2-4946-8ff1-dc066f855875" containerID="c799aa2106c0a955945430398ef53c6646bf084caa19bf6d22b8560479ace1d6" exitCode=0 Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.004350 4896 generic.go:334] "Generic (PLEG): container finished" podID="a30b4404-78c2-4946-8ff1-dc066f855875" containerID="43f5cdf48c15d7ee13839b02e38c19dfdc3527a253cf64adee7a12598aa0b837" exitCode=143 Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.004386 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a30b4404-78c2-4946-8ff1-dc066f855875","Type":"ContainerDied","Data":"c799aa2106c0a955945430398ef53c6646bf084caa19bf6d22b8560479ace1d6"} Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.004416 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a30b4404-78c2-4946-8ff1-dc066f855875","Type":"ContainerDied","Data":"43f5cdf48c15d7ee13839b02e38c19dfdc3527a253cf64adee7a12598aa0b837"} Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.004430 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a30b4404-78c2-4946-8ff1-dc066f855875","Type":"ContainerDied","Data":"7b3f7cd3d6dd2239b515f34240a52d1611f0ad67d4a4c8938d95499d3615b201"} Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.004450 4896 scope.go:117] "RemoveContainer" containerID="c799aa2106c0a955945430398ef53c6646bf084caa19bf6d22b8560479ace1d6" Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.004626 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.022959 4896 generic.go:334] "Generic (PLEG): container finished" podID="e4002ca1-76f0-4c36-bd5b-441b4d16013d" containerID="2a572e150dc4585a2371cb14da31b88632b0ddb6d8f6b8781c1e36a17f8e85c6" exitCode=143 Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.023146 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e4002ca1-76f0-4c36-bd5b-441b4d16013d","Type":"ContainerDied","Data":"2a572e150dc4585a2371cb14da31b88632b0ddb6d8f6b8781c1e36a17f8e85c6"} Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.025133 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a30b4404-78c2-4946-8ff1-dc066f855875-kube-api-access-xtpmn" (OuterVolumeSpecName: "kube-api-access-xtpmn") pod "a30b4404-78c2-4946-8ff1-dc066f855875" (UID: "a30b4404-78c2-4946-8ff1-dc066f855875"). InnerVolumeSpecName "kube-api-access-xtpmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.039736 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a30b4404-78c2-4946-8ff1-dc066f855875-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a30b4404-78c2-4946-8ff1-dc066f855875" (UID: "a30b4404-78c2-4946-8ff1-dc066f855875"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.041674 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a30b4404-78c2-4946-8ff1-dc066f855875-config-data" (OuterVolumeSpecName: "config-data") pod "a30b4404-78c2-4946-8ff1-dc066f855875" (UID: "a30b4404-78c2-4946-8ff1-dc066f855875"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.091456 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a30b4404-78c2-4946-8ff1-dc066f855875-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.091506 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtpmn\" (UniqueName: \"kubernetes.io/projected/a30b4404-78c2-4946-8ff1-dc066f855875-kube-api-access-xtpmn\") on node \"crc\" DevicePath \"\"" Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.091519 4896 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a30b4404-78c2-4946-8ff1-dc066f855875-logs\") on node \"crc\" DevicePath \"\"" Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.091530 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a30b4404-78c2-4946-8ff1-dc066f855875-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.093255 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a30b4404-78c2-4946-8ff1-dc066f855875-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "a30b4404-78c2-4946-8ff1-dc066f855875" (UID: "a30b4404-78c2-4946-8ff1-dc066f855875"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.194627 4896 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a30b4404-78c2-4946-8ff1-dc066f855875-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.396425 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.420062 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.433970 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 26 16:03:26 crc kubenswrapper[4896]: E0126 16:03:26.434589 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3378c1f-3999-417b-8b94-ba779f8b48c3" containerName="dnsmasq-dns" Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.434606 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3378c1f-3999-417b-8b94-ba779f8b48c3" containerName="dnsmasq-dns" Jan 26 16:03:26 crc kubenswrapper[4896]: E0126 16:03:26.434620 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a30b4404-78c2-4946-8ff1-dc066f855875" containerName="nova-metadata-metadata" Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.434626 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="a30b4404-78c2-4946-8ff1-dc066f855875" containerName="nova-metadata-metadata" Jan 26 16:03:26 crc kubenswrapper[4896]: E0126 16:03:26.434636 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a30b4404-78c2-4946-8ff1-dc066f855875" containerName="nova-metadata-log" Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.434642 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="a30b4404-78c2-4946-8ff1-dc066f855875" containerName="nova-metadata-log" Jan 26 
16:03:26 crc kubenswrapper[4896]: E0126 16:03:26.434652 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab180a60-67ff-4295-8120-a9abca520ee8" containerName="nova-manage" Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.434657 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab180a60-67ff-4295-8120-a9abca520ee8" containerName="nova-manage" Jan 26 16:03:26 crc kubenswrapper[4896]: E0126 16:03:26.434682 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3378c1f-3999-417b-8b94-ba779f8b48c3" containerName="init" Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.434688 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3378c1f-3999-417b-8b94-ba779f8b48c3" containerName="init" Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.434911 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="a30b4404-78c2-4946-8ff1-dc066f855875" containerName="nova-metadata-metadata" Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.434926 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="a30b4404-78c2-4946-8ff1-dc066f855875" containerName="nova-metadata-log" Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.434939 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3378c1f-3999-417b-8b94-ba779f8b48c3" containerName="dnsmasq-dns" Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.434946 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab180a60-67ff-4295-8120-a9abca520ee8" containerName="nova-manage" Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.436232 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.439067 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.440058 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.465551 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.502025 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bf58a4a-039f-4365-b64e-e3b81212de22-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6bf58a4a-039f-4365-b64e-e3b81212de22\") " pod="openstack/nova-metadata-0" Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.502172 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bf58a4a-039f-4365-b64e-e3b81212de22-config-data\") pod \"nova-metadata-0\" (UID: \"6bf58a4a-039f-4365-b64e-e3b81212de22\") " pod="openstack/nova-metadata-0" Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.502235 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bf58a4a-039f-4365-b64e-e3b81212de22-logs\") pod \"nova-metadata-0\" (UID: \"6bf58a4a-039f-4365-b64e-e3b81212de22\") " pod="openstack/nova-metadata-0" Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.502261 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bf58a4a-039f-4365-b64e-e3b81212de22-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"6bf58a4a-039f-4365-b64e-e3b81212de22\") " pod="openstack/nova-metadata-0" Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.502313 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwmjw\" (UniqueName: \"kubernetes.io/projected/6bf58a4a-039f-4365-b64e-e3b81212de22-kube-api-access-vwmjw\") pod \"nova-metadata-0\" (UID: \"6bf58a4a-039f-4365-b64e-e3b81212de22\") " pod="openstack/nova-metadata-0" Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.606698 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bf58a4a-039f-4365-b64e-e3b81212de22-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6bf58a4a-039f-4365-b64e-e3b81212de22\") " pod="openstack/nova-metadata-0" Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.606977 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bf58a4a-039f-4365-b64e-e3b81212de22-config-data\") pod \"nova-metadata-0\" (UID: \"6bf58a4a-039f-4365-b64e-e3b81212de22\") " pod="openstack/nova-metadata-0" Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.607049 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bf58a4a-039f-4365-b64e-e3b81212de22-logs\") pod \"nova-metadata-0\" (UID: \"6bf58a4a-039f-4365-b64e-e3b81212de22\") " pod="openstack/nova-metadata-0" Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.607073 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bf58a4a-039f-4365-b64e-e3b81212de22-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6bf58a4a-039f-4365-b64e-e3b81212de22\") " pod="openstack/nova-metadata-0" Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.607174 4896 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwmjw\" (UniqueName: \"kubernetes.io/projected/6bf58a4a-039f-4365-b64e-e3b81212de22-kube-api-access-vwmjw\") pod \"nova-metadata-0\" (UID: \"6bf58a4a-039f-4365-b64e-e3b81212de22\") " pod="openstack/nova-metadata-0" Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.607873 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bf58a4a-039f-4365-b64e-e3b81212de22-logs\") pod \"nova-metadata-0\" (UID: \"6bf58a4a-039f-4365-b64e-e3b81212de22\") " pod="openstack/nova-metadata-0" Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.612252 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bf58a4a-039f-4365-b64e-e3b81212de22-config-data\") pod \"nova-metadata-0\" (UID: \"6bf58a4a-039f-4365-b64e-e3b81212de22\") " pod="openstack/nova-metadata-0" Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.613387 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bf58a4a-039f-4365-b64e-e3b81212de22-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6bf58a4a-039f-4365-b64e-e3b81212de22\") " pod="openstack/nova-metadata-0" Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.615062 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bf58a4a-039f-4365-b64e-e3b81212de22-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6bf58a4a-039f-4365-b64e-e3b81212de22\") " pod="openstack/nova-metadata-0" Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.625789 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwmjw\" (UniqueName: \"kubernetes.io/projected/6bf58a4a-039f-4365-b64e-e3b81212de22-kube-api-access-vwmjw\") pod 
\"nova-metadata-0\" (UID: \"6bf58a4a-039f-4365-b64e-e3b81212de22\") " pod="openstack/nova-metadata-0" Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.760415 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 16:03:26 crc kubenswrapper[4896]: I0126 16:03:26.778070 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a30b4404-78c2-4946-8ff1-dc066f855875" path="/var/lib/kubelet/pods/a30b4404-78c2-4946-8ff1-dc066f855875/volumes" Jan 26 16:03:26 crc kubenswrapper[4896]: E0126 16:03:26.918674 4896 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f814226073e32c98a9e1416e0a618f138655c3eefc88ba780db064f83cb3d7ba is running failed: container process not found" containerID="f814226073e32c98a9e1416e0a618f138655c3eefc88ba780db064f83cb3d7ba" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 16:03:26 crc kubenswrapper[4896]: E0126 16:03:26.919802 4896 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f814226073e32c98a9e1416e0a618f138655c3eefc88ba780db064f83cb3d7ba is running failed: container process not found" containerID="f814226073e32c98a9e1416e0a618f138655c3eefc88ba780db064f83cb3d7ba" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 16:03:26 crc kubenswrapper[4896]: E0126 16:03:26.920196 4896 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f814226073e32c98a9e1416e0a618f138655c3eefc88ba780db064f83cb3d7ba is running failed: container process not found" containerID="f814226073e32c98a9e1416e0a618f138655c3eefc88ba780db064f83cb3d7ba" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 16:03:26 crc kubenswrapper[4896]: E0126 16:03:26.920239 4896 prober.go:104] "Probe 
errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f814226073e32c98a9e1416e0a618f138655c3eefc88ba780db064f83cb3d7ba is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="12011422-6209-4261-8a15-6d7033a9c33e" containerName="nova-scheduler-scheduler" Jan 26 16:03:27 crc kubenswrapper[4896]: I0126 16:03:27.036363 4896 generic.go:334] "Generic (PLEG): container finished" podID="12011422-6209-4261-8a15-6d7033a9c33e" containerID="f814226073e32c98a9e1416e0a618f138655c3eefc88ba780db064f83cb3d7ba" exitCode=0 Jan 26 16:03:27 crc kubenswrapper[4896]: I0126 16:03:27.036415 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"12011422-6209-4261-8a15-6d7033a9c33e","Type":"ContainerDied","Data":"f814226073e32c98a9e1416e0a618f138655c3eefc88ba780db064f83cb3d7ba"} Jan 26 16:03:29 crc kubenswrapper[4896]: I0126 16:03:29.405932 4896 scope.go:117] "RemoveContainer" containerID="43f5cdf48c15d7ee13839b02e38c19dfdc3527a253cf64adee7a12598aa0b837" Jan 26 16:03:29 crc kubenswrapper[4896]: I0126 16:03:29.521372 4896 scope.go:117] "RemoveContainer" containerID="c799aa2106c0a955945430398ef53c6646bf084caa19bf6d22b8560479ace1d6" Jan 26 16:03:29 crc kubenswrapper[4896]: E0126 16:03:29.522833 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c799aa2106c0a955945430398ef53c6646bf084caa19bf6d22b8560479ace1d6\": container with ID starting with c799aa2106c0a955945430398ef53c6646bf084caa19bf6d22b8560479ace1d6 not found: ID does not exist" containerID="c799aa2106c0a955945430398ef53c6646bf084caa19bf6d22b8560479ace1d6" Jan 26 16:03:29 crc kubenswrapper[4896]: I0126 16:03:29.522904 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c799aa2106c0a955945430398ef53c6646bf084caa19bf6d22b8560479ace1d6"} err="failed to get container 
status \"c799aa2106c0a955945430398ef53c6646bf084caa19bf6d22b8560479ace1d6\": rpc error: code = NotFound desc = could not find container \"c799aa2106c0a955945430398ef53c6646bf084caa19bf6d22b8560479ace1d6\": container with ID starting with c799aa2106c0a955945430398ef53c6646bf084caa19bf6d22b8560479ace1d6 not found: ID does not exist" Jan 26 16:03:29 crc kubenswrapper[4896]: I0126 16:03:29.522957 4896 scope.go:117] "RemoveContainer" containerID="43f5cdf48c15d7ee13839b02e38c19dfdc3527a253cf64adee7a12598aa0b837" Jan 26 16:03:29 crc kubenswrapper[4896]: E0126 16:03:29.524130 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43f5cdf48c15d7ee13839b02e38c19dfdc3527a253cf64adee7a12598aa0b837\": container with ID starting with 43f5cdf48c15d7ee13839b02e38c19dfdc3527a253cf64adee7a12598aa0b837 not found: ID does not exist" containerID="43f5cdf48c15d7ee13839b02e38c19dfdc3527a253cf64adee7a12598aa0b837" Jan 26 16:03:29 crc kubenswrapper[4896]: I0126 16:03:29.524168 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43f5cdf48c15d7ee13839b02e38c19dfdc3527a253cf64adee7a12598aa0b837"} err="failed to get container status \"43f5cdf48c15d7ee13839b02e38c19dfdc3527a253cf64adee7a12598aa0b837\": rpc error: code = NotFound desc = could not find container \"43f5cdf48c15d7ee13839b02e38c19dfdc3527a253cf64adee7a12598aa0b837\": container with ID starting with 43f5cdf48c15d7ee13839b02e38c19dfdc3527a253cf64adee7a12598aa0b837 not found: ID does not exist" Jan 26 16:03:29 crc kubenswrapper[4896]: I0126 16:03:29.524191 4896 scope.go:117] "RemoveContainer" containerID="c799aa2106c0a955945430398ef53c6646bf084caa19bf6d22b8560479ace1d6" Jan 26 16:03:29 crc kubenswrapper[4896]: I0126 16:03:29.524500 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c799aa2106c0a955945430398ef53c6646bf084caa19bf6d22b8560479ace1d6"} err="failed to get 
container status \"c799aa2106c0a955945430398ef53c6646bf084caa19bf6d22b8560479ace1d6\": rpc error: code = NotFound desc = could not find container \"c799aa2106c0a955945430398ef53c6646bf084caa19bf6d22b8560479ace1d6\": container with ID starting with c799aa2106c0a955945430398ef53c6646bf084caa19bf6d22b8560479ace1d6 not found: ID does not exist" Jan 26 16:03:29 crc kubenswrapper[4896]: I0126 16:03:29.524523 4896 scope.go:117] "RemoveContainer" containerID="43f5cdf48c15d7ee13839b02e38c19dfdc3527a253cf64adee7a12598aa0b837" Jan 26 16:03:29 crc kubenswrapper[4896]: I0126 16:03:29.524739 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43f5cdf48c15d7ee13839b02e38c19dfdc3527a253cf64adee7a12598aa0b837"} err="failed to get container status \"43f5cdf48c15d7ee13839b02e38c19dfdc3527a253cf64adee7a12598aa0b837\": rpc error: code = NotFound desc = could not find container \"43f5cdf48c15d7ee13839b02e38c19dfdc3527a253cf64adee7a12598aa0b837\": container with ID starting with 43f5cdf48c15d7ee13839b02e38c19dfdc3527a253cf64adee7a12598aa0b837 not found: ID does not exist" Jan 26 16:03:29 crc kubenswrapper[4896]: I0126 16:03:29.792614 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4r45x" Jan 26 16:03:29 crc kubenswrapper[4896]: I0126 16:03:29.850045 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 16:03:29 crc kubenswrapper[4896]: I0126 16:03:29.902119 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68a4442a-a1a0-4827-bba5-7c8a3ea1e80a-scripts\") pod \"68a4442a-a1a0-4827-bba5-7c8a3ea1e80a\" (UID: \"68a4442a-a1a0-4827-bba5-7c8a3ea1e80a\") " Jan 26 16:03:29 crc kubenswrapper[4896]: I0126 16:03:29.902454 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68a4442a-a1a0-4827-bba5-7c8a3ea1e80a-combined-ca-bundle\") pod \"68a4442a-a1a0-4827-bba5-7c8a3ea1e80a\" (UID: \"68a4442a-a1a0-4827-bba5-7c8a3ea1e80a\") " Jan 26 16:03:29 crc kubenswrapper[4896]: I0126 16:03:29.902826 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68a4442a-a1a0-4827-bba5-7c8a3ea1e80a-config-data\") pod \"68a4442a-a1a0-4827-bba5-7c8a3ea1e80a\" (UID: \"68a4442a-a1a0-4827-bba5-7c8a3ea1e80a\") " Jan 26 16:03:29 crc kubenswrapper[4896]: I0126 16:03:29.903156 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgj2q\" (UniqueName: \"kubernetes.io/projected/68a4442a-a1a0-4827-bba5-7c8a3ea1e80a-kube-api-access-wgj2q\") pod \"68a4442a-a1a0-4827-bba5-7c8a3ea1e80a\" (UID: \"68a4442a-a1a0-4827-bba5-7c8a3ea1e80a\") " Jan 26 16:03:29 crc kubenswrapper[4896]: I0126 16:03:29.915565 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68a4442a-a1a0-4827-bba5-7c8a3ea1e80a-scripts" (OuterVolumeSpecName: "scripts") pod "68a4442a-a1a0-4827-bba5-7c8a3ea1e80a" (UID: "68a4442a-a1a0-4827-bba5-7c8a3ea1e80a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:03:29 crc kubenswrapper[4896]: I0126 16:03:29.915756 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68a4442a-a1a0-4827-bba5-7c8a3ea1e80a-kube-api-access-wgj2q" (OuterVolumeSpecName: "kube-api-access-wgj2q") pod "68a4442a-a1a0-4827-bba5-7c8a3ea1e80a" (UID: "68a4442a-a1a0-4827-bba5-7c8a3ea1e80a"). InnerVolumeSpecName "kube-api-access-wgj2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:03:29 crc kubenswrapper[4896]: I0126 16:03:29.946052 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68a4442a-a1a0-4827-bba5-7c8a3ea1e80a-config-data" (OuterVolumeSpecName: "config-data") pod "68a4442a-a1a0-4827-bba5-7c8a3ea1e80a" (UID: "68a4442a-a1a0-4827-bba5-7c8a3ea1e80a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:03:29 crc kubenswrapper[4896]: I0126 16:03:29.946439 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68a4442a-a1a0-4827-bba5-7c8a3ea1e80a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "68a4442a-a1a0-4827-bba5-7c8a3ea1e80a" (UID: "68a4442a-a1a0-4827-bba5-7c8a3ea1e80a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.012823 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12011422-6209-4261-8a15-6d7033a9c33e-config-data\") pod \"12011422-6209-4261-8a15-6d7033a9c33e\" (UID: \"12011422-6209-4261-8a15-6d7033a9c33e\") "
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.012921 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvhjz\" (UniqueName: \"kubernetes.io/projected/12011422-6209-4261-8a15-6d7033a9c33e-kube-api-access-fvhjz\") pod \"12011422-6209-4261-8a15-6d7033a9c33e\" (UID: \"12011422-6209-4261-8a15-6d7033a9c33e\") "
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.012959 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12011422-6209-4261-8a15-6d7033a9c33e-combined-ca-bundle\") pod \"12011422-6209-4261-8a15-6d7033a9c33e\" (UID: \"12011422-6209-4261-8a15-6d7033a9c33e\") "
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.013648 4896 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68a4442a-a1a0-4827-bba5-7c8a3ea1e80a-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.013659 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68a4442a-a1a0-4827-bba5-7c8a3ea1e80a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.013669 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68a4442a-a1a0-4827-bba5-7c8a3ea1e80a-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.013677 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgj2q\" (UniqueName: \"kubernetes.io/projected/68a4442a-a1a0-4827-bba5-7c8a3ea1e80a-kube-api-access-wgj2q\") on node \"crc\" DevicePath \"\""
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.024958 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12011422-6209-4261-8a15-6d7033a9c33e-kube-api-access-fvhjz" (OuterVolumeSpecName: "kube-api-access-fvhjz") pod "12011422-6209-4261-8a15-6d7033a9c33e" (UID: "12011422-6209-4261-8a15-6d7033a9c33e"). InnerVolumeSpecName "kube-api-access-fvhjz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.115560 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvhjz\" (UniqueName: \"kubernetes.io/projected/12011422-6209-4261-8a15-6d7033a9c33e-kube-api-access-fvhjz\") on node \"crc\" DevicePath \"\""
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.144262 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12011422-6209-4261-8a15-6d7033a9c33e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "12011422-6209-4261-8a15-6d7033a9c33e" (UID: "12011422-6209-4261-8a15-6d7033a9c33e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.147108 4896 generic.go:334] "Generic (PLEG): container finished" podID="e4002ca1-76f0-4c36-bd5b-441b4d16013d" containerID="c5ca317b32280669f54f6baa6395fac2d604ff8390d2bfd63f8489a700e1eccc" exitCode=0
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.147172 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e4002ca1-76f0-4c36-bd5b-441b4d16013d","Type":"ContainerDied","Data":"c5ca317b32280669f54f6baa6395fac2d604ff8390d2bfd63f8489a700e1eccc"}
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.148964 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4r45x" event={"ID":"68a4442a-a1a0-4827-bba5-7c8a3ea1e80a","Type":"ContainerDied","Data":"d279cec2a24c6a27a6ffce68842a16292aca894fba94d1b4c4ede8df09a3898b"}
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.148992 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d279cec2a24c6a27a6ffce68842a16292aca894fba94d1b4c4ede8df09a3898b"
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.149052 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4r45x"
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.153203 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"12011422-6209-4261-8a15-6d7033a9c33e","Type":"ContainerDied","Data":"d19150ab231024b7ed3f0e403cb2c6ac991897cb617a8cb7441fb32dc298b7b3"}
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.153264 4896 scope.go:117] "RemoveContainer" containerID="f814226073e32c98a9e1416e0a618f138655c3eefc88ba780db064f83cb3d7ba"
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.153219 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.176184 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-68g8k" event={"ID":"ce438fc7-3483-42e1-9230-ff161324b2a8","Type":"ContainerStarted","Data":"f91eea4c57e7b50c8a9729d9c3fa9fa94c2b186af2f11809088e7ec59518842b"}
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.209210 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12011422-6209-4261-8a15-6d7033a9c33e-config-data" (OuterVolumeSpecName: "config-data") pod "12011422-6209-4261-8a15-6d7033a9c33e" (UID: "12011422-6209-4261-8a15-6d7033a9c33e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:03:30 crc kubenswrapper[4896]: E0126 16:03:30.209310 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68a4442a_a1a0_4827_bba5_7c8a3ea1e80a.slice/crio-d279cec2a24c6a27a6ffce68842a16292aca894fba94d1b4c4ede8df09a3898b\": RecentStats: unable to find data in memory cache]"
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.219454 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12011422-6209-4261-8a15-6d7033a9c33e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.219487 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12011422-6209-4261-8a15-6d7033a9c33e-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.345209 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.372968 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-68g8k" podStartSLOduration=2.8130053569999998 podStartE2EDuration="9.372948973s" podCreationTimestamp="2026-01-26 16:03:21 +0000 UTC" firstStartedPulling="2026-01-26 16:03:22.96370302 +0000 UTC m=+1760.745583413" lastFinishedPulling="2026-01-26 16:03:29.523646636 +0000 UTC m=+1767.305527029" observedRunningTime="2026-01-26 16:03:30.209940621 +0000 UTC m=+1767.991821004" watchObservedRunningTime="2026-01-26 16:03:30.372948973 +0000 UTC m=+1768.154829366"
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.391141 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.422808 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4002ca1-76f0-4c36-bd5b-441b4d16013d-logs\") pod \"e4002ca1-76f0-4c36-bd5b-441b4d16013d\" (UID: \"e4002ca1-76f0-4c36-bd5b-441b4d16013d\") "
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.422852 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vphsf\" (UniqueName: \"kubernetes.io/projected/e4002ca1-76f0-4c36-bd5b-441b4d16013d-kube-api-access-vphsf\") pod \"e4002ca1-76f0-4c36-bd5b-441b4d16013d\" (UID: \"e4002ca1-76f0-4c36-bd5b-441b4d16013d\") "
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.422978 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4002ca1-76f0-4c36-bd5b-441b4d16013d-combined-ca-bundle\") pod \"e4002ca1-76f0-4c36-bd5b-441b4d16013d\" (UID: \"e4002ca1-76f0-4c36-bd5b-441b4d16013d\") "
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.423089 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4002ca1-76f0-4c36-bd5b-441b4d16013d-config-data\") pod \"e4002ca1-76f0-4c36-bd5b-441b4d16013d\" (UID: \"e4002ca1-76f0-4c36-bd5b-441b4d16013d\") "
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.423685 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4002ca1-76f0-4c36-bd5b-441b4d16013d-logs" (OuterVolumeSpecName: "logs") pod "e4002ca1-76f0-4c36-bd5b-441b4d16013d" (UID: "e4002ca1-76f0-4c36-bd5b-441b4d16013d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.440717 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4002ca1-76f0-4c36-bd5b-441b4d16013d-kube-api-access-vphsf" (OuterVolumeSpecName: "kube-api-access-vphsf") pod "e4002ca1-76f0-4c36-bd5b-441b4d16013d" (UID: "e4002ca1-76f0-4c36-bd5b-441b4d16013d"). InnerVolumeSpecName "kube-api-access-vphsf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.463445 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4002ca1-76f0-4c36-bd5b-441b4d16013d-config-data" (OuterVolumeSpecName: "config-data") pod "e4002ca1-76f0-4c36-bd5b-441b4d16013d" (UID: "e4002ca1-76f0-4c36-bd5b-441b4d16013d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.481887 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4002ca1-76f0-4c36-bd5b-441b4d16013d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4002ca1-76f0-4c36-bd5b-441b4d16013d" (UID: "e4002ca1-76f0-4c36-bd5b-441b4d16013d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.535855 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4002ca1-76f0-4c36-bd5b-441b4d16013d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.535908 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4002ca1-76f0-4c36-bd5b-441b4d16013d-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.535921 4896 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4002ca1-76f0-4c36-bd5b-441b4d16013d-logs\") on node \"crc\" DevicePath \"\""
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.535933 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vphsf\" (UniqueName: \"kubernetes.io/projected/e4002ca1-76f0-4c36-bd5b-441b4d16013d-kube-api-access-vphsf\") on node \"crc\" DevicePath \"\""
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.655855 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.672864 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.893061 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12011422-6209-4261-8a15-6d7033a9c33e" path="/var/lib/kubelet/pods/12011422-6209-4261-8a15-6d7033a9c33e/volumes"
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.903203 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 26 16:03:30 crc kubenswrapper[4896]: E0126 16:03:30.904490 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4002ca1-76f0-4c36-bd5b-441b4d16013d" containerName="nova-api-log"
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.904511 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4002ca1-76f0-4c36-bd5b-441b4d16013d" containerName="nova-api-log"
Jan 26 16:03:30 crc kubenswrapper[4896]: E0126 16:03:30.904540 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4002ca1-76f0-4c36-bd5b-441b4d16013d" containerName="nova-api-api"
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.904547 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4002ca1-76f0-4c36-bd5b-441b4d16013d" containerName="nova-api-api"
Jan 26 16:03:30 crc kubenswrapper[4896]: E0126 16:03:30.904596 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68a4442a-a1a0-4827-bba5-7c8a3ea1e80a" containerName="nova-cell1-conductor-db-sync"
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.904605 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="68a4442a-a1a0-4827-bba5-7c8a3ea1e80a" containerName="nova-cell1-conductor-db-sync"
Jan 26 16:03:30 crc kubenswrapper[4896]: E0126 16:03:30.904621 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12011422-6209-4261-8a15-6d7033a9c33e" containerName="nova-scheduler-scheduler"
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.904627 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="12011422-6209-4261-8a15-6d7033a9c33e" containerName="nova-scheduler-scheduler"
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.905066 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4002ca1-76f0-4c36-bd5b-441b4d16013d" containerName="nova-api-log"
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.905088 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="68a4442a-a1a0-4827-bba5-7c8a3ea1e80a" containerName="nova-cell1-conductor-db-sync"
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.905103 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4002ca1-76f0-4c36-bd5b-441b4d16013d" containerName="nova-api-api"
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.905120 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="12011422-6209-4261-8a15-6d7033a9c33e" containerName="nova-scheduler-scheduler"
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.906276 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.906370 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.910867 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.968678 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.970542 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 26 16:03:30 crc kubenswrapper[4896]: I0126 16:03:30.973742 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.005631 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.112823 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01f38b0d-f4f6-4780-a38a-dd8b97f38371-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"01f38b0d-f4f6-4780-a38a-dd8b97f38371\") " pod="openstack/nova-scheduler-0"
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.113050 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/def88cd3-6f9d-4df5-831c-ece4a17801ab-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"def88cd3-6f9d-4df5-831c-ece4a17801ab\") " pod="openstack/nova-cell1-conductor-0"
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.113111 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/def88cd3-6f9d-4df5-831c-ece4a17801ab-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"def88cd3-6f9d-4df5-831c-ece4a17801ab\") " pod="openstack/nova-cell1-conductor-0"
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.113482 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b67tx\" (UniqueName: \"kubernetes.io/projected/01f38b0d-f4f6-4780-a38a-dd8b97f38371-kube-api-access-b67tx\") pod \"nova-scheduler-0\" (UID: \"01f38b0d-f4f6-4780-a38a-dd8b97f38371\") " pod="openstack/nova-scheduler-0"
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.113541 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01f38b0d-f4f6-4780-a38a-dd8b97f38371-config-data\") pod \"nova-scheduler-0\" (UID: \"01f38b0d-f4f6-4780-a38a-dd8b97f38371\") " pod="openstack/nova-scheduler-0"
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.113718 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4gnr\" (UniqueName: \"kubernetes.io/projected/def88cd3-6f9d-4df5-831c-ece4a17801ab-kube-api-access-h4gnr\") pod \"nova-cell1-conductor-0\" (UID: \"def88cd3-6f9d-4df5-831c-ece4a17801ab\") " pod="openstack/nova-cell1-conductor-0"
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.187998 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e4002ca1-76f0-4c36-bd5b-441b4d16013d","Type":"ContainerDied","Data":"68f9852cca7c395a0bac6c85eccd27f619159056d9407c4118b544109bbf1799"}
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.188065 4896 scope.go:117] "RemoveContainer" containerID="c5ca317b32280669f54f6baa6395fac2d604ff8390d2bfd63f8489a700e1eccc"
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.188326 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.189484 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6bf58a4a-039f-4365-b64e-e3b81212de22","Type":"ContainerStarted","Data":"6f16ab14840fdc49f2956514c7ee278076ca29f772f5d4bfe182f86fcb8f29fc"}
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.189519 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6bf58a4a-039f-4365-b64e-e3b81212de22","Type":"ContainerStarted","Data":"632d78657d7b4109e48d5784d55f84027060be78dba178057cf13514f5c91c0e"}
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.214954 4896 scope.go:117] "RemoveContainer" containerID="2a572e150dc4585a2371cb14da31b88632b0ddb6d8f6b8781c1e36a17f8e85c6"
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.215026 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.216816 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4gnr\" (UniqueName: \"kubernetes.io/projected/def88cd3-6f9d-4df5-831c-ece4a17801ab-kube-api-access-h4gnr\") pod \"nova-cell1-conductor-0\" (UID: \"def88cd3-6f9d-4df5-831c-ece4a17801ab\") " pod="openstack/nova-cell1-conductor-0"
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.216940 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01f38b0d-f4f6-4780-a38a-dd8b97f38371-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"01f38b0d-f4f6-4780-a38a-dd8b97f38371\") " pod="openstack/nova-scheduler-0"
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.217096 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/def88cd3-6f9d-4df5-831c-ece4a17801ab-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"def88cd3-6f9d-4df5-831c-ece4a17801ab\") " pod="openstack/nova-cell1-conductor-0"
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.217154 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/def88cd3-6f9d-4df5-831c-ece4a17801ab-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"def88cd3-6f9d-4df5-831c-ece4a17801ab\") " pod="openstack/nova-cell1-conductor-0"
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.217266 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b67tx\" (UniqueName: \"kubernetes.io/projected/01f38b0d-f4f6-4780-a38a-dd8b97f38371-kube-api-access-b67tx\") pod \"nova-scheduler-0\" (UID: \"01f38b0d-f4f6-4780-a38a-dd8b97f38371\") " pod="openstack/nova-scheduler-0"
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.217291 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01f38b0d-f4f6-4780-a38a-dd8b97f38371-config-data\") pod \"nova-scheduler-0\" (UID: \"01f38b0d-f4f6-4780-a38a-dd8b97f38371\") " pod="openstack/nova-scheduler-0"
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.222680 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/def88cd3-6f9d-4df5-831c-ece4a17801ab-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"def88cd3-6f9d-4df5-831c-ece4a17801ab\") " pod="openstack/nova-cell1-conductor-0"
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.226486 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01f38b0d-f4f6-4780-a38a-dd8b97f38371-config-data\") pod \"nova-scheduler-0\" (UID: \"01f38b0d-f4f6-4780-a38a-dd8b97f38371\") " pod="openstack/nova-scheduler-0"
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.227343 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01f38b0d-f4f6-4780-a38a-dd8b97f38371-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"01f38b0d-f4f6-4780-a38a-dd8b97f38371\") " pod="openstack/nova-scheduler-0"
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.233122 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.236657 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/def88cd3-6f9d-4df5-831c-ece4a17801ab-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"def88cd3-6f9d-4df5-831c-ece4a17801ab\") " pod="openstack/nova-cell1-conductor-0"
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.246403 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b67tx\" (UniqueName: \"kubernetes.io/projected/01f38b0d-f4f6-4780-a38a-dd8b97f38371-kube-api-access-b67tx\") pod \"nova-scheduler-0\" (UID: \"01f38b0d-f4f6-4780-a38a-dd8b97f38371\") " pod="openstack/nova-scheduler-0"
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.256307 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.257143 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4gnr\" (UniqueName: \"kubernetes.io/projected/def88cd3-6f9d-4df5-831c-ece4a17801ab-kube-api-access-h4gnr\") pod \"nova-cell1-conductor-0\" (UID: \"def88cd3-6f9d-4df5-831c-ece4a17801ab\") " pod="openstack/nova-cell1-conductor-0"
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.258339 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.265259 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.268431 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.278907 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.427768 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b06f9e75-223d-4cbb-81db-d4dd2887dab2-logs\") pod \"nova-api-0\" (UID: \"b06f9e75-223d-4cbb-81db-d4dd2887dab2\") " pod="openstack/nova-api-0"
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.428293 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b06f9e75-223d-4cbb-81db-d4dd2887dab2-config-data\") pod \"nova-api-0\" (UID: \"b06f9e75-223d-4cbb-81db-d4dd2887dab2\") " pod="openstack/nova-api-0"
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.428553 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmjxq\" (UniqueName: \"kubernetes.io/projected/b06f9e75-223d-4cbb-81db-d4dd2887dab2-kube-api-access-jmjxq\") pod \"nova-api-0\" (UID: \"b06f9e75-223d-4cbb-81db-d4dd2887dab2\") " pod="openstack/nova-api-0"
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.428648 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b06f9e75-223d-4cbb-81db-d4dd2887dab2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b06f9e75-223d-4cbb-81db-d4dd2887dab2\") " pod="openstack/nova-api-0"
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.452280 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.544946 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b06f9e75-223d-4cbb-81db-d4dd2887dab2-config-data\") pod \"nova-api-0\" (UID: \"b06f9e75-223d-4cbb-81db-d4dd2887dab2\") " pod="openstack/nova-api-0"
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.546264 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmjxq\" (UniqueName: \"kubernetes.io/projected/b06f9e75-223d-4cbb-81db-d4dd2887dab2-kube-api-access-jmjxq\") pod \"nova-api-0\" (UID: \"b06f9e75-223d-4cbb-81db-d4dd2887dab2\") " pod="openstack/nova-api-0"
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.546376 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b06f9e75-223d-4cbb-81db-d4dd2887dab2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b06f9e75-223d-4cbb-81db-d4dd2887dab2\") " pod="openstack/nova-api-0"
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.546714 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b06f9e75-223d-4cbb-81db-d4dd2887dab2-logs\") pod \"nova-api-0\" (UID: \"b06f9e75-223d-4cbb-81db-d4dd2887dab2\") " pod="openstack/nova-api-0"
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.547335 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b06f9e75-223d-4cbb-81db-d4dd2887dab2-logs\") pod \"nova-api-0\" (UID: \"b06f9e75-223d-4cbb-81db-d4dd2887dab2\") " pod="openstack/nova-api-0"
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.555444 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b06f9e75-223d-4cbb-81db-d4dd2887dab2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b06f9e75-223d-4cbb-81db-d4dd2887dab2\") " pod="openstack/nova-api-0"
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.556638 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b06f9e75-223d-4cbb-81db-d4dd2887dab2-config-data\") pod \"nova-api-0\" (UID: \"b06f9e75-223d-4cbb-81db-d4dd2887dab2\") " pod="openstack/nova-api-0"
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.572801 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmjxq\" (UniqueName: \"kubernetes.io/projected/b06f9e75-223d-4cbb-81db-d4dd2887dab2-kube-api-access-jmjxq\") pod \"nova-api-0\" (UID: \"b06f9e75-223d-4cbb-81db-d4dd2887dab2\") " pod="openstack/nova-api-0"
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.590390 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 26 16:03:31 crc kubenswrapper[4896]: W0126 16:03:31.825184 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01f38b0d_f4f6_4780_a38a_dd8b97f38371.slice/crio-f5e811b9b79dbcb450e19aefb07b5723cde371831e5941f05036a2692890ab3a WatchSource:0}: Error finding container f5e811b9b79dbcb450e19aefb07b5723cde371831e5941f05036a2692890ab3a: Status 404 returned error can't find the container with id f5e811b9b79dbcb450e19aefb07b5723cde371831e5941f05036a2692890ab3a
Jan 26 16:03:31 crc kubenswrapper[4896]: I0126 16:03:31.831739 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 26 16:03:32 crc kubenswrapper[4896]: I0126 16:03:32.065635 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 26 16:03:32 crc kubenswrapper[4896]: W0126 16:03:32.068744 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddef88cd3_6f9d_4df5_831c_ece4a17801ab.slice/crio-cddb2d2af3e9299f2ad6129174a0ed593683ae34fa51a527f345faab231abe82 WatchSource:0}: Error finding container cddb2d2af3e9299f2ad6129174a0ed593683ae34fa51a527f345faab231abe82: Status 404 returned error can't find the container with id cddb2d2af3e9299f2ad6129174a0ed593683ae34fa51a527f345faab231abe82
Jan 26 16:03:32 crc kubenswrapper[4896]: I0126 16:03:32.206746 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6bf58a4a-039f-4365-b64e-e3b81212de22","Type":"ContainerStarted","Data":"478b1e16ac6ad57e3cfe3194b5682622c3daba112250171f17f5fee1ceeebc5c"}
Jan 26 16:03:32 crc kubenswrapper[4896]: I0126 16:03:32.212253 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"def88cd3-6f9d-4df5-831c-ece4a17801ab","Type":"ContainerStarted","Data":"cddb2d2af3e9299f2ad6129174a0ed593683ae34fa51a527f345faab231abe82"}
Jan 26 16:03:32 crc kubenswrapper[4896]: I0126 16:03:32.216414 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"01f38b0d-f4f6-4780-a38a-dd8b97f38371","Type":"ContainerStarted","Data":"7ff0f101af723ef2aa9921da1c0f156beb9d23ac88d7b09b006508e33ce2ed89"}
Jan 26 16:03:32 crc kubenswrapper[4896]: I0126 16:03:32.216462 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"01f38b0d-f4f6-4780-a38a-dd8b97f38371","Type":"ContainerStarted","Data":"f5e811b9b79dbcb450e19aefb07b5723cde371831e5941f05036a2692890ab3a"}
Jan 26 16:03:32 crc kubenswrapper[4896]: I0126 16:03:32.238508 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=6.238481409 podStartE2EDuration="6.238481409s" podCreationTimestamp="2026-01-26 16:03:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:03:32.229136941 +0000 UTC m=+1770.011017334" watchObservedRunningTime="2026-01-26 16:03:32.238481409 +0000 UTC m=+1770.020361802"
Jan 26 16:03:32 crc kubenswrapper[4896]: I0126 16:03:32.266681 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 26 16:03:32 crc kubenswrapper[4896]: W0126 16:03:32.267561 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb06f9e75_223d_4cbb_81db_d4dd2887dab2.slice/crio-79392f56be3e65d5b70bdfba8c9fef4c2be8cb0754fce1180f7fb447970e32e2 WatchSource:0}: Error finding container 79392f56be3e65d5b70bdfba8c9fef4c2be8cb0754fce1180f7fb447970e32e2: Status 404 returned error can't find the container with id 79392f56be3e65d5b70bdfba8c9fef4c2be8cb0754fce1180f7fb447970e32e2
Jan 26 16:03:32 crc kubenswrapper[4896]: I0126 16:03:32.275376 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.275356885 podStartE2EDuration="2.275356885s" podCreationTimestamp="2026-01-26 16:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:03:32.24870013 +0000 UTC m=+1770.030580533" watchObservedRunningTime="2026-01-26 16:03:32.275356885 +0000 UTC m=+1770.057237278"
Jan 26 16:03:32 crc kubenswrapper[4896]: I0126 16:03:32.786271 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4002ca1-76f0-4c36-bd5b-441b4d16013d" path="/var/lib/kubelet/pods/e4002ca1-76f0-4c36-bd5b-441b4d16013d/volumes"
Jan 26 16:03:33 crc kubenswrapper[4896]: I0126 16:03:33.236570 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b06f9e75-223d-4cbb-81db-d4dd2887dab2","Type":"ContainerStarted","Data":"3e4156446fff160cae883d6e4e45fa9ec75cdda0d8de218fb62502a1e9a9bf34"}
Jan 26 16:03:33 crc kubenswrapper[4896]: I0126 16:03:33.237216 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b06f9e75-223d-4cbb-81db-d4dd2887dab2","Type":"ContainerStarted","Data":"f293aa4b4ac28b02164b770af49f6af7c00006a9ab2d2c01e1692ddb836bb3e9"}
Jan 26 16:03:33 crc kubenswrapper[4896]: I0126 16:03:33.237230 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b06f9e75-223d-4cbb-81db-d4dd2887dab2","Type":"ContainerStarted","Data":"79392f56be3e65d5b70bdfba8c9fef4c2be8cb0754fce1180f7fb447970e32e2"}
Jan 26 16:03:33 crc kubenswrapper[4896]: I0126 16:03:33.240346 4896 generic.go:334] "Generic (PLEG): container finished" podID="ce438fc7-3483-42e1-9230-ff161324b2a8" containerID="f91eea4c57e7b50c8a9729d9c3fa9fa94c2b186af2f11809088e7ec59518842b" exitCode=0
Jan 26 16:03:33 crc kubenswrapper[4896]: I0126 16:03:33.240428 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-68g8k" event={"ID":"ce438fc7-3483-42e1-9230-ff161324b2a8","Type":"ContainerDied","Data":"f91eea4c57e7b50c8a9729d9c3fa9fa94c2b186af2f11809088e7ec59518842b"}
Jan 26 16:03:33 crc kubenswrapper[4896]: I0126 16:03:33.242882 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"def88cd3-6f9d-4df5-831c-ece4a17801ab","Type":"ContainerStarted","Data":"0a1d75ff7416c84ef21f778c4fb67cbd11d43e8d90823d6b55450708dfc47398"}
Jan 26 16:03:33 crc kubenswrapper[4896]: I0126 16:03:33.243080 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Jan 26 16:03:33 crc kubenswrapper[4896]: I0126 16:03:33.258635 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.258617424 podStartE2EDuration="2.258617424s" podCreationTimestamp="2026-01-26 16:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:03:33.258606003 +0000 UTC m=+1771.040486406" watchObservedRunningTime="2026-01-26 16:03:33.258617424 +0000 UTC m=+1771.040497817"
Jan 26 16:03:33 crc kubenswrapper[4896]: I0126 16:03:33.316238 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.316217362 podStartE2EDuration="3.316217362s" podCreationTimestamp="2026-01-26 16:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:03:33.296273752 +0000 UTC m=+1771.078154145" watchObservedRunningTime="2026-01-26 16:03:33.316217362 +0000 UTC m=+1771.098097755"
Jan 26 16:03:34 crc kubenswrapper[4896]: I0126 16:03:34.907979 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-68g8k"
Jan 26 16:03:34 crc kubenswrapper[4896]: I0126 16:03:34.948523 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce438fc7-3483-42e1-9230-ff161324b2a8-combined-ca-bundle\") pod \"ce438fc7-3483-42e1-9230-ff161324b2a8\" (UID: \"ce438fc7-3483-42e1-9230-ff161324b2a8\") "
Jan 26 16:03:34 crc kubenswrapper[4896]: I0126 16:03:34.948614 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce438fc7-3483-42e1-9230-ff161324b2a8-scripts\") pod \"ce438fc7-3483-42e1-9230-ff161324b2a8\" (UID: \"ce438fc7-3483-42e1-9230-ff161324b2a8\") "
Jan 26 16:03:34 crc kubenswrapper[4896]: I0126 16:03:34.948947 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sn7mp\" (UniqueName: \"kubernetes.io/projected/ce438fc7-3483-42e1-9230-ff161324b2a8-kube-api-access-sn7mp\") pod \"ce438fc7-3483-42e1-9230-ff161324b2a8\" (UID: \"ce438fc7-3483-42e1-9230-ff161324b2a8\") "
Jan 26 16:03:34 crc kubenswrapper[4896]: I0126 16:03:34.960013 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce438fc7-3483-42e1-9230-ff161324b2a8-scripts" (OuterVolumeSpecName: "scripts") pod "ce438fc7-3483-42e1-9230-ff161324b2a8" (UID: "ce438fc7-3483-42e1-9230-ff161324b2a8"). InnerVolumeSpecName "scripts".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:03:34 crc kubenswrapper[4896]: I0126 16:03:34.960787 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce438fc7-3483-42e1-9230-ff161324b2a8-kube-api-access-sn7mp" (OuterVolumeSpecName: "kube-api-access-sn7mp") pod "ce438fc7-3483-42e1-9230-ff161324b2a8" (UID: "ce438fc7-3483-42e1-9230-ff161324b2a8"). InnerVolumeSpecName "kube-api-access-sn7mp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:03:34 crc kubenswrapper[4896]: I0126 16:03:34.987859 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce438fc7-3483-42e1-9230-ff161324b2a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce438fc7-3483-42e1-9230-ff161324b2a8" (UID: "ce438fc7-3483-42e1-9230-ff161324b2a8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:03:35 crc kubenswrapper[4896]: I0126 16:03:35.052191 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce438fc7-3483-42e1-9230-ff161324b2a8-config-data\") pod \"ce438fc7-3483-42e1-9230-ff161324b2a8\" (UID: \"ce438fc7-3483-42e1-9230-ff161324b2a8\") " Jan 26 16:03:35 crc kubenswrapper[4896]: I0126 16:03:35.053021 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce438fc7-3483-42e1-9230-ff161324b2a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:03:35 crc kubenswrapper[4896]: I0126 16:03:35.053035 4896 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce438fc7-3483-42e1-9230-ff161324b2a8-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:03:35 crc kubenswrapper[4896]: I0126 16:03:35.053044 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sn7mp\" (UniqueName: 
\"kubernetes.io/projected/ce438fc7-3483-42e1-9230-ff161324b2a8-kube-api-access-sn7mp\") on node \"crc\" DevicePath \"\"" Jan 26 16:03:35 crc kubenswrapper[4896]: I0126 16:03:35.087052 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce438fc7-3483-42e1-9230-ff161324b2a8-config-data" (OuterVolumeSpecName: "config-data") pod "ce438fc7-3483-42e1-9230-ff161324b2a8" (UID: "ce438fc7-3483-42e1-9230-ff161324b2a8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:03:35 crc kubenswrapper[4896]: I0126 16:03:35.155191 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce438fc7-3483-42e1-9230-ff161324b2a8-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:03:35 crc kubenswrapper[4896]: I0126 16:03:35.278032 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-68g8k" event={"ID":"ce438fc7-3483-42e1-9230-ff161324b2a8","Type":"ContainerDied","Data":"eb37fec064234fec14d4a770a3a28dd44f9d4b1c3a174c3092e91bed4be695d2"} Jan 26 16:03:35 crc kubenswrapper[4896]: I0126 16:03:35.278084 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb37fec064234fec14d4a770a3a28dd44f9d4b1c3a174c3092e91bed4be695d2" Jan 26 16:03:35 crc kubenswrapper[4896]: I0126 16:03:35.278555 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-68g8k" Jan 26 16:03:36 crc kubenswrapper[4896]: I0126 16:03:36.266591 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 26 16:03:36 crc kubenswrapper[4896]: I0126 16:03:36.865459 4896 scope.go:117] "RemoveContainer" containerID="eef508224f0cdcfb0579b0234e72c3c5503ce5cf1713a9bee24c9feccf4983cb" Jan 26 16:03:36 crc kubenswrapper[4896]: E0126 16:03:36.865739 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:03:36 crc kubenswrapper[4896]: I0126 16:03:36.881385 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 16:03:36 crc kubenswrapper[4896]: I0126 16:03:36.881434 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 16:03:36 crc kubenswrapper[4896]: I0126 16:03:36.881447 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 16:03:36 crc kubenswrapper[4896]: I0126 16:03:36.881455 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 16:03:38 crc kubenswrapper[4896]: I0126 16:03:38.067704 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6bf58a4a-039f-4365-b64e-e3b81212de22" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.254:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 16:03:38 crc kubenswrapper[4896]: I0126 
16:03:38.067999 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6bf58a4a-039f-4365-b64e-e3b81212de22" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.254:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 16:03:38 crc kubenswrapper[4896]: I0126 16:03:38.538479 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 26 16:03:38 crc kubenswrapper[4896]: E0126 16:03:38.539693 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce438fc7-3483-42e1-9230-ff161324b2a8" containerName="aodh-db-sync" Jan 26 16:03:38 crc kubenswrapper[4896]: I0126 16:03:38.539722 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce438fc7-3483-42e1-9230-ff161324b2a8" containerName="aodh-db-sync" Jan 26 16:03:38 crc kubenswrapper[4896]: I0126 16:03:38.543737 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce438fc7-3483-42e1-9230-ff161324b2a8" containerName="aodh-db-sync" Jan 26 16:03:38 crc kubenswrapper[4896]: I0126 16:03:38.548331 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 26 16:03:38 crc kubenswrapper[4896]: I0126 16:03:38.553838 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-b2ntx" Jan 26 16:03:38 crc kubenswrapper[4896]: I0126 16:03:38.554073 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 26 16:03:38 crc kubenswrapper[4896]: I0126 16:03:38.554208 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 26 16:03:38 crc kubenswrapper[4896]: I0126 16:03:38.590657 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 26 16:03:38 crc kubenswrapper[4896]: I0126 16:03:38.657946 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6146c84f-0372-46f8-86e2-da5186ab20bf-scripts\") pod \"aodh-0\" (UID: \"6146c84f-0372-46f8-86e2-da5186ab20bf\") " pod="openstack/aodh-0" Jan 26 16:03:38 crc kubenswrapper[4896]: I0126 16:03:38.658084 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6146c84f-0372-46f8-86e2-da5186ab20bf-config-data\") pod \"aodh-0\" (UID: \"6146c84f-0372-46f8-86e2-da5186ab20bf\") " pod="openstack/aodh-0" Jan 26 16:03:38 crc kubenswrapper[4896]: I0126 16:03:38.658230 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6146c84f-0372-46f8-86e2-da5186ab20bf-combined-ca-bundle\") pod \"aodh-0\" (UID: \"6146c84f-0372-46f8-86e2-da5186ab20bf\") " pod="openstack/aodh-0" Jan 26 16:03:38 crc kubenswrapper[4896]: I0126 16:03:38.658267 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnm7m\" (UniqueName: 
\"kubernetes.io/projected/6146c84f-0372-46f8-86e2-da5186ab20bf-kube-api-access-hnm7m\") pod \"aodh-0\" (UID: \"6146c84f-0372-46f8-86e2-da5186ab20bf\") " pod="openstack/aodh-0" Jan 26 16:03:38 crc kubenswrapper[4896]: I0126 16:03:38.795364 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6146c84f-0372-46f8-86e2-da5186ab20bf-scripts\") pod \"aodh-0\" (UID: \"6146c84f-0372-46f8-86e2-da5186ab20bf\") " pod="openstack/aodh-0" Jan 26 16:03:38 crc kubenswrapper[4896]: I0126 16:03:38.795499 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6146c84f-0372-46f8-86e2-da5186ab20bf-config-data\") pod \"aodh-0\" (UID: \"6146c84f-0372-46f8-86e2-da5186ab20bf\") " pod="openstack/aodh-0" Jan 26 16:03:38 crc kubenswrapper[4896]: I0126 16:03:38.795639 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6146c84f-0372-46f8-86e2-da5186ab20bf-combined-ca-bundle\") pod \"aodh-0\" (UID: \"6146c84f-0372-46f8-86e2-da5186ab20bf\") " pod="openstack/aodh-0" Jan 26 16:03:38 crc kubenswrapper[4896]: I0126 16:03:38.795697 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnm7m\" (UniqueName: \"kubernetes.io/projected/6146c84f-0372-46f8-86e2-da5186ab20bf-kube-api-access-hnm7m\") pod \"aodh-0\" (UID: \"6146c84f-0372-46f8-86e2-da5186ab20bf\") " pod="openstack/aodh-0" Jan 26 16:03:38 crc kubenswrapper[4896]: I0126 16:03:38.802718 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6146c84f-0372-46f8-86e2-da5186ab20bf-config-data\") pod \"aodh-0\" (UID: \"6146c84f-0372-46f8-86e2-da5186ab20bf\") " pod="openstack/aodh-0" Jan 26 16:03:38 crc kubenswrapper[4896]: I0126 16:03:38.803014 4896 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6146c84f-0372-46f8-86e2-da5186ab20bf-scripts\") pod \"aodh-0\" (UID: \"6146c84f-0372-46f8-86e2-da5186ab20bf\") " pod="openstack/aodh-0" Jan 26 16:03:38 crc kubenswrapper[4896]: I0126 16:03:38.813771 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnm7m\" (UniqueName: \"kubernetes.io/projected/6146c84f-0372-46f8-86e2-da5186ab20bf-kube-api-access-hnm7m\") pod \"aodh-0\" (UID: \"6146c84f-0372-46f8-86e2-da5186ab20bf\") " pod="openstack/aodh-0" Jan 26 16:03:38 crc kubenswrapper[4896]: I0126 16:03:38.836507 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6146c84f-0372-46f8-86e2-da5186ab20bf-combined-ca-bundle\") pod \"aodh-0\" (UID: \"6146c84f-0372-46f8-86e2-da5186ab20bf\") " pod="openstack/aodh-0" Jan 26 16:03:38 crc kubenswrapper[4896]: I0126 16:03:38.870832 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 26 16:03:38 crc kubenswrapper[4896]: I0126 16:03:38.892940 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 26 16:03:39 crc kubenswrapper[4896]: I0126 16:03:39.456487 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 26 16:03:40 crc kubenswrapper[4896]: I0126 16:03:40.361597 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6146c84f-0372-46f8-86e2-da5186ab20bf","Type":"ContainerStarted","Data":"5af87e4ad6ffe4cf9f1e1fe24fe4c712670e610f46d2b86562b76cd9f2de964e"} Jan 26 16:03:41 crc kubenswrapper[4896]: I0126 16:03:41.377619 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 26 16:03:41 crc kubenswrapper[4896]: I0126 16:03:41.402786 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6146c84f-0372-46f8-86e2-da5186ab20bf","Type":"ContainerStarted","Data":"b903d5045ffecf6552d0270041561a960d63b8478844ecf067b60e432b3d1b26"} Jan 26 16:03:41 crc kubenswrapper[4896]: I0126 16:03:41.424382 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 26 16:03:41 crc kubenswrapper[4896]: I0126 16:03:41.675119 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 16:03:41 crc kubenswrapper[4896]: I0126 16:03:41.675158 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 16:03:41 crc kubenswrapper[4896]: I0126 16:03:41.687095 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 26 16:03:42 crc kubenswrapper[4896]: I0126 16:03:42.018129 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:03:42 crc kubenswrapper[4896]: I0126 
16:03:42.018784 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b3d27f14-359d-4ed0-91f5-107202d86bbb" containerName="ceilometer-central-agent" containerID="cri-o://3da17c3182eecec9563d4692adb1f813c7b6135c1166b7717e7a6be20c9a7bc3" gracePeriod=30 Jan 26 16:03:42 crc kubenswrapper[4896]: I0126 16:03:42.019367 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b3d27f14-359d-4ed0-91f5-107202d86bbb" containerName="proxy-httpd" containerID="cri-o://91f3fa0a290a6d6950decef310d7f5046bd0be5f9652bb5c63479ed6b8f3ac97" gracePeriod=30 Jan 26 16:03:42 crc kubenswrapper[4896]: I0126 16:03:42.019421 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b3d27f14-359d-4ed0-91f5-107202d86bbb" containerName="sg-core" containerID="cri-o://eeef585fe7cddd7e7e1189401afbe548a704658d05100b0216fa9fdd2213772a" gracePeriod=30 Jan 26 16:03:42 crc kubenswrapper[4896]: I0126 16:03:42.019458 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b3d27f14-359d-4ed0-91f5-107202d86bbb" containerName="ceilometer-notification-agent" containerID="cri-o://a80b44a735ffa6a0eb4a2adc9959ce3ccc190cbb73dd4090c0867ede8d15379d" gracePeriod=30 Jan 26 16:03:42 crc kubenswrapper[4896]: I0126 16:03:42.453625 4896 generic.go:334] "Generic (PLEG): container finished" podID="b3d27f14-359d-4ed0-91f5-107202d86bbb" containerID="91f3fa0a290a6d6950decef310d7f5046bd0be5f9652bb5c63479ed6b8f3ac97" exitCode=0 Jan 26 16:03:42 crc kubenswrapper[4896]: I0126 16:03:42.453690 4896 generic.go:334] "Generic (PLEG): container finished" podID="b3d27f14-359d-4ed0-91f5-107202d86bbb" containerID="eeef585fe7cddd7e7e1189401afbe548a704658d05100b0216fa9fdd2213772a" exitCode=2 Jan 26 16:03:42 crc kubenswrapper[4896]: I0126 16:03:42.455683 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"b3d27f14-359d-4ed0-91f5-107202d86bbb","Type":"ContainerDied","Data":"91f3fa0a290a6d6950decef310d7f5046bd0be5f9652bb5c63479ed6b8f3ac97"} Jan 26 16:03:42 crc kubenswrapper[4896]: I0126 16:03:42.455753 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3d27f14-359d-4ed0-91f5-107202d86bbb","Type":"ContainerDied","Data":"eeef585fe7cddd7e7e1189401afbe548a704658d05100b0216fa9fdd2213772a"} Jan 26 16:03:42 crc kubenswrapper[4896]: I0126 16:03:42.490399 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 26 16:03:42 crc kubenswrapper[4896]: I0126 16:03:42.493269 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 26 16:03:42 crc kubenswrapper[4896]: I0126 16:03:42.849876 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b06f9e75-223d-4cbb-81db-d4dd2887dab2" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.1:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 16:03:42 crc kubenswrapper[4896]: I0126 16:03:42.850064 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b06f9e75-223d-4cbb-81db-d4dd2887dab2" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.1:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 16:03:43 crc kubenswrapper[4896]: I0126 16:03:43.469039 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6146c84f-0372-46f8-86e2-da5186ab20bf","Type":"ContainerStarted","Data":"e80dd16be24f1c656284a2238d719b0eb6c0de27227196b21224f2d7729e1692"} Jan 26 16:03:43 crc kubenswrapper[4896]: I0126 16:03:43.472128 4896 generic.go:334] "Generic (PLEG): container finished" podID="b3d27f14-359d-4ed0-91f5-107202d86bbb" 
containerID="3da17c3182eecec9563d4692adb1f813c7b6135c1166b7717e7a6be20c9a7bc3" exitCode=0 Jan 26 16:03:43 crc kubenswrapper[4896]: I0126 16:03:43.473708 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3d27f14-359d-4ed0-91f5-107202d86bbb","Type":"ContainerDied","Data":"3da17c3182eecec9563d4692adb1f813c7b6135c1166b7717e7a6be20c9a7bc3"} Jan 26 16:03:45 crc kubenswrapper[4896]: I0126 16:03:45.542570 4896 generic.go:334] "Generic (PLEG): container finished" podID="b3d27f14-359d-4ed0-91f5-107202d86bbb" containerID="a80b44a735ffa6a0eb4a2adc9959ce3ccc190cbb73dd4090c0867ede8d15379d" exitCode=0 Jan 26 16:03:45 crc kubenswrapper[4896]: I0126 16:03:45.543109 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3d27f14-359d-4ed0-91f5-107202d86bbb","Type":"ContainerDied","Data":"a80b44a735ffa6a0eb4a2adc9959ce3ccc190cbb73dd4090c0867ede8d15379d"} Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.189847 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.344341 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3d27f14-359d-4ed0-91f5-107202d86bbb-combined-ca-bundle\") pod \"b3d27f14-359d-4ed0-91f5-107202d86bbb\" (UID: \"b3d27f14-359d-4ed0-91f5-107202d86bbb\") " Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.344547 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3d27f14-359d-4ed0-91f5-107202d86bbb-sg-core-conf-yaml\") pod \"b3d27f14-359d-4ed0-91f5-107202d86bbb\" (UID: \"b3d27f14-359d-4ed0-91f5-107202d86bbb\") " Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.344655 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3d27f14-359d-4ed0-91f5-107202d86bbb-log-httpd\") pod \"b3d27f14-359d-4ed0-91f5-107202d86bbb\" (UID: \"b3d27f14-359d-4ed0-91f5-107202d86bbb\") " Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.344760 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vgf7\" (UniqueName: \"kubernetes.io/projected/b3d27f14-359d-4ed0-91f5-107202d86bbb-kube-api-access-8vgf7\") pod \"b3d27f14-359d-4ed0-91f5-107202d86bbb\" (UID: \"b3d27f14-359d-4ed0-91f5-107202d86bbb\") " Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.344884 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3d27f14-359d-4ed0-91f5-107202d86bbb-run-httpd\") pod \"b3d27f14-359d-4ed0-91f5-107202d86bbb\" (UID: \"b3d27f14-359d-4ed0-91f5-107202d86bbb\") " Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.344939 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b3d27f14-359d-4ed0-91f5-107202d86bbb-scripts\") pod \"b3d27f14-359d-4ed0-91f5-107202d86bbb\" (UID: \"b3d27f14-359d-4ed0-91f5-107202d86bbb\") " Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.345038 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3d27f14-359d-4ed0-91f5-107202d86bbb-config-data\") pod \"b3d27f14-359d-4ed0-91f5-107202d86bbb\" (UID: \"b3d27f14-359d-4ed0-91f5-107202d86bbb\") " Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.345741 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3d27f14-359d-4ed0-91f5-107202d86bbb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b3d27f14-359d-4ed0-91f5-107202d86bbb" (UID: "b3d27f14-359d-4ed0-91f5-107202d86bbb"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.346024 4896 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3d27f14-359d-4ed0-91f5-107202d86bbb-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.347442 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3d27f14-359d-4ed0-91f5-107202d86bbb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b3d27f14-359d-4ed0-91f5-107202d86bbb" (UID: "b3d27f14-359d-4ed0-91f5-107202d86bbb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.363651 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3d27f14-359d-4ed0-91f5-107202d86bbb-kube-api-access-8vgf7" (OuterVolumeSpecName: "kube-api-access-8vgf7") pod "b3d27f14-359d-4ed0-91f5-107202d86bbb" (UID: "b3d27f14-359d-4ed0-91f5-107202d86bbb"). 
InnerVolumeSpecName "kube-api-access-8vgf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.371757 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3d27f14-359d-4ed0-91f5-107202d86bbb-scripts" (OuterVolumeSpecName: "scripts") pod "b3d27f14-359d-4ed0-91f5-107202d86bbb" (UID: "b3d27f14-359d-4ed0-91f5-107202d86bbb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.395832 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3d27f14-359d-4ed0-91f5-107202d86bbb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b3d27f14-359d-4ed0-91f5-107202d86bbb" (UID: "b3d27f14-359d-4ed0-91f5-107202d86bbb"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.448782 4896 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3d27f14-359d-4ed0-91f5-107202d86bbb-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.448811 4896 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3d27f14-359d-4ed0-91f5-107202d86bbb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.448826 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vgf7\" (UniqueName: \"kubernetes.io/projected/b3d27f14-359d-4ed0-91f5-107202d86bbb-kube-api-access-8vgf7\") on node \"crc\" DevicePath \"\"" Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.448835 4896 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3d27f14-359d-4ed0-91f5-107202d86bbb-run-httpd\") on node 
\"crc\" DevicePath \"\"" Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.496537 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3d27f14-359d-4ed0-91f5-107202d86bbb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3d27f14-359d-4ed0-91f5-107202d86bbb" (UID: "b3d27f14-359d-4ed0-91f5-107202d86bbb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.541082 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3d27f14-359d-4ed0-91f5-107202d86bbb-config-data" (OuterVolumeSpecName: "config-data") pod "b3d27f14-359d-4ed0-91f5-107202d86bbb" (UID: "b3d27f14-359d-4ed0-91f5-107202d86bbb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.555538 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3d27f14-359d-4ed0-91f5-107202d86bbb-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.555601 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3d27f14-359d-4ed0-91f5-107202d86bbb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.565314 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6146c84f-0372-46f8-86e2-da5186ab20bf","Type":"ContainerStarted","Data":"fbc88616d1618c8194bc32bcc5c94d4e852ed3928bda6f8e48fff07267ed0488"} Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.569380 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b3d27f14-359d-4ed0-91f5-107202d86bbb","Type":"ContainerDied","Data":"3b44c84595632eb5e0b60ca35ca75fc50c91255b75be9fec20b5ed1ced0781ef"} Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.569643 4896 scope.go:117] "RemoveContainer" containerID="91f3fa0a290a6d6950decef310d7f5046bd0be5f9652bb5c63479ed6b8f3ac97" Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.569681 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.682407 4896 scope.go:117] "RemoveContainer" containerID="eeef585fe7cddd7e7e1189401afbe548a704658d05100b0216fa9fdd2213772a" Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.700606 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.715918 4896 scope.go:117] "RemoveContainer" containerID="a80b44a735ffa6a0eb4a2adc9959ce3ccc190cbb73dd4090c0867ede8d15379d" Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.716360 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.746362 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:03:46 crc kubenswrapper[4896]: E0126 16:03:46.746899 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d27f14-359d-4ed0-91f5-107202d86bbb" containerName="ceilometer-central-agent" Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.746917 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d27f14-359d-4ed0-91f5-107202d86bbb" containerName="ceilometer-central-agent" Jan 26 16:03:46 crc kubenswrapper[4896]: E0126 16:03:46.746958 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d27f14-359d-4ed0-91f5-107202d86bbb" containerName="sg-core" Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.746967 4896 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="b3d27f14-359d-4ed0-91f5-107202d86bbb" containerName="sg-core" Jan 26 16:03:46 crc kubenswrapper[4896]: E0126 16:03:46.746975 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d27f14-359d-4ed0-91f5-107202d86bbb" containerName="proxy-httpd" Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.746981 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d27f14-359d-4ed0-91f5-107202d86bbb" containerName="proxy-httpd" Jan 26 16:03:46 crc kubenswrapper[4896]: E0126 16:03:46.747003 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d27f14-359d-4ed0-91f5-107202d86bbb" containerName="ceilometer-notification-agent" Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.747010 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d27f14-359d-4ed0-91f5-107202d86bbb" containerName="ceilometer-notification-agent" Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.747228 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d27f14-359d-4ed0-91f5-107202d86bbb" containerName="ceilometer-central-agent" Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.747248 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d27f14-359d-4ed0-91f5-107202d86bbb" containerName="proxy-httpd" Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.747262 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d27f14-359d-4ed0-91f5-107202d86bbb" containerName="sg-core" Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.747278 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d27f14-359d-4ed0-91f5-107202d86bbb" containerName="ceilometer-notification-agent" Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.749666 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.751727 4896 scope.go:117] "RemoveContainer" containerID="3da17c3182eecec9563d4692adb1f813c7b6135c1166b7717e7a6be20c9a7bc3" Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.752049 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.752178 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.759809 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5aa9201a-49a7-4184-9655-2ebc57a9990b-run-httpd\") pod \"ceilometer-0\" (UID: \"5aa9201a-49a7-4184-9655-2ebc57a9990b\") " pod="openstack/ceilometer-0" Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.759892 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aa9201a-49a7-4184-9655-2ebc57a9990b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5aa9201a-49a7-4184-9655-2ebc57a9990b\") " pod="openstack/ceilometer-0" Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.760022 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg9pd\" (UniqueName: \"kubernetes.io/projected/5aa9201a-49a7-4184-9655-2ebc57a9990b-kube-api-access-xg9pd\") pod \"ceilometer-0\" (UID: \"5aa9201a-49a7-4184-9655-2ebc57a9990b\") " pod="openstack/ceilometer-0" Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.760164 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5aa9201a-49a7-4184-9655-2ebc57a9990b-sg-core-conf-yaml\") pod \"ceilometer-0\" 
(UID: \"5aa9201a-49a7-4184-9655-2ebc57a9990b\") " pod="openstack/ceilometer-0" Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.760252 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5aa9201a-49a7-4184-9655-2ebc57a9990b-log-httpd\") pod \"ceilometer-0\" (UID: \"5aa9201a-49a7-4184-9655-2ebc57a9990b\") " pod="openstack/ceilometer-0" Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.760332 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aa9201a-49a7-4184-9655-2ebc57a9990b-config-data\") pod \"ceilometer-0\" (UID: \"5aa9201a-49a7-4184-9655-2ebc57a9990b\") " pod="openstack/ceilometer-0" Jan 26 16:03:46 crc kubenswrapper[4896]: I0126 16:03:46.760426 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5aa9201a-49a7-4184-9655-2ebc57a9990b-scripts\") pod \"ceilometer-0\" (UID: \"5aa9201a-49a7-4184-9655-2ebc57a9990b\") " pod="openstack/ceilometer-0" Jan 26 16:03:47 crc kubenswrapper[4896]: I0126 16:03:47.031037 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aa9201a-49a7-4184-9655-2ebc57a9990b-config-data\") pod \"ceilometer-0\" (UID: \"5aa9201a-49a7-4184-9655-2ebc57a9990b\") " pod="openstack/ceilometer-0" Jan 26 16:03:47 crc kubenswrapper[4896]: I0126 16:03:47.031110 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5aa9201a-49a7-4184-9655-2ebc57a9990b-scripts\") pod \"ceilometer-0\" (UID: \"5aa9201a-49a7-4184-9655-2ebc57a9990b\") " pod="openstack/ceilometer-0" Jan 26 16:03:47 crc kubenswrapper[4896]: I0126 16:03:47.031378 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5aa9201a-49a7-4184-9655-2ebc57a9990b-run-httpd\") pod \"ceilometer-0\" (UID: \"5aa9201a-49a7-4184-9655-2ebc57a9990b\") " pod="openstack/ceilometer-0" Jan 26 16:03:47 crc kubenswrapper[4896]: I0126 16:03:47.031454 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aa9201a-49a7-4184-9655-2ebc57a9990b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5aa9201a-49a7-4184-9655-2ebc57a9990b\") " pod="openstack/ceilometer-0" Jan 26 16:03:47 crc kubenswrapper[4896]: I0126 16:03:47.031550 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg9pd\" (UniqueName: \"kubernetes.io/projected/5aa9201a-49a7-4184-9655-2ebc57a9990b-kube-api-access-xg9pd\") pod \"ceilometer-0\" (UID: \"5aa9201a-49a7-4184-9655-2ebc57a9990b\") " pod="openstack/ceilometer-0" Jan 26 16:03:47 crc kubenswrapper[4896]: I0126 16:03:47.031875 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5aa9201a-49a7-4184-9655-2ebc57a9990b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5aa9201a-49a7-4184-9655-2ebc57a9990b\") " pod="openstack/ceilometer-0" Jan 26 16:03:47 crc kubenswrapper[4896]: I0126 16:03:47.031942 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5aa9201a-49a7-4184-9655-2ebc57a9990b-log-httpd\") pod \"ceilometer-0\" (UID: \"5aa9201a-49a7-4184-9655-2ebc57a9990b\") " pod="openstack/ceilometer-0" Jan 26 16:03:47 crc kubenswrapper[4896]: I0126 16:03:47.032959 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5aa9201a-49a7-4184-9655-2ebc57a9990b-log-httpd\") pod \"ceilometer-0\" (UID: \"5aa9201a-49a7-4184-9655-2ebc57a9990b\") " pod="openstack/ceilometer-0" Jan 26 
16:03:47 crc kubenswrapper[4896]: I0126 16:03:47.032921 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5aa9201a-49a7-4184-9655-2ebc57a9990b-run-httpd\") pod \"ceilometer-0\" (UID: \"5aa9201a-49a7-4184-9655-2ebc57a9990b\") " pod="openstack/ceilometer-0" Jan 26 16:03:47 crc kubenswrapper[4896]: I0126 16:03:47.047031 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aa9201a-49a7-4184-9655-2ebc57a9990b-config-data\") pod \"ceilometer-0\" (UID: \"5aa9201a-49a7-4184-9655-2ebc57a9990b\") " pod="openstack/ceilometer-0" Jan 26 16:03:47 crc kubenswrapper[4896]: I0126 16:03:47.047739 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aa9201a-49a7-4184-9655-2ebc57a9990b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5aa9201a-49a7-4184-9655-2ebc57a9990b\") " pod="openstack/ceilometer-0" Jan 26 16:03:47 crc kubenswrapper[4896]: I0126 16:03:47.047876 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5aa9201a-49a7-4184-9655-2ebc57a9990b-scripts\") pod \"ceilometer-0\" (UID: \"5aa9201a-49a7-4184-9655-2ebc57a9990b\") " pod="openstack/ceilometer-0" Jan 26 16:03:47 crc kubenswrapper[4896]: I0126 16:03:47.050081 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg9pd\" (UniqueName: \"kubernetes.io/projected/5aa9201a-49a7-4184-9655-2ebc57a9990b-kube-api-access-xg9pd\") pod \"ceilometer-0\" (UID: \"5aa9201a-49a7-4184-9655-2ebc57a9990b\") " pod="openstack/ceilometer-0" Jan 26 16:03:47 crc kubenswrapper[4896]: I0126 16:03:47.050725 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3d27f14-359d-4ed0-91f5-107202d86bbb" path="/var/lib/kubelet/pods/b3d27f14-359d-4ed0-91f5-107202d86bbb/volumes" Jan 26 16:03:47 crc 
kubenswrapper[4896]: I0126 16:03:47.051621 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:03:47 crc kubenswrapper[4896]: I0126 16:03:47.051724 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 16:03:47 crc kubenswrapper[4896]: I0126 16:03:47.051793 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 16:03:47 crc kubenswrapper[4896]: I0126 16:03:47.053322 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5aa9201a-49a7-4184-9655-2ebc57a9990b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5aa9201a-49a7-4184-9655-2ebc57a9990b\") " pod="openstack/ceilometer-0" Jan 26 16:03:47 crc kubenswrapper[4896]: I0126 16:03:47.061289 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 16:03:47 crc kubenswrapper[4896]: I0126 16:03:47.062040 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 16:03:47 crc kubenswrapper[4896]: I0126 16:03:47.167464 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:03:48 crc kubenswrapper[4896]: E0126 16:03:48.196908 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff4590bd_1807_4ea4_8bc6_303844d873f1.slice/crio-abd55c23df36d63a95bc59f0abfd1c9b6b29da5b1d334037ff5c084ef0b06768.scope\": RecentStats: unable to find data in memory cache]" Jan 26 16:03:48 crc kubenswrapper[4896]: E0126 16:03:48.241944 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff4590bd_1807_4ea4_8bc6_303844d873f1.slice/crio-abd55c23df36d63a95bc59f0abfd1c9b6b29da5b1d334037ff5c084ef0b06768.scope\": RecentStats: unable to find data in memory cache]" Jan 26 16:03:48 crc kubenswrapper[4896]: I0126 16:03:48.273518 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:03:48 crc kubenswrapper[4896]: I0126 16:03:48.629855 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5aa9201a-49a7-4184-9655-2ebc57a9990b","Type":"ContainerStarted","Data":"362cde032ff5caa40a1bff7cc65d579e5ac2c5bb3928de168f9625e1c4c8f8ad"} Jan 26 16:03:48 crc kubenswrapper[4896]: I0126 16:03:48.656906 4896 generic.go:334] "Generic (PLEG): container finished" podID="ff4590bd-1807-4ea4-8bc6-303844d873f1" containerID="abd55c23df36d63a95bc59f0abfd1c9b6b29da5b1d334037ff5c084ef0b06768" exitCode=137 Jan 26 16:03:48 crc kubenswrapper[4896]: I0126 16:03:48.657669 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ff4590bd-1807-4ea4-8bc6-303844d873f1","Type":"ContainerDied","Data":"abd55c23df36d63a95bc59f0abfd1c9b6b29da5b1d334037ff5c084ef0b06768"} Jan 26 16:03:49 crc kubenswrapper[4896]: I0126 16:03:49.325736 4896 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:03:49 crc kubenswrapper[4896]: I0126 16:03:49.486575 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff4590bd-1807-4ea4-8bc6-303844d873f1-combined-ca-bundle\") pod \"ff4590bd-1807-4ea4-8bc6-303844d873f1\" (UID: \"ff4590bd-1807-4ea4-8bc6-303844d873f1\") " Jan 26 16:03:49 crc kubenswrapper[4896]: I0126 16:03:49.486754 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff4590bd-1807-4ea4-8bc6-303844d873f1-config-data\") pod \"ff4590bd-1807-4ea4-8bc6-303844d873f1\" (UID: \"ff4590bd-1807-4ea4-8bc6-303844d873f1\") " Jan 26 16:03:49 crc kubenswrapper[4896]: I0126 16:03:49.486852 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7zg6\" (UniqueName: \"kubernetes.io/projected/ff4590bd-1807-4ea4-8bc6-303844d873f1-kube-api-access-p7zg6\") pod \"ff4590bd-1807-4ea4-8bc6-303844d873f1\" (UID: \"ff4590bd-1807-4ea4-8bc6-303844d873f1\") " Jan 26 16:03:49 crc kubenswrapper[4896]: I0126 16:03:49.504272 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff4590bd-1807-4ea4-8bc6-303844d873f1-kube-api-access-p7zg6" (OuterVolumeSpecName: "kube-api-access-p7zg6") pod "ff4590bd-1807-4ea4-8bc6-303844d873f1" (UID: "ff4590bd-1807-4ea4-8bc6-303844d873f1"). InnerVolumeSpecName "kube-api-access-p7zg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:03:49 crc kubenswrapper[4896]: I0126 16:03:49.579435 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff4590bd-1807-4ea4-8bc6-303844d873f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff4590bd-1807-4ea4-8bc6-303844d873f1" (UID: "ff4590bd-1807-4ea4-8bc6-303844d873f1"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:03:49 crc kubenswrapper[4896]: I0126 16:03:49.586846 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff4590bd-1807-4ea4-8bc6-303844d873f1-config-data" (OuterVolumeSpecName: "config-data") pod "ff4590bd-1807-4ea4-8bc6-303844d873f1" (UID: "ff4590bd-1807-4ea4-8bc6-303844d873f1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:03:49 crc kubenswrapper[4896]: I0126 16:03:49.590672 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7zg6\" (UniqueName: \"kubernetes.io/projected/ff4590bd-1807-4ea4-8bc6-303844d873f1-kube-api-access-p7zg6\") on node \"crc\" DevicePath \"\"" Jan 26 16:03:49 crc kubenswrapper[4896]: I0126 16:03:49.590730 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff4590bd-1807-4ea4-8bc6-303844d873f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:03:49 crc kubenswrapper[4896]: I0126 16:03:49.593286 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff4590bd-1807-4ea4-8bc6-303844d873f1-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:03:49 crc kubenswrapper[4896]: I0126 16:03:49.686230 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:03:49 crc kubenswrapper[4896]: I0126 16:03:49.686430 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ff4590bd-1807-4ea4-8bc6-303844d873f1","Type":"ContainerDied","Data":"cba1399f379e9b2c42ac80beed515a24288764325d00bfd038e97f8c49cd4c78"} Jan 26 16:03:49 crc kubenswrapper[4896]: I0126 16:03:49.691732 4896 scope.go:117] "RemoveContainer" containerID="abd55c23df36d63a95bc59f0abfd1c9b6b29da5b1d334037ff5c084ef0b06768" Jan 26 16:03:49 crc kubenswrapper[4896]: I0126 16:03:49.710246 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5aa9201a-49a7-4184-9655-2ebc57a9990b","Type":"ContainerStarted","Data":"e8feda0c5756658298ac36247615e42688d277c630ccd66c1c4a93374b3998d3"} Jan 26 16:03:50 crc kubenswrapper[4896]: I0126 16:03:50.091978 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 16:03:50 crc kubenswrapper[4896]: I0126 16:03:50.117241 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 16:03:50 crc kubenswrapper[4896]: I0126 16:03:50.138656 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 16:03:50 crc kubenswrapper[4896]: E0126 16:03:50.139392 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff4590bd-1807-4ea4-8bc6-303844d873f1" containerName="nova-cell1-novncproxy-novncproxy" Jan 26 16:03:50 crc kubenswrapper[4896]: I0126 16:03:50.139417 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff4590bd-1807-4ea4-8bc6-303844d873f1" containerName="nova-cell1-novncproxy-novncproxy" Jan 26 16:03:50 crc kubenswrapper[4896]: I0126 16:03:50.139743 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff4590bd-1807-4ea4-8bc6-303844d873f1" containerName="nova-cell1-novncproxy-novncproxy" Jan 26 16:03:50 crc 
kubenswrapper[4896]: I0126 16:03:50.140933 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:03:50 crc kubenswrapper[4896]: I0126 16:03:50.143840 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 26 16:03:50 crc kubenswrapper[4896]: I0126 16:03:50.144788 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 26 16:03:50 crc kubenswrapper[4896]: I0126 16:03:50.145097 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 26 16:03:50 crc kubenswrapper[4896]: I0126 16:03:50.159230 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 16:03:50 crc kubenswrapper[4896]: I0126 16:03:50.196561 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9b06167-680e-4c53-9611-d0f91a737d9e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b9b06167-680e-4c53-9611-d0f91a737d9e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:03:50 crc kubenswrapper[4896]: I0126 16:03:50.196669 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9b06167-680e-4c53-9611-d0f91a737d9e-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b9b06167-680e-4c53-9611-d0f91a737d9e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:03:50 crc kubenswrapper[4896]: I0126 16:03:50.196722 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9b06167-680e-4c53-9611-d0f91a737d9e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"b9b06167-680e-4c53-9611-d0f91a737d9e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:03:50 crc kubenswrapper[4896]: I0126 16:03:50.196757 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9b06167-680e-4c53-9611-d0f91a737d9e-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b9b06167-680e-4c53-9611-d0f91a737d9e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:03:50 crc kubenswrapper[4896]: I0126 16:03:50.196788 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q97k\" (UniqueName: \"kubernetes.io/projected/b9b06167-680e-4c53-9611-d0f91a737d9e-kube-api-access-6q97k\") pod \"nova-cell1-novncproxy-0\" (UID: \"b9b06167-680e-4c53-9611-d0f91a737d9e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:03:50 crc kubenswrapper[4896]: I0126 16:03:50.300932 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9b06167-680e-4c53-9611-d0f91a737d9e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b9b06167-680e-4c53-9611-d0f91a737d9e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:03:50 crc kubenswrapper[4896]: I0126 16:03:50.301038 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9b06167-680e-4c53-9611-d0f91a737d9e-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b9b06167-680e-4c53-9611-d0f91a737d9e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:03:50 crc kubenswrapper[4896]: I0126 16:03:50.301118 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9b06167-680e-4c53-9611-d0f91a737d9e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"b9b06167-680e-4c53-9611-d0f91a737d9e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:03:50 crc kubenswrapper[4896]: I0126 16:03:50.301180 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9b06167-680e-4c53-9611-d0f91a737d9e-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b9b06167-680e-4c53-9611-d0f91a737d9e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:03:50 crc kubenswrapper[4896]: I0126 16:03:50.301204 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q97k\" (UniqueName: \"kubernetes.io/projected/b9b06167-680e-4c53-9611-d0f91a737d9e-kube-api-access-6q97k\") pod \"nova-cell1-novncproxy-0\" (UID: \"b9b06167-680e-4c53-9611-d0f91a737d9e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:03:50 crc kubenswrapper[4896]: I0126 16:03:50.316258 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9b06167-680e-4c53-9611-d0f91a737d9e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b9b06167-680e-4c53-9611-d0f91a737d9e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:03:50 crc kubenswrapper[4896]: I0126 16:03:50.316309 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9b06167-680e-4c53-9611-d0f91a737d9e-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b9b06167-680e-4c53-9611-d0f91a737d9e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:03:50 crc kubenswrapper[4896]: I0126 16:03:50.318992 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9b06167-680e-4c53-9611-d0f91a737d9e-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b9b06167-680e-4c53-9611-d0f91a737d9e\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:03:50 crc kubenswrapper[4896]: I0126 16:03:50.321525 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9b06167-680e-4c53-9611-d0f91a737d9e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b9b06167-680e-4c53-9611-d0f91a737d9e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:03:50 crc kubenswrapper[4896]: I0126 16:03:50.343257 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q97k\" (UniqueName: \"kubernetes.io/projected/b9b06167-680e-4c53-9611-d0f91a737d9e-kube-api-access-6q97k\") pod \"nova-cell1-novncproxy-0\" (UID: \"b9b06167-680e-4c53-9611-d0f91a737d9e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:03:50 crc kubenswrapper[4896]: I0126 16:03:50.467527 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:03:50 crc kubenswrapper[4896]: I0126 16:03:50.783369 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff4590bd-1807-4ea4-8bc6-303844d873f1" path="/var/lib/kubelet/pods/ff4590bd-1807-4ea4-8bc6-303844d873f1/volumes" Jan 26 16:03:51 crc kubenswrapper[4896]: I0126 16:03:51.609178 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 16:03:51 crc kubenswrapper[4896]: I0126 16:03:51.610295 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 16:03:51 crc kubenswrapper[4896]: I0126 16:03:51.615225 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 16:03:51 crc kubenswrapper[4896]: I0126 16:03:51.627350 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 16:03:51 crc kubenswrapper[4896]: I0126 16:03:51.990338 4896 scope.go:117] "RemoveContainer" 
containerID="eef508224f0cdcfb0579b0234e72c3c5503ce5cf1713a9bee24c9feccf4983cb" Jan 26 16:03:51 crc kubenswrapper[4896]: E0126 16:03:51.991080 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:03:52 crc kubenswrapper[4896]: I0126 16:03:52.087049 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="6146c84f-0372-46f8-86e2-da5186ab20bf" containerName="aodh-api" containerID="cri-o://b903d5045ffecf6552d0270041561a960d63b8478844ecf067b60e432b3d1b26" gracePeriod=30 Jan 26 16:03:52 crc kubenswrapper[4896]: I0126 16:03:52.087298 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6146c84f-0372-46f8-86e2-da5186ab20bf","Type":"ContainerStarted","Data":"c83c951311fb49f64e5993931600f144ef37335364a3c5e2b1d4811f1254cee2"} Jan 26 16:03:52 crc kubenswrapper[4896]: I0126 16:03:52.087324 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 16:03:52 crc kubenswrapper[4896]: I0126 16:03:52.089040 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="6146c84f-0372-46f8-86e2-da5186ab20bf" containerName="aodh-listener" containerID="cri-o://c83c951311fb49f64e5993931600f144ef37335364a3c5e2b1d4811f1254cee2" gracePeriod=30 Jan 26 16:03:52 crc kubenswrapper[4896]: I0126 16:03:52.089282 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="6146c84f-0372-46f8-86e2-da5186ab20bf" containerName="aodh-notifier" 
containerID="cri-o://fbc88616d1618c8194bc32bcc5c94d4e852ed3928bda6f8e48fff07267ed0488" gracePeriod=30 Jan 26 16:03:52 crc kubenswrapper[4896]: I0126 16:03:52.089438 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="6146c84f-0372-46f8-86e2-da5186ab20bf" containerName="aodh-evaluator" containerID="cri-o://e80dd16be24f1c656284a2238d719b0eb6c0de27227196b21224f2d7729e1692" gracePeriod=30 Jan 26 16:03:52 crc kubenswrapper[4896]: I0126 16:03:52.093885 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 16:03:52 crc kubenswrapper[4896]: I0126 16:03:52.147770 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.4741212790000002 podStartE2EDuration="14.147737821s" podCreationTimestamp="2026-01-26 16:03:38 +0000 UTC" firstStartedPulling="2026-01-26 16:03:39.45601923 +0000 UTC m=+1777.237899623" lastFinishedPulling="2026-01-26 16:03:51.129635772 +0000 UTC m=+1788.911516165" observedRunningTime="2026-01-26 16:03:52.109722374 +0000 UTC m=+1789.891602777" watchObservedRunningTime="2026-01-26 16:03:52.147737821 +0000 UTC m=+1789.929618214" Jan 26 16:03:52 crc kubenswrapper[4896]: I0126 16:03:52.152101 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 16:03:52 crc kubenswrapper[4896]: I0126 16:03:52.533245 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-n9zsh"] Jan 26 16:03:52 crc kubenswrapper[4896]: I0126 16:03:52.535852 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-n9zsh" Jan 26 16:03:52 crc kubenswrapper[4896]: I0126 16:03:52.572608 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-n9zsh"] Jan 26 16:03:52 crc kubenswrapper[4896]: I0126 16:03:52.979801 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fcb644b1-07e5-4b54-9431-96c251d6875b-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-n9zsh\" (UID: \"fcb644b1-07e5-4b54-9431-96c251d6875b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-n9zsh" Jan 26 16:03:52 crc kubenswrapper[4896]: I0126 16:03:52.980096 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcb644b1-07e5-4b54-9431-96c251d6875b-config\") pod \"dnsmasq-dns-6b7bbf7cf9-n9zsh\" (UID: \"fcb644b1-07e5-4b54-9431-96c251d6875b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-n9zsh" Jan 26 16:03:52 crc kubenswrapper[4896]: I0126 16:03:52.980256 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pvnz\" (UniqueName: \"kubernetes.io/projected/fcb644b1-07e5-4b54-9431-96c251d6875b-kube-api-access-4pvnz\") pod \"dnsmasq-dns-6b7bbf7cf9-n9zsh\" (UID: \"fcb644b1-07e5-4b54-9431-96c251d6875b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-n9zsh" Jan 26 16:03:52 crc kubenswrapper[4896]: I0126 16:03:52.980309 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fcb644b1-07e5-4b54-9431-96c251d6875b-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-n9zsh\" (UID: \"fcb644b1-07e5-4b54-9431-96c251d6875b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-n9zsh" Jan 26 16:03:52 crc kubenswrapper[4896]: I0126 16:03:52.980407 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fcb644b1-07e5-4b54-9431-96c251d6875b-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-n9zsh\" (UID: \"fcb644b1-07e5-4b54-9431-96c251d6875b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-n9zsh" Jan 26 16:03:52 crc kubenswrapper[4896]: I0126 16:03:52.980446 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fcb644b1-07e5-4b54-9431-96c251d6875b-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-n9zsh\" (UID: \"fcb644b1-07e5-4b54-9431-96c251d6875b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-n9zsh" Jan 26 16:03:53 crc kubenswrapper[4896]: I0126 16:03:53.127392 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fcb644b1-07e5-4b54-9431-96c251d6875b-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-n9zsh\" (UID: \"fcb644b1-07e5-4b54-9431-96c251d6875b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-n9zsh" Jan 26 16:03:53 crc kubenswrapper[4896]: I0126 16:03:53.127519 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fcb644b1-07e5-4b54-9431-96c251d6875b-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-n9zsh\" (UID: \"fcb644b1-07e5-4b54-9431-96c251d6875b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-n9zsh" Jan 26 16:03:53 crc kubenswrapper[4896]: I0126 16:03:53.127554 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fcb644b1-07e5-4b54-9431-96c251d6875b-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-n9zsh\" (UID: \"fcb644b1-07e5-4b54-9431-96c251d6875b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-n9zsh" Jan 26 16:03:53 crc kubenswrapper[4896]: I0126 16:03:53.127672 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fcb644b1-07e5-4b54-9431-96c251d6875b-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-n9zsh\" (UID: \"fcb644b1-07e5-4b54-9431-96c251d6875b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-n9zsh" Jan 26 16:03:53 crc kubenswrapper[4896]: I0126 16:03:53.127709 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcb644b1-07e5-4b54-9431-96c251d6875b-config\") pod \"dnsmasq-dns-6b7bbf7cf9-n9zsh\" (UID: \"fcb644b1-07e5-4b54-9431-96c251d6875b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-n9zsh" Jan 26 16:03:53 crc kubenswrapper[4896]: I0126 16:03:53.127919 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pvnz\" (UniqueName: \"kubernetes.io/projected/fcb644b1-07e5-4b54-9431-96c251d6875b-kube-api-access-4pvnz\") pod \"dnsmasq-dns-6b7bbf7cf9-n9zsh\" (UID: \"fcb644b1-07e5-4b54-9431-96c251d6875b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-n9zsh" Jan 26 16:03:53 crc kubenswrapper[4896]: I0126 16:03:53.133237 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fcb644b1-07e5-4b54-9431-96c251d6875b-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-n9zsh\" (UID: \"fcb644b1-07e5-4b54-9431-96c251d6875b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-n9zsh" Jan 26 16:03:53 crc kubenswrapper[4896]: I0126 16:03:53.138905 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fcb644b1-07e5-4b54-9431-96c251d6875b-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-n9zsh\" (UID: \"fcb644b1-07e5-4b54-9431-96c251d6875b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-n9zsh" Jan 26 16:03:53 crc kubenswrapper[4896]: I0126 16:03:53.139547 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcb644b1-07e5-4b54-9431-96c251d6875b-config\") pod 
\"dnsmasq-dns-6b7bbf7cf9-n9zsh\" (UID: \"fcb644b1-07e5-4b54-9431-96c251d6875b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-n9zsh" Jan 26 16:03:53 crc kubenswrapper[4896]: I0126 16:03:53.140151 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fcb644b1-07e5-4b54-9431-96c251d6875b-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-n9zsh\" (UID: \"fcb644b1-07e5-4b54-9431-96c251d6875b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-n9zsh" Jan 26 16:03:53 crc kubenswrapper[4896]: I0126 16:03:53.148212 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fcb644b1-07e5-4b54-9431-96c251d6875b-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-n9zsh\" (UID: \"fcb644b1-07e5-4b54-9431-96c251d6875b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-n9zsh" Jan 26 16:03:53 crc kubenswrapper[4896]: I0126 16:03:53.185932 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pvnz\" (UniqueName: \"kubernetes.io/projected/fcb644b1-07e5-4b54-9431-96c251d6875b-kube-api-access-4pvnz\") pod \"dnsmasq-dns-6b7bbf7cf9-n9zsh\" (UID: \"fcb644b1-07e5-4b54-9431-96c251d6875b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-n9zsh" Jan 26 16:03:53 crc kubenswrapper[4896]: I0126 16:03:53.189348 4896 generic.go:334] "Generic (PLEG): container finished" podID="6146c84f-0372-46f8-86e2-da5186ab20bf" containerID="b903d5045ffecf6552d0270041561a960d63b8478844ecf067b60e432b3d1b26" exitCode=0 Jan 26 16:03:53 crc kubenswrapper[4896]: I0126 16:03:53.230065 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.230031297 podStartE2EDuration="3.230031297s" podCreationTimestamp="2026-01-26 16:03:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 
16:03:53.209954785 +0000 UTC m=+1790.991835178" watchObservedRunningTime="2026-01-26 16:03:53.230031297 +0000 UTC m=+1791.011911690" Jan 26 16:03:53 crc kubenswrapper[4896]: I0126 16:03:53.335236 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b9b06167-680e-4c53-9611-d0f91a737d9e","Type":"ContainerStarted","Data":"c320debcdb8df01e951311c6387c2da4c0e6da7ea41d54b3970b7e80e0c1ae52"} Jan 26 16:03:53 crc kubenswrapper[4896]: I0126 16:03:53.335299 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b9b06167-680e-4c53-9611-d0f91a737d9e","Type":"ContainerStarted","Data":"de2dfdd1a8550893ed3133fc8950789af4abba12c6f904ced0d525f2a0481dd7"} Jan 26 16:03:53 crc kubenswrapper[4896]: I0126 16:03:53.335319 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6146c84f-0372-46f8-86e2-da5186ab20bf","Type":"ContainerDied","Data":"b903d5045ffecf6552d0270041561a960d63b8478844ecf067b60e432b3d1b26"} Jan 26 16:03:53 crc kubenswrapper[4896]: I0126 16:03:53.486943 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-n9zsh" Jan 26 16:03:54 crc kubenswrapper[4896]: I0126 16:03:54.211082 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5aa9201a-49a7-4184-9655-2ebc57a9990b","Type":"ContainerStarted","Data":"a236494699ef2d6167dfaa6efba39a8176df05b676cd994753bcf874df77e2b8"} Jan 26 16:03:54 crc kubenswrapper[4896]: I0126 16:03:54.215731 4896 generic.go:334] "Generic (PLEG): container finished" podID="6146c84f-0372-46f8-86e2-da5186ab20bf" containerID="fbc88616d1618c8194bc32bcc5c94d4e852ed3928bda6f8e48fff07267ed0488" exitCode=0 Jan 26 16:03:54 crc kubenswrapper[4896]: I0126 16:03:54.215776 4896 generic.go:334] "Generic (PLEG): container finished" podID="6146c84f-0372-46f8-86e2-da5186ab20bf" containerID="e80dd16be24f1c656284a2238d719b0eb6c0de27227196b21224f2d7729e1692" exitCode=0 Jan 26 16:03:54 crc kubenswrapper[4896]: I0126 16:03:54.217049 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6146c84f-0372-46f8-86e2-da5186ab20bf","Type":"ContainerDied","Data":"fbc88616d1618c8194bc32bcc5c94d4e852ed3928bda6f8e48fff07267ed0488"} Jan 26 16:03:54 crc kubenswrapper[4896]: I0126 16:03:54.217133 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6146c84f-0372-46f8-86e2-da5186ab20bf","Type":"ContainerDied","Data":"e80dd16be24f1c656284a2238d719b0eb6c0de27227196b21224f2d7729e1692"} Jan 26 16:03:54 crc kubenswrapper[4896]: I0126 16:03:54.425508 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-n9zsh"] Jan 26 16:03:54 crc kubenswrapper[4896]: W0126 16:03:54.425611 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfcb644b1_07e5_4b54_9431_96c251d6875b.slice/crio-07d17b4484d68f813e482457c90d9ce99c8e0fc3584378a02122c819ad017e4e WatchSource:0}: Error finding container 
07d17b4484d68f813e482457c90d9ce99c8e0fc3584378a02122c819ad017e4e: Status 404 returned error can't find the container with id 07d17b4484d68f813e482457c90d9ce99c8e0fc3584378a02122c819ad017e4e Jan 26 16:03:55 crc kubenswrapper[4896]: I0126 16:03:55.237081 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5aa9201a-49a7-4184-9655-2ebc57a9990b","Type":"ContainerStarted","Data":"b2d02313dca91665b2ea0f78af2d2158e0b68cf5bc4b43cdd286734e1f679c6e"} Jan 26 16:03:55 crc kubenswrapper[4896]: I0126 16:03:55.239811 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-n9zsh" event={"ID":"fcb644b1-07e5-4b54-9431-96c251d6875b","Type":"ContainerStarted","Data":"07d17b4484d68f813e482457c90d9ce99c8e0fc3584378a02122c819ad017e4e"} Jan 26 16:03:55 crc kubenswrapper[4896]: I0126 16:03:55.468600 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:03:56 crc kubenswrapper[4896]: I0126 16:03:56.254385 4896 generic.go:334] "Generic (PLEG): container finished" podID="fcb644b1-07e5-4b54-9431-96c251d6875b" containerID="feb30a12f2b43889b02ee9c2446cb23dc185d627214d3d73b8c43a6e16b617f2" exitCode=0 Jan 26 16:03:56 crc kubenswrapper[4896]: I0126 16:03:56.254439 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-n9zsh" event={"ID":"fcb644b1-07e5-4b54-9431-96c251d6875b","Type":"ContainerDied","Data":"feb30a12f2b43889b02ee9c2446cb23dc185d627214d3d73b8c43a6e16b617f2"} Jan 26 16:03:58 crc kubenswrapper[4896]: I0126 16:03:58.204771 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 16:03:58 crc kubenswrapper[4896]: I0126 16:03:58.205390 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b06f9e75-223d-4cbb-81db-d4dd2887dab2" containerName="nova-api-log" 
containerID="cri-o://f293aa4b4ac28b02164b770af49f6af7c00006a9ab2d2c01e1692ddb836bb3e9" gracePeriod=30 Jan 26 16:03:58 crc kubenswrapper[4896]: I0126 16:03:58.205486 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b06f9e75-223d-4cbb-81db-d4dd2887dab2" containerName="nova-api-api" containerID="cri-o://3e4156446fff160cae883d6e4e45fa9ec75cdda0d8de218fb62502a1e9a9bf34" gracePeriod=30 Jan 26 16:03:58 crc kubenswrapper[4896]: I0126 16:03:58.285286 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5aa9201a-49a7-4184-9655-2ebc57a9990b","Type":"ContainerStarted","Data":"bc7eb8d6059789f0e9b888c4410cf160a6a627b9e52935ebe366d384621e9385"} Jan 26 16:03:58 crc kubenswrapper[4896]: I0126 16:03:58.289911 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 16:03:58 crc kubenswrapper[4896]: I0126 16:03:58.339852 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.66411146 podStartE2EDuration="12.33982797s" podCreationTimestamp="2026-01-26 16:03:46 +0000 UTC" firstStartedPulling="2026-01-26 16:03:48.256975209 +0000 UTC m=+1786.038855602" lastFinishedPulling="2026-01-26 16:03:56.932691719 +0000 UTC m=+1794.714572112" observedRunningTime="2026-01-26 16:03:58.3202293 +0000 UTC m=+1796.102109693" watchObservedRunningTime="2026-01-26 16:03:58.33982797 +0000 UTC m=+1796.121708353" Jan 26 16:03:58 crc kubenswrapper[4896]: I0126 16:03:58.345233 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-n9zsh" event={"ID":"fcb644b1-07e5-4b54-9431-96c251d6875b","Type":"ContainerStarted","Data":"4e435feb5abeabe0e42b0a60b5a78820c39981e6abcfa10378dc622af9029cc6"} Jan 26 16:03:58 crc kubenswrapper[4896]: I0126 16:03:58.346486 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-6b7bbf7cf9-n9zsh" Jan 26 16:03:58 crc kubenswrapper[4896]: I0126 16:03:58.398069 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7bbf7cf9-n9zsh" podStartSLOduration=6.398048704 podStartE2EDuration="6.398048704s" podCreationTimestamp="2026-01-26 16:03:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:03:58.386429384 +0000 UTC m=+1796.168309777" watchObservedRunningTime="2026-01-26 16:03:58.398048704 +0000 UTC m=+1796.179929097" Jan 26 16:03:59 crc kubenswrapper[4896]: I0126 16:03:59.364406 4896 generic.go:334] "Generic (PLEG): container finished" podID="b06f9e75-223d-4cbb-81db-d4dd2887dab2" containerID="f293aa4b4ac28b02164b770af49f6af7c00006a9ab2d2c01e1692ddb836bb3e9" exitCode=143 Jan 26 16:03:59 crc kubenswrapper[4896]: I0126 16:03:59.365915 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b06f9e75-223d-4cbb-81db-d4dd2887dab2","Type":"ContainerDied","Data":"f293aa4b4ac28b02164b770af49f6af7c00006a9ab2d2c01e1692ddb836bb3e9"} Jan 26 16:04:00 crc kubenswrapper[4896]: I0126 16:04:00.468915 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:04:00 crc kubenswrapper[4896]: I0126 16:04:00.492330 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:04:00 crc kubenswrapper[4896]: I0126 16:04:00.676286 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:04:01 crc kubenswrapper[4896]: I0126 16:04:01.395220 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5aa9201a-49a7-4184-9655-2ebc57a9990b" containerName="ceilometer-central-agent" 
containerID="cri-o://e8feda0c5756658298ac36247615e42688d277c630ccd66c1c4a93374b3998d3" gracePeriod=30 Jan 26 16:04:01 crc kubenswrapper[4896]: I0126 16:04:01.395876 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5aa9201a-49a7-4184-9655-2ebc57a9990b" containerName="ceilometer-notification-agent" containerID="cri-o://a236494699ef2d6167dfaa6efba39a8176df05b676cd994753bcf874df77e2b8" gracePeriod=30 Jan 26 16:04:01 crc kubenswrapper[4896]: I0126 16:04:01.395884 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5aa9201a-49a7-4184-9655-2ebc57a9990b" containerName="proxy-httpd" containerID="cri-o://bc7eb8d6059789f0e9b888c4410cf160a6a627b9e52935ebe366d384621e9385" gracePeriod=30 Jan 26 16:04:01 crc kubenswrapper[4896]: I0126 16:04:01.395973 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5aa9201a-49a7-4184-9655-2ebc57a9990b" containerName="sg-core" containerID="cri-o://b2d02313dca91665b2ea0f78af2d2158e0b68cf5bc4b43cdd286734e1f679c6e" gracePeriod=30 Jan 26 16:04:01 crc kubenswrapper[4896]: I0126 16:04:01.442746 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 26 16:04:01 crc kubenswrapper[4896]: I0126 16:04:01.595487 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="b06f9e75-223d-4cbb-81db-d4dd2887dab2" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.1:8774/\": dial tcp 10.217.1.1:8774: connect: connection refused" Jan 26 16:04:01 crc kubenswrapper[4896]: I0126 16:04:01.596088 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="b06f9e75-223d-4cbb-81db-d4dd2887dab2" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.1:8774/\": dial tcp 10.217.1.1:8774: connect: connection 
refused" Jan 26 16:04:01 crc kubenswrapper[4896]: I0126 16:04:01.701635 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-bg2nz"] Jan 26 16:04:01 crc kubenswrapper[4896]: I0126 16:04:01.706672 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bg2nz" Jan 26 16:04:01 crc kubenswrapper[4896]: I0126 16:04:01.717202 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 26 16:04:01 crc kubenswrapper[4896]: I0126 16:04:01.718005 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 26 16:04:01 crc kubenswrapper[4896]: I0126 16:04:01.749655 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-bg2nz"] Jan 26 16:04:01 crc kubenswrapper[4896]: I0126 16:04:01.904878 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a3debd8-ed65-47af-a900-21fd61003d8b-scripts\") pod \"nova-cell1-cell-mapping-bg2nz\" (UID: \"9a3debd8-ed65-47af-a900-21fd61003d8b\") " pod="openstack/nova-cell1-cell-mapping-bg2nz" Jan 26 16:04:01 crc kubenswrapper[4896]: I0126 16:04:01.905115 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a3debd8-ed65-47af-a900-21fd61003d8b-config-data\") pod \"nova-cell1-cell-mapping-bg2nz\" (UID: \"9a3debd8-ed65-47af-a900-21fd61003d8b\") " pod="openstack/nova-cell1-cell-mapping-bg2nz" Jan 26 16:04:01 crc kubenswrapper[4896]: I0126 16:04:01.905276 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a3debd8-ed65-47af-a900-21fd61003d8b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bg2nz\" (UID: 
\"9a3debd8-ed65-47af-a900-21fd61003d8b\") " pod="openstack/nova-cell1-cell-mapping-bg2nz" Jan 26 16:04:01 crc kubenswrapper[4896]: I0126 16:04:01.905365 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x978j\" (UniqueName: \"kubernetes.io/projected/9a3debd8-ed65-47af-a900-21fd61003d8b-kube-api-access-x978j\") pod \"nova-cell1-cell-mapping-bg2nz\" (UID: \"9a3debd8-ed65-47af-a900-21fd61003d8b\") " pod="openstack/nova-cell1-cell-mapping-bg2nz" Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.007423 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a3debd8-ed65-47af-a900-21fd61003d8b-config-data\") pod \"nova-cell1-cell-mapping-bg2nz\" (UID: \"9a3debd8-ed65-47af-a900-21fd61003d8b\") " pod="openstack/nova-cell1-cell-mapping-bg2nz" Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.007878 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a3debd8-ed65-47af-a900-21fd61003d8b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bg2nz\" (UID: \"9a3debd8-ed65-47af-a900-21fd61003d8b\") " pod="openstack/nova-cell1-cell-mapping-bg2nz" Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.007922 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x978j\" (UniqueName: \"kubernetes.io/projected/9a3debd8-ed65-47af-a900-21fd61003d8b-kube-api-access-x978j\") pod \"nova-cell1-cell-mapping-bg2nz\" (UID: \"9a3debd8-ed65-47af-a900-21fd61003d8b\") " pod="openstack/nova-cell1-cell-mapping-bg2nz" Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.008016 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a3debd8-ed65-47af-a900-21fd61003d8b-scripts\") pod \"nova-cell1-cell-mapping-bg2nz\" (UID: 
\"9a3debd8-ed65-47af-a900-21fd61003d8b\") " pod="openstack/nova-cell1-cell-mapping-bg2nz" Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.014000 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a3debd8-ed65-47af-a900-21fd61003d8b-scripts\") pod \"nova-cell1-cell-mapping-bg2nz\" (UID: \"9a3debd8-ed65-47af-a900-21fd61003d8b\") " pod="openstack/nova-cell1-cell-mapping-bg2nz" Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.015298 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a3debd8-ed65-47af-a900-21fd61003d8b-config-data\") pod \"nova-cell1-cell-mapping-bg2nz\" (UID: \"9a3debd8-ed65-47af-a900-21fd61003d8b\") " pod="openstack/nova-cell1-cell-mapping-bg2nz" Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.016752 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a3debd8-ed65-47af-a900-21fd61003d8b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bg2nz\" (UID: \"9a3debd8-ed65-47af-a900-21fd61003d8b\") " pod="openstack/nova-cell1-cell-mapping-bg2nz" Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.029521 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x978j\" (UniqueName: \"kubernetes.io/projected/9a3debd8-ed65-47af-a900-21fd61003d8b-kube-api-access-x978j\") pod \"nova-cell1-cell-mapping-bg2nz\" (UID: \"9a3debd8-ed65-47af-a900-21fd61003d8b\") " pod="openstack/nova-cell1-cell-mapping-bg2nz" Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.214043 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bg2nz" Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.276800 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.419730 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b06f9e75-223d-4cbb-81db-d4dd2887dab2-config-data\") pod \"b06f9e75-223d-4cbb-81db-d4dd2887dab2\" (UID: \"b06f9e75-223d-4cbb-81db-d4dd2887dab2\") " Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.420115 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmjxq\" (UniqueName: \"kubernetes.io/projected/b06f9e75-223d-4cbb-81db-d4dd2887dab2-kube-api-access-jmjxq\") pod \"b06f9e75-223d-4cbb-81db-d4dd2887dab2\" (UID: \"b06f9e75-223d-4cbb-81db-d4dd2887dab2\") " Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.421200 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b06f9e75-223d-4cbb-81db-d4dd2887dab2-logs" (OuterVolumeSpecName: "logs") pod "b06f9e75-223d-4cbb-81db-d4dd2887dab2" (UID: "b06f9e75-223d-4cbb-81db-d4dd2887dab2"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.421240 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b06f9e75-223d-4cbb-81db-d4dd2887dab2-logs\") pod \"b06f9e75-223d-4cbb-81db-d4dd2887dab2\" (UID: \"b06f9e75-223d-4cbb-81db-d4dd2887dab2\") " Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.421296 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b06f9e75-223d-4cbb-81db-d4dd2887dab2-combined-ca-bundle\") pod \"b06f9e75-223d-4cbb-81db-d4dd2887dab2\" (UID: \"b06f9e75-223d-4cbb-81db-d4dd2887dab2\") " Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.422734 4896 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b06f9e75-223d-4cbb-81db-d4dd2887dab2-logs\") on node \"crc\" DevicePath \"\"" Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.429222 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b06f9e75-223d-4cbb-81db-d4dd2887dab2-kube-api-access-jmjxq" (OuterVolumeSpecName: "kube-api-access-jmjxq") pod "b06f9e75-223d-4cbb-81db-d4dd2887dab2" (UID: "b06f9e75-223d-4cbb-81db-d4dd2887dab2"). InnerVolumeSpecName "kube-api-access-jmjxq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.460692 4896 generic.go:334] "Generic (PLEG): container finished" podID="b06f9e75-223d-4cbb-81db-d4dd2887dab2" containerID="3e4156446fff160cae883d6e4e45fa9ec75cdda0d8de218fb62502a1e9a9bf34" exitCode=0 Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.460769 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b06f9e75-223d-4cbb-81db-d4dd2887dab2","Type":"ContainerDied","Data":"3e4156446fff160cae883d6e4e45fa9ec75cdda0d8de218fb62502a1e9a9bf34"} Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.460797 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b06f9e75-223d-4cbb-81db-d4dd2887dab2","Type":"ContainerDied","Data":"79392f56be3e65d5b70bdfba8c9fef4c2be8cb0754fce1180f7fb447970e32e2"} Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.460814 4896 scope.go:117] "RemoveContainer" containerID="3e4156446fff160cae883d6e4e45fa9ec75cdda0d8de218fb62502a1e9a9bf34" Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.460826 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.476404 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b06f9e75-223d-4cbb-81db-d4dd2887dab2-config-data" (OuterVolumeSpecName: "config-data") pod "b06f9e75-223d-4cbb-81db-d4dd2887dab2" (UID: "b06f9e75-223d-4cbb-81db-d4dd2887dab2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.478432 4896 generic.go:334] "Generic (PLEG): container finished" podID="5aa9201a-49a7-4184-9655-2ebc57a9990b" containerID="bc7eb8d6059789f0e9b888c4410cf160a6a627b9e52935ebe366d384621e9385" exitCode=0 Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.478468 4896 generic.go:334] "Generic (PLEG): container finished" podID="5aa9201a-49a7-4184-9655-2ebc57a9990b" containerID="b2d02313dca91665b2ea0f78af2d2158e0b68cf5bc4b43cdd286734e1f679c6e" exitCode=2 Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.478478 4896 generic.go:334] "Generic (PLEG): container finished" podID="5aa9201a-49a7-4184-9655-2ebc57a9990b" containerID="a236494699ef2d6167dfaa6efba39a8176df05b676cd994753bcf874df77e2b8" exitCode=0 Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.478908 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5aa9201a-49a7-4184-9655-2ebc57a9990b","Type":"ContainerDied","Data":"bc7eb8d6059789f0e9b888c4410cf160a6a627b9e52935ebe366d384621e9385"} Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.478961 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5aa9201a-49a7-4184-9655-2ebc57a9990b","Type":"ContainerDied","Data":"b2d02313dca91665b2ea0f78af2d2158e0b68cf5bc4b43cdd286734e1f679c6e"} Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.478979 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5aa9201a-49a7-4184-9655-2ebc57a9990b","Type":"ContainerDied","Data":"a236494699ef2d6167dfaa6efba39a8176df05b676cd994753bcf874df77e2b8"} Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.491921 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b06f9e75-223d-4cbb-81db-d4dd2887dab2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod 
"b06f9e75-223d-4cbb-81db-d4dd2887dab2" (UID: "b06f9e75-223d-4cbb-81db-d4dd2887dab2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.524811 4896 scope.go:117] "RemoveContainer" containerID="f293aa4b4ac28b02164b770af49f6af7c00006a9ab2d2c01e1692ddb836bb3e9" Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.526171 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b06f9e75-223d-4cbb-81db-d4dd2887dab2-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.526200 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmjxq\" (UniqueName: \"kubernetes.io/projected/b06f9e75-223d-4cbb-81db-d4dd2887dab2-kube-api-access-jmjxq\") on node \"crc\" DevicePath \"\"" Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.526210 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b06f9e75-223d-4cbb-81db-d4dd2887dab2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.635676 4896 scope.go:117] "RemoveContainer" containerID="3e4156446fff160cae883d6e4e45fa9ec75cdda0d8de218fb62502a1e9a9bf34" Jan 26 16:04:02 crc kubenswrapper[4896]: E0126 16:04:02.636569 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e4156446fff160cae883d6e4e45fa9ec75cdda0d8de218fb62502a1e9a9bf34\": container with ID starting with 3e4156446fff160cae883d6e4e45fa9ec75cdda0d8de218fb62502a1e9a9bf34 not found: ID does not exist" containerID="3e4156446fff160cae883d6e4e45fa9ec75cdda0d8de218fb62502a1e9a9bf34" Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.636634 4896 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3e4156446fff160cae883d6e4e45fa9ec75cdda0d8de218fb62502a1e9a9bf34"} err="failed to get container status \"3e4156446fff160cae883d6e4e45fa9ec75cdda0d8de218fb62502a1e9a9bf34\": rpc error: code = NotFound desc = could not find container \"3e4156446fff160cae883d6e4e45fa9ec75cdda0d8de218fb62502a1e9a9bf34\": container with ID starting with 3e4156446fff160cae883d6e4e45fa9ec75cdda0d8de218fb62502a1e9a9bf34 not found: ID does not exist" Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.636654 4896 scope.go:117] "RemoveContainer" containerID="f293aa4b4ac28b02164b770af49f6af7c00006a9ab2d2c01e1692ddb836bb3e9" Jan 26 16:04:02 crc kubenswrapper[4896]: E0126 16:04:02.639869 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f293aa4b4ac28b02164b770af49f6af7c00006a9ab2d2c01e1692ddb836bb3e9\": container with ID starting with f293aa4b4ac28b02164b770af49f6af7c00006a9ab2d2c01e1692ddb836bb3e9 not found: ID does not exist" containerID="f293aa4b4ac28b02164b770af49f6af7c00006a9ab2d2c01e1692ddb836bb3e9" Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.639924 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f293aa4b4ac28b02164b770af49f6af7c00006a9ab2d2c01e1692ddb836bb3e9"} err="failed to get container status \"f293aa4b4ac28b02164b770af49f6af7c00006a9ab2d2c01e1692ddb836bb3e9\": rpc error: code = NotFound desc = could not find container \"f293aa4b4ac28b02164b770af49f6af7c00006a9ab2d2c01e1692ddb836bb3e9\": container with ID starting with f293aa4b4ac28b02164b770af49f6af7c00006a9ab2d2c01e1692ddb836bb3e9 not found: ID does not exist" Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.825524 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.844648 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 26 
16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.876646 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 16:04:02 crc kubenswrapper[4896]: E0126 16:04:02.877328 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b06f9e75-223d-4cbb-81db-d4dd2887dab2" containerName="nova-api-api" Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.877356 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="b06f9e75-223d-4cbb-81db-d4dd2887dab2" containerName="nova-api-api" Jan 26 16:04:02 crc kubenswrapper[4896]: E0126 16:04:02.877394 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b06f9e75-223d-4cbb-81db-d4dd2887dab2" containerName="nova-api-log" Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.877404 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="b06f9e75-223d-4cbb-81db-d4dd2887dab2" containerName="nova-api-log" Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.877752 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="b06f9e75-223d-4cbb-81db-d4dd2887dab2" containerName="nova-api-log" Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.877805 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="b06f9e75-223d-4cbb-81db-d4dd2887dab2" containerName="nova-api-api" Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.883998 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.889701 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.890139 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.890381 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.896868 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 16:04:02 crc kubenswrapper[4896]: I0126 16:04:02.916770 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-bg2nz"] Jan 26 16:04:03 crc kubenswrapper[4896]: I0126 16:04:03.163863 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2070310b-992d-4994-8615-cd65b2f46d01-public-tls-certs\") pod \"nova-api-0\" (UID: \"2070310b-992d-4994-8615-cd65b2f46d01\") " pod="openstack/nova-api-0" Jan 26 16:04:03 crc kubenswrapper[4896]: I0126 16:04:03.164097 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2070310b-992d-4994-8615-cd65b2f46d01-logs\") pod \"nova-api-0\" (UID: \"2070310b-992d-4994-8615-cd65b2f46d01\") " pod="openstack/nova-api-0" Jan 26 16:04:03 crc kubenswrapper[4896]: I0126 16:04:03.164204 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2070310b-992d-4994-8615-cd65b2f46d01-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2070310b-992d-4994-8615-cd65b2f46d01\") " pod="openstack/nova-api-0" Jan 26 16:04:03 crc 
kubenswrapper[4896]: I0126 16:04:03.164286 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2070310b-992d-4994-8615-cd65b2f46d01-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2070310b-992d-4994-8615-cd65b2f46d01\") " pod="openstack/nova-api-0" Jan 26 16:04:03 crc kubenswrapper[4896]: I0126 16:04:03.164726 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46pzj\" (UniqueName: \"kubernetes.io/projected/2070310b-992d-4994-8615-cd65b2f46d01-kube-api-access-46pzj\") pod \"nova-api-0\" (UID: \"2070310b-992d-4994-8615-cd65b2f46d01\") " pod="openstack/nova-api-0" Jan 26 16:04:03 crc kubenswrapper[4896]: I0126 16:04:03.164864 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2070310b-992d-4994-8615-cd65b2f46d01-config-data\") pod \"nova-api-0\" (UID: \"2070310b-992d-4994-8615-cd65b2f46d01\") " pod="openstack/nova-api-0" Jan 26 16:04:03 crc kubenswrapper[4896]: I0126 16:04:03.267068 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2070310b-992d-4994-8615-cd65b2f46d01-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2070310b-992d-4994-8615-cd65b2f46d01\") " pod="openstack/nova-api-0" Jan 26 16:04:03 crc kubenswrapper[4896]: I0126 16:04:03.267398 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2070310b-992d-4994-8615-cd65b2f46d01-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2070310b-992d-4994-8615-cd65b2f46d01\") " pod="openstack/nova-api-0" Jan 26 16:04:03 crc kubenswrapper[4896]: I0126 16:04:03.267664 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46pzj\" 
(UniqueName: \"kubernetes.io/projected/2070310b-992d-4994-8615-cd65b2f46d01-kube-api-access-46pzj\") pod \"nova-api-0\" (UID: \"2070310b-992d-4994-8615-cd65b2f46d01\") " pod="openstack/nova-api-0" Jan 26 16:04:03 crc kubenswrapper[4896]: I0126 16:04:03.267722 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2070310b-992d-4994-8615-cd65b2f46d01-config-data\") pod \"nova-api-0\" (UID: \"2070310b-992d-4994-8615-cd65b2f46d01\") " pod="openstack/nova-api-0" Jan 26 16:04:03 crc kubenswrapper[4896]: I0126 16:04:03.267789 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2070310b-992d-4994-8615-cd65b2f46d01-public-tls-certs\") pod \"nova-api-0\" (UID: \"2070310b-992d-4994-8615-cd65b2f46d01\") " pod="openstack/nova-api-0" Jan 26 16:04:03 crc kubenswrapper[4896]: I0126 16:04:03.267879 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2070310b-992d-4994-8615-cd65b2f46d01-logs\") pod \"nova-api-0\" (UID: \"2070310b-992d-4994-8615-cd65b2f46d01\") " pod="openstack/nova-api-0" Jan 26 16:04:03 crc kubenswrapper[4896]: I0126 16:04:03.268464 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2070310b-992d-4994-8615-cd65b2f46d01-logs\") pod \"nova-api-0\" (UID: \"2070310b-992d-4994-8615-cd65b2f46d01\") " pod="openstack/nova-api-0" Jan 26 16:04:03 crc kubenswrapper[4896]: I0126 16:04:03.274122 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 26 16:04:03 crc kubenswrapper[4896]: I0126 16:04:03.274311 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 26 16:04:03 crc kubenswrapper[4896]: I0126 16:04:03.275844 4896 reflector.go:368] Caches populated for 
*v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 16:04:03 crc kubenswrapper[4896]: I0126 16:04:03.290562 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2070310b-992d-4994-8615-cd65b2f46d01-config-data\") pod \"nova-api-0\" (UID: \"2070310b-992d-4994-8615-cd65b2f46d01\") " pod="openstack/nova-api-0" Jan 26 16:04:03 crc kubenswrapper[4896]: I0126 16:04:03.292084 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2070310b-992d-4994-8615-cd65b2f46d01-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2070310b-992d-4994-8615-cd65b2f46d01\") " pod="openstack/nova-api-0" Jan 26 16:04:03 crc kubenswrapper[4896]: I0126 16:04:03.293697 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2070310b-992d-4994-8615-cd65b2f46d01-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2070310b-992d-4994-8615-cd65b2f46d01\") " pod="openstack/nova-api-0" Jan 26 16:04:03 crc kubenswrapper[4896]: I0126 16:04:03.301299 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46pzj\" (UniqueName: \"kubernetes.io/projected/2070310b-992d-4994-8615-cd65b2f46d01-kube-api-access-46pzj\") pod \"nova-api-0\" (UID: \"2070310b-992d-4994-8615-cd65b2f46d01\") " pod="openstack/nova-api-0" Jan 26 16:04:03 crc kubenswrapper[4896]: I0126 16:04:03.305909 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2070310b-992d-4994-8615-cd65b2f46d01-public-tls-certs\") pod \"nova-api-0\" (UID: \"2070310b-992d-4994-8615-cd65b2f46d01\") " pod="openstack/nova-api-0" Jan 26 16:04:03 crc kubenswrapper[4896]: I0126 16:04:03.431839 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 16:04:03 crc kubenswrapper[4896]: I0126 16:04:03.491309 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7bbf7cf9-n9zsh" Jan 26 16:04:03 crc kubenswrapper[4896]: I0126 16:04:03.509437 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bg2nz" event={"ID":"9a3debd8-ed65-47af-a900-21fd61003d8b","Type":"ContainerStarted","Data":"25a7fc37b0a424dea293af971762e5b399234dfee81b37bc3d2cd6f2ce3b5ccd"} Jan 26 16:04:03 crc kubenswrapper[4896]: I0126 16:04:03.688076 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-xkmjm"] Jan 26 16:04:03 crc kubenswrapper[4896]: I0126 16:04:03.688719 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9b86998b5-xkmjm" podUID="9f64215e-328e-46a2-b3ee-4518c095ba5f" containerName="dnsmasq-dns" containerID="cri-o://edd3597a03e34145d5d65bb5f8d2e869a43e55a95cda96b2805733767ecca95b" gracePeriod=10 Jan 26 16:04:04 crc kubenswrapper[4896]: I0126 16:04:04.524332 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 16:04:04 crc kubenswrapper[4896]: I0126 16:04:04.600945 4896 generic.go:334] "Generic (PLEG): container finished" podID="9f64215e-328e-46a2-b3ee-4518c095ba5f" containerID="edd3597a03e34145d5d65bb5f8d2e869a43e55a95cda96b2805733767ecca95b" exitCode=0 Jan 26 16:04:04 crc kubenswrapper[4896]: I0126 16:04:04.601053 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-xkmjm" event={"ID":"9f64215e-328e-46a2-b3ee-4518c095ba5f","Type":"ContainerDied","Data":"edd3597a03e34145d5d65bb5f8d2e869a43e55a95cda96b2805733767ecca95b"} Jan 26 16:04:04 crc kubenswrapper[4896]: I0126 16:04:04.603591 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bg2nz" 
event={"ID":"9a3debd8-ed65-47af-a900-21fd61003d8b","Type":"ContainerStarted","Data":"cf297041519e868a88a3b452f177b093d699aa5e1b3b77b6837c3bee79bba189"} Jan 26 16:04:04 crc kubenswrapper[4896]: I0126 16:04:04.656218 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-bg2nz" podStartSLOduration=3.656195973 podStartE2EDuration="3.656195973s" podCreationTimestamp="2026-01-26 16:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:04:04.64413707 +0000 UTC m=+1802.426017463" watchObservedRunningTime="2026-01-26 16:04:04.656195973 +0000 UTC m=+1802.438076366" Jan 26 16:04:04 crc kubenswrapper[4896]: I0126 16:04:04.781498 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b06f9e75-223d-4cbb-81db-d4dd2887dab2" path="/var/lib/kubelet/pods/b06f9e75-223d-4cbb-81db-d4dd2887dab2/volumes" Jan 26 16:04:04 crc kubenswrapper[4896]: I0126 16:04:04.824020 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-xkmjm" Jan 26 16:04:04 crc kubenswrapper[4896]: I0126 16:04:04.885693 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f64215e-328e-46a2-b3ee-4518c095ba5f-config\") pod \"9f64215e-328e-46a2-b3ee-4518c095ba5f\" (UID: \"9f64215e-328e-46a2-b3ee-4518c095ba5f\") " Jan 26 16:04:04 crc kubenswrapper[4896]: I0126 16:04:04.885943 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f64215e-328e-46a2-b3ee-4518c095ba5f-ovsdbserver-sb\") pod \"9f64215e-328e-46a2-b3ee-4518c095ba5f\" (UID: \"9f64215e-328e-46a2-b3ee-4518c095ba5f\") " Jan 26 16:04:04 crc kubenswrapper[4896]: I0126 16:04:04.886006 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzj7l\" (UniqueName: \"kubernetes.io/projected/9f64215e-328e-46a2-b3ee-4518c095ba5f-kube-api-access-zzj7l\") pod \"9f64215e-328e-46a2-b3ee-4518c095ba5f\" (UID: \"9f64215e-328e-46a2-b3ee-4518c095ba5f\") " Jan 26 16:04:04 crc kubenswrapper[4896]: I0126 16:04:04.886028 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f64215e-328e-46a2-b3ee-4518c095ba5f-ovsdbserver-nb\") pod \"9f64215e-328e-46a2-b3ee-4518c095ba5f\" (UID: \"9f64215e-328e-46a2-b3ee-4518c095ba5f\") " Jan 26 16:04:04 crc kubenswrapper[4896]: I0126 16:04:04.886077 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f64215e-328e-46a2-b3ee-4518c095ba5f-dns-svc\") pod \"9f64215e-328e-46a2-b3ee-4518c095ba5f\" (UID: \"9f64215e-328e-46a2-b3ee-4518c095ba5f\") " Jan 26 16:04:04 crc kubenswrapper[4896]: I0126 16:04:04.886147 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/9f64215e-328e-46a2-b3ee-4518c095ba5f-dns-swift-storage-0\") pod \"9f64215e-328e-46a2-b3ee-4518c095ba5f\" (UID: \"9f64215e-328e-46a2-b3ee-4518c095ba5f\") " Jan 26 16:04:04 crc kubenswrapper[4896]: I0126 16:04:04.923426 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f64215e-328e-46a2-b3ee-4518c095ba5f-kube-api-access-zzj7l" (OuterVolumeSpecName: "kube-api-access-zzj7l") pod "9f64215e-328e-46a2-b3ee-4518c095ba5f" (UID: "9f64215e-328e-46a2-b3ee-4518c095ba5f"). InnerVolumeSpecName "kube-api-access-zzj7l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:04:04 crc kubenswrapper[4896]: I0126 16:04:04.991358 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzj7l\" (UniqueName: \"kubernetes.io/projected/9f64215e-328e-46a2-b3ee-4518c095ba5f-kube-api-access-zzj7l\") on node \"crc\" DevicePath \"\"" Jan 26 16:04:05 crc kubenswrapper[4896]: I0126 16:04:05.036163 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f64215e-328e-46a2-b3ee-4518c095ba5f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9f64215e-328e-46a2-b3ee-4518c095ba5f" (UID: "9f64215e-328e-46a2-b3ee-4518c095ba5f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:04:05 crc kubenswrapper[4896]: I0126 16:04:05.059916 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f64215e-328e-46a2-b3ee-4518c095ba5f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9f64215e-328e-46a2-b3ee-4518c095ba5f" (UID: "9f64215e-328e-46a2-b3ee-4518c095ba5f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:04:05 crc kubenswrapper[4896]: I0126 16:04:05.059981 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f64215e-328e-46a2-b3ee-4518c095ba5f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9f64215e-328e-46a2-b3ee-4518c095ba5f" (UID: "9f64215e-328e-46a2-b3ee-4518c095ba5f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:04:05 crc kubenswrapper[4896]: I0126 16:04:05.066360 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f64215e-328e-46a2-b3ee-4518c095ba5f-config" (OuterVolumeSpecName: "config") pod "9f64215e-328e-46a2-b3ee-4518c095ba5f" (UID: "9f64215e-328e-46a2-b3ee-4518c095ba5f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:04:05 crc kubenswrapper[4896]: I0126 16:04:05.071447 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f64215e-328e-46a2-b3ee-4518c095ba5f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9f64215e-328e-46a2-b3ee-4518c095ba5f" (UID: "9f64215e-328e-46a2-b3ee-4518c095ba5f"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:04:05 crc kubenswrapper[4896]: I0126 16:04:05.094766 4896 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f64215e-328e-46a2-b3ee-4518c095ba5f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 16:04:05 crc kubenswrapper[4896]: I0126 16:04:05.094798 4896 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f64215e-328e-46a2-b3ee-4518c095ba5f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 16:04:05 crc kubenswrapper[4896]: I0126 16:04:05.094813 4896 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f64215e-328e-46a2-b3ee-4518c095ba5f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 16:04:05 crc kubenswrapper[4896]: I0126 16:04:05.094823 4896 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9f64215e-328e-46a2-b3ee-4518c095ba5f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:04:05 crc kubenswrapper[4896]: I0126 16:04:05.094832 4896 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f64215e-328e-46a2-b3ee-4518c095ba5f-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:04:05 crc kubenswrapper[4896]: I0126 16:04:05.632780 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2070310b-992d-4994-8615-cd65b2f46d01","Type":"ContainerStarted","Data":"57d613f7c6ccce8fcda94edf3d43bbce6449b88468d1865a9766b0530233e323"} Jan 26 16:04:05 crc kubenswrapper[4896]: I0126 16:04:05.633035 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2070310b-992d-4994-8615-cd65b2f46d01","Type":"ContainerStarted","Data":"2d3b4ba733c46d40939c803a5f3b02d7829eb506d3d76596b4117a7211616630"} Jan 26 16:04:05 crc 
kubenswrapper[4896]: I0126 16:04:05.633045 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2070310b-992d-4994-8615-cd65b2f46d01","Type":"ContainerStarted","Data":"ebf658d814ac7b62e1f97aeaa1bde5c7bf69b133a623df92f5c9ebcf18b0dea1"} Jan 26 16:04:05 crc kubenswrapper[4896]: I0126 16:04:05.660478 4896 generic.go:334] "Generic (PLEG): container finished" podID="5aa9201a-49a7-4184-9655-2ebc57a9990b" containerID="e8feda0c5756658298ac36247615e42688d277c630ccd66c1c4a93374b3998d3" exitCode=0 Jan 26 16:04:05 crc kubenswrapper[4896]: I0126 16:04:05.660561 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5aa9201a-49a7-4184-9655-2ebc57a9990b","Type":"ContainerDied","Data":"e8feda0c5756658298ac36247615e42688d277c630ccd66c1c4a93374b3998d3"} Jan 26 16:04:05 crc kubenswrapper[4896]: I0126 16:04:05.663217 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.663197776 podStartE2EDuration="3.663197776s" podCreationTimestamp="2026-01-26 16:04:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:04:05.660349288 +0000 UTC m=+1803.442229691" watchObservedRunningTime="2026-01-26 16:04:05.663197776 +0000 UTC m=+1803.445078169" Jan 26 16:04:05 crc kubenswrapper[4896]: I0126 16:04:05.671781 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-xkmjm" event={"ID":"9f64215e-328e-46a2-b3ee-4518c095ba5f","Type":"ContainerDied","Data":"04c5cfa93d31978aaff485b4c0892f0d6797947e43a91a2c1e61a7d3d26b51d4"} Jan 26 16:04:05 crc kubenswrapper[4896]: I0126 16:04:05.671847 4896 scope.go:117] "RemoveContainer" containerID="edd3597a03e34145d5d65bb5f8d2e869a43e55a95cda96b2805733767ecca95b" Jan 26 16:04:05 crc kubenswrapper[4896]: I0126 16:04:05.671803 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-xkmjm" Jan 26 16:04:05 crc kubenswrapper[4896]: I0126 16:04:05.699817 4896 scope.go:117] "RemoveContainer" containerID="733e488dbf7f268fceb01b29ba039a5694d26bd1283e479b61276edb7d770c46" Jan 26 16:04:05 crc kubenswrapper[4896]: I0126 16:04:05.729020 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-xkmjm"] Jan 26 16:04:05 crc kubenswrapper[4896]: I0126 16:04:05.745834 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-xkmjm"] Jan 26 16:04:05 crc kubenswrapper[4896]: I0126 16:04:05.990506 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:04:06 crc kubenswrapper[4896]: I0126 16:04:06.159271 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5aa9201a-49a7-4184-9655-2ebc57a9990b-sg-core-conf-yaml\") pod \"5aa9201a-49a7-4184-9655-2ebc57a9990b\" (UID: \"5aa9201a-49a7-4184-9655-2ebc57a9990b\") " Jan 26 16:04:06 crc kubenswrapper[4896]: I0126 16:04:06.159371 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aa9201a-49a7-4184-9655-2ebc57a9990b-combined-ca-bundle\") pod \"5aa9201a-49a7-4184-9655-2ebc57a9990b\" (UID: \"5aa9201a-49a7-4184-9655-2ebc57a9990b\") " Jan 26 16:04:06 crc kubenswrapper[4896]: I0126 16:04:06.159558 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5aa9201a-49a7-4184-9655-2ebc57a9990b-run-httpd\") pod \"5aa9201a-49a7-4184-9655-2ebc57a9990b\" (UID: \"5aa9201a-49a7-4184-9655-2ebc57a9990b\") " Jan 26 16:04:06 crc kubenswrapper[4896]: I0126 16:04:06.159607 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5aa9201a-49a7-4184-9655-2ebc57a9990b-config-data\") pod \"5aa9201a-49a7-4184-9655-2ebc57a9990b\" (UID: \"5aa9201a-49a7-4184-9655-2ebc57a9990b\") " Jan 26 16:04:06 crc kubenswrapper[4896]: I0126 16:04:06.159654 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5aa9201a-49a7-4184-9655-2ebc57a9990b-log-httpd\") pod \"5aa9201a-49a7-4184-9655-2ebc57a9990b\" (UID: \"5aa9201a-49a7-4184-9655-2ebc57a9990b\") " Jan 26 16:04:06 crc kubenswrapper[4896]: I0126 16:04:06.159752 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5aa9201a-49a7-4184-9655-2ebc57a9990b-scripts\") pod \"5aa9201a-49a7-4184-9655-2ebc57a9990b\" (UID: \"5aa9201a-49a7-4184-9655-2ebc57a9990b\") " Jan 26 16:04:06 crc kubenswrapper[4896]: I0126 16:04:06.159806 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg9pd\" (UniqueName: \"kubernetes.io/projected/5aa9201a-49a7-4184-9655-2ebc57a9990b-kube-api-access-xg9pd\") pod \"5aa9201a-49a7-4184-9655-2ebc57a9990b\" (UID: \"5aa9201a-49a7-4184-9655-2ebc57a9990b\") " Jan 26 16:04:06 crc kubenswrapper[4896]: I0126 16:04:06.161163 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5aa9201a-49a7-4184-9655-2ebc57a9990b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5aa9201a-49a7-4184-9655-2ebc57a9990b" (UID: "5aa9201a-49a7-4184-9655-2ebc57a9990b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:04:06 crc kubenswrapper[4896]: I0126 16:04:06.161324 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5aa9201a-49a7-4184-9655-2ebc57a9990b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5aa9201a-49a7-4184-9655-2ebc57a9990b" (UID: "5aa9201a-49a7-4184-9655-2ebc57a9990b"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:04:06 crc kubenswrapper[4896]: I0126 16:04:06.174946 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5aa9201a-49a7-4184-9655-2ebc57a9990b-scripts" (OuterVolumeSpecName: "scripts") pod "5aa9201a-49a7-4184-9655-2ebc57a9990b" (UID: "5aa9201a-49a7-4184-9655-2ebc57a9990b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:04:06 crc kubenswrapper[4896]: I0126 16:04:06.175001 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5aa9201a-49a7-4184-9655-2ebc57a9990b-kube-api-access-xg9pd" (OuterVolumeSpecName: "kube-api-access-xg9pd") pod "5aa9201a-49a7-4184-9655-2ebc57a9990b" (UID: "5aa9201a-49a7-4184-9655-2ebc57a9990b"). InnerVolumeSpecName "kube-api-access-xg9pd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:04:06 crc kubenswrapper[4896]: I0126 16:04:06.210708 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5aa9201a-49a7-4184-9655-2ebc57a9990b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5aa9201a-49a7-4184-9655-2ebc57a9990b" (UID: "5aa9201a-49a7-4184-9655-2ebc57a9990b"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:04:06 crc kubenswrapper[4896]: I0126 16:04:06.263981 4896 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5aa9201a-49a7-4184-9655-2ebc57a9990b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 16:04:06 crc kubenswrapper[4896]: I0126 16:04:06.264026 4896 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5aa9201a-49a7-4184-9655-2ebc57a9990b-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 16:04:06 crc kubenswrapper[4896]: I0126 16:04:06.264048 4896 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5aa9201a-49a7-4184-9655-2ebc57a9990b-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 16:04:06 crc kubenswrapper[4896]: I0126 16:04:06.264059 4896 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5aa9201a-49a7-4184-9655-2ebc57a9990b-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:04:06 crc kubenswrapper[4896]: I0126 16:04:06.264071 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xg9pd\" (UniqueName: \"kubernetes.io/projected/5aa9201a-49a7-4184-9655-2ebc57a9990b-kube-api-access-xg9pd\") on node \"crc\" DevicePath \"\"" Jan 26 16:04:06 crc kubenswrapper[4896]: I0126 16:04:06.275694 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5aa9201a-49a7-4184-9655-2ebc57a9990b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5aa9201a-49a7-4184-9655-2ebc57a9990b" (UID: "5aa9201a-49a7-4184-9655-2ebc57a9990b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:04:06 crc kubenswrapper[4896]: I0126 16:04:06.306208 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5aa9201a-49a7-4184-9655-2ebc57a9990b-config-data" (OuterVolumeSpecName: "config-data") pod "5aa9201a-49a7-4184-9655-2ebc57a9990b" (UID: "5aa9201a-49a7-4184-9655-2ebc57a9990b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:04:06 crc kubenswrapper[4896]: I0126 16:04:06.367332 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aa9201a-49a7-4184-9655-2ebc57a9990b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:04:06 crc kubenswrapper[4896]: I0126 16:04:06.367378 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aa9201a-49a7-4184-9655-2ebc57a9990b-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:04:06 crc kubenswrapper[4896]: I0126 16:04:06.690280 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:04:06 crc kubenswrapper[4896]: I0126 16:04:06.690279 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5aa9201a-49a7-4184-9655-2ebc57a9990b","Type":"ContainerDied","Data":"362cde032ff5caa40a1bff7cc65d579e5ac2c5bb3928de168f9625e1c4c8f8ad"} Jan 26 16:04:06 crc kubenswrapper[4896]: I0126 16:04:06.690864 4896 scope.go:117] "RemoveContainer" containerID="bc7eb8d6059789f0e9b888c4410cf160a6a627b9e52935ebe366d384621e9385" Jan 26 16:04:06 crc kubenswrapper[4896]: I0126 16:04:06.975558 4896 scope.go:117] "RemoveContainer" containerID="b2d02313dca91665b2ea0f78af2d2158e0b68cf5bc4b43cdd286734e1f679c6e" Jan 26 16:04:06 crc kubenswrapper[4896]: I0126 16:04:06.992332 4896 scope.go:117] "RemoveContainer" containerID="eef508224f0cdcfb0579b0234e72c3c5503ce5cf1713a9bee24c9feccf4983cb" Jan 26 16:04:07 crc kubenswrapper[4896]: E0126 16:04:06.993016 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:04:07 crc kubenswrapper[4896]: I0126 16:04:07.018662 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f64215e-328e-46a2-b3ee-4518c095ba5f" path="/var/lib/kubelet/pods/9f64215e-328e-46a2-b3ee-4518c095ba5f/volumes" Jan 26 16:04:07 crc kubenswrapper[4896]: I0126 16:04:07.048845 4896 scope.go:117] "RemoveContainer" containerID="a236494699ef2d6167dfaa6efba39a8176df05b676cd994753bcf874df77e2b8" Jan 26 16:04:07 crc kubenswrapper[4896]: I0126 16:04:07.058989 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:04:07 crc kubenswrapper[4896]: 
I0126 16:04:07.079096 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:04:07 crc kubenswrapper[4896]: I0126 16:04:07.100656 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:04:07 crc kubenswrapper[4896]: I0126 16:04:07.100965 4896 scope.go:117] "RemoveContainer" containerID="e8feda0c5756658298ac36247615e42688d277c630ccd66c1c4a93374b3998d3" Jan 26 16:04:07 crc kubenswrapper[4896]: E0126 16:04:07.101734 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5aa9201a-49a7-4184-9655-2ebc57a9990b" containerName="proxy-httpd" Jan 26 16:04:07 crc kubenswrapper[4896]: I0126 16:04:07.101830 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="5aa9201a-49a7-4184-9655-2ebc57a9990b" containerName="proxy-httpd" Jan 26 16:04:07 crc kubenswrapper[4896]: E0126 16:04:07.101954 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5aa9201a-49a7-4184-9655-2ebc57a9990b" containerName="sg-core" Jan 26 16:04:07 crc kubenswrapper[4896]: I0126 16:04:07.102029 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="5aa9201a-49a7-4184-9655-2ebc57a9990b" containerName="sg-core" Jan 26 16:04:07 crc kubenswrapper[4896]: E0126 16:04:07.102129 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5aa9201a-49a7-4184-9655-2ebc57a9990b" containerName="ceilometer-central-agent" Jan 26 16:04:07 crc kubenswrapper[4896]: I0126 16:04:07.102202 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="5aa9201a-49a7-4184-9655-2ebc57a9990b" containerName="ceilometer-central-agent" Jan 26 16:04:07 crc kubenswrapper[4896]: E0126 16:04:07.102293 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f64215e-328e-46a2-b3ee-4518c095ba5f" containerName="init" Jan 26 16:04:07 crc kubenswrapper[4896]: I0126 16:04:07.102390 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f64215e-328e-46a2-b3ee-4518c095ba5f" containerName="init" Jan 26 
16:04:07 crc kubenswrapper[4896]: E0126 16:04:07.102494 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5aa9201a-49a7-4184-9655-2ebc57a9990b" containerName="ceilometer-notification-agent" Jan 26 16:04:07 crc kubenswrapper[4896]: I0126 16:04:07.102603 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="5aa9201a-49a7-4184-9655-2ebc57a9990b" containerName="ceilometer-notification-agent" Jan 26 16:04:07 crc kubenswrapper[4896]: E0126 16:04:07.102685 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f64215e-328e-46a2-b3ee-4518c095ba5f" containerName="dnsmasq-dns" Jan 26 16:04:07 crc kubenswrapper[4896]: I0126 16:04:07.102762 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f64215e-328e-46a2-b3ee-4518c095ba5f" containerName="dnsmasq-dns" Jan 26 16:04:07 crc kubenswrapper[4896]: I0126 16:04:07.103278 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="5aa9201a-49a7-4184-9655-2ebc57a9990b" containerName="ceilometer-notification-agent" Jan 26 16:04:07 crc kubenswrapper[4896]: I0126 16:04:07.103376 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="5aa9201a-49a7-4184-9655-2ebc57a9990b" containerName="sg-core" Jan 26 16:04:07 crc kubenswrapper[4896]: I0126 16:04:07.103487 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f64215e-328e-46a2-b3ee-4518c095ba5f" containerName="dnsmasq-dns" Jan 26 16:04:07 crc kubenswrapper[4896]: I0126 16:04:07.103592 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="5aa9201a-49a7-4184-9655-2ebc57a9990b" containerName="proxy-httpd" Jan 26 16:04:07 crc kubenswrapper[4896]: I0126 16:04:07.103678 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="5aa9201a-49a7-4184-9655-2ebc57a9990b" containerName="ceilometer-central-agent" Jan 26 16:04:07 crc kubenswrapper[4896]: I0126 16:04:07.121052 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:04:07 crc kubenswrapper[4896]: I0126 16:04:07.124961 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 16:04:07 crc kubenswrapper[4896]: I0126 16:04:07.129911 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 16:04:07 crc kubenswrapper[4896]: I0126 16:04:07.133630 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:04:07 crc kubenswrapper[4896]: I0126 16:04:07.261561 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6p96\" (UniqueName: \"kubernetes.io/projected/4d33ecd2-b8e9-4054-bad6-3607aef406b1-kube-api-access-d6p96\") pod \"ceilometer-0\" (UID: \"4d33ecd2-b8e9-4054-bad6-3607aef406b1\") " pod="openstack/ceilometer-0" Jan 26 16:04:07 crc kubenswrapper[4896]: I0126 16:04:07.261714 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d33ecd2-b8e9-4054-bad6-3607aef406b1-config-data\") pod \"ceilometer-0\" (UID: \"4d33ecd2-b8e9-4054-bad6-3607aef406b1\") " pod="openstack/ceilometer-0" Jan 26 16:04:07 crc kubenswrapper[4896]: I0126 16:04:07.261746 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d33ecd2-b8e9-4054-bad6-3607aef406b1-log-httpd\") pod \"ceilometer-0\" (UID: \"4d33ecd2-b8e9-4054-bad6-3607aef406b1\") " pod="openstack/ceilometer-0" Jan 26 16:04:07 crc kubenswrapper[4896]: I0126 16:04:07.261853 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d33ecd2-b8e9-4054-bad6-3607aef406b1-run-httpd\") pod \"ceilometer-0\" (UID: \"4d33ecd2-b8e9-4054-bad6-3607aef406b1\") " 
pod="openstack/ceilometer-0" Jan 26 16:04:07 crc kubenswrapper[4896]: I0126 16:04:07.261948 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d33ecd2-b8e9-4054-bad6-3607aef406b1-scripts\") pod \"ceilometer-0\" (UID: \"4d33ecd2-b8e9-4054-bad6-3607aef406b1\") " pod="openstack/ceilometer-0" Jan 26 16:04:07 crc kubenswrapper[4896]: I0126 16:04:07.261968 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4d33ecd2-b8e9-4054-bad6-3607aef406b1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4d33ecd2-b8e9-4054-bad6-3607aef406b1\") " pod="openstack/ceilometer-0" Jan 26 16:04:07 crc kubenswrapper[4896]: I0126 16:04:07.262042 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d33ecd2-b8e9-4054-bad6-3607aef406b1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4d33ecd2-b8e9-4054-bad6-3607aef406b1\") " pod="openstack/ceilometer-0" Jan 26 16:04:07 crc kubenswrapper[4896]: I0126 16:04:07.364098 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d33ecd2-b8e9-4054-bad6-3607aef406b1-run-httpd\") pod \"ceilometer-0\" (UID: \"4d33ecd2-b8e9-4054-bad6-3607aef406b1\") " pod="openstack/ceilometer-0" Jan 26 16:04:07 crc kubenswrapper[4896]: I0126 16:04:07.364239 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d33ecd2-b8e9-4054-bad6-3607aef406b1-scripts\") pod \"ceilometer-0\" (UID: \"4d33ecd2-b8e9-4054-bad6-3607aef406b1\") " pod="openstack/ceilometer-0" Jan 26 16:04:07 crc kubenswrapper[4896]: I0126 16:04:07.364272 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4d33ecd2-b8e9-4054-bad6-3607aef406b1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4d33ecd2-b8e9-4054-bad6-3607aef406b1\") " pod="openstack/ceilometer-0" Jan 26 16:04:07 crc kubenswrapper[4896]: I0126 16:04:07.364345 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d33ecd2-b8e9-4054-bad6-3607aef406b1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4d33ecd2-b8e9-4054-bad6-3607aef406b1\") " pod="openstack/ceilometer-0" Jan 26 16:04:07 crc kubenswrapper[4896]: I0126 16:04:07.364378 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6p96\" (UniqueName: \"kubernetes.io/projected/4d33ecd2-b8e9-4054-bad6-3607aef406b1-kube-api-access-d6p96\") pod \"ceilometer-0\" (UID: \"4d33ecd2-b8e9-4054-bad6-3607aef406b1\") " pod="openstack/ceilometer-0" Jan 26 16:04:07 crc kubenswrapper[4896]: I0126 16:04:07.364481 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d33ecd2-b8e9-4054-bad6-3607aef406b1-config-data\") pod \"ceilometer-0\" (UID: \"4d33ecd2-b8e9-4054-bad6-3607aef406b1\") " pod="openstack/ceilometer-0" Jan 26 16:04:07 crc kubenswrapper[4896]: I0126 16:04:07.364530 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d33ecd2-b8e9-4054-bad6-3607aef406b1-log-httpd\") pod \"ceilometer-0\" (UID: \"4d33ecd2-b8e9-4054-bad6-3607aef406b1\") " pod="openstack/ceilometer-0" Jan 26 16:04:07 crc kubenswrapper[4896]: I0126 16:04:07.365066 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d33ecd2-b8e9-4054-bad6-3607aef406b1-log-httpd\") pod \"ceilometer-0\" (UID: \"4d33ecd2-b8e9-4054-bad6-3607aef406b1\") " pod="openstack/ceilometer-0" Jan 26 16:04:07 
crc kubenswrapper[4896]: I0126 16:04:07.365330 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d33ecd2-b8e9-4054-bad6-3607aef406b1-run-httpd\") pod \"ceilometer-0\" (UID: \"4d33ecd2-b8e9-4054-bad6-3607aef406b1\") " pod="openstack/ceilometer-0" Jan 26 16:04:07 crc kubenswrapper[4896]: I0126 16:04:07.372293 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4d33ecd2-b8e9-4054-bad6-3607aef406b1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4d33ecd2-b8e9-4054-bad6-3607aef406b1\") " pod="openstack/ceilometer-0" Jan 26 16:04:07 crc kubenswrapper[4896]: I0126 16:04:07.373326 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d33ecd2-b8e9-4054-bad6-3607aef406b1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4d33ecd2-b8e9-4054-bad6-3607aef406b1\") " pod="openstack/ceilometer-0" Jan 26 16:04:07 crc kubenswrapper[4896]: I0126 16:04:07.375163 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d33ecd2-b8e9-4054-bad6-3607aef406b1-scripts\") pod \"ceilometer-0\" (UID: \"4d33ecd2-b8e9-4054-bad6-3607aef406b1\") " pod="openstack/ceilometer-0" Jan 26 16:04:07 crc kubenswrapper[4896]: I0126 16:04:07.376087 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d33ecd2-b8e9-4054-bad6-3607aef406b1-config-data\") pod \"ceilometer-0\" (UID: \"4d33ecd2-b8e9-4054-bad6-3607aef406b1\") " pod="openstack/ceilometer-0" Jan 26 16:04:07 crc kubenswrapper[4896]: I0126 16:04:07.394262 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6p96\" (UniqueName: \"kubernetes.io/projected/4d33ecd2-b8e9-4054-bad6-3607aef406b1-kube-api-access-d6p96\") pod \"ceilometer-0\" (UID: 
\"4d33ecd2-b8e9-4054-bad6-3607aef406b1\") " pod="openstack/ceilometer-0" Jan 26 16:04:07 crc kubenswrapper[4896]: I0126 16:04:07.454612 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:04:07 crc kubenswrapper[4896]: I0126 16:04:07.948350 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:04:08 crc kubenswrapper[4896]: I0126 16:04:08.914144 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5aa9201a-49a7-4184-9655-2ebc57a9990b" path="/var/lib/kubelet/pods/5aa9201a-49a7-4184-9655-2ebc57a9990b/volumes" Jan 26 16:04:08 crc kubenswrapper[4896]: I0126 16:04:08.915759 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d33ecd2-b8e9-4054-bad6-3607aef406b1","Type":"ContainerStarted","Data":"ddf5954ba77bc75fed11ed3e917241a05d91db0e031e44dabb73aab15f6c8938"} Jan 26 16:04:09 crc kubenswrapper[4896]: I0126 16:04:09.923412 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d33ecd2-b8e9-4054-bad6-3607aef406b1","Type":"ContainerStarted","Data":"e6d6241cf230ae3e238099fde99bdc549dd0a0d3e5c1361c92d2bfb3b8c6024d"} Jan 26 16:04:11 crc kubenswrapper[4896]: I0126 16:04:11.026198 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d33ecd2-b8e9-4054-bad6-3607aef406b1","Type":"ContainerStarted","Data":"985994cba9955bcdf72154f8bae1420db541d5fbbfb0cc192d8b9a2981b3d9d5"} Jan 26 16:04:12 crc kubenswrapper[4896]: I0126 16:04:12.049807 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d33ecd2-b8e9-4054-bad6-3607aef406b1","Type":"ContainerStarted","Data":"0ffcdc89a5f31d76f116cc1a8f49131dae23c500c0776d0dfe5bf1877964c3ef"} Jan 26 16:04:12 crc kubenswrapper[4896]: I0126 16:04:12.053178 4896 generic.go:334] "Generic (PLEG): container finished" 
podID="9a3debd8-ed65-47af-a900-21fd61003d8b" containerID="cf297041519e868a88a3b452f177b093d699aa5e1b3b77b6837c3bee79bba189" exitCode=0 Jan 26 16:04:12 crc kubenswrapper[4896]: I0126 16:04:12.053209 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bg2nz" event={"ID":"9a3debd8-ed65-47af-a900-21fd61003d8b","Type":"ContainerDied","Data":"cf297041519e868a88a3b452f177b093d699aa5e1b3b77b6837c3bee79bba189"} Jan 26 16:04:13 crc kubenswrapper[4896]: I0126 16:04:13.069772 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d33ecd2-b8e9-4054-bad6-3607aef406b1","Type":"ContainerStarted","Data":"c9aae71af0b4670247947b3bc588ded394e8ec41e439eeff32139a2007ff867f"} Jan 26 16:04:13 crc kubenswrapper[4896]: I0126 16:04:13.070030 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 16:04:13 crc kubenswrapper[4896]: I0126 16:04:13.104645 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.815410441 podStartE2EDuration="6.104622915s" podCreationTimestamp="2026-01-26 16:04:07 +0000 UTC" firstStartedPulling="2026-01-26 16:04:07.956779229 +0000 UTC m=+1805.738659622" lastFinishedPulling="2026-01-26 16:04:12.245991713 +0000 UTC m=+1810.027872096" observedRunningTime="2026-01-26 16:04:13.095873223 +0000 UTC m=+1810.877753626" watchObservedRunningTime="2026-01-26 16:04:13.104622915 +0000 UTC m=+1810.886503308" Jan 26 16:04:13 crc kubenswrapper[4896]: I0126 16:04:13.433510 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 16:04:13 crc kubenswrapper[4896]: I0126 16:04:13.433568 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 16:04:13 crc kubenswrapper[4896]: I0126 16:04:13.565539 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bg2nz" Jan 26 16:04:13 crc kubenswrapper[4896]: I0126 16:04:13.690706 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a3debd8-ed65-47af-a900-21fd61003d8b-combined-ca-bundle\") pod \"9a3debd8-ed65-47af-a900-21fd61003d8b\" (UID: \"9a3debd8-ed65-47af-a900-21fd61003d8b\") " Jan 26 16:04:13 crc kubenswrapper[4896]: I0126 16:04:13.690794 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x978j\" (UniqueName: \"kubernetes.io/projected/9a3debd8-ed65-47af-a900-21fd61003d8b-kube-api-access-x978j\") pod \"9a3debd8-ed65-47af-a900-21fd61003d8b\" (UID: \"9a3debd8-ed65-47af-a900-21fd61003d8b\") " Jan 26 16:04:13 crc kubenswrapper[4896]: I0126 16:04:13.690848 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a3debd8-ed65-47af-a900-21fd61003d8b-scripts\") pod \"9a3debd8-ed65-47af-a900-21fd61003d8b\" (UID: \"9a3debd8-ed65-47af-a900-21fd61003d8b\") " Jan 26 16:04:13 crc kubenswrapper[4896]: I0126 16:04:13.691050 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a3debd8-ed65-47af-a900-21fd61003d8b-config-data\") pod \"9a3debd8-ed65-47af-a900-21fd61003d8b\" (UID: \"9a3debd8-ed65-47af-a900-21fd61003d8b\") " Jan 26 16:04:13 crc kubenswrapper[4896]: I0126 16:04:13.698215 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a3debd8-ed65-47af-a900-21fd61003d8b-kube-api-access-x978j" (OuterVolumeSpecName: "kube-api-access-x978j") pod "9a3debd8-ed65-47af-a900-21fd61003d8b" (UID: "9a3debd8-ed65-47af-a900-21fd61003d8b"). InnerVolumeSpecName "kube-api-access-x978j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:04:13 crc kubenswrapper[4896]: I0126 16:04:13.698566 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a3debd8-ed65-47af-a900-21fd61003d8b-scripts" (OuterVolumeSpecName: "scripts") pod "9a3debd8-ed65-47af-a900-21fd61003d8b" (UID: "9a3debd8-ed65-47af-a900-21fd61003d8b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:04:13 crc kubenswrapper[4896]: E0126 16:04:13.726896 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9a3debd8-ed65-47af-a900-21fd61003d8b-config-data podName:9a3debd8-ed65-47af-a900-21fd61003d8b nodeName:}" failed. No retries permitted until 2026-01-26 16:04:14.226861666 +0000 UTC m=+1812.008742059 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/9a3debd8-ed65-47af-a900-21fd61003d8b-config-data") pod "9a3debd8-ed65-47af-a900-21fd61003d8b" (UID: "9a3debd8-ed65-47af-a900-21fd61003d8b") : error deleting /var/lib/kubelet/pods/9a3debd8-ed65-47af-a900-21fd61003d8b/volume-subpaths: remove /var/lib/kubelet/pods/9a3debd8-ed65-47af-a900-21fd61003d8b/volume-subpaths: no such file or directory Jan 26 16:04:13 crc kubenswrapper[4896]: I0126 16:04:13.730143 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a3debd8-ed65-47af-a900-21fd61003d8b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a3debd8-ed65-47af-a900-21fd61003d8b" (UID: "9a3debd8-ed65-47af-a900-21fd61003d8b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:04:13 crc kubenswrapper[4896]: I0126 16:04:13.794516 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a3debd8-ed65-47af-a900-21fd61003d8b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:04:13 crc kubenswrapper[4896]: I0126 16:04:13.794557 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x978j\" (UniqueName: \"kubernetes.io/projected/9a3debd8-ed65-47af-a900-21fd61003d8b-kube-api-access-x978j\") on node \"crc\" DevicePath \"\"" Jan 26 16:04:13 crc kubenswrapper[4896]: I0126 16:04:13.794569 4896 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a3debd8-ed65-47af-a900-21fd61003d8b-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:04:14 crc kubenswrapper[4896]: I0126 16:04:14.086866 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bg2nz" Jan 26 16:04:14 crc kubenswrapper[4896]: I0126 16:04:14.087332 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bg2nz" event={"ID":"9a3debd8-ed65-47af-a900-21fd61003d8b","Type":"ContainerDied","Data":"25a7fc37b0a424dea293af971762e5b399234dfee81b37bc3d2cd6f2ce3b5ccd"} Jan 26 16:04:14 crc kubenswrapper[4896]: I0126 16:04:14.087368 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25a7fc37b0a424dea293af971762e5b399234dfee81b37bc3d2cd6f2ce3b5ccd" Jan 26 16:04:14 crc kubenswrapper[4896]: I0126 16:04:14.307272 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a3debd8-ed65-47af-a900-21fd61003d8b-config-data\") pod \"9a3debd8-ed65-47af-a900-21fd61003d8b\" (UID: \"9a3debd8-ed65-47af-a900-21fd61003d8b\") " Jan 26 16:04:14 crc kubenswrapper[4896]: I0126 16:04:14.314855 4896 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a3debd8-ed65-47af-a900-21fd61003d8b-config-data" (OuterVolumeSpecName: "config-data") pod "9a3debd8-ed65-47af-a900-21fd61003d8b" (UID: "9a3debd8-ed65-47af-a900-21fd61003d8b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:04:14 crc kubenswrapper[4896]: I0126 16:04:14.359790 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 16:04:14 crc kubenswrapper[4896]: I0126 16:04:14.360147 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="01f38b0d-f4f6-4780-a38a-dd8b97f38371" containerName="nova-scheduler-scheduler" containerID="cri-o://7ff0f101af723ef2aa9921da1c0f156beb9d23ac88d7b09b006508e33ce2ed89" gracePeriod=30 Jan 26 16:04:14 crc kubenswrapper[4896]: I0126 16:04:14.396294 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 16:04:14 crc kubenswrapper[4896]: I0126 16:04:14.396872 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2070310b-992d-4994-8615-cd65b2f46d01" containerName="nova-api-log" containerID="cri-o://2d3b4ba733c46d40939c803a5f3b02d7829eb506d3d76596b4117a7211616630" gracePeriod=30 Jan 26 16:04:14 crc kubenswrapper[4896]: I0126 16:04:14.397556 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2070310b-992d-4994-8615-cd65b2f46d01" containerName="nova-api-api" containerID="cri-o://57d613f7c6ccce8fcda94edf3d43bbce6449b88468d1865a9766b0530233e323" gracePeriod=30 Jan 26 16:04:14 crc kubenswrapper[4896]: I0126 16:04:14.418854 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2070310b-992d-4994-8615-cd65b2f46d01" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.7:8774/\": EOF" Jan 26 
16:04:14 crc kubenswrapper[4896]: I0126 16:04:14.421317 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a3debd8-ed65-47af-a900-21fd61003d8b-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:04:14 crc kubenswrapper[4896]: I0126 16:04:14.442890 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2070310b-992d-4994-8615-cd65b2f46d01" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.7:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 16:04:14 crc kubenswrapper[4896]: I0126 16:04:14.445034 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 16:04:14 crc kubenswrapper[4896]: I0126 16:04:14.445259 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6bf58a4a-039f-4365-b64e-e3b81212de22" containerName="nova-metadata-log" containerID="cri-o://6f16ab14840fdc49f2956514c7ee278076ca29f772f5d4bfe182f86fcb8f29fc" gracePeriod=30 Jan 26 16:04:14 crc kubenswrapper[4896]: I0126 16:04:14.446207 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6bf58a4a-039f-4365-b64e-e3b81212de22" containerName="nova-metadata-metadata" containerID="cri-o://478b1e16ac6ad57e3cfe3194b5682622c3daba112250171f17f5fee1ceeebc5c" gracePeriod=30 Jan 26 16:04:15 crc kubenswrapper[4896]: I0126 16:04:15.105489 4896 generic.go:334] "Generic (PLEG): container finished" podID="6bf58a4a-039f-4365-b64e-e3b81212de22" containerID="6f16ab14840fdc49f2956514c7ee278076ca29f772f5d4bfe182f86fcb8f29fc" exitCode=143 Jan 26 16:04:15 crc kubenswrapper[4896]: I0126 16:04:15.105871 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"6bf58a4a-039f-4365-b64e-e3b81212de22","Type":"ContainerDied","Data":"6f16ab14840fdc49f2956514c7ee278076ca29f772f5d4bfe182f86fcb8f29fc"} Jan 26 16:04:15 crc kubenswrapper[4896]: I0126 16:04:15.110251 4896 generic.go:334] "Generic (PLEG): container finished" podID="2070310b-992d-4994-8615-cd65b2f46d01" containerID="2d3b4ba733c46d40939c803a5f3b02d7829eb506d3d76596b4117a7211616630" exitCode=143 Jan 26 16:04:15 crc kubenswrapper[4896]: I0126 16:04:15.110314 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2070310b-992d-4994-8615-cd65b2f46d01","Type":"ContainerDied","Data":"2d3b4ba733c46d40939c803a5f3b02d7829eb506d3d76596b4117a7211616630"} Jan 26 16:04:15 crc kubenswrapper[4896]: I0126 16:04:15.860719 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 16:04:15 crc kubenswrapper[4896]: I0126 16:04:15.965041 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01f38b0d-f4f6-4780-a38a-dd8b97f38371-combined-ca-bundle\") pod \"01f38b0d-f4f6-4780-a38a-dd8b97f38371\" (UID: \"01f38b0d-f4f6-4780-a38a-dd8b97f38371\") " Jan 26 16:04:15 crc kubenswrapper[4896]: I0126 16:04:15.965504 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b67tx\" (UniqueName: \"kubernetes.io/projected/01f38b0d-f4f6-4780-a38a-dd8b97f38371-kube-api-access-b67tx\") pod \"01f38b0d-f4f6-4780-a38a-dd8b97f38371\" (UID: \"01f38b0d-f4f6-4780-a38a-dd8b97f38371\") " Jan 26 16:04:15 crc kubenswrapper[4896]: I0126 16:04:15.965638 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01f38b0d-f4f6-4780-a38a-dd8b97f38371-config-data\") pod \"01f38b0d-f4f6-4780-a38a-dd8b97f38371\" (UID: \"01f38b0d-f4f6-4780-a38a-dd8b97f38371\") " Jan 26 16:04:15 crc kubenswrapper[4896]: I0126 
16:04:15.991213 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01f38b0d-f4f6-4780-a38a-dd8b97f38371-kube-api-access-b67tx" (OuterVolumeSpecName: "kube-api-access-b67tx") pod "01f38b0d-f4f6-4780-a38a-dd8b97f38371" (UID: "01f38b0d-f4f6-4780-a38a-dd8b97f38371"). InnerVolumeSpecName "kube-api-access-b67tx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:04:16 crc kubenswrapper[4896]: I0126 16:04:16.007089 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01f38b0d-f4f6-4780-a38a-dd8b97f38371-config-data" (OuterVolumeSpecName: "config-data") pod "01f38b0d-f4f6-4780-a38a-dd8b97f38371" (UID: "01f38b0d-f4f6-4780-a38a-dd8b97f38371"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:04:16 crc kubenswrapper[4896]: I0126 16:04:16.038400 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01f38b0d-f4f6-4780-a38a-dd8b97f38371-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01f38b0d-f4f6-4780-a38a-dd8b97f38371" (UID: "01f38b0d-f4f6-4780-a38a-dd8b97f38371"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:04:16 crc kubenswrapper[4896]: I0126 16:04:16.072410 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b67tx\" (UniqueName: \"kubernetes.io/projected/01f38b0d-f4f6-4780-a38a-dd8b97f38371-kube-api-access-b67tx\") on node \"crc\" DevicePath \"\"" Jan 26 16:04:16 crc kubenswrapper[4896]: I0126 16:04:16.072445 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01f38b0d-f4f6-4780-a38a-dd8b97f38371-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:04:16 crc kubenswrapper[4896]: I0126 16:04:16.072455 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01f38b0d-f4f6-4780-a38a-dd8b97f38371-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:04:16 crc kubenswrapper[4896]: I0126 16:04:16.142108 4896 generic.go:334] "Generic (PLEG): container finished" podID="01f38b0d-f4f6-4780-a38a-dd8b97f38371" containerID="7ff0f101af723ef2aa9921da1c0f156beb9d23ac88d7b09b006508e33ce2ed89" exitCode=0 Jan 26 16:04:16 crc kubenswrapper[4896]: I0126 16:04:16.142156 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"01f38b0d-f4f6-4780-a38a-dd8b97f38371","Type":"ContainerDied","Data":"7ff0f101af723ef2aa9921da1c0f156beb9d23ac88d7b09b006508e33ce2ed89"} Jan 26 16:04:16 crc kubenswrapper[4896]: I0126 16:04:16.142182 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"01f38b0d-f4f6-4780-a38a-dd8b97f38371","Type":"ContainerDied","Data":"f5e811b9b79dbcb450e19aefb07b5723cde371831e5941f05036a2692890ab3a"} Jan 26 16:04:16 crc kubenswrapper[4896]: I0126 16:04:16.142209 4896 scope.go:117] "RemoveContainer" containerID="7ff0f101af723ef2aa9921da1c0f156beb9d23ac88d7b09b006508e33ce2ed89" Jan 26 16:04:16 crc kubenswrapper[4896]: I0126 16:04:16.142419 4896 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 16:04:16 crc kubenswrapper[4896]: I0126 16:04:16.203937 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 16:04:16 crc kubenswrapper[4896]: I0126 16:04:16.215854 4896 scope.go:117] "RemoveContainer" containerID="7ff0f101af723ef2aa9921da1c0f156beb9d23ac88d7b09b006508e33ce2ed89" Jan 26 16:04:16 crc kubenswrapper[4896]: E0126 16:04:16.221315 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ff0f101af723ef2aa9921da1c0f156beb9d23ac88d7b09b006508e33ce2ed89\": container with ID starting with 7ff0f101af723ef2aa9921da1c0f156beb9d23ac88d7b09b006508e33ce2ed89 not found: ID does not exist" containerID="7ff0f101af723ef2aa9921da1c0f156beb9d23ac88d7b09b006508e33ce2ed89" Jan 26 16:04:16 crc kubenswrapper[4896]: I0126 16:04:16.221366 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ff0f101af723ef2aa9921da1c0f156beb9d23ac88d7b09b006508e33ce2ed89"} err="failed to get container status \"7ff0f101af723ef2aa9921da1c0f156beb9d23ac88d7b09b006508e33ce2ed89\": rpc error: code = NotFound desc = could not find container \"7ff0f101af723ef2aa9921da1c0f156beb9d23ac88d7b09b006508e33ce2ed89\": container with ID starting with 7ff0f101af723ef2aa9921da1c0f156beb9d23ac88d7b09b006508e33ce2ed89 not found: ID does not exist" Jan 26 16:04:16 crc kubenswrapper[4896]: I0126 16:04:16.225037 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 16:04:16 crc kubenswrapper[4896]: I0126 16:04:16.264984 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 16:04:16 crc kubenswrapper[4896]: E0126 16:04:16.265871 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01f38b0d-f4f6-4780-a38a-dd8b97f38371" containerName="nova-scheduler-scheduler" Jan 26 16:04:16 
crc kubenswrapper[4896]: I0126 16:04:16.265966 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="01f38b0d-f4f6-4780-a38a-dd8b97f38371" containerName="nova-scheduler-scheduler"
Jan 26 16:04:16 crc kubenswrapper[4896]: E0126 16:04:16.266051 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a3debd8-ed65-47af-a900-21fd61003d8b" containerName="nova-manage"
Jan 26 16:04:16 crc kubenswrapper[4896]: I0126 16:04:16.266115 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a3debd8-ed65-47af-a900-21fd61003d8b" containerName="nova-manage"
Jan 26 16:04:16 crc kubenswrapper[4896]: I0126 16:04:16.266475 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="01f38b0d-f4f6-4780-a38a-dd8b97f38371" containerName="nova-scheduler-scheduler"
Jan 26 16:04:16 crc kubenswrapper[4896]: I0126 16:04:16.266681 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a3debd8-ed65-47af-a900-21fd61003d8b" containerName="nova-manage"
Jan 26 16:04:16 crc kubenswrapper[4896]: I0126 16:04:16.267624 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 26 16:04:16 crc kubenswrapper[4896]: I0126 16:04:16.270298 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 26 16:04:16 crc kubenswrapper[4896]: I0126 16:04:16.289995 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 26 16:04:16 crc kubenswrapper[4896]: I0126 16:04:16.386792 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgwm8\" (UniqueName: \"kubernetes.io/projected/eee850ba-ed53-45e1-9ae2-ead8cdf89877-kube-api-access-rgwm8\") pod \"nova-scheduler-0\" (UID: \"eee850ba-ed53-45e1-9ae2-ead8cdf89877\") " pod="openstack/nova-scheduler-0"
Jan 26 16:04:16 crc kubenswrapper[4896]: I0126 16:04:16.387238 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eee850ba-ed53-45e1-9ae2-ead8cdf89877-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"eee850ba-ed53-45e1-9ae2-ead8cdf89877\") " pod="openstack/nova-scheduler-0"
Jan 26 16:04:16 crc kubenswrapper[4896]: I0126 16:04:16.387689 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eee850ba-ed53-45e1-9ae2-ead8cdf89877-config-data\") pod \"nova-scheduler-0\" (UID: \"eee850ba-ed53-45e1-9ae2-ead8cdf89877\") " pod="openstack/nova-scheduler-0"
Jan 26 16:04:16 crc kubenswrapper[4896]: I0126 16:04:16.489808 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eee850ba-ed53-45e1-9ae2-ead8cdf89877-config-data\") pod \"nova-scheduler-0\" (UID: \"eee850ba-ed53-45e1-9ae2-ead8cdf89877\") " pod="openstack/nova-scheduler-0"
Jan 26 16:04:16 crc kubenswrapper[4896]: I0126 16:04:16.490306 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgwm8\" (UniqueName: \"kubernetes.io/projected/eee850ba-ed53-45e1-9ae2-ead8cdf89877-kube-api-access-rgwm8\") pod \"nova-scheduler-0\" (UID: \"eee850ba-ed53-45e1-9ae2-ead8cdf89877\") " pod="openstack/nova-scheduler-0"
Jan 26 16:04:16 crc kubenswrapper[4896]: I0126 16:04:16.490359 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eee850ba-ed53-45e1-9ae2-ead8cdf89877-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"eee850ba-ed53-45e1-9ae2-ead8cdf89877\") " pod="openstack/nova-scheduler-0"
Jan 26 16:04:16 crc kubenswrapper[4896]: I0126 16:04:16.495498 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eee850ba-ed53-45e1-9ae2-ead8cdf89877-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"eee850ba-ed53-45e1-9ae2-ead8cdf89877\") " pod="openstack/nova-scheduler-0"
Jan 26 16:04:16 crc kubenswrapper[4896]: I0126 16:04:16.496190 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eee850ba-ed53-45e1-9ae2-ead8cdf89877-config-data\") pod \"nova-scheduler-0\" (UID: \"eee850ba-ed53-45e1-9ae2-ead8cdf89877\") " pod="openstack/nova-scheduler-0"
Jan 26 16:04:16 crc kubenswrapper[4896]: I0126 16:04:16.511709 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgwm8\" (UniqueName: \"kubernetes.io/projected/eee850ba-ed53-45e1-9ae2-ead8cdf89877-kube-api-access-rgwm8\") pod \"nova-scheduler-0\" (UID: \"eee850ba-ed53-45e1-9ae2-ead8cdf89877\") " pod="openstack/nova-scheduler-0"
Jan 26 16:04:16 crc kubenswrapper[4896]: I0126 16:04:16.597405 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 26 16:04:16 crc kubenswrapper[4896]: I0126 16:04:16.779463 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01f38b0d-f4f6-4780-a38a-dd8b97f38371" path="/var/lib/kubelet/pods/01f38b0d-f4f6-4780-a38a-dd8b97f38371/volumes"
Jan 26 16:04:17 crc kubenswrapper[4896]: I0126 16:04:17.355081 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 26 16:04:17 crc kubenswrapper[4896]: I0126 16:04:17.867389 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="6bf58a4a-039f-4365-b64e-e3b81212de22" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.254:8775/\": read tcp 10.217.0.2:46922->10.217.0.254:8775: read: connection reset by peer"
Jan 26 16:04:17 crc kubenswrapper[4896]: I0126 16:04:17.867422 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="6bf58a4a-039f-4365-b64e-e3b81212de22" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.254:8775/\": read tcp 10.217.0.2:46934->10.217.0.254:8775: read: connection reset by peer"
Jan 26 16:04:18 crc kubenswrapper[4896]: I0126 16:04:18.166932 4896 generic.go:334] "Generic (PLEG): container finished" podID="6bf58a4a-039f-4365-b64e-e3b81212de22" containerID="478b1e16ac6ad57e3cfe3194b5682622c3daba112250171f17f5fee1ceeebc5c" exitCode=0
Jan 26 16:04:18 crc kubenswrapper[4896]: I0126 16:04:18.167013 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6bf58a4a-039f-4365-b64e-e3b81212de22","Type":"ContainerDied","Data":"478b1e16ac6ad57e3cfe3194b5682622c3daba112250171f17f5fee1ceeebc5c"}
Jan 26 16:04:18 crc kubenswrapper[4896]: I0126 16:04:18.169029 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"eee850ba-ed53-45e1-9ae2-ead8cdf89877","Type":"ContainerStarted","Data":"bb895afdbefe381381d046cddfcfb9edb336d24ed00075b2621a4476bb8550b9"}
Jan 26 16:04:18 crc kubenswrapper[4896]: I0126 16:04:18.169062 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"eee850ba-ed53-45e1-9ae2-ead8cdf89877","Type":"ContainerStarted","Data":"6e378e06bbb30c1668f356426d71f43e49bd530ea4a6c160c46e1f91e1e2a82c"}
Jan 26 16:04:18 crc kubenswrapper[4896]: I0126 16:04:18.188446 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.188431005 podStartE2EDuration="2.188431005s" podCreationTimestamp="2026-01-26 16:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:04:18.183270284 +0000 UTC m=+1815.965150677" watchObservedRunningTime="2026-01-26 16:04:18.188431005 +0000 UTC m=+1815.970311398"
Jan 26 16:04:18 crc kubenswrapper[4896]: I0126 16:04:18.586212 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 26 16:04:18 crc kubenswrapper[4896]: I0126 16:04:18.632752 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bf58a4a-039f-4365-b64e-e3b81212de22-config-data\") pod \"6bf58a4a-039f-4365-b64e-e3b81212de22\" (UID: \"6bf58a4a-039f-4365-b64e-e3b81212de22\") "
Jan 26 16:04:18 crc kubenswrapper[4896]: I0126 16:04:18.632839 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwmjw\" (UniqueName: \"kubernetes.io/projected/6bf58a4a-039f-4365-b64e-e3b81212de22-kube-api-access-vwmjw\") pod \"6bf58a4a-039f-4365-b64e-e3b81212de22\" (UID: \"6bf58a4a-039f-4365-b64e-e3b81212de22\") "
Jan 26 16:04:18 crc kubenswrapper[4896]: I0126 16:04:18.632989 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bf58a4a-039f-4365-b64e-e3b81212de22-logs\") pod \"6bf58a4a-039f-4365-b64e-e3b81212de22\" (UID: \"6bf58a4a-039f-4365-b64e-e3b81212de22\") "
Jan 26 16:04:18 crc kubenswrapper[4896]: I0126 16:04:18.633017 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bf58a4a-039f-4365-b64e-e3b81212de22-nova-metadata-tls-certs\") pod \"6bf58a4a-039f-4365-b64e-e3b81212de22\" (UID: \"6bf58a4a-039f-4365-b64e-e3b81212de22\") "
Jan 26 16:04:18 crc kubenswrapper[4896]: I0126 16:04:18.633047 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bf58a4a-039f-4365-b64e-e3b81212de22-combined-ca-bundle\") pod \"6bf58a4a-039f-4365-b64e-e3b81212de22\" (UID: \"6bf58a4a-039f-4365-b64e-e3b81212de22\") "
Jan 26 16:04:18 crc kubenswrapper[4896]: I0126 16:04:18.639867 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bf58a4a-039f-4365-b64e-e3b81212de22-logs" (OuterVolumeSpecName: "logs") pod "6bf58a4a-039f-4365-b64e-e3b81212de22" (UID: "6bf58a4a-039f-4365-b64e-e3b81212de22"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 16:04:18 crc kubenswrapper[4896]: I0126 16:04:18.640274 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bf58a4a-039f-4365-b64e-e3b81212de22-kube-api-access-vwmjw" (OuterVolumeSpecName: "kube-api-access-vwmjw") pod "6bf58a4a-039f-4365-b64e-e3b81212de22" (UID: "6bf58a4a-039f-4365-b64e-e3b81212de22"). InnerVolumeSpecName "kube-api-access-vwmjw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:04:18 crc kubenswrapper[4896]: I0126 16:04:18.679813 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bf58a4a-039f-4365-b64e-e3b81212de22-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6bf58a4a-039f-4365-b64e-e3b81212de22" (UID: "6bf58a4a-039f-4365-b64e-e3b81212de22"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:04:18 crc kubenswrapper[4896]: I0126 16:04:18.682146 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bf58a4a-039f-4365-b64e-e3b81212de22-config-data" (OuterVolumeSpecName: "config-data") pod "6bf58a4a-039f-4365-b64e-e3b81212de22" (UID: "6bf58a4a-039f-4365-b64e-e3b81212de22"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:04:18 crc kubenswrapper[4896]: I0126 16:04:18.736359 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bf58a4a-039f-4365-b64e-e3b81212de22-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 16:04:18 crc kubenswrapper[4896]: I0126 16:04:18.736400 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwmjw\" (UniqueName: \"kubernetes.io/projected/6bf58a4a-039f-4365-b64e-e3b81212de22-kube-api-access-vwmjw\") on node \"crc\" DevicePath \"\""
Jan 26 16:04:18 crc kubenswrapper[4896]: I0126 16:04:18.737903 4896 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bf58a4a-039f-4365-b64e-e3b81212de22-logs\") on node \"crc\" DevicePath \"\""
Jan 26 16:04:18 crc kubenswrapper[4896]: I0126 16:04:18.738024 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bf58a4a-039f-4365-b64e-e3b81212de22-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 16:04:18 crc kubenswrapper[4896]: I0126 16:04:18.748141 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bf58a4a-039f-4365-b64e-e3b81212de22-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "6bf58a4a-039f-4365-b64e-e3b81212de22" (UID: "6bf58a4a-039f-4365-b64e-e3b81212de22"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:04:18 crc kubenswrapper[4896]: I0126 16:04:18.760303 4896 scope.go:117] "RemoveContainer" containerID="eef508224f0cdcfb0579b0234e72c3c5503ce5cf1713a9bee24c9feccf4983cb"
Jan 26 16:04:18 crc kubenswrapper[4896]: E0126 16:04:18.760833 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 16:04:18 crc kubenswrapper[4896]: I0126 16:04:18.873965 4896 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bf58a4a-039f-4365-b64e-e3b81212de22-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 26 16:04:19 crc kubenswrapper[4896]: I0126 16:04:19.183958 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 26 16:04:19 crc kubenswrapper[4896]: I0126 16:04:19.184174 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6bf58a4a-039f-4365-b64e-e3b81212de22","Type":"ContainerDied","Data":"632d78657d7b4109e48d5784d55f84027060be78dba178057cf13514f5c91c0e"}
Jan 26 16:04:19 crc kubenswrapper[4896]: I0126 16:04:19.184242 4896 scope.go:117] "RemoveContainer" containerID="478b1e16ac6ad57e3cfe3194b5682622c3daba112250171f17f5fee1ceeebc5c"
Jan 26 16:04:19 crc kubenswrapper[4896]: I0126 16:04:19.221467 4896 scope.go:117] "RemoveContainer" containerID="6f16ab14840fdc49f2956514c7ee278076ca29f772f5d4bfe182f86fcb8f29fc"
Jan 26 16:04:19 crc kubenswrapper[4896]: I0126 16:04:19.255659 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 26 16:04:19 crc kubenswrapper[4896]: I0126 16:04:19.337954 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 26 16:04:19 crc kubenswrapper[4896]: I0126 16:04:19.352849 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 26 16:04:19 crc kubenswrapper[4896]: E0126 16:04:19.353693 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bf58a4a-039f-4365-b64e-e3b81212de22" containerName="nova-metadata-log"
Jan 26 16:04:19 crc kubenswrapper[4896]: I0126 16:04:19.353724 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bf58a4a-039f-4365-b64e-e3b81212de22" containerName="nova-metadata-log"
Jan 26 16:04:19 crc kubenswrapper[4896]: E0126 16:04:19.353753 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bf58a4a-039f-4365-b64e-e3b81212de22" containerName="nova-metadata-metadata"
Jan 26 16:04:19 crc kubenswrapper[4896]: I0126 16:04:19.353762 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bf58a4a-039f-4365-b64e-e3b81212de22" containerName="nova-metadata-metadata"
Jan 26 16:04:19 crc kubenswrapper[4896]: I0126 16:04:19.354063 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bf58a4a-039f-4365-b64e-e3b81212de22" containerName="nova-metadata-log"
Jan 26 16:04:19 crc kubenswrapper[4896]: I0126 16:04:19.354096 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bf58a4a-039f-4365-b64e-e3b81212de22" containerName="nova-metadata-metadata"
Jan 26 16:04:19 crc kubenswrapper[4896]: I0126 16:04:19.356125 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 26 16:04:19 crc kubenswrapper[4896]: I0126 16:04:19.359481 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 26 16:04:19 crc kubenswrapper[4896]: I0126 16:04:19.360063 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 26 16:04:19 crc kubenswrapper[4896]: I0126 16:04:19.366964 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 26 16:04:19 crc kubenswrapper[4896]: I0126 16:04:19.397948 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d970e90b-294e-47eb-81eb-a5203390a465-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d970e90b-294e-47eb-81eb-a5203390a465\") " pod="openstack/nova-metadata-0"
Jan 26 16:04:19 crc kubenswrapper[4896]: I0126 16:04:19.398039 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwgrc\" (UniqueName: \"kubernetes.io/projected/d970e90b-294e-47eb-81eb-a5203390a465-kube-api-access-cwgrc\") pod \"nova-metadata-0\" (UID: \"d970e90b-294e-47eb-81eb-a5203390a465\") " pod="openstack/nova-metadata-0"
Jan 26 16:04:19 crc kubenswrapper[4896]: I0126 16:04:19.398433 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d970e90b-294e-47eb-81eb-a5203390a465-logs\") pod \"nova-metadata-0\" (UID: \"d970e90b-294e-47eb-81eb-a5203390a465\") " pod="openstack/nova-metadata-0"
Jan 26 16:04:19 crc kubenswrapper[4896]: I0126 16:04:19.398815 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d970e90b-294e-47eb-81eb-a5203390a465-config-data\") pod \"nova-metadata-0\" (UID: \"d970e90b-294e-47eb-81eb-a5203390a465\") " pod="openstack/nova-metadata-0"
Jan 26 16:04:19 crc kubenswrapper[4896]: I0126 16:04:19.398911 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d970e90b-294e-47eb-81eb-a5203390a465-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d970e90b-294e-47eb-81eb-a5203390a465\") " pod="openstack/nova-metadata-0"
Jan 26 16:04:19 crc kubenswrapper[4896]: I0126 16:04:19.500929 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d970e90b-294e-47eb-81eb-a5203390a465-config-data\") pod \"nova-metadata-0\" (UID: \"d970e90b-294e-47eb-81eb-a5203390a465\") " pod="openstack/nova-metadata-0"
Jan 26 16:04:19 crc kubenswrapper[4896]: I0126 16:04:19.500992 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d970e90b-294e-47eb-81eb-a5203390a465-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d970e90b-294e-47eb-81eb-a5203390a465\") " pod="openstack/nova-metadata-0"
Jan 26 16:04:19 crc kubenswrapper[4896]: I0126 16:04:19.501055 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d970e90b-294e-47eb-81eb-a5203390a465-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d970e90b-294e-47eb-81eb-a5203390a465\") " pod="openstack/nova-metadata-0"
Jan 26 16:04:19 crc kubenswrapper[4896]: I0126 16:04:19.501081 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwgrc\" (UniqueName: \"kubernetes.io/projected/d970e90b-294e-47eb-81eb-a5203390a465-kube-api-access-cwgrc\") pod \"nova-metadata-0\" (UID: \"d970e90b-294e-47eb-81eb-a5203390a465\") " pod="openstack/nova-metadata-0"
Jan 26 16:04:19 crc kubenswrapper[4896]: I0126 16:04:19.501239 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d970e90b-294e-47eb-81eb-a5203390a465-logs\") pod \"nova-metadata-0\" (UID: \"d970e90b-294e-47eb-81eb-a5203390a465\") " pod="openstack/nova-metadata-0"
Jan 26 16:04:19 crc kubenswrapper[4896]: I0126 16:04:19.501690 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d970e90b-294e-47eb-81eb-a5203390a465-logs\") pod \"nova-metadata-0\" (UID: \"d970e90b-294e-47eb-81eb-a5203390a465\") " pod="openstack/nova-metadata-0"
Jan 26 16:04:19 crc kubenswrapper[4896]: I0126 16:04:19.505764 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d970e90b-294e-47eb-81eb-a5203390a465-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d970e90b-294e-47eb-81eb-a5203390a465\") " pod="openstack/nova-metadata-0"
Jan 26 16:04:19 crc kubenswrapper[4896]: I0126 16:04:19.506171 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d970e90b-294e-47eb-81eb-a5203390a465-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d970e90b-294e-47eb-81eb-a5203390a465\") " pod="openstack/nova-metadata-0"
Jan 26 16:04:19 crc kubenswrapper[4896]: I0126 16:04:19.506285 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d970e90b-294e-47eb-81eb-a5203390a465-config-data\") pod \"nova-metadata-0\" (UID: \"d970e90b-294e-47eb-81eb-a5203390a465\") " pod="openstack/nova-metadata-0"
Jan 26 16:04:19 crc kubenswrapper[4896]: I0126 16:04:19.523781 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwgrc\" (UniqueName: \"kubernetes.io/projected/d970e90b-294e-47eb-81eb-a5203390a465-kube-api-access-cwgrc\") pod \"nova-metadata-0\" (UID: \"d970e90b-294e-47eb-81eb-a5203390a465\") " pod="openstack/nova-metadata-0"
Jan 26 16:04:19 crc kubenswrapper[4896]: I0126 16:04:19.681187 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 26 16:04:20 crc kubenswrapper[4896]: I0126 16:04:20.298678 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 26 16:04:20 crc kubenswrapper[4896]: W0126 16:04:20.310670 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd970e90b_294e_47eb_81eb_a5203390a465.slice/crio-fcdd0898fb508c28a40249e18809733a5bf05d89a6d6c9549e4a987e22bc2d08 WatchSource:0}: Error finding container fcdd0898fb508c28a40249e18809733a5bf05d89a6d6c9549e4a987e22bc2d08: Status 404 returned error can't find the container with id fcdd0898fb508c28a40249e18809733a5bf05d89a6d6c9549e4a987e22bc2d08
Jan 26 16:04:20 crc kubenswrapper[4896]: I0126 16:04:20.775261 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bf58a4a-039f-4365-b64e-e3b81212de22" path="/var/lib/kubelet/pods/6bf58a4a-039f-4365-b64e-e3b81212de22/volumes"
Jan 26 16:04:21 crc kubenswrapper[4896]: I0126 16:04:21.213029 4896 generic.go:334] "Generic (PLEG): container finished" podID="2070310b-992d-4994-8615-cd65b2f46d01" containerID="57d613f7c6ccce8fcda94edf3d43bbce6449b88468d1865a9766b0530233e323" exitCode=0
Jan 26 16:04:21 crc kubenswrapper[4896]: I0126 16:04:21.213088 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2070310b-992d-4994-8615-cd65b2f46d01","Type":"ContainerDied","Data":"57d613f7c6ccce8fcda94edf3d43bbce6449b88468d1865a9766b0530233e323"}
Jan 26 16:04:21 crc kubenswrapper[4896]: I0126 16:04:21.215494 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d970e90b-294e-47eb-81eb-a5203390a465","Type":"ContainerStarted","Data":"2646900afb5d471f4f7b639429a0cde1a86c8bd5f1c08da57e5f101451e930bd"}
Jan 26 16:04:21 crc kubenswrapper[4896]: I0126 16:04:21.215530 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d970e90b-294e-47eb-81eb-a5203390a465","Type":"ContainerStarted","Data":"258d1e1728a9d15a14d3ef23b40f9498da319379652eceb44133c75a1bf97221"}
Jan 26 16:04:21 crc kubenswrapper[4896]: I0126 16:04:21.215548 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d970e90b-294e-47eb-81eb-a5203390a465","Type":"ContainerStarted","Data":"fcdd0898fb508c28a40249e18809733a5bf05d89a6d6c9549e4a987e22bc2d08"}
Jan 26 16:04:21 crc kubenswrapper[4896]: I0126 16:04:21.257953 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.257933578 podStartE2EDuration="2.257933578s" podCreationTimestamp="2026-01-26 16:04:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:04:21.240762873 +0000 UTC m=+1819.022643276" watchObservedRunningTime="2026-01-26 16:04:21.257933578 +0000 UTC m=+1819.039813971"
Jan 26 16:04:21 crc kubenswrapper[4896]: I0126 16:04:21.466730 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 26 16:04:21 crc kubenswrapper[4896]: I0126 16:04:21.497459 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2070310b-992d-4994-8615-cd65b2f46d01-internal-tls-certs\") pod \"2070310b-992d-4994-8615-cd65b2f46d01\" (UID: \"2070310b-992d-4994-8615-cd65b2f46d01\") "
Jan 26 16:04:21 crc kubenswrapper[4896]: I0126 16:04:21.497511 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2070310b-992d-4994-8615-cd65b2f46d01-config-data\") pod \"2070310b-992d-4994-8615-cd65b2f46d01\" (UID: \"2070310b-992d-4994-8615-cd65b2f46d01\") "
Jan 26 16:04:21 crc kubenswrapper[4896]: I0126 16:04:21.497631 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46pzj\" (UniqueName: \"kubernetes.io/projected/2070310b-992d-4994-8615-cd65b2f46d01-kube-api-access-46pzj\") pod \"2070310b-992d-4994-8615-cd65b2f46d01\" (UID: \"2070310b-992d-4994-8615-cd65b2f46d01\") "
Jan 26 16:04:21 crc kubenswrapper[4896]: I0126 16:04:21.497671 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2070310b-992d-4994-8615-cd65b2f46d01-public-tls-certs\") pod \"2070310b-992d-4994-8615-cd65b2f46d01\" (UID: \"2070310b-992d-4994-8615-cd65b2f46d01\") "
Jan 26 16:04:21 crc kubenswrapper[4896]: I0126 16:04:21.497750 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2070310b-992d-4994-8615-cd65b2f46d01-logs\") pod \"2070310b-992d-4994-8615-cd65b2f46d01\" (UID: \"2070310b-992d-4994-8615-cd65b2f46d01\") "
Jan 26 16:04:21 crc kubenswrapper[4896]: I0126 16:04:21.497939 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2070310b-992d-4994-8615-cd65b2f46d01-combined-ca-bundle\") pod \"2070310b-992d-4994-8615-cd65b2f46d01\" (UID: \"2070310b-992d-4994-8615-cd65b2f46d01\") "
Jan 26 16:04:21 crc kubenswrapper[4896]: I0126 16:04:21.506751 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2070310b-992d-4994-8615-cd65b2f46d01-logs" (OuterVolumeSpecName: "logs") pod "2070310b-992d-4994-8615-cd65b2f46d01" (UID: "2070310b-992d-4994-8615-cd65b2f46d01"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 16:04:21 crc kubenswrapper[4896]: I0126 16:04:21.514642 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2070310b-992d-4994-8615-cd65b2f46d01-kube-api-access-46pzj" (OuterVolumeSpecName: "kube-api-access-46pzj") pod "2070310b-992d-4994-8615-cd65b2f46d01" (UID: "2070310b-992d-4994-8615-cd65b2f46d01"). InnerVolumeSpecName "kube-api-access-46pzj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:04:21 crc kubenswrapper[4896]: I0126 16:04:21.545749 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2070310b-992d-4994-8615-cd65b2f46d01-config-data" (OuterVolumeSpecName: "config-data") pod "2070310b-992d-4994-8615-cd65b2f46d01" (UID: "2070310b-992d-4994-8615-cd65b2f46d01"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:04:21 crc kubenswrapper[4896]: I0126 16:04:21.546294 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2070310b-992d-4994-8615-cd65b2f46d01-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2070310b-992d-4994-8615-cd65b2f46d01" (UID: "2070310b-992d-4994-8615-cd65b2f46d01"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:04:21 crc kubenswrapper[4896]: I0126 16:04:21.582270 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2070310b-992d-4994-8615-cd65b2f46d01-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2070310b-992d-4994-8615-cd65b2f46d01" (UID: "2070310b-992d-4994-8615-cd65b2f46d01"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:04:21 crc kubenswrapper[4896]: I0126 16:04:21.598881 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 26 16:04:21 crc kubenswrapper[4896]: I0126 16:04:21.604929 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2070310b-992d-4994-8615-cd65b2f46d01-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2070310b-992d-4994-8615-cd65b2f46d01" (UID: "2070310b-992d-4994-8615-cd65b2f46d01"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:04:21 crc kubenswrapper[4896]: I0126 16:04:21.609385 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2070310b-992d-4994-8615-cd65b2f46d01-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 16:04:21 crc kubenswrapper[4896]: I0126 16:04:21.609644 4896 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2070310b-992d-4994-8615-cd65b2f46d01-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 26 16:04:21 crc kubenswrapper[4896]: I0126 16:04:21.609758 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2070310b-992d-4994-8615-cd65b2f46d01-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 16:04:21 crc kubenswrapper[4896]: I0126 16:04:21.609842 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46pzj\" (UniqueName: \"kubernetes.io/projected/2070310b-992d-4994-8615-cd65b2f46d01-kube-api-access-46pzj\") on node \"crc\" DevicePath \"\""
Jan 26 16:04:21 crc kubenswrapper[4896]: I0126 16:04:21.609911 4896 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2070310b-992d-4994-8615-cd65b2f46d01-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 26 16:04:21 crc kubenswrapper[4896]: I0126 16:04:21.609976 4896 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2070310b-992d-4994-8615-cd65b2f46d01-logs\") on node \"crc\" DevicePath \"\""
Jan 26 16:04:22 crc kubenswrapper[4896]: I0126 16:04:22.230570 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2070310b-992d-4994-8615-cd65b2f46d01","Type":"ContainerDied","Data":"ebf658d814ac7b62e1f97aeaa1bde5c7bf69b133a623df92f5c9ebcf18b0dea1"}
Jan 26 16:04:22 crc kubenswrapper[4896]: I0126 16:04:22.230684 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 26 16:04:22 crc kubenswrapper[4896]: I0126 16:04:22.230904 4896 scope.go:117] "RemoveContainer" containerID="57d613f7c6ccce8fcda94edf3d43bbce6449b88468d1865a9766b0530233e323"
Jan 26 16:04:22 crc kubenswrapper[4896]: I0126 16:04:22.295552 4896 scope.go:117] "RemoveContainer" containerID="2d3b4ba733c46d40939c803a5f3b02d7829eb506d3d76596b4117a7211616630"
Jan 26 16:04:22 crc kubenswrapper[4896]: I0126 16:04:22.311058 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 26 16:04:22 crc kubenswrapper[4896]: I0126 16:04:22.334056 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 26 16:04:22 crc kubenswrapper[4896]: I0126 16:04:22.345721 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 26 16:04:22 crc kubenswrapper[4896]: E0126 16:04:22.346385 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2070310b-992d-4994-8615-cd65b2f46d01" containerName="nova-api-log"
Jan 26 16:04:22 crc kubenswrapper[4896]: I0126 16:04:22.346407 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="2070310b-992d-4994-8615-cd65b2f46d01" containerName="nova-api-log"
Jan 26 16:04:22 crc kubenswrapper[4896]: E0126 16:04:22.346436 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2070310b-992d-4994-8615-cd65b2f46d01" containerName="nova-api-api"
Jan 26 16:04:22 crc kubenswrapper[4896]: I0126 16:04:22.346442 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="2070310b-992d-4994-8615-cd65b2f46d01" containerName="nova-api-api"
Jan 26 16:04:22 crc kubenswrapper[4896]: I0126 16:04:22.346684 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="2070310b-992d-4994-8615-cd65b2f46d01" containerName="nova-api-api"
Jan 26 16:04:22 crc kubenswrapper[4896]: I0126 16:04:22.346725 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="2070310b-992d-4994-8615-cd65b2f46d01" containerName="nova-api-log"
Jan 26 16:04:22 crc kubenswrapper[4896]: I0126 16:04:22.348235 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 26 16:04:22 crc kubenswrapper[4896]: I0126 16:04:22.351905 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 26 16:04:22 crc kubenswrapper[4896]: I0126 16:04:22.353083 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Jan 26 16:04:22 crc kubenswrapper[4896]: I0126 16:04:22.353103 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Jan 26 16:04:22 crc kubenswrapper[4896]: I0126 16:04:22.388906 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 26 16:04:22 crc kubenswrapper[4896]: I0126 16:04:22.489782 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s7fs\" (UniqueName: \"kubernetes.io/projected/4b74aebe-ad4e-4eca-b3fb-53194ebf847a-kube-api-access-9s7fs\") pod \"nova-api-0\" (UID: \"4b74aebe-ad4e-4eca-b3fb-53194ebf847a\") " pod="openstack/nova-api-0"
Jan 26 16:04:22 crc kubenswrapper[4896]: I0126 16:04:22.489926 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b74aebe-ad4e-4eca-b3fb-53194ebf847a-config-data\") pod \"nova-api-0\" (UID: \"4b74aebe-ad4e-4eca-b3fb-53194ebf847a\") " pod="openstack/nova-api-0"
Jan 26 16:04:22 crc kubenswrapper[4896]: I0126 16:04:22.489994 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b74aebe-ad4e-4eca-b3fb-53194ebf847a-logs\") pod \"nova-api-0\" (UID: \"4b74aebe-ad4e-4eca-b3fb-53194ebf847a\") " pod="openstack/nova-api-0"
Jan 26 16:04:22 crc kubenswrapper[4896]: I0126 16:04:22.490024 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b74aebe-ad4e-4eca-b3fb-53194ebf847a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4b74aebe-ad4e-4eca-b3fb-53194ebf847a\") " pod="openstack/nova-api-0"
Jan 26 16:04:22 crc kubenswrapper[4896]: I0126 16:04:22.490110 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b74aebe-ad4e-4eca-b3fb-53194ebf847a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4b74aebe-ad4e-4eca-b3fb-53194ebf847a\") " pod="openstack/nova-api-0"
Jan 26 16:04:22 crc kubenswrapper[4896]: I0126 16:04:22.490236 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b74aebe-ad4e-4eca-b3fb-53194ebf847a-public-tls-certs\") pod \"nova-api-0\" (UID: \"4b74aebe-ad4e-4eca-b3fb-53194ebf847a\") " pod="openstack/nova-api-0"
Jan 26 16:04:22 crc kubenswrapper[4896]: I0126 16:04:22.592301 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b74aebe-ad4e-4eca-b3fb-53194ebf847a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4b74aebe-ad4e-4eca-b3fb-53194ebf847a\") " pod="openstack/nova-api-0"
Jan 26 16:04:22 crc kubenswrapper[4896]: I0126 16:04:22.592399 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b74aebe-ad4e-4eca-b3fb-53194ebf847a-public-tls-certs\") pod \"nova-api-0\" (UID: \"4b74aebe-ad4e-4eca-b3fb-53194ebf847a\") " pod="openstack/nova-api-0"
Jan 26 16:04:22 crc kubenswrapper[4896]: I0126 16:04:22.592508 4896 reconciler_common.go:218] "operationExecutor.MountVolume started
for volume \"kube-api-access-9s7fs\" (UniqueName: \"kubernetes.io/projected/4b74aebe-ad4e-4eca-b3fb-53194ebf847a-kube-api-access-9s7fs\") pod \"nova-api-0\" (UID: \"4b74aebe-ad4e-4eca-b3fb-53194ebf847a\") " pod="openstack/nova-api-0" Jan 26 16:04:22 crc kubenswrapper[4896]: I0126 16:04:22.592672 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b74aebe-ad4e-4eca-b3fb-53194ebf847a-config-data\") pod \"nova-api-0\" (UID: \"4b74aebe-ad4e-4eca-b3fb-53194ebf847a\") " pod="openstack/nova-api-0" Jan 26 16:04:22 crc kubenswrapper[4896]: I0126 16:04:22.593215 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b74aebe-ad4e-4eca-b3fb-53194ebf847a-logs\") pod \"nova-api-0\" (UID: \"4b74aebe-ad4e-4eca-b3fb-53194ebf847a\") " pod="openstack/nova-api-0" Jan 26 16:04:22 crc kubenswrapper[4896]: I0126 16:04:22.593635 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b74aebe-ad4e-4eca-b3fb-53194ebf847a-logs\") pod \"nova-api-0\" (UID: \"4b74aebe-ad4e-4eca-b3fb-53194ebf847a\") " pod="openstack/nova-api-0" Jan 26 16:04:22 crc kubenswrapper[4896]: I0126 16:04:22.593688 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b74aebe-ad4e-4eca-b3fb-53194ebf847a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4b74aebe-ad4e-4eca-b3fb-53194ebf847a\") " pod="openstack/nova-api-0" Jan 26 16:04:22 crc kubenswrapper[4896]: I0126 16:04:22.599216 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b74aebe-ad4e-4eca-b3fb-53194ebf847a-config-data\") pod \"nova-api-0\" (UID: \"4b74aebe-ad4e-4eca-b3fb-53194ebf847a\") " pod="openstack/nova-api-0" Jan 26 16:04:22 crc kubenswrapper[4896]: I0126 16:04:22.599523 4896 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b74aebe-ad4e-4eca-b3fb-53194ebf847a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4b74aebe-ad4e-4eca-b3fb-53194ebf847a\") " pod="openstack/nova-api-0" Jan 26 16:04:22 crc kubenswrapper[4896]: I0126 16:04:22.603874 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b74aebe-ad4e-4eca-b3fb-53194ebf847a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4b74aebe-ad4e-4eca-b3fb-53194ebf847a\") " pod="openstack/nova-api-0" Jan 26 16:04:22 crc kubenswrapper[4896]: I0126 16:04:22.609726 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b74aebe-ad4e-4eca-b3fb-53194ebf847a-public-tls-certs\") pod \"nova-api-0\" (UID: \"4b74aebe-ad4e-4eca-b3fb-53194ebf847a\") " pod="openstack/nova-api-0" Jan 26 16:04:22 crc kubenswrapper[4896]: I0126 16:04:22.610667 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s7fs\" (UniqueName: \"kubernetes.io/projected/4b74aebe-ad4e-4eca-b3fb-53194ebf847a-kube-api-access-9s7fs\") pod \"nova-api-0\" (UID: \"4b74aebe-ad4e-4eca-b3fb-53194ebf847a\") " pod="openstack/nova-api-0" Jan 26 16:04:22 crc kubenswrapper[4896]: I0126 16:04:22.732908 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 16:04:22 crc kubenswrapper[4896]: I0126 16:04:22.809346 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2070310b-992d-4994-8615-cd65b2f46d01" path="/var/lib/kubelet/pods/2070310b-992d-4994-8615-cd65b2f46d01/volumes" Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.165289 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.246702 4896 generic.go:334] "Generic (PLEG): container finished" podID="6146c84f-0372-46f8-86e2-da5186ab20bf" containerID="c83c951311fb49f64e5993931600f144ef37335364a3c5e2b1d4811f1254cee2" exitCode=137 Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.246781 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6146c84f-0372-46f8-86e2-da5186ab20bf","Type":"ContainerDied","Data":"c83c951311fb49f64e5993931600f144ef37335364a3c5e2b1d4811f1254cee2"} Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.246825 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6146c84f-0372-46f8-86e2-da5186ab20bf","Type":"ContainerDied","Data":"5af87e4ad6ffe4cf9f1e1fe24fe4c712670e610f46d2b86562b76cd9f2de964e"} Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.246841 4896 scope.go:117] "RemoveContainer" containerID="c83c951311fb49f64e5993931600f144ef37335364a3c5e2b1d4811f1254cee2" Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.247076 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.249416 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6146c84f-0372-46f8-86e2-da5186ab20bf-combined-ca-bundle\") pod \"6146c84f-0372-46f8-86e2-da5186ab20bf\" (UID: \"6146c84f-0372-46f8-86e2-da5186ab20bf\") " Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.249466 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnm7m\" (UniqueName: \"kubernetes.io/projected/6146c84f-0372-46f8-86e2-da5186ab20bf-kube-api-access-hnm7m\") pod \"6146c84f-0372-46f8-86e2-da5186ab20bf\" (UID: \"6146c84f-0372-46f8-86e2-da5186ab20bf\") " Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.249653 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6146c84f-0372-46f8-86e2-da5186ab20bf-scripts\") pod \"6146c84f-0372-46f8-86e2-da5186ab20bf\" (UID: \"6146c84f-0372-46f8-86e2-da5186ab20bf\") " Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.260202 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6146c84f-0372-46f8-86e2-da5186ab20bf-kube-api-access-hnm7m" (OuterVolumeSpecName: "kube-api-access-hnm7m") pod "6146c84f-0372-46f8-86e2-da5186ab20bf" (UID: "6146c84f-0372-46f8-86e2-da5186ab20bf"). InnerVolumeSpecName "kube-api-access-hnm7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.272106 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6146c84f-0372-46f8-86e2-da5186ab20bf-scripts" (OuterVolumeSpecName: "scripts") pod "6146c84f-0372-46f8-86e2-da5186ab20bf" (UID: "6146c84f-0372-46f8-86e2-da5186ab20bf"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.311421 4896 scope.go:117] "RemoveContainer" containerID="fbc88616d1618c8194bc32bcc5c94d4e852ed3928bda6f8e48fff07267ed0488" Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.353820 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6146c84f-0372-46f8-86e2-da5186ab20bf-config-data\") pod \"6146c84f-0372-46f8-86e2-da5186ab20bf\" (UID: \"6146c84f-0372-46f8-86e2-da5186ab20bf\") " Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.358246 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnm7m\" (UniqueName: \"kubernetes.io/projected/6146c84f-0372-46f8-86e2-da5186ab20bf-kube-api-access-hnm7m\") on node \"crc\" DevicePath \"\"" Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.358281 4896 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6146c84f-0372-46f8-86e2-da5186ab20bf-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.372386 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 16:04:23 crc kubenswrapper[4896]: W0126 16:04:23.376239 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b74aebe_ad4e_4eca_b3fb_53194ebf847a.slice/crio-f523641651bc3ca34f620aac7f0fa9dd832222984d9116528dbae7eeade9e08f WatchSource:0}: Error finding container f523641651bc3ca34f620aac7f0fa9dd832222984d9116528dbae7eeade9e08f: Status 404 returned error can't find the container with id f523641651bc3ca34f620aac7f0fa9dd832222984d9116528dbae7eeade9e08f Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.434843 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/6146c84f-0372-46f8-86e2-da5186ab20bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6146c84f-0372-46f8-86e2-da5186ab20bf" (UID: "6146c84f-0372-46f8-86e2-da5186ab20bf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.461674 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6146c84f-0372-46f8-86e2-da5186ab20bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.486045 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6146c84f-0372-46f8-86e2-da5186ab20bf-config-data" (OuterVolumeSpecName: "config-data") pod "6146c84f-0372-46f8-86e2-da5186ab20bf" (UID: "6146c84f-0372-46f8-86e2-da5186ab20bf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.736240 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6146c84f-0372-46f8-86e2-da5186ab20bf-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.790775 4896 scope.go:117] "RemoveContainer" containerID="e80dd16be24f1c656284a2238d719b0eb6c0de27227196b21224f2d7729e1692" Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.824967 4896 scope.go:117] "RemoveContainer" containerID="b903d5045ffecf6552d0270041561a960d63b8478844ecf067b60e432b3d1b26" Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.846744 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.859750 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.861706 4896 scope.go:117] 
"RemoveContainer" containerID="c83c951311fb49f64e5993931600f144ef37335364a3c5e2b1d4811f1254cee2" Jan 26 16:04:23 crc kubenswrapper[4896]: E0126 16:04:23.863457 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c83c951311fb49f64e5993931600f144ef37335364a3c5e2b1d4811f1254cee2\": container with ID starting with c83c951311fb49f64e5993931600f144ef37335364a3c5e2b1d4811f1254cee2 not found: ID does not exist" containerID="c83c951311fb49f64e5993931600f144ef37335364a3c5e2b1d4811f1254cee2" Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.863508 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c83c951311fb49f64e5993931600f144ef37335364a3c5e2b1d4811f1254cee2"} err="failed to get container status \"c83c951311fb49f64e5993931600f144ef37335364a3c5e2b1d4811f1254cee2\": rpc error: code = NotFound desc = could not find container \"c83c951311fb49f64e5993931600f144ef37335364a3c5e2b1d4811f1254cee2\": container with ID starting with c83c951311fb49f64e5993931600f144ef37335364a3c5e2b1d4811f1254cee2 not found: ID does not exist" Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.863537 4896 scope.go:117] "RemoveContainer" containerID="fbc88616d1618c8194bc32bcc5c94d4e852ed3928bda6f8e48fff07267ed0488" Jan 26 16:04:23 crc kubenswrapper[4896]: E0126 16:04:23.863944 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbc88616d1618c8194bc32bcc5c94d4e852ed3928bda6f8e48fff07267ed0488\": container with ID starting with fbc88616d1618c8194bc32bcc5c94d4e852ed3928bda6f8e48fff07267ed0488 not found: ID does not exist" containerID="fbc88616d1618c8194bc32bcc5c94d4e852ed3928bda6f8e48fff07267ed0488" Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.863980 4896 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fbc88616d1618c8194bc32bcc5c94d4e852ed3928bda6f8e48fff07267ed0488"} err="failed to get container status \"fbc88616d1618c8194bc32bcc5c94d4e852ed3928bda6f8e48fff07267ed0488\": rpc error: code = NotFound desc = could not find container \"fbc88616d1618c8194bc32bcc5c94d4e852ed3928bda6f8e48fff07267ed0488\": container with ID starting with fbc88616d1618c8194bc32bcc5c94d4e852ed3928bda6f8e48fff07267ed0488 not found: ID does not exist" Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.864003 4896 scope.go:117] "RemoveContainer" containerID="e80dd16be24f1c656284a2238d719b0eb6c0de27227196b21224f2d7729e1692" Jan 26 16:04:23 crc kubenswrapper[4896]: E0126 16:04:23.864294 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e80dd16be24f1c656284a2238d719b0eb6c0de27227196b21224f2d7729e1692\": container with ID starting with e80dd16be24f1c656284a2238d719b0eb6c0de27227196b21224f2d7729e1692 not found: ID does not exist" containerID="e80dd16be24f1c656284a2238d719b0eb6c0de27227196b21224f2d7729e1692" Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.864328 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e80dd16be24f1c656284a2238d719b0eb6c0de27227196b21224f2d7729e1692"} err="failed to get container status \"e80dd16be24f1c656284a2238d719b0eb6c0de27227196b21224f2d7729e1692\": rpc error: code = NotFound desc = could not find container \"e80dd16be24f1c656284a2238d719b0eb6c0de27227196b21224f2d7729e1692\": container with ID starting with e80dd16be24f1c656284a2238d719b0eb6c0de27227196b21224f2d7729e1692 not found: ID does not exist" Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.864347 4896 scope.go:117] "RemoveContainer" containerID="b903d5045ffecf6552d0270041561a960d63b8478844ecf067b60e432b3d1b26" Jan 26 16:04:23 crc kubenswrapper[4896]: E0126 16:04:23.864649 4896 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b903d5045ffecf6552d0270041561a960d63b8478844ecf067b60e432b3d1b26\": container with ID starting with b903d5045ffecf6552d0270041561a960d63b8478844ecf067b60e432b3d1b26 not found: ID does not exist" containerID="b903d5045ffecf6552d0270041561a960d63b8478844ecf067b60e432b3d1b26" Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.864684 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b903d5045ffecf6552d0270041561a960d63b8478844ecf067b60e432b3d1b26"} err="failed to get container status \"b903d5045ffecf6552d0270041561a960d63b8478844ecf067b60e432b3d1b26\": rpc error: code = NotFound desc = could not find container \"b903d5045ffecf6552d0270041561a960d63b8478844ecf067b60e432b3d1b26\": container with ID starting with b903d5045ffecf6552d0270041561a960d63b8478844ecf067b60e432b3d1b26 not found: ID does not exist" Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.883379 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 26 16:04:23 crc kubenswrapper[4896]: E0126 16:04:23.884107 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6146c84f-0372-46f8-86e2-da5186ab20bf" containerName="aodh-evaluator" Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.884132 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="6146c84f-0372-46f8-86e2-da5186ab20bf" containerName="aodh-evaluator" Jan 26 16:04:23 crc kubenswrapper[4896]: E0126 16:04:23.884154 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6146c84f-0372-46f8-86e2-da5186ab20bf" containerName="aodh-listener" Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.884161 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="6146c84f-0372-46f8-86e2-da5186ab20bf" containerName="aodh-listener" Jan 26 16:04:23 crc kubenswrapper[4896]: E0126 16:04:23.884180 4896 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6146c84f-0372-46f8-86e2-da5186ab20bf" containerName="aodh-notifier" Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.884186 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="6146c84f-0372-46f8-86e2-da5186ab20bf" containerName="aodh-notifier" Jan 26 16:04:23 crc kubenswrapper[4896]: E0126 16:04:23.884198 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6146c84f-0372-46f8-86e2-da5186ab20bf" containerName="aodh-api" Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.884204 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="6146c84f-0372-46f8-86e2-da5186ab20bf" containerName="aodh-api" Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.884424 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="6146c84f-0372-46f8-86e2-da5186ab20bf" containerName="aodh-notifier" Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.884449 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="6146c84f-0372-46f8-86e2-da5186ab20bf" containerName="aodh-evaluator" Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.884469 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="6146c84f-0372-46f8-86e2-da5186ab20bf" containerName="aodh-listener" Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.884485 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="6146c84f-0372-46f8-86e2-da5186ab20bf" containerName="aodh-api" Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.887017 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.889733 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.889858 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.889962 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.890093 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-b2ntx" Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.890366 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.896950 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.946743 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bab7dd85-b4dc-45d8-ad5f-84dc75483edd-config-data\") pod \"aodh-0\" (UID: \"bab7dd85-b4dc-45d8-ad5f-84dc75483edd\") " pod="openstack/aodh-0" Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.946823 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8xl2\" (UniqueName: \"kubernetes.io/projected/bab7dd85-b4dc-45d8-ad5f-84dc75483edd-kube-api-access-s8xl2\") pod \"aodh-0\" (UID: \"bab7dd85-b4dc-45d8-ad5f-84dc75483edd\") " pod="openstack/aodh-0" Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.946857 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bab7dd85-b4dc-45d8-ad5f-84dc75483edd-combined-ca-bundle\") pod \"aodh-0\" (UID: \"bab7dd85-b4dc-45d8-ad5f-84dc75483edd\") " pod="openstack/aodh-0" Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.947610 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bab7dd85-b4dc-45d8-ad5f-84dc75483edd-internal-tls-certs\") pod \"aodh-0\" (UID: \"bab7dd85-b4dc-45d8-ad5f-84dc75483edd\") " pod="openstack/aodh-0" Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.947711 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bab7dd85-b4dc-45d8-ad5f-84dc75483edd-public-tls-certs\") pod \"aodh-0\" (UID: \"bab7dd85-b4dc-45d8-ad5f-84dc75483edd\") " pod="openstack/aodh-0" Jan 26 16:04:23 crc kubenswrapper[4896]: I0126 16:04:23.947769 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bab7dd85-b4dc-45d8-ad5f-84dc75483edd-scripts\") pod \"aodh-0\" (UID: \"bab7dd85-b4dc-45d8-ad5f-84dc75483edd\") " pod="openstack/aodh-0" Jan 26 16:04:24 crc kubenswrapper[4896]: I0126 16:04:24.049562 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bab7dd85-b4dc-45d8-ad5f-84dc75483edd-public-tls-certs\") pod \"aodh-0\" (UID: \"bab7dd85-b4dc-45d8-ad5f-84dc75483edd\") " pod="openstack/aodh-0" Jan 26 16:04:24 crc kubenswrapper[4896]: I0126 16:04:24.049647 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bab7dd85-b4dc-45d8-ad5f-84dc75483edd-scripts\") pod \"aodh-0\" (UID: \"bab7dd85-b4dc-45d8-ad5f-84dc75483edd\") " pod="openstack/aodh-0" Jan 26 16:04:24 crc kubenswrapper[4896]: I0126 16:04:24.049774 4896 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bab7dd85-b4dc-45d8-ad5f-84dc75483edd-config-data\") pod \"aodh-0\" (UID: \"bab7dd85-b4dc-45d8-ad5f-84dc75483edd\") " pod="openstack/aodh-0" Jan 26 16:04:24 crc kubenswrapper[4896]: I0126 16:04:24.049817 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8xl2\" (UniqueName: \"kubernetes.io/projected/bab7dd85-b4dc-45d8-ad5f-84dc75483edd-kube-api-access-s8xl2\") pod \"aodh-0\" (UID: \"bab7dd85-b4dc-45d8-ad5f-84dc75483edd\") " pod="openstack/aodh-0" Jan 26 16:04:24 crc kubenswrapper[4896]: I0126 16:04:24.049889 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bab7dd85-b4dc-45d8-ad5f-84dc75483edd-combined-ca-bundle\") pod \"aodh-0\" (UID: \"bab7dd85-b4dc-45d8-ad5f-84dc75483edd\") " pod="openstack/aodh-0" Jan 26 16:04:24 crc kubenswrapper[4896]: I0126 16:04:24.049971 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bab7dd85-b4dc-45d8-ad5f-84dc75483edd-internal-tls-certs\") pod \"aodh-0\" (UID: \"bab7dd85-b4dc-45d8-ad5f-84dc75483edd\") " pod="openstack/aodh-0" Jan 26 16:04:24 crc kubenswrapper[4896]: I0126 16:04:24.053152 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bab7dd85-b4dc-45d8-ad5f-84dc75483edd-internal-tls-certs\") pod \"aodh-0\" (UID: \"bab7dd85-b4dc-45d8-ad5f-84dc75483edd\") " pod="openstack/aodh-0" Jan 26 16:04:24 crc kubenswrapper[4896]: I0126 16:04:24.053657 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bab7dd85-b4dc-45d8-ad5f-84dc75483edd-public-tls-certs\") pod \"aodh-0\" (UID: \"bab7dd85-b4dc-45d8-ad5f-84dc75483edd\") " 
pod="openstack/aodh-0" Jan 26 16:04:24 crc kubenswrapper[4896]: I0126 16:04:24.055507 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bab7dd85-b4dc-45d8-ad5f-84dc75483edd-config-data\") pod \"aodh-0\" (UID: \"bab7dd85-b4dc-45d8-ad5f-84dc75483edd\") " pod="openstack/aodh-0" Jan 26 16:04:24 crc kubenswrapper[4896]: I0126 16:04:24.056603 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bab7dd85-b4dc-45d8-ad5f-84dc75483edd-combined-ca-bundle\") pod \"aodh-0\" (UID: \"bab7dd85-b4dc-45d8-ad5f-84dc75483edd\") " pod="openstack/aodh-0" Jan 26 16:04:24 crc kubenswrapper[4896]: I0126 16:04:24.056909 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bab7dd85-b4dc-45d8-ad5f-84dc75483edd-scripts\") pod \"aodh-0\" (UID: \"bab7dd85-b4dc-45d8-ad5f-84dc75483edd\") " pod="openstack/aodh-0" Jan 26 16:04:24 crc kubenswrapper[4896]: I0126 16:04:24.077087 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8xl2\" (UniqueName: \"kubernetes.io/projected/bab7dd85-b4dc-45d8-ad5f-84dc75483edd-kube-api-access-s8xl2\") pod \"aodh-0\" (UID: \"bab7dd85-b4dc-45d8-ad5f-84dc75483edd\") " pod="openstack/aodh-0" Jan 26 16:04:24 crc kubenswrapper[4896]: I0126 16:04:24.217597 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 26 16:04:24 crc kubenswrapper[4896]: I0126 16:04:24.265423 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b74aebe-ad4e-4eca-b3fb-53194ebf847a","Type":"ContainerStarted","Data":"36f605c4090527b7c96dda25a2403d5318591523fe5d052cb1c1f8075144efd2"} Jan 26 16:04:24 crc kubenswrapper[4896]: I0126 16:04:24.265482 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b74aebe-ad4e-4eca-b3fb-53194ebf847a","Type":"ContainerStarted","Data":"9b8cc4ed406a82a18b7cdb08ef91b4a89950b31605d2ecf894bcda31ed4bfae7"} Jan 26 16:04:24 crc kubenswrapper[4896]: I0126 16:04:24.265497 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b74aebe-ad4e-4eca-b3fb-53194ebf847a","Type":"ContainerStarted","Data":"f523641651bc3ca34f620aac7f0fa9dd832222984d9116528dbae7eeade9e08f"} Jan 26 16:04:24 crc kubenswrapper[4896]: I0126 16:04:24.297609 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.297573135 podStartE2EDuration="2.297573135s" podCreationTimestamp="2026-01-26 16:04:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:04:24.285074518 +0000 UTC m=+1822.066954911" watchObservedRunningTime="2026-01-26 16:04:24.297573135 +0000 UTC m=+1822.079453528" Jan 26 16:04:24 crc kubenswrapper[4896]: I0126 16:04:24.711396 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 16:04:24 crc kubenswrapper[4896]: I0126 16:04:24.792402 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 16:04:24 crc kubenswrapper[4896]: I0126 16:04:24.823559 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6146c84f-0372-46f8-86e2-da5186ab20bf" 
path="/var/lib/kubelet/pods/6146c84f-0372-46f8-86e2-da5186ab20bf/volumes"
Jan 26 16:04:24 crc kubenswrapper[4896]: I0126 16:04:24.855076 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Jan 26 16:04:25 crc kubenswrapper[4896]: I0126 16:04:25.290811 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"bab7dd85-b4dc-45d8-ad5f-84dc75483edd","Type":"ContainerStarted","Data":"878641bfbd0450da6cece2b14e4fd463a5aaa495ec06fcce268ac61dfb480f46"}
Jan 26 16:04:26 crc kubenswrapper[4896]: I0126 16:04:26.319555 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"bab7dd85-b4dc-45d8-ad5f-84dc75483edd","Type":"ContainerStarted","Data":"46b90698aeaad5ccfaa3052b1a2a922007d3825f08e01c9b031585bd13139496"}
Jan 26 16:04:26 crc kubenswrapper[4896]: I0126 16:04:26.599424 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 26 16:04:26 crc kubenswrapper[4896]: I0126 16:04:26.634324 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 26 16:04:27 crc kubenswrapper[4896]: I0126 16:04:27.336746 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"bab7dd85-b4dc-45d8-ad5f-84dc75483edd","Type":"ContainerStarted","Data":"35ef6561eff1bbf24e7491cd1d2a51530c77e7f8e856aa1a7c80e2ef74a601c6"}
Jan 26 16:04:27 crc kubenswrapper[4896]: I0126 16:04:27.337081 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"bab7dd85-b4dc-45d8-ad5f-84dc75483edd","Type":"ContainerStarted","Data":"bffd03ef6b8e3a7c39400945c3f4275c73981767223f4f0e00bb0c7a0e2b48f3"}
Jan 26 16:04:27 crc kubenswrapper[4896]: I0126 16:04:27.371834 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 26 16:04:28 crc kubenswrapper[4896]: I0126 16:04:28.434658 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"bab7dd85-b4dc-45d8-ad5f-84dc75483edd","Type":"ContainerStarted","Data":"f1b46c315967a4b2b8fa6bc438deefe9d0944cb30bef215257191c904e13da56"}
Jan 26 16:04:28 crc kubenswrapper[4896]: I0126 16:04:28.479702 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.795075752 podStartE2EDuration="5.47967543s" podCreationTimestamp="2026-01-26 16:04:23 +0000 UTC" firstStartedPulling="2026-01-26 16:04:24.860884913 +0000 UTC m=+1822.642765306" lastFinishedPulling="2026-01-26 16:04:27.545484591 +0000 UTC m=+1825.327364984" observedRunningTime="2026-01-26 16:04:28.467939123 +0000 UTC m=+1826.249819516" watchObservedRunningTime="2026-01-26 16:04:28.47967543 +0000 UTC m=+1826.261555823"
Jan 26 16:04:29 crc kubenswrapper[4896]: I0126 16:04:29.682254 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 26 16:04:29 crc kubenswrapper[4896]: I0126 16:04:29.682659 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 26 16:04:30 crc kubenswrapper[4896]: I0126 16:04:30.697796 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d970e90b-294e-47eb-81eb-a5203390a465" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.10:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 26 16:04:30 crc kubenswrapper[4896]: I0126 16:04:30.697844 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d970e90b-294e-47eb-81eb-a5203390a465" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.10:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 26 16:04:31 crc kubenswrapper[4896]: I0126 16:04:31.759656 4896 scope.go:117] "RemoveContainer" containerID="eef508224f0cdcfb0579b0234e72c3c5503ce5cf1713a9bee24c9feccf4983cb"
Jan 26 16:04:31 crc kubenswrapper[4896]: E0126 16:04:31.760202 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 16:04:32 crc kubenswrapper[4896]: I0126 16:04:32.733819 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 26 16:04:32 crc kubenswrapper[4896]: I0126 16:04:32.734178 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 26 16:04:34 crc kubenswrapper[4896]: I0126 16:04:34.226859 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4b74aebe-ad4e-4eca-b3fb-53194ebf847a" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.11:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 26 16:04:34 crc kubenswrapper[4896]: I0126 16:04:34.227026 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4b74aebe-ad4e-4eca-b3fb-53194ebf847a" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.11:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 26 16:04:34 crc kubenswrapper[4896]: I0126 16:04:34.877093 4896 scope.go:117] "RemoveContainer" containerID="8f6d2d7d1630ec64151dba49488c4a8c070d1a03fc83b0b9640daaf25d6926f5"
Jan 26 16:04:34 crc kubenswrapper[4896]: I0126 16:04:34.908866 4896 scope.go:117] "RemoveContainer" containerID="7e3eeeb878a0f01669a696db0f4389e95731f56a14db6f3fcd9ece9265043033"
Jan 26 16:04:37 crc kubenswrapper[4896]: I0126 16:04:37.555035 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 26 16:04:39 crc kubenswrapper[4896]: I0126 16:04:39.691867 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 26 16:04:39 crc kubenswrapper[4896]: I0126 16:04:39.694217 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 26 16:04:39 crc kubenswrapper[4896]: I0126 16:04:39.698403 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 26 16:04:40 crc kubenswrapper[4896]: I0126 16:04:40.555917 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 26 16:04:42 crc kubenswrapper[4896]: I0126 16:04:42.330363 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 26 16:04:42 crc kubenswrapper[4896]: I0126 16:04:42.330832 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="68bd2b73-99a8-427d-a4bf-2648d7580be8" containerName="kube-state-metrics" containerID="cri-o://86a55a27c1373ebace03ff2bab44b3b29b579e2f54a58f9bfb8d2f013fdbff97" gracePeriod=30
Jan 26 16:04:42 crc kubenswrapper[4896]: I0126 16:04:42.599195 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"]
Jan 26 16:04:42 crc kubenswrapper[4896]: I0126 16:04:42.599877 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="e8f4800c-3439-4de1-a142-90b4d84d4c03" containerName="mysqld-exporter" containerID="cri-o://7a59041a3d1ac57fddb5ad09ac75ab5363b5c968e5b1732f18f346000e26e57d" gracePeriod=30
Jan 26 16:04:42 crc kubenswrapper[4896]: I0126 16:04:42.653170 4896 generic.go:334] "Generic (PLEG): container finished" podID="68bd2b73-99a8-427d-a4bf-2648d7580be8" containerID="86a55a27c1373ebace03ff2bab44b3b29b579e2f54a58f9bfb8d2f013fdbff97" exitCode=2
Jan 26 16:04:42 crc kubenswrapper[4896]: I0126 16:04:42.654408 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"68bd2b73-99a8-427d-a4bf-2648d7580be8","Type":"ContainerDied","Data":"86a55a27c1373ebace03ff2bab44b3b29b579e2f54a58f9bfb8d2f013fdbff97"}
Jan 26 16:04:42 crc kubenswrapper[4896]: I0126 16:04:42.752442 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 26 16:04:42 crc kubenswrapper[4896]: I0126 16:04:42.753139 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 26 16:04:42 crc kubenswrapper[4896]: I0126 16:04:42.757116 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 26 16:04:42 crc kubenswrapper[4896]: I0126 16:04:42.817914 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 26 16:04:42 crc kubenswrapper[4896]: I0126 16:04:42.889138 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 26 16:04:42 crc kubenswrapper[4896]: I0126 16:04:42.986564 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4qcl\" (UniqueName: \"kubernetes.io/projected/68bd2b73-99a8-427d-a4bf-2648d7580be8-kube-api-access-l4qcl\") pod \"68bd2b73-99a8-427d-a4bf-2648d7580be8\" (UID: \"68bd2b73-99a8-427d-a4bf-2648d7580be8\") "
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.001479 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68bd2b73-99a8-427d-a4bf-2648d7580be8-kube-api-access-l4qcl" (OuterVolumeSpecName: "kube-api-access-l4qcl") pod "68bd2b73-99a8-427d-a4bf-2648d7580be8" (UID: "68bd2b73-99a8-427d-a4bf-2648d7580be8"). InnerVolumeSpecName "kube-api-access-l4qcl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.089719 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4qcl\" (UniqueName: \"kubernetes.io/projected/68bd2b73-99a8-427d-a4bf-2648d7580be8-kube-api-access-l4qcl\") on node \"crc\" DevicePath \"\""
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.331047 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.396008 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdjlw\" (UniqueName: \"kubernetes.io/projected/e8f4800c-3439-4de1-a142-90b4d84d4c03-kube-api-access-pdjlw\") pod \"e8f4800c-3439-4de1-a142-90b4d84d4c03\" (UID: \"e8f4800c-3439-4de1-a142-90b4d84d4c03\") "
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.396073 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f4800c-3439-4de1-a142-90b4d84d4c03-combined-ca-bundle\") pod \"e8f4800c-3439-4de1-a142-90b4d84d4c03\" (UID: \"e8f4800c-3439-4de1-a142-90b4d84d4c03\") "
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.396159 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8f4800c-3439-4de1-a142-90b4d84d4c03-config-data\") pod \"e8f4800c-3439-4de1-a142-90b4d84d4c03\" (UID: \"e8f4800c-3439-4de1-a142-90b4d84d4c03\") "
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.400742 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8f4800c-3439-4de1-a142-90b4d84d4c03-kube-api-access-pdjlw" (OuterVolumeSpecName: "kube-api-access-pdjlw") pod "e8f4800c-3439-4de1-a142-90b4d84d4c03" (UID: "e8f4800c-3439-4de1-a142-90b4d84d4c03"). InnerVolumeSpecName "kube-api-access-pdjlw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.455329 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8f4800c-3439-4de1-a142-90b4d84d4c03-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e8f4800c-3439-4de1-a142-90b4d84d4c03" (UID: "e8f4800c-3439-4de1-a142-90b4d84d4c03"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.491160 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8f4800c-3439-4de1-a142-90b4d84d4c03-config-data" (OuterVolumeSpecName: "config-data") pod "e8f4800c-3439-4de1-a142-90b4d84d4c03" (UID: "e8f4800c-3439-4de1-a142-90b4d84d4c03"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.500285 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdjlw\" (UniqueName: \"kubernetes.io/projected/e8f4800c-3439-4de1-a142-90b4d84d4c03-kube-api-access-pdjlw\") on node \"crc\" DevicePath \"\""
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.500312 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f4800c-3439-4de1-a142-90b4d84d4c03-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.500322 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8f4800c-3439-4de1-a142-90b4d84d4c03-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.667490 4896 generic.go:334] "Generic (PLEG): container finished" podID="e8f4800c-3439-4de1-a142-90b4d84d4c03" containerID="7a59041a3d1ac57fddb5ad09ac75ab5363b5c968e5b1732f18f346000e26e57d" exitCode=2
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.667829 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"e8f4800c-3439-4de1-a142-90b4d84d4c03","Type":"ContainerDied","Data":"7a59041a3d1ac57fddb5ad09ac75ab5363b5c968e5b1732f18f346000e26e57d"}
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.667860 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"e8f4800c-3439-4de1-a142-90b4d84d4c03","Type":"ContainerDied","Data":"5485815550eea6a6e9c366fd067d7192755dae5be87cc6af1854508be424e270"}
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.667877 4896 scope.go:117] "RemoveContainer" containerID="7a59041a3d1ac57fddb5ad09ac75ab5363b5c968e5b1732f18f346000e26e57d"
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.668041 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.677756 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.680081 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"68bd2b73-99a8-427d-a4bf-2648d7580be8","Type":"ContainerDied","Data":"31d5b0ab67b035817394ac73de0ba888304bc8aea87a948947e65d6916753b12"}
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.680165 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.708164 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.720099 4896 scope.go:117] "RemoveContainer" containerID="7a59041a3d1ac57fddb5ad09ac75ab5363b5c968e5b1732f18f346000e26e57d"
Jan 26 16:04:43 crc kubenswrapper[4896]: E0126 16:04:43.722706 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a59041a3d1ac57fddb5ad09ac75ab5363b5c968e5b1732f18f346000e26e57d\": container with ID starting with 7a59041a3d1ac57fddb5ad09ac75ab5363b5c968e5b1732f18f346000e26e57d not found: ID does not exist" containerID="7a59041a3d1ac57fddb5ad09ac75ab5363b5c968e5b1732f18f346000e26e57d"
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.722890 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a59041a3d1ac57fddb5ad09ac75ab5363b5c968e5b1732f18f346000e26e57d"} err="failed to get container status \"7a59041a3d1ac57fddb5ad09ac75ab5363b5c968e5b1732f18f346000e26e57d\": rpc error: code = NotFound desc = could not find container \"7a59041a3d1ac57fddb5ad09ac75ab5363b5c968e5b1732f18f346000e26e57d\": container with ID starting with 7a59041a3d1ac57fddb5ad09ac75ab5363b5c968e5b1732f18f346000e26e57d not found: ID does not exist"
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.723003 4896 scope.go:117] "RemoveContainer" containerID="86a55a27c1373ebace03ff2bab44b3b29b579e2f54a58f9bfb8d2f013fdbff97"
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.733842 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"]
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.744510 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"]
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.758072 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"]
Jan 26 16:04:43 crc kubenswrapper[4896]: E0126 16:04:43.758975 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68bd2b73-99a8-427d-a4bf-2648d7580be8" containerName="kube-state-metrics"
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.759098 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="68bd2b73-99a8-427d-a4bf-2648d7580be8" containerName="kube-state-metrics"
Jan 26 16:04:43 crc kubenswrapper[4896]: E0126 16:04:43.759223 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8f4800c-3439-4de1-a142-90b4d84d4c03" containerName="mysqld-exporter"
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.759304 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8f4800c-3439-4de1-a142-90b4d84d4c03" containerName="mysqld-exporter"
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.759723 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8f4800c-3439-4de1-a142-90b4d84d4c03" containerName="mysqld-exporter"
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.759848 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="68bd2b73-99a8-427d-a4bf-2648d7580be8" containerName="kube-state-metrics"
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.761285 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.764854 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data"
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.765064 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc"
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.793239 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"]
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.816901 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.827885 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.831913 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/42644605-6128-40f7-9fd7-84741e8b0ea9-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"42644605-6128-40f7-9fd7-84741e8b0ea9\") " pod="openstack/mysqld-exporter-0"
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.832153 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm7fx\" (UniqueName: \"kubernetes.io/projected/42644605-6128-40f7-9fd7-84741e8b0ea9-kube-api-access-gm7fx\") pod \"mysqld-exporter-0\" (UID: \"42644605-6128-40f7-9fd7-84741e8b0ea9\") " pod="openstack/mysqld-exporter-0"
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.832393 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42644605-6128-40f7-9fd7-84741e8b0ea9-config-data\") pod \"mysqld-exporter-0\" (UID: \"42644605-6128-40f7-9fd7-84741e8b0ea9\") " pod="openstack/mysqld-exporter-0"
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.832527 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42644605-6128-40f7-9fd7-84741e8b0ea9-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"42644605-6128-40f7-9fd7-84741e8b0ea9\") " pod="openstack/mysqld-exporter-0"
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.857659 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.859434 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.862226 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.862469 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.878001 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.936251 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm7fx\" (UniqueName: \"kubernetes.io/projected/42644605-6128-40f7-9fd7-84741e8b0ea9-kube-api-access-gm7fx\") pod \"mysqld-exporter-0\" (UID: \"42644605-6128-40f7-9fd7-84741e8b0ea9\") " pod="openstack/mysqld-exporter-0"
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.936713 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb645182-c3d8-4201-ab95-a2f26151c99f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"eb645182-c3d8-4201-ab95-a2f26151c99f\") " pod="openstack/kube-state-metrics-0"
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.936749 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjdsc\" (UniqueName: \"kubernetes.io/projected/eb645182-c3d8-4201-ab95-a2f26151c99f-kube-api-access-kjdsc\") pod \"kube-state-metrics-0\" (UID: \"eb645182-c3d8-4201-ab95-a2f26151c99f\") " pod="openstack/kube-state-metrics-0"
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.936826 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42644605-6128-40f7-9fd7-84741e8b0ea9-config-data\") pod \"mysqld-exporter-0\" (UID: \"42644605-6128-40f7-9fd7-84741e8b0ea9\") " pod="openstack/mysqld-exporter-0"
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.936873 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb645182-c3d8-4201-ab95-a2f26151c99f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"eb645182-c3d8-4201-ab95-a2f26151c99f\") " pod="openstack/kube-state-metrics-0"
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.936917 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42644605-6128-40f7-9fd7-84741e8b0ea9-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"42644605-6128-40f7-9fd7-84741e8b0ea9\") " pod="openstack/mysqld-exporter-0"
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.936970 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/eb645182-c3d8-4201-ab95-a2f26151c99f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"eb645182-c3d8-4201-ab95-a2f26151c99f\") " pod="openstack/kube-state-metrics-0"
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.937845 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/42644605-6128-40f7-9fd7-84741e8b0ea9-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"42644605-6128-40f7-9fd7-84741e8b0ea9\") " pod="openstack/mysqld-exporter-0"
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.942146 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/42644605-6128-40f7-9fd7-84741e8b0ea9-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"42644605-6128-40f7-9fd7-84741e8b0ea9\") " pod="openstack/mysqld-exporter-0"
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.942175 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42644605-6128-40f7-9fd7-84741e8b0ea9-config-data\") pod \"mysqld-exporter-0\" (UID: \"42644605-6128-40f7-9fd7-84741e8b0ea9\") " pod="openstack/mysqld-exporter-0"
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.943363 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42644605-6128-40f7-9fd7-84741e8b0ea9-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"42644605-6128-40f7-9fd7-84741e8b0ea9\") " pod="openstack/mysqld-exporter-0"
Jan 26 16:04:43 crc kubenswrapper[4896]: I0126 16:04:43.957906 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm7fx\" (UniqueName: \"kubernetes.io/projected/42644605-6128-40f7-9fd7-84741e8b0ea9-kube-api-access-gm7fx\") pod \"mysqld-exporter-0\" (UID: \"42644605-6128-40f7-9fd7-84741e8b0ea9\") " pod="openstack/mysqld-exporter-0"
Jan 26 16:04:44 crc kubenswrapper[4896]: I0126 16:04:44.044255 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/eb645182-c3d8-4201-ab95-a2f26151c99f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"eb645182-c3d8-4201-ab95-a2f26151c99f\") " pod="openstack/kube-state-metrics-0"
Jan 26 16:04:44 crc kubenswrapper[4896]: I0126 16:04:44.044452 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb645182-c3d8-4201-ab95-a2f26151c99f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"eb645182-c3d8-4201-ab95-a2f26151c99f\") " pod="openstack/kube-state-metrics-0"
Jan 26 16:04:44 crc kubenswrapper[4896]: I0126 16:04:44.044474 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjdsc\" (UniqueName: \"kubernetes.io/projected/eb645182-c3d8-4201-ab95-a2f26151c99f-kube-api-access-kjdsc\") pod \"kube-state-metrics-0\" (UID: \"eb645182-c3d8-4201-ab95-a2f26151c99f\") " pod="openstack/kube-state-metrics-0"
Jan 26 16:04:44 crc kubenswrapper[4896]: I0126 16:04:44.044559 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb645182-c3d8-4201-ab95-a2f26151c99f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"eb645182-c3d8-4201-ab95-a2f26151c99f\") " pod="openstack/kube-state-metrics-0"
Jan 26 16:04:44 crc kubenswrapper[4896]: I0126 16:04:44.049678 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb645182-c3d8-4201-ab95-a2f26151c99f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"eb645182-c3d8-4201-ab95-a2f26151c99f\") " pod="openstack/kube-state-metrics-0"
Jan 26 16:04:44 crc kubenswrapper[4896]: I0126 16:04:44.050308 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/eb645182-c3d8-4201-ab95-a2f26151c99f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"eb645182-c3d8-4201-ab95-a2f26151c99f\") " pod="openstack/kube-state-metrics-0"
Jan 26 16:04:44 crc kubenswrapper[4896]: I0126 16:04:44.050814 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb645182-c3d8-4201-ab95-a2f26151c99f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"eb645182-c3d8-4201-ab95-a2f26151c99f\") " pod="openstack/kube-state-metrics-0"
Jan 26 16:04:44 crc kubenswrapper[4896]: I0126 16:04:44.070457 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjdsc\" (UniqueName: \"kubernetes.io/projected/eb645182-c3d8-4201-ab95-a2f26151c99f-kube-api-access-kjdsc\") pod \"kube-state-metrics-0\" (UID: \"eb645182-c3d8-4201-ab95-a2f26151c99f\") " pod="openstack/kube-state-metrics-0"
Jan 26 16:04:44 crc kubenswrapper[4896]: I0126 16:04:44.090359 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Jan 26 16:04:44 crc kubenswrapper[4896]: I0126 16:04:44.190423 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 26 16:04:44 crc kubenswrapper[4896]: I0126 16:04:44.355716 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9nwzx"]
Jan 26 16:04:44 crc kubenswrapper[4896]: I0126 16:04:44.363770 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9nwzx"
Jan 26 16:04:44 crc kubenswrapper[4896]: I0126 16:04:44.387904 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9nwzx"]
Jan 26 16:04:44 crc kubenswrapper[4896]: I0126 16:04:44.460323 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b-catalog-content\") pod \"community-operators-9nwzx\" (UID: \"a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b\") " pod="openshift-marketplace/community-operators-9nwzx"
Jan 26 16:04:44 crc kubenswrapper[4896]: I0126 16:04:44.460426 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b-utilities\") pod \"community-operators-9nwzx\" (UID: \"a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b\") " pod="openshift-marketplace/community-operators-9nwzx"
Jan 26 16:04:44 crc kubenswrapper[4896]: I0126 16:04:44.460573 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnjbd\" (UniqueName: \"kubernetes.io/projected/a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b-kube-api-access-pnjbd\") pod \"community-operators-9nwzx\" (UID: \"a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b\") " pod="openshift-marketplace/community-operators-9nwzx"
Jan 26 16:04:44 crc kubenswrapper[4896]: I0126 16:04:44.565279 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnjbd\" (UniqueName: \"kubernetes.io/projected/a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b-kube-api-access-pnjbd\") pod \"community-operators-9nwzx\" (UID: \"a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b\") " pod="openshift-marketplace/community-operators-9nwzx"
Jan 26 16:04:44 crc kubenswrapper[4896]: I0126 16:04:44.565395 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b-catalog-content\") pod \"community-operators-9nwzx\" (UID: \"a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b\") " pod="openshift-marketplace/community-operators-9nwzx"
Jan 26 16:04:44 crc kubenswrapper[4896]: I0126 16:04:44.565448 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b-utilities\") pod \"community-operators-9nwzx\" (UID: \"a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b\") " pod="openshift-marketplace/community-operators-9nwzx"
Jan 26 16:04:44 crc kubenswrapper[4896]: I0126 16:04:44.565968 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b-utilities\") pod \"community-operators-9nwzx\" (UID: \"a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b\") " pod="openshift-marketplace/community-operators-9nwzx"
Jan 26 16:04:44 crc kubenswrapper[4896]: I0126 16:04:44.566185 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b-catalog-content\") pod \"community-operators-9nwzx\" (UID: \"a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b\") " pod="openshift-marketplace/community-operators-9nwzx"
Jan 26 16:04:44 crc kubenswrapper[4896]: I0126 16:04:44.635496 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnjbd\" (UniqueName: \"kubernetes.io/projected/a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b-kube-api-access-pnjbd\") pod \"community-operators-9nwzx\" (UID: \"a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b\") " pod="openshift-marketplace/community-operators-9nwzx"
Jan 26 16:04:44 crc kubenswrapper[4896]: I0126 16:04:44.710334 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"]
Jan 26 16:04:44 crc kubenswrapper[4896]: I0126 16:04:44.716621 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9nwzx"
Jan 26 16:04:44 crc kubenswrapper[4896]: I0126 16:04:44.727677 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"42644605-6128-40f7-9fd7-84741e8b0ea9","Type":"ContainerStarted","Data":"241b3e675b2bf5d231c248ad9fe827d886905e907b86f6299023730d006dec33"}
Jan 26 16:04:44 crc kubenswrapper[4896]: I0126 16:04:44.800542 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68bd2b73-99a8-427d-a4bf-2648d7580be8" path="/var/lib/kubelet/pods/68bd2b73-99a8-427d-a4bf-2648d7580be8/volumes"
Jan 26 16:04:44 crc kubenswrapper[4896]: I0126 16:04:44.818925 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8f4800c-3439-4de1-a142-90b4d84d4c03" path="/var/lib/kubelet/pods/e8f4800c-3439-4de1-a142-90b4d84d4c03/volumes"
Jan 26 16:04:45 crc kubenswrapper[4896]: I0126 16:04:45.011595 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 26 16:04:45 crc kubenswrapper[4896]: I0126 16:04:45.350703 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9nwzx"]
Jan 26 16:04:45 crc kubenswrapper[4896]: W0126 16:04:45.363344 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda78ca5ca_6359_4e2f_ae2e_f71ca31bea8b.slice/crio-df2e3516857ca8f4f8f0a8dc4aa9b7736fa7198f2026e1260029fc42a4355cdb WatchSource:0}: Error finding container df2e3516857ca8f4f8f0a8dc4aa9b7736fa7198f2026e1260029fc42a4355cdb: Status 404 returned error can't find the container with id df2e3516857ca8f4f8f0a8dc4aa9b7736fa7198f2026e1260029fc42a4355cdb
Jan 26 16:04:45 crc kubenswrapper[4896]: I0126 16:04:45.649305 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 16:04:45 crc kubenswrapper[4896]: I0126 16:04:45.649654 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4d33ecd2-b8e9-4054-bad6-3607aef406b1" containerName="ceilometer-central-agent" containerID="cri-o://e6d6241cf230ae3e238099fde99bdc549dd0a0d3e5c1361c92d2bfb3b8c6024d" gracePeriod=30
Jan 26 16:04:45 crc kubenswrapper[4896]: I0126 16:04:45.649804 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4d33ecd2-b8e9-4054-bad6-3607aef406b1" containerName="proxy-httpd" containerID="cri-o://c9aae71af0b4670247947b3bc588ded394e8ec41e439eeff32139a2007ff867f" gracePeriod=30
Jan 26 16:04:45 crc kubenswrapper[4896]: I0126 16:04:45.649857 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4d33ecd2-b8e9-4054-bad6-3607aef406b1" containerName="sg-core" containerID="cri-o://0ffcdc89a5f31d76f116cc1a8f49131dae23c500c0776d0dfe5bf1877964c3ef" gracePeriod=30
Jan 26 16:04:45 crc kubenswrapper[4896]: I0126 16:04:45.649903 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4d33ecd2-b8e9-4054-bad6-3607aef406b1" containerName="ceilometer-notification-agent" containerID="cri-o://985994cba9955bcdf72154f8bae1420db541d5fbbfb0cc192d8b9a2981b3d9d5" gracePeriod=30
Jan 26 16:04:45 crc kubenswrapper[4896]: I0126 16:04:45.759708 4896 scope.go:117] "RemoveContainer" containerID="eef508224f0cdcfb0579b0234e72c3c5503ce5cf1713a9bee24c9feccf4983cb"
Jan 26 16:04:45 crc kubenswrapper[4896]: E0126 16:04:45.760326 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 16:04:45 crc kubenswrapper[4896]: I0126 16:04:45.778922 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"eb645182-c3d8-4201-ab95-a2f26151c99f","Type":"ContainerStarted","Data":"526058f56199a6a4e152a98a5f64bb65dbd39a518c2c55b0e74a1c2bbb223ec2"}
Jan 26 16:04:45 crc kubenswrapper[4896]: I0126 16:04:45.780648 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9nwzx" event={"ID":"a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b","Type":"ContainerStarted","Data":"df2e3516857ca8f4f8f0a8dc4aa9b7736fa7198f2026e1260029fc42a4355cdb"}
Jan 26 16:04:46 crc kubenswrapper[4896]: I0126 16:04:46.794533 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"eb645182-c3d8-4201-ab95-a2f26151c99f","Type":"ContainerStarted","Data":"a817ca4c5b64f3a8f452c06e9c2c3ae83c2e61b7af8ccbccd80007b746ac049d"}
Jan 26 16:04:46 crc kubenswrapper[4896]: I0126 16:04:46.794883 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 26 16:04:46 crc kubenswrapper[4896]: I0126 16:04:46.799693 4896 generic.go:334] "Generic (PLEG): container finished" podID="4d33ecd2-b8e9-4054-bad6-3607aef406b1" containerID="c9aae71af0b4670247947b3bc588ded394e8ec41e439eeff32139a2007ff867f" exitCode=0
Jan 26 16:04:46 crc kubenswrapper[4896]: I0126 16:04:46.799722 4896 generic.go:334] "Generic (PLEG): container finished" podID="4d33ecd2-b8e9-4054-bad6-3607aef406b1" containerID="0ffcdc89a5f31d76f116cc1a8f49131dae23c500c0776d0dfe5bf1877964c3ef" exitCode=2
Jan 26 16:04:46 crc kubenswrapper[4896]: I0126 16:04:46.799730 4896 generic.go:334] "Generic (PLEG): container finished"
podID="4d33ecd2-b8e9-4054-bad6-3607aef406b1" containerID="e6d6241cf230ae3e238099fde99bdc549dd0a0d3e5c1361c92d2bfb3b8c6024d" exitCode=0 Jan 26 16:04:46 crc kubenswrapper[4896]: I0126 16:04:46.799785 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d33ecd2-b8e9-4054-bad6-3607aef406b1","Type":"ContainerDied","Data":"c9aae71af0b4670247947b3bc588ded394e8ec41e439eeff32139a2007ff867f"} Jan 26 16:04:46 crc kubenswrapper[4896]: I0126 16:04:46.799839 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d33ecd2-b8e9-4054-bad6-3607aef406b1","Type":"ContainerDied","Data":"0ffcdc89a5f31d76f116cc1a8f49131dae23c500c0776d0dfe5bf1877964c3ef"} Jan 26 16:04:46 crc kubenswrapper[4896]: I0126 16:04:46.799853 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d33ecd2-b8e9-4054-bad6-3607aef406b1","Type":"ContainerDied","Data":"e6d6241cf230ae3e238099fde99bdc549dd0a0d3e5c1361c92d2bfb3b8c6024d"} Jan 26 16:04:46 crc kubenswrapper[4896]: I0126 16:04:46.802347 4896 generic.go:334] "Generic (PLEG): container finished" podID="a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b" containerID="8718d8b9f09c39ff219bdec6300d92cc066237bf71b0f16a4c960e18c07f0bf9" exitCode=0 Jan 26 16:04:46 crc kubenswrapper[4896]: I0126 16:04:46.802396 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9nwzx" event={"ID":"a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b","Type":"ContainerDied","Data":"8718d8b9f09c39ff219bdec6300d92cc066237bf71b0f16a4c960e18c07f0bf9"} Jan 26 16:04:46 crc kubenswrapper[4896]: I0126 16:04:46.804633 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"42644605-6128-40f7-9fd7-84741e8b0ea9","Type":"ContainerStarted","Data":"58033dfe423c5c6caa4acb2f70db2f35b846b70d5bc41384068b16afdf57aad7"} Jan 26 16:04:46 crc kubenswrapper[4896]: I0126 16:04:46.820201 4896 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.443654619 podStartE2EDuration="3.820180163s" podCreationTimestamp="2026-01-26 16:04:43 +0000 UTC" firstStartedPulling="2026-01-26 16:04:45.013449118 +0000 UTC m=+1842.795329511" lastFinishedPulling="2026-01-26 16:04:45.389974662 +0000 UTC m=+1843.171855055" observedRunningTime="2026-01-26 16:04:46.815492185 +0000 UTC m=+1844.597372608" watchObservedRunningTime="2026-01-26 16:04:46.820180163 +0000 UTC m=+1844.602060556" Jan 26 16:04:46 crc kubenswrapper[4896]: I0126 16:04:46.851006 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=3.066720095 podStartE2EDuration="3.850979064s" podCreationTimestamp="2026-01-26 16:04:43 +0000 UTC" firstStartedPulling="2026-01-26 16:04:44.696177985 +0000 UTC m=+1842.478058368" lastFinishedPulling="2026-01-26 16:04:45.480436944 +0000 UTC m=+1843.262317337" observedRunningTime="2026-01-26 16:04:46.833394128 +0000 UTC m=+1844.615274521" watchObservedRunningTime="2026-01-26 16:04:46.850979064 +0000 UTC m=+1844.632859467" Jan 26 16:04:47 crc kubenswrapper[4896]: I0126 16:04:47.734189 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="68bd2b73-99a8-427d-a4bf-2648d7580be8" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.0.135:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 16:04:47 crc kubenswrapper[4896]: I0126 16:04:47.817664 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9nwzx" event={"ID":"a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b","Type":"ContainerStarted","Data":"d651a9fea7b99bd4760aa1520577f7945c9937f9854e2e3e6514a2f71d396bde"} Jan 26 16:04:48 crc kubenswrapper[4896]: I0126 16:04:48.781044 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:04:48 crc kubenswrapper[4896]: I0126 16:04:48.834410 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:04:48 crc kubenswrapper[4896]: I0126 16:04:48.834470 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d33ecd2-b8e9-4054-bad6-3607aef406b1","Type":"ContainerDied","Data":"985994cba9955bcdf72154f8bae1420db541d5fbbfb0cc192d8b9a2981b3d9d5"} Jan 26 16:04:48 crc kubenswrapper[4896]: I0126 16:04:48.834598 4896 scope.go:117] "RemoveContainer" containerID="c9aae71af0b4670247947b3bc588ded394e8ec41e439eeff32139a2007ff867f" Jan 26 16:04:48 crc kubenswrapper[4896]: I0126 16:04:48.834294 4896 generic.go:334] "Generic (PLEG): container finished" podID="4d33ecd2-b8e9-4054-bad6-3607aef406b1" containerID="985994cba9955bcdf72154f8bae1420db541d5fbbfb0cc192d8b9a2981b3d9d5" exitCode=0 Jan 26 16:04:48 crc kubenswrapper[4896]: I0126 16:04:48.835357 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d33ecd2-b8e9-4054-bad6-3607aef406b1","Type":"ContainerDied","Data":"ddf5954ba77bc75fed11ed3e917241a05d91db0e031e44dabb73aab15f6c8938"} Jan 26 16:04:48 crc kubenswrapper[4896]: I0126 16:04:48.917365 4896 scope.go:117] "RemoveContainer" containerID="0ffcdc89a5f31d76f116cc1a8f49131dae23c500c0776d0dfe5bf1877964c3ef" Jan 26 16:04:48 crc kubenswrapper[4896]: I0126 16:04:48.927765 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d33ecd2-b8e9-4054-bad6-3607aef406b1-run-httpd\") pod \"4d33ecd2-b8e9-4054-bad6-3607aef406b1\" (UID: \"4d33ecd2-b8e9-4054-bad6-3607aef406b1\") " Jan 26 16:04:48 crc kubenswrapper[4896]: I0126 16:04:48.927832 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4d33ecd2-b8e9-4054-bad6-3607aef406b1-combined-ca-bundle\") pod \"4d33ecd2-b8e9-4054-bad6-3607aef406b1\" (UID: \"4d33ecd2-b8e9-4054-bad6-3607aef406b1\") " Jan 26 16:04:48 crc kubenswrapper[4896]: I0126 16:04:48.927885 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d33ecd2-b8e9-4054-bad6-3607aef406b1-config-data\") pod \"4d33ecd2-b8e9-4054-bad6-3607aef406b1\" (UID: \"4d33ecd2-b8e9-4054-bad6-3607aef406b1\") " Jan 26 16:04:48 crc kubenswrapper[4896]: I0126 16:04:48.928015 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d33ecd2-b8e9-4054-bad6-3607aef406b1-log-httpd\") pod \"4d33ecd2-b8e9-4054-bad6-3607aef406b1\" (UID: \"4d33ecd2-b8e9-4054-bad6-3607aef406b1\") " Jan 26 16:04:48 crc kubenswrapper[4896]: I0126 16:04:48.928097 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d33ecd2-b8e9-4054-bad6-3607aef406b1-scripts\") pod \"4d33ecd2-b8e9-4054-bad6-3607aef406b1\" (UID: \"4d33ecd2-b8e9-4054-bad6-3607aef406b1\") " Jan 26 16:04:48 crc kubenswrapper[4896]: I0126 16:04:48.928355 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d33ecd2-b8e9-4054-bad6-3607aef406b1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4d33ecd2-b8e9-4054-bad6-3607aef406b1" (UID: "4d33ecd2-b8e9-4054-bad6-3607aef406b1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:04:48 crc kubenswrapper[4896]: I0126 16:04:48.928630 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d33ecd2-b8e9-4054-bad6-3607aef406b1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4d33ecd2-b8e9-4054-bad6-3607aef406b1" (UID: "4d33ecd2-b8e9-4054-bad6-3607aef406b1"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:04:48 crc kubenswrapper[4896]: I0126 16:04:48.928709 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4d33ecd2-b8e9-4054-bad6-3607aef406b1-sg-core-conf-yaml\") pod \"4d33ecd2-b8e9-4054-bad6-3607aef406b1\" (UID: \"4d33ecd2-b8e9-4054-bad6-3607aef406b1\") " Jan 26 16:04:48 crc kubenswrapper[4896]: I0126 16:04:48.928826 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6p96\" (UniqueName: \"kubernetes.io/projected/4d33ecd2-b8e9-4054-bad6-3607aef406b1-kube-api-access-d6p96\") pod \"4d33ecd2-b8e9-4054-bad6-3607aef406b1\" (UID: \"4d33ecd2-b8e9-4054-bad6-3607aef406b1\") " Jan 26 16:04:48 crc kubenswrapper[4896]: I0126 16:04:48.930562 4896 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d33ecd2-b8e9-4054-bad6-3607aef406b1-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 16:04:48 crc kubenswrapper[4896]: I0126 16:04:48.930724 4896 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d33ecd2-b8e9-4054-bad6-3607aef406b1-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 16:04:48 crc kubenswrapper[4896]: I0126 16:04:48.936254 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d33ecd2-b8e9-4054-bad6-3607aef406b1-kube-api-access-d6p96" (OuterVolumeSpecName: "kube-api-access-d6p96") pod "4d33ecd2-b8e9-4054-bad6-3607aef406b1" (UID: "4d33ecd2-b8e9-4054-bad6-3607aef406b1"). InnerVolumeSpecName "kube-api-access-d6p96". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:04:48 crc kubenswrapper[4896]: I0126 16:04:48.936824 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d33ecd2-b8e9-4054-bad6-3607aef406b1-scripts" (OuterVolumeSpecName: "scripts") pod "4d33ecd2-b8e9-4054-bad6-3607aef406b1" (UID: "4d33ecd2-b8e9-4054-bad6-3607aef406b1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:04:48 crc kubenswrapper[4896]: I0126 16:04:48.962285 4896 scope.go:117] "RemoveContainer" containerID="985994cba9955bcdf72154f8bae1420db541d5fbbfb0cc192d8b9a2981b3d9d5" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.076960 4896 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d33ecd2-b8e9-4054-bad6-3607aef406b1-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.077047 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6p96\" (UniqueName: \"kubernetes.io/projected/4d33ecd2-b8e9-4054-bad6-3607aef406b1-kube-api-access-d6p96\") on node \"crc\" DevicePath \"\"" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.096955 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d33ecd2-b8e9-4054-bad6-3607aef406b1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4d33ecd2-b8e9-4054-bad6-3607aef406b1" (UID: "4d33ecd2-b8e9-4054-bad6-3607aef406b1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.097156 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d33ecd2-b8e9-4054-bad6-3607aef406b1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d33ecd2-b8e9-4054-bad6-3607aef406b1" (UID: "4d33ecd2-b8e9-4054-bad6-3607aef406b1"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.179487 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d33ecd2-b8e9-4054-bad6-3607aef406b1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.179527 4896 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4d33ecd2-b8e9-4054-bad6-3607aef406b1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.195386 4896 scope.go:117] "RemoveContainer" containerID="e6d6241cf230ae3e238099fde99bdc549dd0a0d3e5c1361c92d2bfb3b8c6024d" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.219972 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d33ecd2-b8e9-4054-bad6-3607aef406b1-config-data" (OuterVolumeSpecName: "config-data") pod "4d33ecd2-b8e9-4054-bad6-3607aef406b1" (UID: "4d33ecd2-b8e9-4054-bad6-3607aef406b1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.256726 4896 scope.go:117] "RemoveContainer" containerID="c9aae71af0b4670247947b3bc588ded394e8ec41e439eeff32139a2007ff867f" Jan 26 16:04:49 crc kubenswrapper[4896]: E0126 16:04:49.258001 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9aae71af0b4670247947b3bc588ded394e8ec41e439eeff32139a2007ff867f\": container with ID starting with c9aae71af0b4670247947b3bc588ded394e8ec41e439eeff32139a2007ff867f not found: ID does not exist" containerID="c9aae71af0b4670247947b3bc588ded394e8ec41e439eeff32139a2007ff867f" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.258335 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9aae71af0b4670247947b3bc588ded394e8ec41e439eeff32139a2007ff867f"} err="failed to get container status \"c9aae71af0b4670247947b3bc588ded394e8ec41e439eeff32139a2007ff867f\": rpc error: code = NotFound desc = could not find container \"c9aae71af0b4670247947b3bc588ded394e8ec41e439eeff32139a2007ff867f\": container with ID starting with c9aae71af0b4670247947b3bc588ded394e8ec41e439eeff32139a2007ff867f not found: ID does not exist" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.258392 4896 scope.go:117] "RemoveContainer" containerID="0ffcdc89a5f31d76f116cc1a8f49131dae23c500c0776d0dfe5bf1877964c3ef" Jan 26 16:04:49 crc kubenswrapper[4896]: E0126 16:04:49.259137 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ffcdc89a5f31d76f116cc1a8f49131dae23c500c0776d0dfe5bf1877964c3ef\": container with ID starting with 0ffcdc89a5f31d76f116cc1a8f49131dae23c500c0776d0dfe5bf1877964c3ef not found: ID does not exist" containerID="0ffcdc89a5f31d76f116cc1a8f49131dae23c500c0776d0dfe5bf1877964c3ef" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.259191 
4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ffcdc89a5f31d76f116cc1a8f49131dae23c500c0776d0dfe5bf1877964c3ef"} err="failed to get container status \"0ffcdc89a5f31d76f116cc1a8f49131dae23c500c0776d0dfe5bf1877964c3ef\": rpc error: code = NotFound desc = could not find container \"0ffcdc89a5f31d76f116cc1a8f49131dae23c500c0776d0dfe5bf1877964c3ef\": container with ID starting with 0ffcdc89a5f31d76f116cc1a8f49131dae23c500c0776d0dfe5bf1877964c3ef not found: ID does not exist" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.259228 4896 scope.go:117] "RemoveContainer" containerID="985994cba9955bcdf72154f8bae1420db541d5fbbfb0cc192d8b9a2981b3d9d5" Jan 26 16:04:49 crc kubenswrapper[4896]: E0126 16:04:49.259608 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"985994cba9955bcdf72154f8bae1420db541d5fbbfb0cc192d8b9a2981b3d9d5\": container with ID starting with 985994cba9955bcdf72154f8bae1420db541d5fbbfb0cc192d8b9a2981b3d9d5 not found: ID does not exist" containerID="985994cba9955bcdf72154f8bae1420db541d5fbbfb0cc192d8b9a2981b3d9d5" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.259659 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"985994cba9955bcdf72154f8bae1420db541d5fbbfb0cc192d8b9a2981b3d9d5"} err="failed to get container status \"985994cba9955bcdf72154f8bae1420db541d5fbbfb0cc192d8b9a2981b3d9d5\": rpc error: code = NotFound desc = could not find container \"985994cba9955bcdf72154f8bae1420db541d5fbbfb0cc192d8b9a2981b3d9d5\": container with ID starting with 985994cba9955bcdf72154f8bae1420db541d5fbbfb0cc192d8b9a2981b3d9d5 not found: ID does not exist" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.259699 4896 scope.go:117] "RemoveContainer" containerID="e6d6241cf230ae3e238099fde99bdc549dd0a0d3e5c1361c92d2bfb3b8c6024d" Jan 26 16:04:49 crc kubenswrapper[4896]: E0126 
16:04:49.260667 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6d6241cf230ae3e238099fde99bdc549dd0a0d3e5c1361c92d2bfb3b8c6024d\": container with ID starting with e6d6241cf230ae3e238099fde99bdc549dd0a0d3e5c1361c92d2bfb3b8c6024d not found: ID does not exist" containerID="e6d6241cf230ae3e238099fde99bdc549dd0a0d3e5c1361c92d2bfb3b8c6024d" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.260713 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6d6241cf230ae3e238099fde99bdc549dd0a0d3e5c1361c92d2bfb3b8c6024d"} err="failed to get container status \"e6d6241cf230ae3e238099fde99bdc549dd0a0d3e5c1361c92d2bfb3b8c6024d\": rpc error: code = NotFound desc = could not find container \"e6d6241cf230ae3e238099fde99bdc549dd0a0d3e5c1361c92d2bfb3b8c6024d\": container with ID starting with e6d6241cf230ae3e238099fde99bdc549dd0a0d3e5c1361c92d2bfb3b8c6024d not found: ID does not exist" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.282298 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d33ecd2-b8e9-4054-bad6-3607aef406b1-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.489261 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.505134 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.518037 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:04:49 crc kubenswrapper[4896]: E0126 16:04:49.519157 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d33ecd2-b8e9-4054-bad6-3607aef406b1" containerName="ceilometer-central-agent" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.519176 4896 
state_mem.go:107] "Deleted CPUSet assignment" podUID="4d33ecd2-b8e9-4054-bad6-3607aef406b1" containerName="ceilometer-central-agent" Jan 26 16:04:49 crc kubenswrapper[4896]: E0126 16:04:49.519194 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d33ecd2-b8e9-4054-bad6-3607aef406b1" containerName="proxy-httpd" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.519200 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d33ecd2-b8e9-4054-bad6-3607aef406b1" containerName="proxy-httpd" Jan 26 16:04:49 crc kubenswrapper[4896]: E0126 16:04:49.519215 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d33ecd2-b8e9-4054-bad6-3607aef406b1" containerName="ceilometer-notification-agent" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.519223 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d33ecd2-b8e9-4054-bad6-3607aef406b1" containerName="ceilometer-notification-agent" Jan 26 16:04:49 crc kubenswrapper[4896]: E0126 16:04:49.519242 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d33ecd2-b8e9-4054-bad6-3607aef406b1" containerName="sg-core" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.519248 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d33ecd2-b8e9-4054-bad6-3607aef406b1" containerName="sg-core" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.519477 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d33ecd2-b8e9-4054-bad6-3607aef406b1" containerName="ceilometer-central-agent" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.519496 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d33ecd2-b8e9-4054-bad6-3607aef406b1" containerName="ceilometer-notification-agent" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.519503 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d33ecd2-b8e9-4054-bad6-3607aef406b1" containerName="sg-core" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 
16:04:49.519523 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d33ecd2-b8e9-4054-bad6-3607aef406b1" containerName="proxy-httpd" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.532788 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.545336 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.548972 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.549537 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.551650 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.597893 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b313e6b-6e1d-4a06-8456-7a8701820af9-log-httpd\") pod \"ceilometer-0\" (UID: \"0b313e6b-6e1d-4a06-8456-7a8701820af9\") " pod="openstack/ceilometer-0" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.598231 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b313e6b-6e1d-4a06-8456-7a8701820af9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0b313e6b-6e1d-4a06-8456-7a8701820af9\") " pod="openstack/ceilometer-0" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.598832 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b313e6b-6e1d-4a06-8456-7a8701820af9-config-data\") pod 
\"ceilometer-0\" (UID: \"0b313e6b-6e1d-4a06-8456-7a8701820af9\") " pod="openstack/ceilometer-0" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.599006 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b313e6b-6e1d-4a06-8456-7a8701820af9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0b313e6b-6e1d-4a06-8456-7a8701820af9\") " pod="openstack/ceilometer-0" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.599193 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk9mk\" (UniqueName: \"kubernetes.io/projected/0b313e6b-6e1d-4a06-8456-7a8701820af9-kube-api-access-tk9mk\") pod \"ceilometer-0\" (UID: \"0b313e6b-6e1d-4a06-8456-7a8701820af9\") " pod="openstack/ceilometer-0" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.599311 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b313e6b-6e1d-4a06-8456-7a8701820af9-run-httpd\") pod \"ceilometer-0\" (UID: \"0b313e6b-6e1d-4a06-8456-7a8701820af9\") " pod="openstack/ceilometer-0" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.599620 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b313e6b-6e1d-4a06-8456-7a8701820af9-scripts\") pod \"ceilometer-0\" (UID: \"0b313e6b-6e1d-4a06-8456-7a8701820af9\") " pod="openstack/ceilometer-0" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.599998 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b313e6b-6e1d-4a06-8456-7a8701820af9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0b313e6b-6e1d-4a06-8456-7a8701820af9\") " pod="openstack/ceilometer-0" Jan 26 16:04:49 crc 
kubenswrapper[4896]: I0126 16:04:49.705655 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b313e6b-6e1d-4a06-8456-7a8701820af9-scripts\") pod \"ceilometer-0\" (UID: \"0b313e6b-6e1d-4a06-8456-7a8701820af9\") " pod="openstack/ceilometer-0" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.705794 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b313e6b-6e1d-4a06-8456-7a8701820af9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0b313e6b-6e1d-4a06-8456-7a8701820af9\") " pod="openstack/ceilometer-0" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.705979 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b313e6b-6e1d-4a06-8456-7a8701820af9-log-httpd\") pod \"ceilometer-0\" (UID: \"0b313e6b-6e1d-4a06-8456-7a8701820af9\") " pod="openstack/ceilometer-0" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.706140 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b313e6b-6e1d-4a06-8456-7a8701820af9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0b313e6b-6e1d-4a06-8456-7a8701820af9\") " pod="openstack/ceilometer-0" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.706778 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b313e6b-6e1d-4a06-8456-7a8701820af9-config-data\") pod \"ceilometer-0\" (UID: \"0b313e6b-6e1d-4a06-8456-7a8701820af9\") " pod="openstack/ceilometer-0" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.706864 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0b313e6b-6e1d-4a06-8456-7a8701820af9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0b313e6b-6e1d-4a06-8456-7a8701820af9\") " pod="openstack/ceilometer-0" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.706980 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk9mk\" (UniqueName: \"kubernetes.io/projected/0b313e6b-6e1d-4a06-8456-7a8701820af9-kube-api-access-tk9mk\") pod \"ceilometer-0\" (UID: \"0b313e6b-6e1d-4a06-8456-7a8701820af9\") " pod="openstack/ceilometer-0" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.707062 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b313e6b-6e1d-4a06-8456-7a8701820af9-run-httpd\") pod \"ceilometer-0\" (UID: \"0b313e6b-6e1d-4a06-8456-7a8701820af9\") " pod="openstack/ceilometer-0" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.708062 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b313e6b-6e1d-4a06-8456-7a8701820af9-log-httpd\") pod \"ceilometer-0\" (UID: \"0b313e6b-6e1d-4a06-8456-7a8701820af9\") " pod="openstack/ceilometer-0" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.708454 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b313e6b-6e1d-4a06-8456-7a8701820af9-run-httpd\") pod \"ceilometer-0\" (UID: \"0b313e6b-6e1d-4a06-8456-7a8701820af9\") " pod="openstack/ceilometer-0" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.715853 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b313e6b-6e1d-4a06-8456-7a8701820af9-config-data\") pod \"ceilometer-0\" (UID: \"0b313e6b-6e1d-4a06-8456-7a8701820af9\") " pod="openstack/ceilometer-0" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.717311 4896 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b313e6b-6e1d-4a06-8456-7a8701820af9-scripts\") pod \"ceilometer-0\" (UID: \"0b313e6b-6e1d-4a06-8456-7a8701820af9\") " pod="openstack/ceilometer-0" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.718567 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b313e6b-6e1d-4a06-8456-7a8701820af9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0b313e6b-6e1d-4a06-8456-7a8701820af9\") " pod="openstack/ceilometer-0" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.722268 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b313e6b-6e1d-4a06-8456-7a8701820af9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0b313e6b-6e1d-4a06-8456-7a8701820af9\") " pod="openstack/ceilometer-0" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.731719 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b313e6b-6e1d-4a06-8456-7a8701820af9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0b313e6b-6e1d-4a06-8456-7a8701820af9\") " pod="openstack/ceilometer-0" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.750372 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk9mk\" (UniqueName: \"kubernetes.io/projected/0b313e6b-6e1d-4a06-8456-7a8701820af9-kube-api-access-tk9mk\") pod \"ceilometer-0\" (UID: \"0b313e6b-6e1d-4a06-8456-7a8701820af9\") " pod="openstack/ceilometer-0" Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.852955 4896 generic.go:334] "Generic (PLEG): container finished" podID="a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b" containerID="d651a9fea7b99bd4760aa1520577f7945c9937f9854e2e3e6514a2f71d396bde" exitCode=0 Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 
16:04:49.853005 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9nwzx" event={"ID":"a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b","Type":"ContainerDied","Data":"d651a9fea7b99bd4760aa1520577f7945c9937f9854e2e3e6514a2f71d396bde"} Jan 26 16:04:49 crc kubenswrapper[4896]: I0126 16:04:49.885135 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 16:04:50 crc kubenswrapper[4896]: I0126 16:04:50.780057 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d33ecd2-b8e9-4054-bad6-3607aef406b1" path="/var/lib/kubelet/pods/4d33ecd2-b8e9-4054-bad6-3607aef406b1/volumes" Jan 26 16:04:50 crc kubenswrapper[4896]: W0126 16:04:50.809008 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b313e6b_6e1d_4a06_8456_7a8701820af9.slice/crio-075bd4e639f7236ed21076526d28cfa5db71869b1d1dda0ff68b9c15e82c8247 WatchSource:0}: Error finding container 075bd4e639f7236ed21076526d28cfa5db71869b1d1dda0ff68b9c15e82c8247: Status 404 returned error can't find the container with id 075bd4e639f7236ed21076526d28cfa5db71869b1d1dda0ff68b9c15e82c8247 Jan 26 16:04:50 crc kubenswrapper[4896]: I0126 16:04:50.823015 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:04:50 crc kubenswrapper[4896]: I0126 16:04:50.887134 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b313e6b-6e1d-4a06-8456-7a8701820af9","Type":"ContainerStarted","Data":"075bd4e639f7236ed21076526d28cfa5db71869b1d1dda0ff68b9c15e82c8247"} Jan 26 16:04:51 crc kubenswrapper[4896]: I0126 16:04:51.920492 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9nwzx" 
event={"ID":"a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b","Type":"ContainerStarted","Data":"b39d6d7f97fbca763f51b1d5d6b94d592b40dd57464524b77346924054cc1e4b"} Jan 26 16:04:51 crc kubenswrapper[4896]: I0126 16:04:51.927281 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b313e6b-6e1d-4a06-8456-7a8701820af9","Type":"ContainerStarted","Data":"5cf83d3fb2065f17f4f1e1d165cea7d22d56f45d60e2ca9ef4f38272f8daf9e6"} Jan 26 16:04:51 crc kubenswrapper[4896]: I0126 16:04:51.962719 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9nwzx" podStartSLOduration=4.251565885 podStartE2EDuration="7.962667822s" podCreationTimestamp="2026-01-26 16:04:44 +0000 UTC" firstStartedPulling="2026-01-26 16:04:46.805047439 +0000 UTC m=+1844.586927832" lastFinishedPulling="2026-01-26 16:04:50.516149376 +0000 UTC m=+1848.298029769" observedRunningTime="2026-01-26 16:04:51.944355117 +0000 UTC m=+1849.726235510" watchObservedRunningTime="2026-01-26 16:04:51.962667822 +0000 UTC m=+1849.744548215" Jan 26 16:04:52 crc kubenswrapper[4896]: I0126 16:04:52.940619 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b313e6b-6e1d-4a06-8456-7a8701820af9","Type":"ContainerStarted","Data":"545f0e95ed94f1d44bfbdfd8a34575d0203f1449b5468b1e5da6a22c98119e5d"} Jan 26 16:04:53 crc kubenswrapper[4896]: I0126 16:04:53.953463 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b313e6b-6e1d-4a06-8456-7a8701820af9","Type":"ContainerStarted","Data":"42035912fe4a7e322ab0ef7c1e320de1ae62d75c39d9d4f99563f3091175c0d8"} Jan 26 16:04:54 crc kubenswrapper[4896]: I0126 16:04:54.241830 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 26 16:04:54 crc kubenswrapper[4896]: I0126 16:04:54.717231 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-9nwzx" Jan 26 16:04:54 crc kubenswrapper[4896]: I0126 16:04:54.717309 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9nwzx" Jan 26 16:04:55 crc kubenswrapper[4896]: I0126 16:04:55.771356 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-9nwzx" podUID="a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b" containerName="registry-server" probeResult="failure" output=< Jan 26 16:04:55 crc kubenswrapper[4896]: timeout: failed to connect service ":50051" within 1s Jan 26 16:04:55 crc kubenswrapper[4896]: > Jan 26 16:04:55 crc kubenswrapper[4896]: I0126 16:04:55.980825 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b313e6b-6e1d-4a06-8456-7a8701820af9","Type":"ContainerStarted","Data":"d93c5be52ef0e8972c75f7c431ebc35d82f276268eff238a2b41cc32df9f4cac"} Jan 26 16:04:55 crc kubenswrapper[4896]: I0126 16:04:55.981058 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 16:04:56 crc kubenswrapper[4896]: I0126 16:04:56.014423 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.014578636 podStartE2EDuration="7.014399032s" podCreationTimestamp="2026-01-26 16:04:49 +0000 UTC" firstStartedPulling="2026-01-26 16:04:50.815955015 +0000 UTC m=+1848.597835408" lastFinishedPulling="2026-01-26 16:04:54.815775411 +0000 UTC m=+1852.597655804" observedRunningTime="2026-01-26 16:04:56.006665336 +0000 UTC m=+1853.788545729" watchObservedRunningTime="2026-01-26 16:04:56.014399032 +0000 UTC m=+1853.796279445" Jan 26 16:05:00 crc kubenswrapper[4896]: I0126 16:05:00.760524 4896 scope.go:117] "RemoveContainer" containerID="eef508224f0cdcfb0579b0234e72c3c5503ce5cf1713a9bee24c9feccf4983cb" Jan 26 16:05:00 crc kubenswrapper[4896]: E0126 16:05:00.761377 4896 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:05:04 crc kubenswrapper[4896]: I0126 16:05:04.783026 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9nwzx" Jan 26 16:05:04 crc kubenswrapper[4896]: I0126 16:05:04.836078 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9nwzx" Jan 26 16:05:07 crc kubenswrapper[4896]: I0126 16:05:07.219554 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9nwzx"] Jan 26 16:05:07 crc kubenswrapper[4896]: I0126 16:05:07.220217 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9nwzx" podUID="a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b" containerName="registry-server" containerID="cri-o://b39d6d7f97fbca763f51b1d5d6b94d592b40dd57464524b77346924054cc1e4b" gracePeriod=2 Jan 26 16:05:07 crc kubenswrapper[4896]: I0126 16:05:07.877337 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9nwzx" Jan 26 16:05:07 crc kubenswrapper[4896]: I0126 16:05:07.905114 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b-catalog-content\") pod \"a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b\" (UID: \"a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b\") " Jan 26 16:05:07 crc kubenswrapper[4896]: I0126 16:05:07.905571 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnjbd\" (UniqueName: \"kubernetes.io/projected/a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b-kube-api-access-pnjbd\") pod \"a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b\" (UID: \"a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b\") " Jan 26 16:05:07 crc kubenswrapper[4896]: I0126 16:05:07.905764 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b-utilities\") pod \"a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b\" (UID: \"a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b\") " Jan 26 16:05:07 crc kubenswrapper[4896]: I0126 16:05:07.906381 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b-utilities" (OuterVolumeSpecName: "utilities") pod "a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b" (UID: "a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:05:07 crc kubenswrapper[4896]: I0126 16:05:07.907778 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:07 crc kubenswrapper[4896]: I0126 16:05:07.913750 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b-kube-api-access-pnjbd" (OuterVolumeSpecName: "kube-api-access-pnjbd") pod "a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b" (UID: "a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b"). InnerVolumeSpecName "kube-api-access-pnjbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:05:07 crc kubenswrapper[4896]: I0126 16:05:07.955244 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b" (UID: "a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:05:08 crc kubenswrapper[4896]: I0126 16:05:08.010505 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:08 crc kubenswrapper[4896]: I0126 16:05:08.010545 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnjbd\" (UniqueName: \"kubernetes.io/projected/a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b-kube-api-access-pnjbd\") on node \"crc\" DevicePath \"\"" Jan 26 16:05:08 crc kubenswrapper[4896]: I0126 16:05:08.113799 4896 generic.go:334] "Generic (PLEG): container finished" podID="a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b" containerID="b39d6d7f97fbca763f51b1d5d6b94d592b40dd57464524b77346924054cc1e4b" exitCode=0 Jan 26 16:05:08 crc kubenswrapper[4896]: I0126 16:05:08.113857 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9nwzx" event={"ID":"a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b","Type":"ContainerDied","Data":"b39d6d7f97fbca763f51b1d5d6b94d592b40dd57464524b77346924054cc1e4b"} Jan 26 16:05:08 crc kubenswrapper[4896]: I0126 16:05:08.113913 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9nwzx" event={"ID":"a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b","Type":"ContainerDied","Data":"df2e3516857ca8f4f8f0a8dc4aa9b7736fa7198f2026e1260029fc42a4355cdb"} Jan 26 16:05:08 crc kubenswrapper[4896]: I0126 16:05:08.113932 4896 scope.go:117] "RemoveContainer" containerID="b39d6d7f97fbca763f51b1d5d6b94d592b40dd57464524b77346924054cc1e4b" Jan 26 16:05:08 crc kubenswrapper[4896]: I0126 16:05:08.113872 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9nwzx" Jan 26 16:05:08 crc kubenswrapper[4896]: I0126 16:05:08.142324 4896 scope.go:117] "RemoveContainer" containerID="d651a9fea7b99bd4760aa1520577f7945c9937f9854e2e3e6514a2f71d396bde" Jan 26 16:05:08 crc kubenswrapper[4896]: I0126 16:05:08.153880 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9nwzx"] Jan 26 16:05:08 crc kubenswrapper[4896]: I0126 16:05:08.167215 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9nwzx"] Jan 26 16:05:08 crc kubenswrapper[4896]: I0126 16:05:08.170400 4896 scope.go:117] "RemoveContainer" containerID="8718d8b9f09c39ff219bdec6300d92cc066237bf71b0f16a4c960e18c07f0bf9" Jan 26 16:05:08 crc kubenswrapper[4896]: I0126 16:05:08.233617 4896 scope.go:117] "RemoveContainer" containerID="b39d6d7f97fbca763f51b1d5d6b94d592b40dd57464524b77346924054cc1e4b" Jan 26 16:05:08 crc kubenswrapper[4896]: E0126 16:05:08.233979 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b39d6d7f97fbca763f51b1d5d6b94d592b40dd57464524b77346924054cc1e4b\": container with ID starting with b39d6d7f97fbca763f51b1d5d6b94d592b40dd57464524b77346924054cc1e4b not found: ID does not exist" containerID="b39d6d7f97fbca763f51b1d5d6b94d592b40dd57464524b77346924054cc1e4b" Jan 26 16:05:08 crc kubenswrapper[4896]: I0126 16:05:08.234010 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b39d6d7f97fbca763f51b1d5d6b94d592b40dd57464524b77346924054cc1e4b"} err="failed to get container status \"b39d6d7f97fbca763f51b1d5d6b94d592b40dd57464524b77346924054cc1e4b\": rpc error: code = NotFound desc = could not find container \"b39d6d7f97fbca763f51b1d5d6b94d592b40dd57464524b77346924054cc1e4b\": container with ID starting with b39d6d7f97fbca763f51b1d5d6b94d592b40dd57464524b77346924054cc1e4b not 
found: ID does not exist" Jan 26 16:05:08 crc kubenswrapper[4896]: I0126 16:05:08.234035 4896 scope.go:117] "RemoveContainer" containerID="d651a9fea7b99bd4760aa1520577f7945c9937f9854e2e3e6514a2f71d396bde" Jan 26 16:05:08 crc kubenswrapper[4896]: E0126 16:05:08.234323 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d651a9fea7b99bd4760aa1520577f7945c9937f9854e2e3e6514a2f71d396bde\": container with ID starting with d651a9fea7b99bd4760aa1520577f7945c9937f9854e2e3e6514a2f71d396bde not found: ID does not exist" containerID="d651a9fea7b99bd4760aa1520577f7945c9937f9854e2e3e6514a2f71d396bde" Jan 26 16:05:08 crc kubenswrapper[4896]: I0126 16:05:08.234348 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d651a9fea7b99bd4760aa1520577f7945c9937f9854e2e3e6514a2f71d396bde"} err="failed to get container status \"d651a9fea7b99bd4760aa1520577f7945c9937f9854e2e3e6514a2f71d396bde\": rpc error: code = NotFound desc = could not find container \"d651a9fea7b99bd4760aa1520577f7945c9937f9854e2e3e6514a2f71d396bde\": container with ID starting with d651a9fea7b99bd4760aa1520577f7945c9937f9854e2e3e6514a2f71d396bde not found: ID does not exist" Jan 26 16:05:08 crc kubenswrapper[4896]: I0126 16:05:08.234368 4896 scope.go:117] "RemoveContainer" containerID="8718d8b9f09c39ff219bdec6300d92cc066237bf71b0f16a4c960e18c07f0bf9" Jan 26 16:05:08 crc kubenswrapper[4896]: E0126 16:05:08.234681 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8718d8b9f09c39ff219bdec6300d92cc066237bf71b0f16a4c960e18c07f0bf9\": container with ID starting with 8718d8b9f09c39ff219bdec6300d92cc066237bf71b0f16a4c960e18c07f0bf9 not found: ID does not exist" containerID="8718d8b9f09c39ff219bdec6300d92cc066237bf71b0f16a4c960e18c07f0bf9" Jan 26 16:05:08 crc kubenswrapper[4896]: I0126 16:05:08.234728 4896 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8718d8b9f09c39ff219bdec6300d92cc066237bf71b0f16a4c960e18c07f0bf9"} err="failed to get container status \"8718d8b9f09c39ff219bdec6300d92cc066237bf71b0f16a4c960e18c07f0bf9\": rpc error: code = NotFound desc = could not find container \"8718d8b9f09c39ff219bdec6300d92cc066237bf71b0f16a4c960e18c07f0bf9\": container with ID starting with 8718d8b9f09c39ff219bdec6300d92cc066237bf71b0f16a4c960e18c07f0bf9 not found: ID does not exist" Jan 26 16:05:08 crc kubenswrapper[4896]: I0126 16:05:08.776525 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b" path="/var/lib/kubelet/pods/a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b/volumes" Jan 26 16:05:12 crc kubenswrapper[4896]: I0126 16:05:12.768670 4896 scope.go:117] "RemoveContainer" containerID="eef508224f0cdcfb0579b0234e72c3c5503ce5cf1713a9bee24c9feccf4983cb" Jan 26 16:05:12 crc kubenswrapper[4896]: E0126 16:05:12.769728 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:05:19 crc kubenswrapper[4896]: I0126 16:05:19.897600 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 26 16:05:23 crc kubenswrapper[4896]: I0126 16:05:23.759969 4896 scope.go:117] "RemoveContainer" containerID="eef508224f0cdcfb0579b0234e72c3c5503ce5cf1713a9bee24c9feccf4983cb" Jan 26 16:05:23 crc kubenswrapper[4896]: E0126 16:05:23.761163 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:05:32 crc kubenswrapper[4896]: I0126 16:05:32.712792 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-7pf4j"] Jan 26 16:05:32 crc kubenswrapper[4896]: I0126 16:05:32.724690 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-7pf4j"] Jan 26 16:05:32 crc kubenswrapper[4896]: I0126 16:05:32.773632 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8f4e140-bab4-479c-a97b-4a5aa49a47d3" path="/var/lib/kubelet/pods/c8f4e140-bab4-479c-a97b-4a5aa49a47d3/volumes" Jan 26 16:05:32 crc kubenswrapper[4896]: I0126 16:05:32.829353 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-csgwp"] Jan 26 16:05:32 crc kubenswrapper[4896]: E0126 16:05:32.830049 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b" containerName="extract-utilities" Jan 26 16:05:32 crc kubenswrapper[4896]: I0126 16:05:32.830078 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b" containerName="extract-utilities" Jan 26 16:05:32 crc kubenswrapper[4896]: E0126 16:05:32.830108 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b" containerName="registry-server" Jan 26 16:05:32 crc kubenswrapper[4896]: I0126 16:05:32.830118 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b" containerName="registry-server" Jan 26 16:05:32 crc kubenswrapper[4896]: E0126 16:05:32.830157 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b" containerName="extract-content" 
Jan 26 16:05:32 crc kubenswrapper[4896]: I0126 16:05:32.830166 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b" containerName="extract-content" Jan 26 16:05:32 crc kubenswrapper[4896]: I0126 16:05:32.830483 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="a78ca5ca-6359-4e2f-ae2e-f71ca31bea8b" containerName="registry-server" Jan 26 16:05:32 crc kubenswrapper[4896]: I0126 16:05:32.831608 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-csgwp" Jan 26 16:05:32 crc kubenswrapper[4896]: I0126 16:05:32.846707 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-csgwp"] Jan 26 16:05:32 crc kubenswrapper[4896]: I0126 16:05:32.870597 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/609ce882-8e94-4cbc-badf-fed5a521ec43-config-data\") pod \"heat-db-sync-csgwp\" (UID: \"609ce882-8e94-4cbc-badf-fed5a521ec43\") " pod="openstack/heat-db-sync-csgwp" Jan 26 16:05:32 crc kubenswrapper[4896]: I0126 16:05:32.870693 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnr4g\" (UniqueName: \"kubernetes.io/projected/609ce882-8e94-4cbc-badf-fed5a521ec43-kube-api-access-hnr4g\") pod \"heat-db-sync-csgwp\" (UID: \"609ce882-8e94-4cbc-badf-fed5a521ec43\") " pod="openstack/heat-db-sync-csgwp" Jan 26 16:05:32 crc kubenswrapper[4896]: I0126 16:05:32.870814 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/609ce882-8e94-4cbc-badf-fed5a521ec43-combined-ca-bundle\") pod \"heat-db-sync-csgwp\" (UID: \"609ce882-8e94-4cbc-badf-fed5a521ec43\") " pod="openstack/heat-db-sync-csgwp" Jan 26 16:05:32 crc kubenswrapper[4896]: I0126 16:05:32.973136 4896 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/609ce882-8e94-4cbc-badf-fed5a521ec43-config-data\") pod \"heat-db-sync-csgwp\" (UID: \"609ce882-8e94-4cbc-badf-fed5a521ec43\") " pod="openstack/heat-db-sync-csgwp" Jan 26 16:05:32 crc kubenswrapper[4896]: I0126 16:05:32.973246 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnr4g\" (UniqueName: \"kubernetes.io/projected/609ce882-8e94-4cbc-badf-fed5a521ec43-kube-api-access-hnr4g\") pod \"heat-db-sync-csgwp\" (UID: \"609ce882-8e94-4cbc-badf-fed5a521ec43\") " pod="openstack/heat-db-sync-csgwp" Jan 26 16:05:32 crc kubenswrapper[4896]: I0126 16:05:32.973360 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/609ce882-8e94-4cbc-badf-fed5a521ec43-combined-ca-bundle\") pod \"heat-db-sync-csgwp\" (UID: \"609ce882-8e94-4cbc-badf-fed5a521ec43\") " pod="openstack/heat-db-sync-csgwp" Jan 26 16:05:32 crc kubenswrapper[4896]: I0126 16:05:32.980899 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/609ce882-8e94-4cbc-badf-fed5a521ec43-combined-ca-bundle\") pod \"heat-db-sync-csgwp\" (UID: \"609ce882-8e94-4cbc-badf-fed5a521ec43\") " pod="openstack/heat-db-sync-csgwp" Jan 26 16:05:32 crc kubenswrapper[4896]: I0126 16:05:32.981746 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/609ce882-8e94-4cbc-badf-fed5a521ec43-config-data\") pod \"heat-db-sync-csgwp\" (UID: \"609ce882-8e94-4cbc-badf-fed5a521ec43\") " pod="openstack/heat-db-sync-csgwp" Jan 26 16:05:32 crc kubenswrapper[4896]: I0126 16:05:32.996167 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnr4g\" (UniqueName: 
\"kubernetes.io/projected/609ce882-8e94-4cbc-badf-fed5a521ec43-kube-api-access-hnr4g\") pod \"heat-db-sync-csgwp\" (UID: \"609ce882-8e94-4cbc-badf-fed5a521ec43\") " pod="openstack/heat-db-sync-csgwp" Jan 26 16:05:33 crc kubenswrapper[4896]: I0126 16:05:33.159571 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-csgwp" Jan 26 16:05:33 crc kubenswrapper[4896]: I0126 16:05:33.720008 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-csgwp"] Jan 26 16:05:34 crc kubenswrapper[4896]: I0126 16:05:34.456647 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-csgwp" event={"ID":"609ce882-8e94-4cbc-badf-fed5a521ec43","Type":"ContainerStarted","Data":"824ae15c9cf01adcfd864fbfab3fd78b7697ae068fc601516255bf1a99f2434d"} Jan 26 16:05:35 crc kubenswrapper[4896]: I0126 16:05:35.272646 4896 scope.go:117] "RemoveContainer" containerID="0b32630aaf2a4014472660407750908211d58c8b460a9e07dc6fea5a137366d0" Jan 26 16:05:35 crc kubenswrapper[4896]: I0126 16:05:35.299047 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 16:05:35 crc kubenswrapper[4896]: I0126 16:05:35.299372 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0b313e6b-6e1d-4a06-8456-7a8701820af9" containerName="ceilometer-central-agent" containerID="cri-o://5cf83d3fb2065f17f4f1e1d165cea7d22d56f45d60e2ca9ef4f38272f8daf9e6" gracePeriod=30 Jan 26 16:05:35 crc kubenswrapper[4896]: I0126 16:05:35.299537 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0b313e6b-6e1d-4a06-8456-7a8701820af9" containerName="proxy-httpd" containerID="cri-o://d93c5be52ef0e8972c75f7c431ebc35d82f276268eff238a2b41cc32df9f4cac" gracePeriod=30 Jan 26 16:05:35 crc kubenswrapper[4896]: I0126 16:05:35.299613 4896 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="0b313e6b-6e1d-4a06-8456-7a8701820af9" containerName="sg-core" containerID="cri-o://42035912fe4a7e322ab0ef7c1e320de1ae62d75c39d9d4f99563f3091175c0d8" gracePeriod=30 Jan 26 16:05:35 crc kubenswrapper[4896]: I0126 16:05:35.299657 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0b313e6b-6e1d-4a06-8456-7a8701820af9" containerName="ceilometer-notification-agent" containerID="cri-o://545f0e95ed94f1d44bfbdfd8a34575d0203f1449b5468b1e5da6a22c98119e5d" gracePeriod=30 Jan 26 16:05:35 crc kubenswrapper[4896]: I0126 16:05:35.412429 4896 scope.go:117] "RemoveContainer" containerID="29313532ea10dde256af2bec146c3f27f0789492e5d43c296daef00aab9a35b4" Jan 26 16:05:35 crc kubenswrapper[4896]: I0126 16:05:35.523912 4896 generic.go:334] "Generic (PLEG): container finished" podID="0b313e6b-6e1d-4a06-8456-7a8701820af9" containerID="d93c5be52ef0e8972c75f7c431ebc35d82f276268eff238a2b41cc32df9f4cac" exitCode=0 Jan 26 16:05:35 crc kubenswrapper[4896]: I0126 16:05:35.523943 4896 generic.go:334] "Generic (PLEG): container finished" podID="0b313e6b-6e1d-4a06-8456-7a8701820af9" containerID="42035912fe4a7e322ab0ef7c1e320de1ae62d75c39d9d4f99563f3091175c0d8" exitCode=2 Jan 26 16:05:35 crc kubenswrapper[4896]: I0126 16:05:35.523988 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b313e6b-6e1d-4a06-8456-7a8701820af9","Type":"ContainerDied","Data":"d93c5be52ef0e8972c75f7c431ebc35d82f276268eff238a2b41cc32df9f4cac"} Jan 26 16:05:35 crc kubenswrapper[4896]: I0126 16:05:35.524017 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b313e6b-6e1d-4a06-8456-7a8701820af9","Type":"ContainerDied","Data":"42035912fe4a7e322ab0ef7c1e320de1ae62d75c39d9d4f99563f3091175c0d8"} Jan 26 16:05:35 crc kubenswrapper[4896]: I0126 16:05:35.556927 4896 scope.go:117] "RemoveContainer" 
containerID="4cd37daf9a5a1965357d245701cf624e9134ff8893e0a6b7002eec15c9a6b4d3"
Jan 26 16:05:35 crc kubenswrapper[4896]: I0126 16:05:35.648471 4896 scope.go:117] "RemoveContainer" containerID="5d9f5da787d2d37ae3c5f239757b1421a756565fff5329c584d58e3ced4308ec"
Jan 26 16:05:35 crc kubenswrapper[4896]: I0126 16:05:35.707733 4896 scope.go:117] "RemoveContainer" containerID="92ab08f9f5e29b99ff4cd3c9fc48f23917b0e8ac72b8024ce91682624c9ae42a"
Jan 26 16:05:35 crc kubenswrapper[4896]: I0126 16:05:35.748120 4896 scope.go:117] "RemoveContainer" containerID="6109b1ca1be84dd1a7ed064477ed1c7e0979801626481bc6d588547a167765ae"
Jan 26 16:05:35 crc kubenswrapper[4896]: I0126 16:05:35.763530 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"]
Jan 26 16:05:36 crc kubenswrapper[4896]: I0126 16:05:36.560262 4896 generic.go:334] "Generic (PLEG): container finished" podID="0b313e6b-6e1d-4a06-8456-7a8701820af9" containerID="5cf83d3fb2065f17f4f1e1d165cea7d22d56f45d60e2ca9ef4f38272f8daf9e6" exitCode=0
Jan 26 16:05:36 crc kubenswrapper[4896]: I0126 16:05:36.560460 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b313e6b-6e1d-4a06-8456-7a8701820af9","Type":"ContainerDied","Data":"5cf83d3fb2065f17f4f1e1d165cea7d22d56f45d60e2ca9ef4f38272f8daf9e6"}
Jan 26 16:05:37 crc kubenswrapper[4896]: I0126 16:05:37.158996 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 26 16:05:37 crc kubenswrapper[4896]: I0126 16:05:37.593395 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 16:05:37 crc kubenswrapper[4896]: I0126 16:05:37.667607 4896 generic.go:334] "Generic (PLEG): container finished" podID="0b313e6b-6e1d-4a06-8456-7a8701820af9" containerID="545f0e95ed94f1d44bfbdfd8a34575d0203f1449b5468b1e5da6a22c98119e5d" exitCode=0
Jan 26 16:05:37 crc kubenswrapper[4896]: I0126 16:05:37.667663 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b313e6b-6e1d-4a06-8456-7a8701820af9","Type":"ContainerDied","Data":"545f0e95ed94f1d44bfbdfd8a34575d0203f1449b5468b1e5da6a22c98119e5d"}
Jan 26 16:05:37 crc kubenswrapper[4896]: I0126 16:05:37.667693 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b313e6b-6e1d-4a06-8456-7a8701820af9","Type":"ContainerDied","Data":"075bd4e639f7236ed21076526d28cfa5db71869b1d1dda0ff68b9c15e82c8247"}
Jan 26 16:05:37 crc kubenswrapper[4896]: I0126 16:05:37.667713 4896 scope.go:117] "RemoveContainer" containerID="d93c5be52ef0e8972c75f7c431ebc35d82f276268eff238a2b41cc32df9f4cac"
Jan 26 16:05:37 crc kubenswrapper[4896]: I0126 16:05:37.667873 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 16:05:37 crc kubenswrapper[4896]: I0126 16:05:37.756782 4896 scope.go:117] "RemoveContainer" containerID="42035912fe4a7e322ab0ef7c1e320de1ae62d75c39d9d4f99563f3091175c0d8"
Jan 26 16:05:37 crc kubenswrapper[4896]: I0126 16:05:37.762315 4896 scope.go:117] "RemoveContainer" containerID="eef508224f0cdcfb0579b0234e72c3c5503ce5cf1713a9bee24c9feccf4983cb"
Jan 26 16:05:37 crc kubenswrapper[4896]: E0126 16:05:37.762554 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 16:05:37 crc kubenswrapper[4896]: I0126 16:05:37.787864 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b313e6b-6e1d-4a06-8456-7a8701820af9-config-data\") pod \"0b313e6b-6e1d-4a06-8456-7a8701820af9\" (UID: \"0b313e6b-6e1d-4a06-8456-7a8701820af9\") "
Jan 26 16:05:37 crc kubenswrapper[4896]: I0126 16:05:37.787936 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b313e6b-6e1d-4a06-8456-7a8701820af9-sg-core-conf-yaml\") pod \"0b313e6b-6e1d-4a06-8456-7a8701820af9\" (UID: \"0b313e6b-6e1d-4a06-8456-7a8701820af9\") "
Jan 26 16:05:37 crc kubenswrapper[4896]: I0126 16:05:37.787989 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b313e6b-6e1d-4a06-8456-7a8701820af9-scripts\") pod \"0b313e6b-6e1d-4a06-8456-7a8701820af9\" (UID: \"0b313e6b-6e1d-4a06-8456-7a8701820af9\") "
Jan 26 16:05:37 crc kubenswrapper[4896]: I0126 16:05:37.788043 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b313e6b-6e1d-4a06-8456-7a8701820af9-ceilometer-tls-certs\") pod \"0b313e6b-6e1d-4a06-8456-7a8701820af9\" (UID: \"0b313e6b-6e1d-4a06-8456-7a8701820af9\") "
Jan 26 16:05:37 crc kubenswrapper[4896]: I0126 16:05:37.788070 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b313e6b-6e1d-4a06-8456-7a8701820af9-combined-ca-bundle\") pod \"0b313e6b-6e1d-4a06-8456-7a8701820af9\" (UID: \"0b313e6b-6e1d-4a06-8456-7a8701820af9\") "
Jan 26 16:05:37 crc kubenswrapper[4896]: I0126 16:05:37.788098 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b313e6b-6e1d-4a06-8456-7a8701820af9-run-httpd\") pod \"0b313e6b-6e1d-4a06-8456-7a8701820af9\" (UID: \"0b313e6b-6e1d-4a06-8456-7a8701820af9\") "
Jan 26 16:05:37 crc kubenswrapper[4896]: I0126 16:05:37.788134 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b313e6b-6e1d-4a06-8456-7a8701820af9-log-httpd\") pod \"0b313e6b-6e1d-4a06-8456-7a8701820af9\" (UID: \"0b313e6b-6e1d-4a06-8456-7a8701820af9\") "
Jan 26 16:05:37 crc kubenswrapper[4896]: I0126 16:05:37.788219 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk9mk\" (UniqueName: \"kubernetes.io/projected/0b313e6b-6e1d-4a06-8456-7a8701820af9-kube-api-access-tk9mk\") pod \"0b313e6b-6e1d-4a06-8456-7a8701820af9\" (UID: \"0b313e6b-6e1d-4a06-8456-7a8701820af9\") "
Jan 26 16:05:37 crc kubenswrapper[4896]: I0126 16:05:37.789965 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b313e6b-6e1d-4a06-8456-7a8701820af9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0b313e6b-6e1d-4a06-8456-7a8701820af9" (UID: "0b313e6b-6e1d-4a06-8456-7a8701820af9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 16:05:37 crc kubenswrapper[4896]: I0126 16:05:37.794069 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b313e6b-6e1d-4a06-8456-7a8701820af9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0b313e6b-6e1d-4a06-8456-7a8701820af9" (UID: "0b313e6b-6e1d-4a06-8456-7a8701820af9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 16:05:37 crc kubenswrapper[4896]: I0126 16:05:37.798417 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b313e6b-6e1d-4a06-8456-7a8701820af9-kube-api-access-tk9mk" (OuterVolumeSpecName: "kube-api-access-tk9mk") pod "0b313e6b-6e1d-4a06-8456-7a8701820af9" (UID: "0b313e6b-6e1d-4a06-8456-7a8701820af9"). InnerVolumeSpecName "kube-api-access-tk9mk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:05:37 crc kubenswrapper[4896]: I0126 16:05:37.800328 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b313e6b-6e1d-4a06-8456-7a8701820af9-scripts" (OuterVolumeSpecName: "scripts") pod "0b313e6b-6e1d-4a06-8456-7a8701820af9" (UID: "0b313e6b-6e1d-4a06-8456-7a8701820af9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:05:37 crc kubenswrapper[4896]: I0126 16:05:37.821903 4896 scope.go:117] "RemoveContainer" containerID="545f0e95ed94f1d44bfbdfd8a34575d0203f1449b5468b1e5da6a22c98119e5d"
Jan 26 16:05:37 crc kubenswrapper[4896]: I0126 16:05:37.841754 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b313e6b-6e1d-4a06-8456-7a8701820af9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0b313e6b-6e1d-4a06-8456-7a8701820af9" (UID: "0b313e6b-6e1d-4a06-8456-7a8701820af9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:05:37 crc kubenswrapper[4896]: I0126 16:05:37.892738 4896 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b313e6b-6e1d-4a06-8456-7a8701820af9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:37 crc kubenswrapper[4896]: I0126 16:05:37.892778 4896 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b313e6b-6e1d-4a06-8456-7a8701820af9-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:37 crc kubenswrapper[4896]: I0126 16:05:37.892791 4896 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b313e6b-6e1d-4a06-8456-7a8701820af9-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:37 crc kubenswrapper[4896]: I0126 16:05:37.892802 4896 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b313e6b-6e1d-4a06-8456-7a8701820af9-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:37 crc kubenswrapper[4896]: I0126 16:05:37.892817 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk9mk\" (UniqueName: \"kubernetes.io/projected/0b313e6b-6e1d-4a06-8456-7a8701820af9-kube-api-access-tk9mk\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:37 crc kubenswrapper[4896]: I0126 16:05:37.960743 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b313e6b-6e1d-4a06-8456-7a8701820af9-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "0b313e6b-6e1d-4a06-8456-7a8701820af9" (UID: "0b313e6b-6e1d-4a06-8456-7a8701820af9"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:05:37 crc kubenswrapper[4896]: I0126 16:05:37.994850 4896 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b313e6b-6e1d-4a06-8456-7a8701820af9-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.006823 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b313e6b-6e1d-4a06-8456-7a8701820af9-config-data" (OuterVolumeSpecName: "config-data") pod "0b313e6b-6e1d-4a06-8456-7a8701820af9" (UID: "0b313e6b-6e1d-4a06-8456-7a8701820af9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.075450 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b313e6b-6e1d-4a06-8456-7a8701820af9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b313e6b-6e1d-4a06-8456-7a8701820af9" (UID: "0b313e6b-6e1d-4a06-8456-7a8701820af9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.097568 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b313e6b-6e1d-4a06-8456-7a8701820af9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.097628 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b313e6b-6e1d-4a06-8456-7a8701820af9-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.115052 4896 scope.go:117] "RemoveContainer" containerID="5cf83d3fb2065f17f4f1e1d165cea7d22d56f45d60e2ca9ef4f38272f8daf9e6"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.154562 4896 scope.go:117] "RemoveContainer" containerID="d93c5be52ef0e8972c75f7c431ebc35d82f276268eff238a2b41cc32df9f4cac"
Jan 26 16:05:38 crc kubenswrapper[4896]: E0126 16:05:38.156558 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d93c5be52ef0e8972c75f7c431ebc35d82f276268eff238a2b41cc32df9f4cac\": container with ID starting with d93c5be52ef0e8972c75f7c431ebc35d82f276268eff238a2b41cc32df9f4cac not found: ID does not exist" containerID="d93c5be52ef0e8972c75f7c431ebc35d82f276268eff238a2b41cc32df9f4cac"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.156645 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d93c5be52ef0e8972c75f7c431ebc35d82f276268eff238a2b41cc32df9f4cac"} err="failed to get container status \"d93c5be52ef0e8972c75f7c431ebc35d82f276268eff238a2b41cc32df9f4cac\": rpc error: code = NotFound desc = could not find container \"d93c5be52ef0e8972c75f7c431ebc35d82f276268eff238a2b41cc32df9f4cac\": container with ID starting with d93c5be52ef0e8972c75f7c431ebc35d82f276268eff238a2b41cc32df9f4cac not found: ID does not exist"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.156687 4896 scope.go:117] "RemoveContainer" containerID="42035912fe4a7e322ab0ef7c1e320de1ae62d75c39d9d4f99563f3091175c0d8"
Jan 26 16:05:38 crc kubenswrapper[4896]: E0126 16:05:38.157553 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42035912fe4a7e322ab0ef7c1e320de1ae62d75c39d9d4f99563f3091175c0d8\": container with ID starting with 42035912fe4a7e322ab0ef7c1e320de1ae62d75c39d9d4f99563f3091175c0d8 not found: ID does not exist" containerID="42035912fe4a7e322ab0ef7c1e320de1ae62d75c39d9d4f99563f3091175c0d8"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.157635 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42035912fe4a7e322ab0ef7c1e320de1ae62d75c39d9d4f99563f3091175c0d8"} err="failed to get container status \"42035912fe4a7e322ab0ef7c1e320de1ae62d75c39d9d4f99563f3091175c0d8\": rpc error: code = NotFound desc = could not find container \"42035912fe4a7e322ab0ef7c1e320de1ae62d75c39d9d4f99563f3091175c0d8\": container with ID starting with 42035912fe4a7e322ab0ef7c1e320de1ae62d75c39d9d4f99563f3091175c0d8 not found: ID does not exist"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.157678 4896 scope.go:117] "RemoveContainer" containerID="545f0e95ed94f1d44bfbdfd8a34575d0203f1449b5468b1e5da6a22c98119e5d"
Jan 26 16:05:38 crc kubenswrapper[4896]: E0126 16:05:38.158295 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"545f0e95ed94f1d44bfbdfd8a34575d0203f1449b5468b1e5da6a22c98119e5d\": container with ID starting with 545f0e95ed94f1d44bfbdfd8a34575d0203f1449b5468b1e5da6a22c98119e5d not found: ID does not exist" containerID="545f0e95ed94f1d44bfbdfd8a34575d0203f1449b5468b1e5da6a22c98119e5d"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.158332 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"545f0e95ed94f1d44bfbdfd8a34575d0203f1449b5468b1e5da6a22c98119e5d"} err="failed to get container status \"545f0e95ed94f1d44bfbdfd8a34575d0203f1449b5468b1e5da6a22c98119e5d\": rpc error: code = NotFound desc = could not find container \"545f0e95ed94f1d44bfbdfd8a34575d0203f1449b5468b1e5da6a22c98119e5d\": container with ID starting with 545f0e95ed94f1d44bfbdfd8a34575d0203f1449b5468b1e5da6a22c98119e5d not found: ID does not exist"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.158354 4896 scope.go:117] "RemoveContainer" containerID="5cf83d3fb2065f17f4f1e1d165cea7d22d56f45d60e2ca9ef4f38272f8daf9e6"
Jan 26 16:05:38 crc kubenswrapper[4896]: E0126 16:05:38.158996 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cf83d3fb2065f17f4f1e1d165cea7d22d56f45d60e2ca9ef4f38272f8daf9e6\": container with ID starting with 5cf83d3fb2065f17f4f1e1d165cea7d22d56f45d60e2ca9ef4f38272f8daf9e6 not found: ID does not exist" containerID="5cf83d3fb2065f17f4f1e1d165cea7d22d56f45d60e2ca9ef4f38272f8daf9e6"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.159052 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cf83d3fb2065f17f4f1e1d165cea7d22d56f45d60e2ca9ef4f38272f8daf9e6"} err="failed to get container status \"5cf83d3fb2065f17f4f1e1d165cea7d22d56f45d60e2ca9ef4f38272f8daf9e6\": rpc error: code = NotFound desc = could not find container \"5cf83d3fb2065f17f4f1e1d165cea7d22d56f45d60e2ca9ef4f38272f8daf9e6\": container with ID starting with 5cf83d3fb2065f17f4f1e1d165cea7d22d56f45d60e2ca9ef4f38272f8daf9e6 not found: ID does not exist"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.320381 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.350688 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.369612 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 26 16:05:38 crc kubenswrapper[4896]: E0126 16:05:38.370219 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b313e6b-6e1d-4a06-8456-7a8701820af9" containerName="ceilometer-central-agent"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.370238 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b313e6b-6e1d-4a06-8456-7a8701820af9" containerName="ceilometer-central-agent"
Jan 26 16:05:38 crc kubenswrapper[4896]: E0126 16:05:38.370256 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b313e6b-6e1d-4a06-8456-7a8701820af9" containerName="proxy-httpd"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.370262 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b313e6b-6e1d-4a06-8456-7a8701820af9" containerName="proxy-httpd"
Jan 26 16:05:38 crc kubenswrapper[4896]: E0126 16:05:38.370279 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b313e6b-6e1d-4a06-8456-7a8701820af9" containerName="ceilometer-notification-agent"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.370287 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b313e6b-6e1d-4a06-8456-7a8701820af9" containerName="ceilometer-notification-agent"
Jan 26 16:05:38 crc kubenswrapper[4896]: E0126 16:05:38.370308 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b313e6b-6e1d-4a06-8456-7a8701820af9" containerName="sg-core"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.370313 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b313e6b-6e1d-4a06-8456-7a8701820af9" containerName="sg-core"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.370557 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b313e6b-6e1d-4a06-8456-7a8701820af9" containerName="sg-core"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.370605 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b313e6b-6e1d-4a06-8456-7a8701820af9" containerName="ceilometer-central-agent"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.370615 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b313e6b-6e1d-4a06-8456-7a8701820af9" containerName="ceilometer-notification-agent"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.370630 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b313e6b-6e1d-4a06-8456-7a8701820af9" containerName="proxy-httpd"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.373066 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.380760 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.381116 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.381307 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.389001 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.512806 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f20afffa-3480-40b7-a7b8-116bccafaffb-run-httpd\") pod \"ceilometer-0\" (UID: \"f20afffa-3480-40b7-a7b8-116bccafaffb\") " pod="openstack/ceilometer-0"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.512910 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f20afffa-3480-40b7-a7b8-116bccafaffb-log-httpd\") pod \"ceilometer-0\" (UID: \"f20afffa-3480-40b7-a7b8-116bccafaffb\") " pod="openstack/ceilometer-0"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.512973 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q2tt\" (UniqueName: \"kubernetes.io/projected/f20afffa-3480-40b7-a7b8-116bccafaffb-kube-api-access-8q2tt\") pod \"ceilometer-0\" (UID: \"f20afffa-3480-40b7-a7b8-116bccafaffb\") " pod="openstack/ceilometer-0"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.513018 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f20afffa-3480-40b7-a7b8-116bccafaffb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f20afffa-3480-40b7-a7b8-116bccafaffb\") " pod="openstack/ceilometer-0"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.513113 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f20afffa-3480-40b7-a7b8-116bccafaffb-config-data\") pod \"ceilometer-0\" (UID: \"f20afffa-3480-40b7-a7b8-116bccafaffb\") " pod="openstack/ceilometer-0"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.513230 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f20afffa-3480-40b7-a7b8-116bccafaffb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f20afffa-3480-40b7-a7b8-116bccafaffb\") " pod="openstack/ceilometer-0"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.513272 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f20afffa-3480-40b7-a7b8-116bccafaffb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f20afffa-3480-40b7-a7b8-116bccafaffb\") " pod="openstack/ceilometer-0"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.513341 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f20afffa-3480-40b7-a7b8-116bccafaffb-scripts\") pod \"ceilometer-0\" (UID: \"f20afffa-3480-40b7-a7b8-116bccafaffb\") " pod="openstack/ceilometer-0"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.615741 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f20afffa-3480-40b7-a7b8-116bccafaffb-run-httpd\") pod \"ceilometer-0\" (UID: \"f20afffa-3480-40b7-a7b8-116bccafaffb\") " pod="openstack/ceilometer-0"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.615840 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f20afffa-3480-40b7-a7b8-116bccafaffb-log-httpd\") pod \"ceilometer-0\" (UID: \"f20afffa-3480-40b7-a7b8-116bccafaffb\") " pod="openstack/ceilometer-0"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.615877 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q2tt\" (UniqueName: \"kubernetes.io/projected/f20afffa-3480-40b7-a7b8-116bccafaffb-kube-api-access-8q2tt\") pod \"ceilometer-0\" (UID: \"f20afffa-3480-40b7-a7b8-116bccafaffb\") " pod="openstack/ceilometer-0"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.615906 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f20afffa-3480-40b7-a7b8-116bccafaffb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f20afffa-3480-40b7-a7b8-116bccafaffb\") " pod="openstack/ceilometer-0"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.615990 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f20afffa-3480-40b7-a7b8-116bccafaffb-config-data\") pod \"ceilometer-0\" (UID: \"f20afffa-3480-40b7-a7b8-116bccafaffb\") " pod="openstack/ceilometer-0"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.616068 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f20afffa-3480-40b7-a7b8-116bccafaffb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f20afffa-3480-40b7-a7b8-116bccafaffb\") " pod="openstack/ceilometer-0"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.616108 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f20afffa-3480-40b7-a7b8-116bccafaffb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f20afffa-3480-40b7-a7b8-116bccafaffb\") " pod="openstack/ceilometer-0"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.616157 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f20afffa-3480-40b7-a7b8-116bccafaffb-scripts\") pod \"ceilometer-0\" (UID: \"f20afffa-3480-40b7-a7b8-116bccafaffb\") " pod="openstack/ceilometer-0"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.616488 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f20afffa-3480-40b7-a7b8-116bccafaffb-log-httpd\") pod \"ceilometer-0\" (UID: \"f20afffa-3480-40b7-a7b8-116bccafaffb\") " pod="openstack/ceilometer-0"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.617232 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f20afffa-3480-40b7-a7b8-116bccafaffb-run-httpd\") pod \"ceilometer-0\" (UID: \"f20afffa-3480-40b7-a7b8-116bccafaffb\") " pod="openstack/ceilometer-0"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.622520 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f20afffa-3480-40b7-a7b8-116bccafaffb-scripts\") pod \"ceilometer-0\" (UID: \"f20afffa-3480-40b7-a7b8-116bccafaffb\") " pod="openstack/ceilometer-0"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.622609 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f20afffa-3480-40b7-a7b8-116bccafaffb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f20afffa-3480-40b7-a7b8-116bccafaffb\") " pod="openstack/ceilometer-0"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.623000 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f20afffa-3480-40b7-a7b8-116bccafaffb-config-data\") pod \"ceilometer-0\" (UID: \"f20afffa-3480-40b7-a7b8-116bccafaffb\") " pod="openstack/ceilometer-0"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.624195 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f20afffa-3480-40b7-a7b8-116bccafaffb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f20afffa-3480-40b7-a7b8-116bccafaffb\") " pod="openstack/ceilometer-0"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.639255 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f20afffa-3480-40b7-a7b8-116bccafaffb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f20afffa-3480-40b7-a7b8-116bccafaffb\") " pod="openstack/ceilometer-0"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.640648 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q2tt\" (UniqueName: \"kubernetes.io/projected/f20afffa-3480-40b7-a7b8-116bccafaffb-kube-api-access-8q2tt\") pod \"ceilometer-0\" (UID: \"f20afffa-3480-40b7-a7b8-116bccafaffb\") " pod="openstack/ceilometer-0"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.781144 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b313e6b-6e1d-4a06-8456-7a8701820af9" path="/var/lib/kubelet/pods/0b313e6b-6e1d-4a06-8456-7a8701820af9/volumes"
Jan 26 16:05:38 crc kubenswrapper[4896]: I0126 16:05:38.787023 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 16:05:39 crc kubenswrapper[4896]: I0126 16:05:39.591307 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 16:05:39 crc kubenswrapper[4896]: I0126 16:05:39.697134 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f20afffa-3480-40b7-a7b8-116bccafaffb","Type":"ContainerStarted","Data":"399dd747da116a706f5fb2878d888e567f3bd65daa7032fe4a170194e2df2cc0"}
Jan 26 16:05:41 crc kubenswrapper[4896]: I0126 16:05:41.737555 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-2" podUID="dc8f497b-3dfe-4cfc-aac0-34145dd221ed" containerName="rabbitmq" containerID="cri-o://9009b8e5135e69cd5846269082a241e82ed68550fd9d820c2d5ee3c5dc4197f6" gracePeriod=604795
Jan 26 16:05:43 crc kubenswrapper[4896]: I0126 16:05:43.278538 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="a13f72f8-afaf-4e0f-b76b-342e5391579c" containerName="rabbitmq" containerID="cri-o://88b89da700a3a44ad0f898c495f3baa0bd9e98b34b88753a421b9955801f7582" gracePeriod=604794
Jan 26 16:05:47 crc kubenswrapper[4896]: I0126 16:05:47.971918 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="dc8f497b-3dfe-4cfc-aac0-34145dd221ed" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused"
Jan 26 16:05:48 crc kubenswrapper[4896]: I0126 16:05:48.105560 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="a13f72f8-afaf-4e0f-b76b-342e5391579c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: connect: connection refused"
Jan 26 16:05:48 crc kubenswrapper[4896]: I0126 16:05:48.760374 4896 scope.go:117] "RemoveContainer" containerID="eef508224f0cdcfb0579b0234e72c3c5503ce5cf1713a9bee24c9feccf4983cb"
Jan 26 16:05:48 crc kubenswrapper[4896]: E0126 16:05:48.760999 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 16:05:50 crc kubenswrapper[4896]: I0126 16:05:50.108881 4896 generic.go:334] "Generic (PLEG): container finished" podID="dc8f497b-3dfe-4cfc-aac0-34145dd221ed" containerID="9009b8e5135e69cd5846269082a241e82ed68550fd9d820c2d5ee3c5dc4197f6" exitCode=0
Jan 26 16:05:50 crc kubenswrapper[4896]: I0126 16:05:50.109335 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"dc8f497b-3dfe-4cfc-aac0-34145dd221ed","Type":"ContainerDied","Data":"9009b8e5135e69cd5846269082a241e82ed68550fd9d820c2d5ee3c5dc4197f6"}
Jan 26 16:05:50 crc kubenswrapper[4896]: I0126 16:05:50.113117 4896 generic.go:334] "Generic (PLEG): container finished" podID="a13f72f8-afaf-4e0f-b76b-342e5391579c" containerID="88b89da700a3a44ad0f898c495f3baa0bd9e98b34b88753a421b9955801f7582" exitCode=0
Jan 26 16:05:50 crc kubenswrapper[4896]: I0126 16:05:50.113143 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a13f72f8-afaf-4e0f-b76b-342e5391579c","Type":"ContainerDied","Data":"88b89da700a3a44ad0f898c495f3baa0bd9e98b34b88753a421b9955801f7582"}
Jan 26 16:05:53 crc kubenswrapper[4896]: I0126 16:05:53.953728 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-w28sf"]
Jan 26 16:05:53 crc kubenswrapper[4896]: I0126 16:05:53.958348 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-w28sf"
Jan 26 16:05:53 crc kubenswrapper[4896]: I0126 16:05:53.963014 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam"
Jan 26 16:05:53 crc kubenswrapper[4896]: I0126 16:05:53.988906 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-w28sf"]
Jan 26 16:05:54 crc kubenswrapper[4896]: I0126 16:05:54.054216 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c0aa3260-9957-45db-ac06-736f6302647a-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-w28sf\" (UID: \"c0aa3260-9957-45db-ac06-736f6302647a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-w28sf"
Jan 26 16:05:54 crc kubenswrapper[4896]: I0126 16:05:54.054318 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0aa3260-9957-45db-ac06-736f6302647a-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-w28sf\" (UID: \"c0aa3260-9957-45db-ac06-736f6302647a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-w28sf"
Jan 26 16:05:54 crc kubenswrapper[4896]: I0126 16:05:54.054824 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c0aa3260-9957-45db-ac06-736f6302647a-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-w28sf\" (UID: \"c0aa3260-9957-45db-ac06-736f6302647a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-w28sf"
Jan 26 16:05:54 crc kubenswrapper[4896]: I0126 16:05:54.055028 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c0aa3260-9957-45db-ac06-736f6302647a-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-w28sf\" (UID: \"c0aa3260-9957-45db-ac06-736f6302647a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-w28sf"
Jan 26 16:05:54 crc kubenswrapper[4896]: I0126 16:05:54.055115 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmfjl\" (UniqueName: \"kubernetes.io/projected/c0aa3260-9957-45db-ac06-736f6302647a-kube-api-access-wmfjl\") pod \"dnsmasq-dns-7d84b4d45c-w28sf\" (UID: \"c0aa3260-9957-45db-ac06-736f6302647a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-w28sf"
Jan 26 16:05:54 crc kubenswrapper[4896]: I0126 16:05:54.055179 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0aa3260-9957-45db-ac06-736f6302647a-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-w28sf\" (UID: \"c0aa3260-9957-45db-ac06-736f6302647a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-w28sf"
Jan 26 16:05:54 crc kubenswrapper[4896]: I0126 16:05:54.055320 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0aa3260-9957-45db-ac06-736f6302647a-config\") pod \"dnsmasq-dns-7d84b4d45c-w28sf\" (UID: \"c0aa3260-9957-45db-ac06-736f6302647a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-w28sf"
Jan 26 16:05:54 crc kubenswrapper[4896]: I0126 16:05:54.157824 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0aa3260-9957-45db-ac06-736f6302647a-config\") pod \"dnsmasq-dns-7d84b4d45c-w28sf\" (UID: \"c0aa3260-9957-45db-ac06-736f6302647a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-w28sf"
Jan 26 16:05:54 crc kubenswrapper[4896]: I0126 16:05:54.157949 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c0aa3260-9957-45db-ac06-736f6302647a-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-w28sf\" (UID: \"c0aa3260-9957-45db-ac06-736f6302647a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-w28sf"
Jan 26 16:05:54 crc kubenswrapper[4896]: I0126 16:05:54.158042 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0aa3260-9957-45db-ac06-736f6302647a-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-w28sf\" (UID: \"c0aa3260-9957-45db-ac06-736f6302647a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-w28sf"
Jan 26 16:05:54 crc kubenswrapper[4896]: I0126 16:05:54.158145 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c0aa3260-9957-45db-ac06-736f6302647a-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-w28sf\" (UID: \"c0aa3260-9957-45db-ac06-736f6302647a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-w28sf"
Jan 26 16:05:54 crc kubenswrapper[4896]: I0126 16:05:54.158209 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c0aa3260-9957-45db-ac06-736f6302647a-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-w28sf\" (UID: \"c0aa3260-9957-45db-ac06-736f6302647a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-w28sf"
Jan 26 16:05:54 crc kubenswrapper[4896]: I0126 16:05:54.158234 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmfjl\" (UniqueName: \"kubernetes.io/projected/c0aa3260-9957-45db-ac06-736f6302647a-kube-api-access-wmfjl\") pod \"dnsmasq-dns-7d84b4d45c-w28sf\" (UID:
\"c0aa3260-9957-45db-ac06-736f6302647a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-w28sf" Jan 26 16:05:54 crc kubenswrapper[4896]: I0126 16:05:54.158252 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0aa3260-9957-45db-ac06-736f6302647a-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-w28sf\" (UID: \"c0aa3260-9957-45db-ac06-736f6302647a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-w28sf" Jan 26 16:05:54 crc kubenswrapper[4896]: I0126 16:05:54.159237 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0aa3260-9957-45db-ac06-736f6302647a-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-w28sf\" (UID: \"c0aa3260-9957-45db-ac06-736f6302647a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-w28sf" Jan 26 16:05:54 crc kubenswrapper[4896]: I0126 16:05:54.160605 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c0aa3260-9957-45db-ac06-736f6302647a-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-w28sf\" (UID: \"c0aa3260-9957-45db-ac06-736f6302647a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-w28sf" Jan 26 16:05:54 crc kubenswrapper[4896]: I0126 16:05:54.160721 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c0aa3260-9957-45db-ac06-736f6302647a-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-w28sf\" (UID: \"c0aa3260-9957-45db-ac06-736f6302647a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-w28sf" Jan 26 16:05:54 crc kubenswrapper[4896]: I0126 16:05:54.160939 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0aa3260-9957-45db-ac06-736f6302647a-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-w28sf\" (UID: \"c0aa3260-9957-45db-ac06-736f6302647a\") " 
pod="openstack/dnsmasq-dns-7d84b4d45c-w28sf" Jan 26 16:05:54 crc kubenswrapper[4896]: I0126 16:05:54.162291 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0aa3260-9957-45db-ac06-736f6302647a-config\") pod \"dnsmasq-dns-7d84b4d45c-w28sf\" (UID: \"c0aa3260-9957-45db-ac06-736f6302647a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-w28sf" Jan 26 16:05:54 crc kubenswrapper[4896]: I0126 16:05:54.164290 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c0aa3260-9957-45db-ac06-736f6302647a-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-w28sf\" (UID: \"c0aa3260-9957-45db-ac06-736f6302647a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-w28sf" Jan 26 16:05:54 crc kubenswrapper[4896]: I0126 16:05:54.181805 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmfjl\" (UniqueName: \"kubernetes.io/projected/c0aa3260-9957-45db-ac06-736f6302647a-kube-api-access-wmfjl\") pod \"dnsmasq-dns-7d84b4d45c-w28sf\" (UID: \"c0aa3260-9957-45db-ac06-736f6302647a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-w28sf" Jan 26 16:05:54 crc kubenswrapper[4896]: I0126 16:05:54.288768 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-w28sf" Jan 26 16:05:59 crc kubenswrapper[4896]: I0126 16:05:59.760449 4896 scope.go:117] "RemoveContainer" containerID="eef508224f0cdcfb0579b0234e72c3c5503ce5cf1713a9bee24c9feccf4983cb" Jan 26 16:05:59 crc kubenswrapper[4896]: E0126 16:05:59.761373 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.629415 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a13f72f8-afaf-4e0f-b76b-342e5391579c","Type":"ContainerDied","Data":"825acbd6b7339e0980cab6e0ec051ef5abc137cf0ba62fadf0496601291cf316"} Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.630038 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="825acbd6b7339e0980cab6e0ec051ef5abc137cf0ba62fadf0496601291cf316" Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.632997 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"dc8f497b-3dfe-4cfc-aac0-34145dd221ed","Type":"ContainerDied","Data":"d5cf626f61879ba9da34e714a6d2567663f05b5e6f47a9b2d9de92f8b0d6de41"} Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.633039 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5cf626f61879ba9da34e714a6d2567663f05b5e6f47a9b2d9de92f8b0d6de41" Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.680115 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.685054 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.851594 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-rabbitmq-confd\") pod \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\" (UID: \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") " Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.851674 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a13f72f8-afaf-4e0f-b76b-342e5391579c-rabbitmq-erlang-cookie\") pod \"a13f72f8-afaf-4e0f-b76b-342e5391579c\" (UID: \"a13f72f8-afaf-4e0f-b76b-342e5391579c\") " Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.852128 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b528b92f-3514-47a3-bc55-900ec41388e0\") pod \"a13f72f8-afaf-4e0f-b76b-342e5391579c\" (UID: \"a13f72f8-afaf-4e0f-b76b-342e5391579c\") " Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.852164 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-erlang-cookie-secret\") pod \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\" (UID: \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") " Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.852219 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-rabbitmq-tls\") pod \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\" 
(UID: \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") " Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.852241 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a13f72f8-afaf-4e0f-b76b-342e5391579c-rabbitmq-plugins\") pod \"a13f72f8-afaf-4e0f-b76b-342e5391579c\" (UID: \"a13f72f8-afaf-4e0f-b76b-342e5391579c\") " Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.852274 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a13f72f8-afaf-4e0f-b76b-342e5391579c-server-conf\") pod \"a13f72f8-afaf-4e0f-b76b-342e5391579c\" (UID: \"a13f72f8-afaf-4e0f-b76b-342e5391579c\") " Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.852317 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-pod-info\") pod \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\" (UID: \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") " Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.852339 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-server-conf\") pod \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\" (UID: \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") " Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.852384 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-rabbitmq-plugins\") pod \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\" (UID: \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") " Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.852404 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-plugins-conf\") pod \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\" (UID: \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") " Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.852464 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-rabbitmq-erlang-cookie\") pod \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\" (UID: \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") " Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.852509 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a13f72f8-afaf-4e0f-b76b-342e5391579c-pod-info\") pod \"a13f72f8-afaf-4e0f-b76b-342e5391579c\" (UID: \"a13f72f8-afaf-4e0f-b76b-342e5391579c\") " Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.852555 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a13f72f8-afaf-4e0f-b76b-342e5391579c-config-data\") pod \"a13f72f8-afaf-4e0f-b76b-342e5391579c\" (UID: \"a13f72f8-afaf-4e0f-b76b-342e5391579c\") " Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.852591 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6chz8\" (UniqueName: \"kubernetes.io/projected/a13f72f8-afaf-4e0f-b76b-342e5391579c-kube-api-access-6chz8\") pod \"a13f72f8-afaf-4e0f-b76b-342e5391579c\" (UID: \"a13f72f8-afaf-4e0f-b76b-342e5391579c\") " Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.852638 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a13f72f8-afaf-4e0f-b76b-342e5391579c-rabbitmq-tls\") pod \"a13f72f8-afaf-4e0f-b76b-342e5391579c\" (UID: \"a13f72f8-afaf-4e0f-b76b-342e5391579c\") " Jan 26 16:06:02 crc kubenswrapper[4896]: 
I0126 16:06:02.852701 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a13f72f8-afaf-4e0f-b76b-342e5391579c-plugins-conf\") pod \"a13f72f8-afaf-4e0f-b76b-342e5391579c\" (UID: \"a13f72f8-afaf-4e0f-b76b-342e5391579c\") " Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.852732 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a13f72f8-afaf-4e0f-b76b-342e5391579c-erlang-cookie-secret\") pod \"a13f72f8-afaf-4e0f-b76b-342e5391579c\" (UID: \"a13f72f8-afaf-4e0f-b76b-342e5391579c\") " Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.852754 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a13f72f8-afaf-4e0f-b76b-342e5391579c-rabbitmq-confd\") pod \"a13f72f8-afaf-4e0f-b76b-342e5391579c\" (UID: \"a13f72f8-afaf-4e0f-b76b-342e5391579c\") " Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.852984 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d26b942-826b-4618-a675-4a54d25047ef\") pod \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\" (UID: \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") " Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.853057 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvcb6\" (UniqueName: \"kubernetes.io/projected/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-kube-api-access-lvcb6\") pod \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\" (UID: \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") " Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.853077 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-config-data\") pod \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\" (UID: \"dc8f497b-3dfe-4cfc-aac0-34145dd221ed\") " Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.856591 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a13f72f8-afaf-4e0f-b76b-342e5391579c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "a13f72f8-afaf-4e0f-b76b-342e5391579c" (UID: "a13f72f8-afaf-4e0f-b76b-342e5391579c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.858794 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a13f72f8-afaf-4e0f-b76b-342e5391579c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "a13f72f8-afaf-4e0f-b76b-342e5391579c" (UID: "a13f72f8-afaf-4e0f-b76b-342e5391579c"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.861159 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "dc8f497b-3dfe-4cfc-aac0-34145dd221ed" (UID: "dc8f497b-3dfe-4cfc-aac0-34145dd221ed"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.862010 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "dc8f497b-3dfe-4cfc-aac0-34145dd221ed" (UID: "dc8f497b-3dfe-4cfc-aac0-34145dd221ed"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.862482 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "dc8f497b-3dfe-4cfc-aac0-34145dd221ed" (UID: "dc8f497b-3dfe-4cfc-aac0-34145dd221ed"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.864175 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a13f72f8-afaf-4e0f-b76b-342e5391579c-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "a13f72f8-afaf-4e0f-b76b-342e5391579c" (UID: "a13f72f8-afaf-4e0f-b76b-342e5391579c"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.864742 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a13f72f8-afaf-4e0f-b76b-342e5391579c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "a13f72f8-afaf-4e0f-b76b-342e5391579c" (UID: "a13f72f8-afaf-4e0f-b76b-342e5391579c"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.865723 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "dc8f497b-3dfe-4cfc-aac0-34145dd221ed" (UID: "dc8f497b-3dfe-4cfc-aac0-34145dd221ed"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.869052 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a13f72f8-afaf-4e0f-b76b-342e5391579c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "a13f72f8-afaf-4e0f-b76b-342e5391579c" (UID: "a13f72f8-afaf-4e0f-b76b-342e5391579c"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.874080 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-pod-info" (OuterVolumeSpecName: "pod-info") pod "dc8f497b-3dfe-4cfc-aac0-34145dd221ed" (UID: "dc8f497b-3dfe-4cfc-aac0-34145dd221ed"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.875673 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/a13f72f8-afaf-4e0f-b76b-342e5391579c-pod-info" (OuterVolumeSpecName: "pod-info") pod "a13f72f8-afaf-4e0f-b76b-342e5391579c" (UID: "a13f72f8-afaf-4e0f-b76b-342e5391579c"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.884532 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-kube-api-access-lvcb6" (OuterVolumeSpecName: "kube-api-access-lvcb6") pod "dc8f497b-3dfe-4cfc-aac0-34145dd221ed" (UID: "dc8f497b-3dfe-4cfc-aac0-34145dd221ed"). InnerVolumeSpecName "kube-api-access-lvcb6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.885013 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "dc8f497b-3dfe-4cfc-aac0-34145dd221ed" (UID: "dc8f497b-3dfe-4cfc-aac0-34145dd221ed"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.914903 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a13f72f8-afaf-4e0f-b76b-342e5391579c-kube-api-access-6chz8" (OuterVolumeSpecName: "kube-api-access-6chz8") pod "a13f72f8-afaf-4e0f-b76b-342e5391579c" (UID: "a13f72f8-afaf-4e0f-b76b-342e5391579c"). InnerVolumeSpecName "kube-api-access-6chz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.951940 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b528b92f-3514-47a3-bc55-900ec41388e0" (OuterVolumeSpecName: "persistence") pod "a13f72f8-afaf-4e0f-b76b-342e5391579c" (UID: "a13f72f8-afaf-4e0f-b76b-342e5391579c"). InnerVolumeSpecName "pvc-b528b92f-3514-47a3-bc55-900ec41388e0". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.956490 4896 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a13f72f8-afaf-4e0f-b76b-342e5391579c-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.956523 4896 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a13f72f8-afaf-4e0f-b76b-342e5391579c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.956533 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvcb6\" (UniqueName: \"kubernetes.io/projected/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-kube-api-access-lvcb6\") on node \"crc\" DevicePath \"\"" Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.956545 4896 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a13f72f8-afaf-4e0f-b76b-342e5391579c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.956583 4896 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b528b92f-3514-47a3-bc55-900ec41388e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b528b92f-3514-47a3-bc55-900ec41388e0\") on node \"crc\" " Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.956615 4896 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.956630 4896 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-rabbitmq-tls\") on node \"crc\" 
DevicePath \"\"" Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.956641 4896 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a13f72f8-afaf-4e0f-b76b-342e5391579c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.956654 4896 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-pod-info\") on node \"crc\" DevicePath \"\"" Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.956665 4896 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.956675 4896 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.956685 4896 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.956697 4896 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a13f72f8-afaf-4e0f-b76b-342e5391579c-pod-info\") on node \"crc\" DevicePath \"\"" Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.956707 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6chz8\" (UniqueName: \"kubernetes.io/projected/a13f72f8-afaf-4e0f-b76b-342e5391579c-kube-api-access-6chz8\") on node \"crc\" DevicePath \"\"" Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.956717 4896 
reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a13f72f8-afaf-4e0f-b76b-342e5391579c-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.963785 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d26b942-826b-4618-a675-4a54d25047ef" (OuterVolumeSpecName: "persistence") pod "dc8f497b-3dfe-4cfc-aac0-34145dd221ed" (UID: "dc8f497b-3dfe-4cfc-aac0-34145dd221ed"). InnerVolumeSpecName "pvc-5d26b942-826b-4618-a675-4a54d25047ef". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.994768 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-config-data" (OuterVolumeSpecName: "config-data") pod "dc8f497b-3dfe-4cfc-aac0-34145dd221ed" (UID: "dc8f497b-3dfe-4cfc-aac0-34145dd221ed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:06:02 crc kubenswrapper[4896]: I0126 16:06:02.994935 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a13f72f8-afaf-4e0f-b76b-342e5391579c-server-conf" (OuterVolumeSpecName: "server-conf") pod "a13f72f8-afaf-4e0f-b76b-342e5391579c" (UID: "a13f72f8-afaf-4e0f-b76b-342e5391579c"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:06:03 crc kubenswrapper[4896]: I0126 16:06:03.008622 4896 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 26 16:06:03 crc kubenswrapper[4896]: I0126 16:06:03.008963 4896 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b528b92f-3514-47a3-bc55-900ec41388e0" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b528b92f-3514-47a3-bc55-900ec41388e0") on node "crc"
Jan 26 16:06:03 crc kubenswrapper[4896]: I0126 16:06:03.040754 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a13f72f8-afaf-4e0f-b76b-342e5391579c-config-data" (OuterVolumeSpecName: "config-data") pod "a13f72f8-afaf-4e0f-b76b-342e5391579c" (UID: "a13f72f8-afaf-4e0f-b76b-342e5391579c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 16:06:03 crc kubenswrapper[4896]: I0126 16:06:03.069152 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-server-conf" (OuterVolumeSpecName: "server-conf") pod "dc8f497b-3dfe-4cfc-aac0-34145dd221ed" (UID: "dc8f497b-3dfe-4cfc-aac0-34145dd221ed"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 16:06:03 crc kubenswrapper[4896]: I0126 16:06:03.080292 4896 reconciler_common.go:293] "Volume detached for volume \"pvc-b528b92f-3514-47a3-bc55-900ec41388e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b528b92f-3514-47a3-bc55-900ec41388e0\") on node \"crc\" DevicePath \"\""
Jan 26 16:06:03 crc kubenswrapper[4896]: I0126 16:06:03.080336 4896 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a13f72f8-afaf-4e0f-b76b-342e5391579c-server-conf\") on node \"crc\" DevicePath \"\""
Jan 26 16:06:03 crc kubenswrapper[4896]: I0126 16:06:03.080350 4896 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-server-conf\") on node \"crc\" DevicePath \"\""
Jan 26 16:06:03 crc kubenswrapper[4896]: I0126 16:06:03.080361 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a13f72f8-afaf-4e0f-b76b-342e5391579c-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 16:06:03 crc kubenswrapper[4896]: I0126 16:06:03.080392 4896 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-5d26b942-826b-4618-a675-4a54d25047ef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d26b942-826b-4618-a675-4a54d25047ef\") on node \"crc\" "
Jan 26 16:06:03 crc kubenswrapper[4896]: I0126 16:06:03.080420 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 16:06:03 crc kubenswrapper[4896]: I0126 16:06:03.084604 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "dc8f497b-3dfe-4cfc-aac0-34145dd221ed" (UID: "dc8f497b-3dfe-4cfc-aac0-34145dd221ed"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:06:03 crc kubenswrapper[4896]: I0126 16:06:03.137806 4896 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Jan 26 16:06:03 crc kubenswrapper[4896]: I0126 16:06:03.138032 4896 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-5d26b942-826b-4618-a675-4a54d25047ef" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d26b942-826b-4618-a675-4a54d25047ef") on node "crc"
Jan 26 16:06:03 crc kubenswrapper[4896]: I0126 16:06:03.187897 4896 reconciler_common.go:293] "Volume detached for volume \"pvc-5d26b942-826b-4618-a675-4a54d25047ef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d26b942-826b-4618-a675-4a54d25047ef\") on node \"crc\" DevicePath \"\""
Jan 26 16:06:03 crc kubenswrapper[4896]: I0126 16:06:03.187944 4896 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dc8f497b-3dfe-4cfc-aac0-34145dd221ed-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 26 16:06:03 crc kubenswrapper[4896]: I0126 16:06:03.197539 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a13f72f8-afaf-4e0f-b76b-342e5391579c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "a13f72f8-afaf-4e0f-b76b-342e5391579c" (UID: "a13f72f8-afaf-4e0f-b76b-342e5391579c"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:06:03 crc kubenswrapper[4896]: I0126 16:06:03.221891 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="dc8f497b-3dfe-4cfc-aac0-34145dd221ed" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: i/o timeout"
Jan 26 16:06:03 crc kubenswrapper[4896]: I0126 16:06:03.226562 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="a13f72f8-afaf-4e0f-b76b-342e5391579c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: i/o timeout"
Jan 26 16:06:03 crc kubenswrapper[4896]: I0126 16:06:03.290035 4896 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a13f72f8-afaf-4e0f-b76b-342e5391579c-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 26 16:06:03 crc kubenswrapper[4896]: I0126 16:06:03.709615 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2"
Jan 26 16:06:03 crc kubenswrapper[4896]: I0126 16:06:03.709948 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 26 16:06:03 crc kubenswrapper[4896]: E0126 16:06:03.800564 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Jan 26 16:06:03 crc kubenswrapper[4896]: E0126 16:06:03.800739 4896 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Jan 26 16:06:03 crc kubenswrapper[4896]: E0126 16:06:03.800899 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hnr4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-csgwp_openstack(609ce882-8e94-4cbc-badf-fed5a521ec43): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 16:06:03 crc kubenswrapper[4896]: E0126 16:06:03.802333 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-csgwp" podUID="609ce882-8e94-4cbc-badf-fed5a521ec43"
Jan 26 16:06:03 crc kubenswrapper[4896]: I0126 16:06:03.828765 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"]
Jan 26 16:06:03 crc kubenswrapper[4896]: I0126 16:06:03.852883 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-2"]
Jan 26 16:06:03 crc kubenswrapper[4896]: I0126 16:06:03.881156 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 26 16:06:03 crc kubenswrapper[4896]: I0126 16:06:03.901076 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 26 16:06:03 crc kubenswrapper[4896]: I0126 16:06:03.928611 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 26 16:06:03 crc kubenswrapper[4896]: E0126 16:06:03.930258 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a13f72f8-afaf-4e0f-b76b-342e5391579c" containerName="setup-container"
Jan 26 16:06:03 crc kubenswrapper[4896]: I0126 16:06:03.930283 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="a13f72f8-afaf-4e0f-b76b-342e5391579c" containerName="setup-container"
Jan 26 16:06:03 crc kubenswrapper[4896]: E0126 16:06:03.930313 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc8f497b-3dfe-4cfc-aac0-34145dd221ed" containerName="setup-container"
Jan 26 16:06:03 crc kubenswrapper[4896]: I0126 16:06:03.930320 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc8f497b-3dfe-4cfc-aac0-34145dd221ed" containerName="setup-container"
Jan 26 16:06:03 crc kubenswrapper[4896]: E0126 16:06:03.930347 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a13f72f8-afaf-4e0f-b76b-342e5391579c" containerName="rabbitmq"
Jan 26 16:06:03 crc kubenswrapper[4896]: I0126 16:06:03.930352 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="a13f72f8-afaf-4e0f-b76b-342e5391579c" containerName="rabbitmq"
Jan 26 16:06:03 crc kubenswrapper[4896]: E0126 16:06:03.930368 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc8f497b-3dfe-4cfc-aac0-34145dd221ed" containerName="rabbitmq"
Jan 26 16:06:03 crc kubenswrapper[4896]: I0126 16:06:03.930373 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc8f497b-3dfe-4cfc-aac0-34145dd221ed" containerName="rabbitmq"
Jan 26 16:06:03 crc kubenswrapper[4896]: I0126 16:06:03.930689 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="a13f72f8-afaf-4e0f-b76b-342e5391579c" containerName="rabbitmq"
Jan 26 16:06:03 crc kubenswrapper[4896]: I0126 16:06:03.930722 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc8f497b-3dfe-4cfc-aac0-34145dd221ed" containerName="rabbitmq"
Jan 26 16:06:03 crc kubenswrapper[4896]: I0126 16:06:03.933225 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 26 16:06:03 crc kubenswrapper[4896]: I0126 16:06:03.938125 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-66zhq"
Jan 26 16:06:03 crc kubenswrapper[4896]: I0126 16:06:03.938417 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Jan 26 16:06:03 crc kubenswrapper[4896]: I0126 16:06:03.938568 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Jan 26 16:06:03 crc kubenswrapper[4896]: I0126 16:06:03.938710 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Jan 26 16:06:03 crc kubenswrapper[4896]: I0126 16:06:03.938857 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Jan 26 16:06:03 crc kubenswrapper[4896]: I0126 16:06:03.938979 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Jan 26 16:06:03 crc kubenswrapper[4896]: I0126 16:06:03.944982 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Jan 26 16:06:03 crc kubenswrapper[4896]: I0126 16:06:03.945276 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"]
Jan 26 16:06:03 crc kubenswrapper[4896]: I0126 16:06:03.954579 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2"
Jan 26 16:06:03 crc kubenswrapper[4896]: I0126 16:06:03.957562 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 26 16:06:03 crc kubenswrapper[4896]: I0126 16:06:03.976697 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"]
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.100624 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0141a12a-f7a3-47cc-b0ac-7853a684fcf8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0141a12a-f7a3-47cc-b0ac-7853a684fcf8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.108921 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0141a12a-f7a3-47cc-b0ac-7853a684fcf8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0141a12a-f7a3-47cc-b0ac-7853a684fcf8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.109113 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3cb4dd6a-0deb-4730-8b5d-590b8981433b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"3cb4dd6a-0deb-4730-8b5d-590b8981433b\") " pod="openstack/rabbitmq-server-2"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.109202 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0141a12a-f7a3-47cc-b0ac-7853a684fcf8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0141a12a-f7a3-47cc-b0ac-7853a684fcf8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.109292 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5d26b942-826b-4618-a675-4a54d25047ef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d26b942-826b-4618-a675-4a54d25047ef\") pod \"rabbitmq-server-2\" (UID: \"3cb4dd6a-0deb-4730-8b5d-590b8981433b\") " pod="openstack/rabbitmq-server-2"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.109337 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0141a12a-f7a3-47cc-b0ac-7853a684fcf8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0141a12a-f7a3-47cc-b0ac-7853a684fcf8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.109371 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3cb4dd6a-0deb-4730-8b5d-590b8981433b-pod-info\") pod \"rabbitmq-server-2\" (UID: \"3cb4dd6a-0deb-4730-8b5d-590b8981433b\") " pod="openstack/rabbitmq-server-2"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.109394 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3cb4dd6a-0deb-4730-8b5d-590b8981433b-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"3cb4dd6a-0deb-4730-8b5d-590b8981433b\") " pod="openstack/rabbitmq-server-2"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.109472 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3cb4dd6a-0deb-4730-8b5d-590b8981433b-server-conf\") pod \"rabbitmq-server-2\" (UID: \"3cb4dd6a-0deb-4730-8b5d-590b8981433b\") " pod="openstack/rabbitmq-server-2"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.109509 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0141a12a-f7a3-47cc-b0ac-7853a684fcf8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0141a12a-f7a3-47cc-b0ac-7853a684fcf8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.109567 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0141a12a-f7a3-47cc-b0ac-7853a684fcf8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0141a12a-f7a3-47cc-b0ac-7853a684fcf8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.109632 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3cb4dd6a-0deb-4730-8b5d-590b8981433b-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"3cb4dd6a-0deb-4730-8b5d-590b8981433b\") " pod="openstack/rabbitmq-server-2"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.109688 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpf89\" (UniqueName: \"kubernetes.io/projected/0141a12a-f7a3-47cc-b0ac-7853a684fcf8-kube-api-access-mpf89\") pod \"rabbitmq-cell1-server-0\" (UID: \"0141a12a-f7a3-47cc-b0ac-7853a684fcf8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.109805 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3cb4dd6a-0deb-4730-8b5d-590b8981433b-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"3cb4dd6a-0deb-4730-8b5d-590b8981433b\") " pod="openstack/rabbitmq-server-2"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.109873 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0141a12a-f7a3-47cc-b0ac-7853a684fcf8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0141a12a-f7a3-47cc-b0ac-7853a684fcf8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.109938 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0141a12a-f7a3-47cc-b0ac-7853a684fcf8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0141a12a-f7a3-47cc-b0ac-7853a684fcf8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.110072 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgjd9\" (UniqueName: \"kubernetes.io/projected/3cb4dd6a-0deb-4730-8b5d-590b8981433b-kube-api-access-fgjd9\") pod \"rabbitmq-server-2\" (UID: \"3cb4dd6a-0deb-4730-8b5d-590b8981433b\") " pod="openstack/rabbitmq-server-2"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.110155 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3cb4dd6a-0deb-4730-8b5d-590b8981433b-config-data\") pod \"rabbitmq-server-2\" (UID: \"3cb4dd6a-0deb-4730-8b5d-590b8981433b\") " pod="openstack/rabbitmq-server-2"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.110191 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3cb4dd6a-0deb-4730-8b5d-590b8981433b-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"3cb4dd6a-0deb-4730-8b5d-590b8981433b\") " pod="openstack/rabbitmq-server-2"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.110235 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b528b92f-3514-47a3-bc55-900ec41388e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b528b92f-3514-47a3-bc55-900ec41388e0\") pod \"rabbitmq-cell1-server-0\" (UID: \"0141a12a-f7a3-47cc-b0ac-7853a684fcf8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.110269 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3cb4dd6a-0deb-4730-8b5d-590b8981433b-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"3cb4dd6a-0deb-4730-8b5d-590b8981433b\") " pod="openstack/rabbitmq-server-2"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.110300 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0141a12a-f7a3-47cc-b0ac-7853a684fcf8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0141a12a-f7a3-47cc-b0ac-7853a684fcf8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.212812 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5d26b942-826b-4618-a675-4a54d25047ef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d26b942-826b-4618-a675-4a54d25047ef\") pod \"rabbitmq-server-2\" (UID: \"3cb4dd6a-0deb-4730-8b5d-590b8981433b\") " pod="openstack/rabbitmq-server-2"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.212872 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0141a12a-f7a3-47cc-b0ac-7853a684fcf8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0141a12a-f7a3-47cc-b0ac-7853a684fcf8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.212899 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3cb4dd6a-0deb-4730-8b5d-590b8981433b-pod-info\") pod \"rabbitmq-server-2\" (UID: \"3cb4dd6a-0deb-4730-8b5d-590b8981433b\") " pod="openstack/rabbitmq-server-2"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.212923 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3cb4dd6a-0deb-4730-8b5d-590b8981433b-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"3cb4dd6a-0deb-4730-8b5d-590b8981433b\") " pod="openstack/rabbitmq-server-2"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.212957 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3cb4dd6a-0deb-4730-8b5d-590b8981433b-server-conf\") pod \"rabbitmq-server-2\" (UID: \"3cb4dd6a-0deb-4730-8b5d-590b8981433b\") " pod="openstack/rabbitmq-server-2"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.212973 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0141a12a-f7a3-47cc-b0ac-7853a684fcf8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0141a12a-f7a3-47cc-b0ac-7853a684fcf8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.213001 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0141a12a-f7a3-47cc-b0ac-7853a684fcf8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0141a12a-f7a3-47cc-b0ac-7853a684fcf8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.213028 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3cb4dd6a-0deb-4730-8b5d-590b8981433b-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"3cb4dd6a-0deb-4730-8b5d-590b8981433b\") " pod="openstack/rabbitmq-server-2"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.213052 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpf89\" (UniqueName: \"kubernetes.io/projected/0141a12a-f7a3-47cc-b0ac-7853a684fcf8-kube-api-access-mpf89\") pod \"rabbitmq-cell1-server-0\" (UID: \"0141a12a-f7a3-47cc-b0ac-7853a684fcf8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.213129 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3cb4dd6a-0deb-4730-8b5d-590b8981433b-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"3cb4dd6a-0deb-4730-8b5d-590b8981433b\") " pod="openstack/rabbitmq-server-2"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.213161 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0141a12a-f7a3-47cc-b0ac-7853a684fcf8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0141a12a-f7a3-47cc-b0ac-7853a684fcf8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.213185 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0141a12a-f7a3-47cc-b0ac-7853a684fcf8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0141a12a-f7a3-47cc-b0ac-7853a684fcf8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.213238 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgjd9\" (UniqueName: \"kubernetes.io/projected/3cb4dd6a-0deb-4730-8b5d-590b8981433b-kube-api-access-fgjd9\") pod \"rabbitmq-server-2\" (UID: \"3cb4dd6a-0deb-4730-8b5d-590b8981433b\") " pod="openstack/rabbitmq-server-2"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.213269 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3cb4dd6a-0deb-4730-8b5d-590b8981433b-config-data\") pod \"rabbitmq-server-2\" (UID: \"3cb4dd6a-0deb-4730-8b5d-590b8981433b\") " pod="openstack/rabbitmq-server-2"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.213290 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3cb4dd6a-0deb-4730-8b5d-590b8981433b-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"3cb4dd6a-0deb-4730-8b5d-590b8981433b\") " pod="openstack/rabbitmq-server-2"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.213313 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b528b92f-3514-47a3-bc55-900ec41388e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b528b92f-3514-47a3-bc55-900ec41388e0\") pod \"rabbitmq-cell1-server-0\" (UID: \"0141a12a-f7a3-47cc-b0ac-7853a684fcf8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.213328 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3cb4dd6a-0deb-4730-8b5d-590b8981433b-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"3cb4dd6a-0deb-4730-8b5d-590b8981433b\") " pod="openstack/rabbitmq-server-2"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.213347 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0141a12a-f7a3-47cc-b0ac-7853a684fcf8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0141a12a-f7a3-47cc-b0ac-7853a684fcf8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.213382 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0141a12a-f7a3-47cc-b0ac-7853a684fcf8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0141a12a-f7a3-47cc-b0ac-7853a684fcf8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.213400 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0141a12a-f7a3-47cc-b0ac-7853a684fcf8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0141a12a-f7a3-47cc-b0ac-7853a684fcf8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.213438 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3cb4dd6a-0deb-4730-8b5d-590b8981433b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"3cb4dd6a-0deb-4730-8b5d-590b8981433b\") " pod="openstack/rabbitmq-server-2"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.213471 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0141a12a-f7a3-47cc-b0ac-7853a684fcf8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0141a12a-f7a3-47cc-b0ac-7853a684fcf8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.213981 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0141a12a-f7a3-47cc-b0ac-7853a684fcf8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0141a12a-f7a3-47cc-b0ac-7853a684fcf8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.217183 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3cb4dd6a-0deb-4730-8b5d-590b8981433b-config-data\") pod \"rabbitmq-server-2\" (UID: \"3cb4dd6a-0deb-4730-8b5d-590b8981433b\") " pod="openstack/rabbitmq-server-2"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.217455 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0141a12a-f7a3-47cc-b0ac-7853a684fcf8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0141a12a-f7a3-47cc-b0ac-7853a684fcf8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.217963 4896 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.217982 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3cb4dd6a-0deb-4730-8b5d-590b8981433b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"3cb4dd6a-0deb-4730-8b5d-590b8981433b\") " pod="openstack/rabbitmq-server-2"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.218009 4896 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5d26b942-826b-4618-a675-4a54d25047ef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d26b942-826b-4618-a675-4a54d25047ef\") pod \"rabbitmq-server-2\" (UID: \"3cb4dd6a-0deb-4730-8b5d-590b8981433b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2c49109239722dabce33dccb586276ae914b08fb55f7760e929dd269f2f12d4c/globalmount\"" pod="openstack/rabbitmq-server-2"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.218280 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0141a12a-f7a3-47cc-b0ac-7853a684fcf8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0141a12a-f7a3-47cc-b0ac-7853a684fcf8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.218676 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3cb4dd6a-0deb-4730-8b5d-590b8981433b-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"3cb4dd6a-0deb-4730-8b5d-590b8981433b\") " pod="openstack/rabbitmq-server-2"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.222243 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3cb4dd6a-0deb-4730-8b5d-590b8981433b-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"3cb4dd6a-0deb-4730-8b5d-590b8981433b\") " pod="openstack/rabbitmq-server-2"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.222923 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0141a12a-f7a3-47cc-b0ac-7853a684fcf8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0141a12a-f7a3-47cc-b0ac-7853a684fcf8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.223057 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3cb4dd6a-0deb-4730-8b5d-590b8981433b-pod-info\") pod \"rabbitmq-server-2\" (UID: \"3cb4dd6a-0deb-4730-8b5d-590b8981433b\") " pod="openstack/rabbitmq-server-2"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.223951 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0141a12a-f7a3-47cc-b0ac-7853a684fcf8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0141a12a-f7a3-47cc-b0ac-7853a684fcf8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.225109 4896 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.225311 4896 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b528b92f-3514-47a3-bc55-900ec41388e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b528b92f-3514-47a3-bc55-900ec41388e0\") pod \"rabbitmq-cell1-server-0\" (UID: \"0141a12a-f7a3-47cc-b0ac-7853a684fcf8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6ad73c85ed4d62ca0cdc37f989da140da4f75d3f6db1d6e7dac21fa29c2e2b14/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.228716 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0141a12a-f7a3-47cc-b0ac-7853a684fcf8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0141a12a-f7a3-47cc-b0ac-7853a684fcf8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.228907 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0141a12a-f7a3-47cc-b0ac-7853a684fcf8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0141a12a-f7a3-47cc-b0ac-7853a684fcf8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.230822 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3cb4dd6a-0deb-4730-8b5d-590b8981433b-server-conf\") pod \"rabbitmq-server-2\" (UID: \"3cb4dd6a-0deb-4730-8b5d-590b8981433b\") " pod="openstack/rabbitmq-server-2"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.231389 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0141a12a-f7a3-47cc-b0ac-7853a684fcf8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0141a12a-f7a3-47cc-b0ac-7853a684fcf8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.232823 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0141a12a-f7a3-47cc-b0ac-7853a684fcf8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0141a12a-f7a3-47cc-b0ac-7853a684fcf8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.234123 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3cb4dd6a-0deb-4730-8b5d-590b8981433b-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"3cb4dd6a-0deb-4730-8b5d-590b8981433b\") " pod="openstack/rabbitmq-server-2"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.237086 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3cb4dd6a-0deb-4730-8b5d-590b8981433b-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"3cb4dd6a-0deb-4730-8b5d-590b8981433b\") " pod="openstack/rabbitmq-server-2"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.237334 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3cb4dd6a-0deb-4730-8b5d-590b8981433b-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"3cb4dd6a-0deb-4730-8b5d-590b8981433b\") " pod="openstack/rabbitmq-server-2"
Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.241697 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgjd9\" (UniqueName: \"kubernetes.io/projected/3cb4dd6a-0deb-4730-8b5d-590b8981433b-kube-api-access-fgjd9\") pod \"rabbitmq-server-2\"
(UID: \"3cb4dd6a-0deb-4730-8b5d-590b8981433b\") " pod="openstack/rabbitmq-server-2" Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.245849 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpf89\" (UniqueName: \"kubernetes.io/projected/0141a12a-f7a3-47cc-b0ac-7853a684fcf8-kube-api-access-mpf89\") pod \"rabbitmq-cell1-server-0\" (UID: \"0141a12a-f7a3-47cc-b0ac-7853a684fcf8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.297703 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b528b92f-3514-47a3-bc55-900ec41388e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b528b92f-3514-47a3-bc55-900ec41388e0\") pod \"rabbitmq-cell1-server-0\" (UID: \"0141a12a-f7a3-47cc-b0ac-7853a684fcf8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.318015 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5d26b942-826b-4618-a675-4a54d25047ef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d26b942-826b-4618-a675-4a54d25047ef\") pod \"rabbitmq-server-2\" (UID: \"3cb4dd6a-0deb-4730-8b5d-590b8981433b\") " pod="openstack/rabbitmq-server-2" Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.560380 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.620149 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 26 16:06:04 crc kubenswrapper[4896]: E0126 16:06:04.742919 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-csgwp" podUID="609ce882-8e94-4cbc-badf-fed5a521ec43" Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.995957 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a13f72f8-afaf-4e0f-b76b-342e5391579c" path="/var/lib/kubelet/pods/a13f72f8-afaf-4e0f-b76b-342e5391579c/volumes" Jan 26 16:06:04 crc kubenswrapper[4896]: I0126 16:06:04.998795 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc8f497b-3dfe-4cfc-aac0-34145dd221ed" path="/var/lib/kubelet/pods/dc8f497b-3dfe-4cfc-aac0-34145dd221ed/volumes" Jan 26 16:06:06 crc kubenswrapper[4896]: E0126 16:06:06.811259 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Jan 26 16:06:06 crc kubenswrapper[4896]: E0126 16:06:06.811531 4896 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Jan 26 16:06:06 crc kubenswrapper[4896]: E0126 16:06:06.811704 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n96h6dh686h549h5d4h5dch65bh595hcfhcbh58ch5b9h674h65dh659h5b9h686h669h54dh547h7ch57bh66bh5dbhc5h58fh95hbfhf5h5bh5d8hb4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8q2tt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(f20afffa-3480-40b7-a7b8-116bccafaffb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 16:06:07 crc kubenswrapper[4896]: I0126 16:06:07.305843 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-w28sf"] Jan 26 16:06:07 crc kubenswrapper[4896]: I0126 16:06:07.326795 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 26 16:06:07 crc kubenswrapper[4896]: I0126 16:06:07.343167 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 16:06:07 crc kubenswrapper[4896]: W0126 16:06:07.347104 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0141a12a_f7a3_47cc_b0ac_7853a684fcf8.slice/crio-f06589c3bbd6fb6bf30064ecb977250d7a170e5d261dfafb8304500c02c3b8be WatchSource:0}: Error finding container f06589c3bbd6fb6bf30064ecb977250d7a170e5d261dfafb8304500c02c3b8be: Status 404 returned error can't find the container with id 
f06589c3bbd6fb6bf30064ecb977250d7a170e5d261dfafb8304500c02c3b8be Jan 26 16:06:07 crc kubenswrapper[4896]: I0126 16:06:07.790092 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0141a12a-f7a3-47cc-b0ac-7853a684fcf8","Type":"ContainerStarted","Data":"f06589c3bbd6fb6bf30064ecb977250d7a170e5d261dfafb8304500c02c3b8be"} Jan 26 16:06:07 crc kubenswrapper[4896]: I0126 16:06:07.791522 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-w28sf" event={"ID":"c0aa3260-9957-45db-ac06-736f6302647a","Type":"ContainerStarted","Data":"5a66a89cb2fa2a3098ca64d7a1488def9b366b89b5c5912ad7e6fe5fd1fd1808"} Jan 26 16:06:07 crc kubenswrapper[4896]: I0126 16:06:07.792730 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"3cb4dd6a-0deb-4730-8b5d-590b8981433b","Type":"ContainerStarted","Data":"cedcbf361c1e8ae1106c195070b927e58432306853702dcfead56f7be518ee9f"} Jan 26 16:06:08 crc kubenswrapper[4896]: I0126 16:06:08.818002 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f20afffa-3480-40b7-a7b8-116bccafaffb","Type":"ContainerStarted","Data":"349be2413e7f0d68601918f4d6aa997ebc96784d1aeae20691ef56a6b5d85077"} Jan 26 16:06:08 crc kubenswrapper[4896]: I0126 16:06:08.820325 4896 generic.go:334] "Generic (PLEG): container finished" podID="c0aa3260-9957-45db-ac06-736f6302647a" containerID="e3f0ba3c593cdb0ffae79b9991282a087ec9b0b0e9d39a721cf67782c80bdfb8" exitCode=0 Jan 26 16:06:08 crc kubenswrapper[4896]: I0126 16:06:08.820377 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-w28sf" event={"ID":"c0aa3260-9957-45db-ac06-736f6302647a","Type":"ContainerDied","Data":"e3f0ba3c593cdb0ffae79b9991282a087ec9b0b0e9d39a721cf67782c80bdfb8"} Jan 26 16:06:09 crc kubenswrapper[4896]: I0126 16:06:09.834737 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7d84b4d45c-w28sf" event={"ID":"c0aa3260-9957-45db-ac06-736f6302647a","Type":"ContainerStarted","Data":"debf43ea5bff1fcab66b45c2cd9eeee8e1ccbf7c8f0e0c86b147c86d0b6330be"} Jan 26 16:06:09 crc kubenswrapper[4896]: I0126 16:06:09.835382 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d84b4d45c-w28sf" Jan 26 16:06:09 crc kubenswrapper[4896]: I0126 16:06:09.837940 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f20afffa-3480-40b7-a7b8-116bccafaffb","Type":"ContainerStarted","Data":"1e2f30e33e34333431235ead592827b411de409af57ec644804d0fe3623aad90"} Jan 26 16:06:09 crc kubenswrapper[4896]: I0126 16:06:09.887164 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d84b4d45c-w28sf" podStartSLOduration=16.887125814 podStartE2EDuration="16.887125814s" podCreationTimestamp="2026-01-26 16:05:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:06:09.876693932 +0000 UTC m=+1927.658574335" watchObservedRunningTime="2026-01-26 16:06:09.887125814 +0000 UTC m=+1927.669006207" Jan 26 16:06:10 crc kubenswrapper[4896]: E0126 16:06:10.824171 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="f20afffa-3480-40b7-a7b8-116bccafaffb" Jan 26 16:06:10 crc kubenswrapper[4896]: I0126 16:06:10.854358 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f20afffa-3480-40b7-a7b8-116bccafaffb","Type":"ContainerStarted","Data":"78f9fb2afeffc2a5b258380bb1e4a200cc01cab88d01b51ee958e3423597fdfc"} Jan 26 16:06:10 crc kubenswrapper[4896]: E0126 16:06:10.856795 4896 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f20afffa-3480-40b7-a7b8-116bccafaffb" Jan 26 16:06:11 crc kubenswrapper[4896]: I0126 16:06:11.866693 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 16:06:11 crc kubenswrapper[4896]: E0126 16:06:11.869446 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f20afffa-3480-40b7-a7b8-116bccafaffb" Jan 26 16:06:12 crc kubenswrapper[4896]: E0126 16:06:12.881465 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f20afffa-3480-40b7-a7b8-116bccafaffb" Jan 26 16:06:13 crc kubenswrapper[4896]: I0126 16:06:13.814639 4896 scope.go:117] "RemoveContainer" containerID="eef508224f0cdcfb0579b0234e72c3c5503ce5cf1713a9bee24c9feccf4983cb" Jan 26 16:06:13 crc kubenswrapper[4896]: E0126 16:06:13.816705 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:06:14 crc kubenswrapper[4896]: 
I0126 16:06:14.293034 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d84b4d45c-w28sf" Jan 26 16:06:14 crc kubenswrapper[4896]: I0126 16:06:14.378147 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-n9zsh"] Jan 26 16:06:14 crc kubenswrapper[4896]: I0126 16:06:14.378438 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7bbf7cf9-n9zsh" podUID="fcb644b1-07e5-4b54-9431-96c251d6875b" containerName="dnsmasq-dns" containerID="cri-o://4e435feb5abeabe0e42b0a60b5a78820c39981e6abcfa10378dc622af9029cc6" gracePeriod=10 Jan 26 16:06:14 crc kubenswrapper[4896]: I0126 16:06:14.587047 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-p48h9"] Jan 26 16:06:14 crc kubenswrapper[4896]: I0126 16:06:14.590309 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-p48h9" Jan 26 16:06:14 crc kubenswrapper[4896]: I0126 16:06:14.607721 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-p48h9"] Jan 26 16:06:14 crc kubenswrapper[4896]: I0126 16:06:14.642266 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbhcf\" (UniqueName: \"kubernetes.io/projected/50fc9d14-ddc2-4347-a52c-498b02787bb7-kube-api-access-qbhcf\") pod \"dnsmasq-dns-6f6df4f56c-p48h9\" (UID: \"50fc9d14-ddc2-4347-a52c-498b02787bb7\") " pod="openstack/dnsmasq-dns-6f6df4f56c-p48h9" Jan 26 16:06:14 crc kubenswrapper[4896]: I0126 16:06:14.642344 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50fc9d14-ddc2-4347-a52c-498b02787bb7-config\") pod \"dnsmasq-dns-6f6df4f56c-p48h9\" (UID: \"50fc9d14-ddc2-4347-a52c-498b02787bb7\") " pod="openstack/dnsmasq-dns-6f6df4f56c-p48h9" Jan 26 
16:06:14 crc kubenswrapper[4896]: I0126 16:06:14.642403 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50fc9d14-ddc2-4347-a52c-498b02787bb7-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-p48h9\" (UID: \"50fc9d14-ddc2-4347-a52c-498b02787bb7\") " pod="openstack/dnsmasq-dns-6f6df4f56c-p48h9" Jan 26 16:06:14 crc kubenswrapper[4896]: I0126 16:06:14.642471 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/50fc9d14-ddc2-4347-a52c-498b02787bb7-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-p48h9\" (UID: \"50fc9d14-ddc2-4347-a52c-498b02787bb7\") " pod="openstack/dnsmasq-dns-6f6df4f56c-p48h9" Jan 26 16:06:14 crc kubenswrapper[4896]: I0126 16:06:14.642655 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/50fc9d14-ddc2-4347-a52c-498b02787bb7-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-p48h9\" (UID: \"50fc9d14-ddc2-4347-a52c-498b02787bb7\") " pod="openstack/dnsmasq-dns-6f6df4f56c-p48h9" Jan 26 16:06:14 crc kubenswrapper[4896]: I0126 16:06:14.642689 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50fc9d14-ddc2-4347-a52c-498b02787bb7-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-p48h9\" (UID: \"50fc9d14-ddc2-4347-a52c-498b02787bb7\") " pod="openstack/dnsmasq-dns-6f6df4f56c-p48h9" Jan 26 16:06:14 crc kubenswrapper[4896]: I0126 16:06:14.642743 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50fc9d14-ddc2-4347-a52c-498b02787bb7-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-p48h9\" (UID: \"50fc9d14-ddc2-4347-a52c-498b02787bb7\") " 
pod="openstack/dnsmasq-dns-6f6df4f56c-p48h9" Jan 26 16:06:14 crc kubenswrapper[4896]: I0126 16:06:14.747999 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbhcf\" (UniqueName: \"kubernetes.io/projected/50fc9d14-ddc2-4347-a52c-498b02787bb7-kube-api-access-qbhcf\") pod \"dnsmasq-dns-6f6df4f56c-p48h9\" (UID: \"50fc9d14-ddc2-4347-a52c-498b02787bb7\") " pod="openstack/dnsmasq-dns-6f6df4f56c-p48h9" Jan 26 16:06:14 crc kubenswrapper[4896]: I0126 16:06:14.748072 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50fc9d14-ddc2-4347-a52c-498b02787bb7-config\") pod \"dnsmasq-dns-6f6df4f56c-p48h9\" (UID: \"50fc9d14-ddc2-4347-a52c-498b02787bb7\") " pod="openstack/dnsmasq-dns-6f6df4f56c-p48h9" Jan 26 16:06:14 crc kubenswrapper[4896]: I0126 16:06:14.748115 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50fc9d14-ddc2-4347-a52c-498b02787bb7-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-p48h9\" (UID: \"50fc9d14-ddc2-4347-a52c-498b02787bb7\") " pod="openstack/dnsmasq-dns-6f6df4f56c-p48h9" Jan 26 16:06:14 crc kubenswrapper[4896]: I0126 16:06:14.748165 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/50fc9d14-ddc2-4347-a52c-498b02787bb7-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-p48h9\" (UID: \"50fc9d14-ddc2-4347-a52c-498b02787bb7\") " pod="openstack/dnsmasq-dns-6f6df4f56c-p48h9" Jan 26 16:06:14 crc kubenswrapper[4896]: I0126 16:06:14.748265 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/50fc9d14-ddc2-4347-a52c-498b02787bb7-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-p48h9\" (UID: \"50fc9d14-ddc2-4347-a52c-498b02787bb7\") " 
pod="openstack/dnsmasq-dns-6f6df4f56c-p48h9" Jan 26 16:06:14 crc kubenswrapper[4896]: I0126 16:06:14.748287 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50fc9d14-ddc2-4347-a52c-498b02787bb7-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-p48h9\" (UID: \"50fc9d14-ddc2-4347-a52c-498b02787bb7\") " pod="openstack/dnsmasq-dns-6f6df4f56c-p48h9" Jan 26 16:06:14 crc kubenswrapper[4896]: I0126 16:06:14.748332 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50fc9d14-ddc2-4347-a52c-498b02787bb7-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-p48h9\" (UID: \"50fc9d14-ddc2-4347-a52c-498b02787bb7\") " pod="openstack/dnsmasq-dns-6f6df4f56c-p48h9" Jan 26 16:06:14 crc kubenswrapper[4896]: I0126 16:06:14.749431 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50fc9d14-ddc2-4347-a52c-498b02787bb7-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-p48h9\" (UID: \"50fc9d14-ddc2-4347-a52c-498b02787bb7\") " pod="openstack/dnsmasq-dns-6f6df4f56c-p48h9" Jan 26 16:06:14 crc kubenswrapper[4896]: I0126 16:06:14.749617 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50fc9d14-ddc2-4347-a52c-498b02787bb7-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-p48h9\" (UID: \"50fc9d14-ddc2-4347-a52c-498b02787bb7\") " pod="openstack/dnsmasq-dns-6f6df4f56c-p48h9" Jan 26 16:06:14 crc kubenswrapper[4896]: I0126 16:06:14.752555 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50fc9d14-ddc2-4347-a52c-498b02787bb7-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-p48h9\" (UID: \"50fc9d14-ddc2-4347-a52c-498b02787bb7\") " pod="openstack/dnsmasq-dns-6f6df4f56c-p48h9" Jan 26 16:06:14 crc kubenswrapper[4896]: I0126 
16:06:14.752629 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/50fc9d14-ddc2-4347-a52c-498b02787bb7-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-p48h9\" (UID: \"50fc9d14-ddc2-4347-a52c-498b02787bb7\") " pod="openstack/dnsmasq-dns-6f6df4f56c-p48h9" Jan 26 16:06:14 crc kubenswrapper[4896]: I0126 16:06:14.753206 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50fc9d14-ddc2-4347-a52c-498b02787bb7-config\") pod \"dnsmasq-dns-6f6df4f56c-p48h9\" (UID: \"50fc9d14-ddc2-4347-a52c-498b02787bb7\") " pod="openstack/dnsmasq-dns-6f6df4f56c-p48h9" Jan 26 16:06:14 crc kubenswrapper[4896]: I0126 16:06:14.754827 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/50fc9d14-ddc2-4347-a52c-498b02787bb7-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-p48h9\" (UID: \"50fc9d14-ddc2-4347-a52c-498b02787bb7\") " pod="openstack/dnsmasq-dns-6f6df4f56c-p48h9" Jan 26 16:06:14 crc kubenswrapper[4896]: I0126 16:06:14.770565 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbhcf\" (UniqueName: \"kubernetes.io/projected/50fc9d14-ddc2-4347-a52c-498b02787bb7-kube-api-access-qbhcf\") pod \"dnsmasq-dns-6f6df4f56c-p48h9\" (UID: \"50fc9d14-ddc2-4347-a52c-498b02787bb7\") " pod="openstack/dnsmasq-dns-6f6df4f56c-p48h9" Jan 26 16:06:14 crc kubenswrapper[4896]: I0126 16:06:14.903901 4896 generic.go:334] "Generic (PLEG): container finished" podID="fcb644b1-07e5-4b54-9431-96c251d6875b" containerID="4e435feb5abeabe0e42b0a60b5a78820c39981e6abcfa10378dc622af9029cc6" exitCode=0 Jan 26 16:06:14 crc kubenswrapper[4896]: I0126 16:06:14.903946 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-n9zsh" 
event={"ID":"fcb644b1-07e5-4b54-9431-96c251d6875b","Type":"ContainerDied","Data":"4e435feb5abeabe0e42b0a60b5a78820c39981e6abcfa10378dc622af9029cc6"} Jan 26 16:06:14 crc kubenswrapper[4896]: I0126 16:06:14.919407 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-p48h9" Jan 26 16:06:15 crc kubenswrapper[4896]: I0126 16:06:15.315328 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-n9zsh" Jan 26 16:06:15 crc kubenswrapper[4896]: I0126 16:06:15.369113 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fcb644b1-07e5-4b54-9431-96c251d6875b-dns-swift-storage-0\") pod \"fcb644b1-07e5-4b54-9431-96c251d6875b\" (UID: \"fcb644b1-07e5-4b54-9431-96c251d6875b\") " Jan 26 16:06:15 crc kubenswrapper[4896]: I0126 16:06:15.369263 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcb644b1-07e5-4b54-9431-96c251d6875b-config\") pod \"fcb644b1-07e5-4b54-9431-96c251d6875b\" (UID: \"fcb644b1-07e5-4b54-9431-96c251d6875b\") " Jan 26 16:06:15 crc kubenswrapper[4896]: I0126 16:06:15.369329 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pvnz\" (UniqueName: \"kubernetes.io/projected/fcb644b1-07e5-4b54-9431-96c251d6875b-kube-api-access-4pvnz\") pod \"fcb644b1-07e5-4b54-9431-96c251d6875b\" (UID: \"fcb644b1-07e5-4b54-9431-96c251d6875b\") " Jan 26 16:06:15 crc kubenswrapper[4896]: I0126 16:06:15.369434 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fcb644b1-07e5-4b54-9431-96c251d6875b-ovsdbserver-sb\") pod \"fcb644b1-07e5-4b54-9431-96c251d6875b\" (UID: \"fcb644b1-07e5-4b54-9431-96c251d6875b\") " Jan 26 16:06:15 crc kubenswrapper[4896]: I0126 
16:06:15.369774 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fcb644b1-07e5-4b54-9431-96c251d6875b-dns-svc\") pod \"fcb644b1-07e5-4b54-9431-96c251d6875b\" (UID: \"fcb644b1-07e5-4b54-9431-96c251d6875b\") " Jan 26 16:06:15 crc kubenswrapper[4896]: I0126 16:06:15.369817 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fcb644b1-07e5-4b54-9431-96c251d6875b-ovsdbserver-nb\") pod \"fcb644b1-07e5-4b54-9431-96c251d6875b\" (UID: \"fcb644b1-07e5-4b54-9431-96c251d6875b\") " Jan 26 16:06:15 crc kubenswrapper[4896]: I0126 16:06:15.378805 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcb644b1-07e5-4b54-9431-96c251d6875b-kube-api-access-4pvnz" (OuterVolumeSpecName: "kube-api-access-4pvnz") pod "fcb644b1-07e5-4b54-9431-96c251d6875b" (UID: "fcb644b1-07e5-4b54-9431-96c251d6875b"). InnerVolumeSpecName "kube-api-access-4pvnz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:06:15 crc kubenswrapper[4896]: I0126 16:06:15.447747 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcb644b1-07e5-4b54-9431-96c251d6875b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fcb644b1-07e5-4b54-9431-96c251d6875b" (UID: "fcb644b1-07e5-4b54-9431-96c251d6875b"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:06:15 crc kubenswrapper[4896]: I0126 16:06:15.472901 4896 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fcb644b1-07e5-4b54-9431-96c251d6875b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:06:15 crc kubenswrapper[4896]: I0126 16:06:15.472950 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pvnz\" (UniqueName: \"kubernetes.io/projected/fcb644b1-07e5-4b54-9431-96c251d6875b-kube-api-access-4pvnz\") on node \"crc\" DevicePath \"\"" Jan 26 16:06:15 crc kubenswrapper[4896]: I0126 16:06:15.509037 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcb644b1-07e5-4b54-9431-96c251d6875b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fcb644b1-07e5-4b54-9431-96c251d6875b" (UID: "fcb644b1-07e5-4b54-9431-96c251d6875b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:06:15 crc kubenswrapper[4896]: I0126 16:06:15.510047 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcb644b1-07e5-4b54-9431-96c251d6875b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fcb644b1-07e5-4b54-9431-96c251d6875b" (UID: "fcb644b1-07e5-4b54-9431-96c251d6875b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:06:15 crc kubenswrapper[4896]: I0126 16:06:15.531200 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcb644b1-07e5-4b54-9431-96c251d6875b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fcb644b1-07e5-4b54-9431-96c251d6875b" (UID: "fcb644b1-07e5-4b54-9431-96c251d6875b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 16:06:15 crc kubenswrapper[4896]: I0126 16:06:15.540211 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcb644b1-07e5-4b54-9431-96c251d6875b-config" (OuterVolumeSpecName: "config") pod "fcb644b1-07e5-4b54-9431-96c251d6875b" (UID: "fcb644b1-07e5-4b54-9431-96c251d6875b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 16:06:15 crc kubenswrapper[4896]: I0126 16:06:15.576168 4896 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fcb644b1-07e5-4b54-9431-96c251d6875b-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 26 16:06:15 crc kubenswrapper[4896]: I0126 16:06:15.576210 4896 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fcb644b1-07e5-4b54-9431-96c251d6875b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 26 16:06:15 crc kubenswrapper[4896]: I0126 16:06:15.576223 4896 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcb644b1-07e5-4b54-9431-96c251d6875b-config\") on node \"crc\" DevicePath \"\""
Jan 26 16:06:15 crc kubenswrapper[4896]: I0126 16:06:15.576235 4896 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fcb644b1-07e5-4b54-9431-96c251d6875b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 26 16:06:15 crc kubenswrapper[4896]: I0126 16:06:15.775465 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-p48h9"]
Jan 26 16:06:15 crc kubenswrapper[4896]: I0126 16:06:15.917673 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-p48h9" event={"ID":"50fc9d14-ddc2-4347-a52c-498b02787bb7","Type":"ContainerStarted","Data":"22314a8dc4722d363ba73a505cbfaf47b77194824e0ef344d4c8d8239b80b2f7"}
Jan 26 16:06:15 crc kubenswrapper[4896]: I0126 16:06:15.924865 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-n9zsh" event={"ID":"fcb644b1-07e5-4b54-9431-96c251d6875b","Type":"ContainerDied","Data":"07d17b4484d68f813e482457c90d9ce99c8e0fc3584378a02122c819ad017e4e"}
Jan 26 16:06:15 crc kubenswrapper[4896]: I0126 16:06:15.924939 4896 scope.go:117] "RemoveContainer" containerID="4e435feb5abeabe0e42b0a60b5a78820c39981e6abcfa10378dc622af9029cc6"
Jan 26 16:06:15 crc kubenswrapper[4896]: I0126 16:06:15.925151 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-n9zsh"
Jan 26 16:06:15 crc kubenswrapper[4896]: I0126 16:06:15.968438 4896 scope.go:117] "RemoveContainer" containerID="feb30a12f2b43889b02ee9c2446cb23dc185d627214d3d73b8c43a6e16b617f2"
Jan 26 16:06:15 crc kubenswrapper[4896]: I0126 16:06:15.974364 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-n9zsh"]
Jan 26 16:06:15 crc kubenswrapper[4896]: I0126 16:06:15.999751 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-n9zsh"]
Jan 26 16:06:16 crc kubenswrapper[4896]: I0126 16:06:16.930372 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcb644b1-07e5-4b54-9431-96c251d6875b" path="/var/lib/kubelet/pods/fcb644b1-07e5-4b54-9431-96c251d6875b/volumes"
Jan 26 16:06:16 crc kubenswrapper[4896]: I0126 16:06:16.948317 4896 generic.go:334] "Generic (PLEG): container finished" podID="50fc9d14-ddc2-4347-a52c-498b02787bb7" containerID="37008ca4eb8836f4bef191409d66857d6477a0b24bba76bda7ba4cdd6c332f61" exitCode=0
Jan 26 16:06:16 crc kubenswrapper[4896]: I0126 16:06:16.948389 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-p48h9" event={"ID":"50fc9d14-ddc2-4347-a52c-498b02787bb7","Type":"ContainerDied","Data":"37008ca4eb8836f4bef191409d66857d6477a0b24bba76bda7ba4cdd6c332f61"}
Jan 26 16:06:16 crc kubenswrapper[4896]: I0126 16:06:16.959860 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0141a12a-f7a3-47cc-b0ac-7853a684fcf8","Type":"ContainerStarted","Data":"82de01889e9e129fb9ee6ff98546519ad851ffcd00a01fa16aa543f44b59dd49"}
Jan 26 16:06:17 crc kubenswrapper[4896]: I0126 16:06:17.973003 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"3cb4dd6a-0deb-4730-8b5d-590b8981433b","Type":"ContainerStarted","Data":"e3c978510f0fe5204be2b846a8b23158b05b2304d304facf327b895dbb742179"}
Jan 26 16:06:17 crc kubenswrapper[4896]: I0126 16:06:17.975342 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-p48h9" event={"ID":"50fc9d14-ddc2-4347-a52c-498b02787bb7","Type":"ContainerStarted","Data":"5b0fae4cf5daf4c7e52ac202fa0e607902860aab78b8eabbfa1ea6453cd98aeb"}
Jan 26 16:06:18 crc kubenswrapper[4896]: I0126 16:06:18.025507 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6df4f56c-p48h9" podStartSLOduration=4.025477139 podStartE2EDuration="4.025477139s" podCreationTimestamp="2026-01-26 16:06:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:06:18.013926329 +0000 UTC m=+1935.795806762" watchObservedRunningTime="2026-01-26 16:06:18.025477139 +0000 UTC m=+1935.807357522"
Jan 26 16:06:18 crc kubenswrapper[4896]: I0126 16:06:18.987667 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6df4f56c-p48h9"
Jan 26 16:06:21 crc kubenswrapper[4896]: I0126 16:06:21.014327 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-csgwp" event={"ID":"609ce882-8e94-4cbc-badf-fed5a521ec43","Type":"ContainerStarted","Data":"df0242c0571a52e1118aa4947dc008443830b84c4f173c9e8a1f80f639212b1d"}
Jan 26 16:06:21 crc kubenswrapper[4896]: I0126 16:06:21.038403 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-csgwp" podStartSLOduration=2.766081035 podStartE2EDuration="49.038382707s" podCreationTimestamp="2026-01-26 16:05:32 +0000 UTC" firstStartedPulling="2026-01-26 16:05:33.717396955 +0000 UTC m=+1891.499277348" lastFinishedPulling="2026-01-26 16:06:19.989698627 +0000 UTC m=+1937.771579020" observedRunningTime="2026-01-26 16:06:21.031946384 +0000 UTC m=+1938.813826807" watchObservedRunningTime="2026-01-26 16:06:21.038382707 +0000 UTC m=+1938.820263110"
Jan 26 16:06:23 crc kubenswrapper[4896]: I0126 16:06:23.044931 4896 generic.go:334] "Generic (PLEG): container finished" podID="609ce882-8e94-4cbc-badf-fed5a521ec43" containerID="df0242c0571a52e1118aa4947dc008443830b84c4f173c9e8a1f80f639212b1d" exitCode=0
Jan 26 16:06:23 crc kubenswrapper[4896]: I0126 16:06:23.045352 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-csgwp" event={"ID":"609ce882-8e94-4cbc-badf-fed5a521ec43","Type":"ContainerDied","Data":"df0242c0571a52e1118aa4947dc008443830b84c4f173c9e8a1f80f639212b1d"}
Jan 26 16:06:24 crc kubenswrapper[4896]: I0126 16:06:24.520001 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-csgwp"
Jan 26 16:06:24 crc kubenswrapper[4896]: I0126 16:06:24.630519 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/609ce882-8e94-4cbc-badf-fed5a521ec43-config-data\") pod \"609ce882-8e94-4cbc-badf-fed5a521ec43\" (UID: \"609ce882-8e94-4cbc-badf-fed5a521ec43\") "
Jan 26 16:06:24 crc kubenswrapper[4896]: I0126 16:06:24.630884 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnr4g\" (UniqueName: \"kubernetes.io/projected/609ce882-8e94-4cbc-badf-fed5a521ec43-kube-api-access-hnr4g\") pod \"609ce882-8e94-4cbc-badf-fed5a521ec43\" (UID: \"609ce882-8e94-4cbc-badf-fed5a521ec43\") "
Jan 26 16:06:24 crc kubenswrapper[4896]: I0126 16:06:24.630918 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/609ce882-8e94-4cbc-badf-fed5a521ec43-combined-ca-bundle\") pod \"609ce882-8e94-4cbc-badf-fed5a521ec43\" (UID: \"609ce882-8e94-4cbc-badf-fed5a521ec43\") "
Jan 26 16:06:24 crc kubenswrapper[4896]: I0126 16:06:24.638834 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/609ce882-8e94-4cbc-badf-fed5a521ec43-kube-api-access-hnr4g" (OuterVolumeSpecName: "kube-api-access-hnr4g") pod "609ce882-8e94-4cbc-badf-fed5a521ec43" (UID: "609ce882-8e94-4cbc-badf-fed5a521ec43"). InnerVolumeSpecName "kube-api-access-hnr4g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:06:24 crc kubenswrapper[4896]: I0126 16:06:24.671199 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/609ce882-8e94-4cbc-badf-fed5a521ec43-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "609ce882-8e94-4cbc-badf-fed5a521ec43" (UID: "609ce882-8e94-4cbc-badf-fed5a521ec43"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:06:24 crc kubenswrapper[4896]: I0126 16:06:24.735882 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnr4g\" (UniqueName: \"kubernetes.io/projected/609ce882-8e94-4cbc-badf-fed5a521ec43-kube-api-access-hnr4g\") on node \"crc\" DevicePath \"\""
Jan 26 16:06:24 crc kubenswrapper[4896]: I0126 16:06:24.735963 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/609ce882-8e94-4cbc-badf-fed5a521ec43-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 16:06:24 crc kubenswrapper[4896]: I0126 16:06:24.750554 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/609ce882-8e94-4cbc-badf-fed5a521ec43-config-data" (OuterVolumeSpecName: "config-data") pod "609ce882-8e94-4cbc-badf-fed5a521ec43" (UID: "609ce882-8e94-4cbc-badf-fed5a521ec43"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:06:24 crc kubenswrapper[4896]: I0126 16:06:24.778132 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 26 16:06:24 crc kubenswrapper[4896]: I0126 16:06:24.840963 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/609ce882-8e94-4cbc-badf-fed5a521ec43-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 16:06:24 crc kubenswrapper[4896]: I0126 16:06:24.920778 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6df4f56c-p48h9"
Jan 26 16:06:25 crc kubenswrapper[4896]: I0126 16:06:25.246944 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-csgwp" event={"ID":"609ce882-8e94-4cbc-badf-fed5a521ec43","Type":"ContainerDied","Data":"824ae15c9cf01adcfd864fbfab3fd78b7697ae068fc601516255bf1a99f2434d"}
Jan 26 16:06:25 crc kubenswrapper[4896]: I0126 16:06:25.246981 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="824ae15c9cf01adcfd864fbfab3fd78b7697ae068fc601516255bf1a99f2434d"
Jan 26 16:06:25 crc kubenswrapper[4896]: I0126 16:06:25.247030 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-csgwp"
Jan 26 16:06:25 crc kubenswrapper[4896]: I0126 16:06:25.322252 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-w28sf"]
Jan 26 16:06:25 crc kubenswrapper[4896]: I0126 16:06:25.323675 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d84b4d45c-w28sf" podUID="c0aa3260-9957-45db-ac06-736f6302647a" containerName="dnsmasq-dns" containerID="cri-o://debf43ea5bff1fcab66b45c2cd9eeee8e1ccbf7c8f0e0c86b147c86d0b6330be" gracePeriod=10
Jan 26 16:06:25 crc kubenswrapper[4896]: I0126 16:06:25.952045 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-w28sf"
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.133773 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0aa3260-9957-45db-ac06-736f6302647a-ovsdbserver-sb\") pod \"c0aa3260-9957-45db-ac06-736f6302647a\" (UID: \"c0aa3260-9957-45db-ac06-736f6302647a\") "
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.134496 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c0aa3260-9957-45db-ac06-736f6302647a-ovsdbserver-nb\") pod \"c0aa3260-9957-45db-ac06-736f6302647a\" (UID: \"c0aa3260-9957-45db-ac06-736f6302647a\") "
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.134540 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0aa3260-9957-45db-ac06-736f6302647a-config\") pod \"c0aa3260-9957-45db-ac06-736f6302647a\" (UID: \"c0aa3260-9957-45db-ac06-736f6302647a\") "
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.134565 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0aa3260-9957-45db-ac06-736f6302647a-dns-svc\") pod \"c0aa3260-9957-45db-ac06-736f6302647a\" (UID: \"c0aa3260-9957-45db-ac06-736f6302647a\") "
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.134629 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmfjl\" (UniqueName: \"kubernetes.io/projected/c0aa3260-9957-45db-ac06-736f6302647a-kube-api-access-wmfjl\") pod \"c0aa3260-9957-45db-ac06-736f6302647a\" (UID: \"c0aa3260-9957-45db-ac06-736f6302647a\") "
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.134672 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c0aa3260-9957-45db-ac06-736f6302647a-dns-swift-storage-0\") pod \"c0aa3260-9957-45db-ac06-736f6302647a\" (UID: \"c0aa3260-9957-45db-ac06-736f6302647a\") "
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.134818 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c0aa3260-9957-45db-ac06-736f6302647a-openstack-edpm-ipam\") pod \"c0aa3260-9957-45db-ac06-736f6302647a\" (UID: \"c0aa3260-9957-45db-ac06-736f6302647a\") "
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.139261 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0aa3260-9957-45db-ac06-736f6302647a-kube-api-access-wmfjl" (OuterVolumeSpecName: "kube-api-access-wmfjl") pod "c0aa3260-9957-45db-ac06-736f6302647a" (UID: "c0aa3260-9957-45db-ac06-736f6302647a"). InnerVolumeSpecName "kube-api-access-wmfjl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.348142 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmfjl\" (UniqueName: \"kubernetes.io/projected/c0aa3260-9957-45db-ac06-736f6302647a-kube-api-access-wmfjl\") on node \"crc\" DevicePath \"\""
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.373973 4896 generic.go:334] "Generic (PLEG): container finished" podID="c0aa3260-9957-45db-ac06-736f6302647a" containerID="debf43ea5bff1fcab66b45c2cd9eeee8e1ccbf7c8f0e0c86b147c86d0b6330be" exitCode=0
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.374041 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-w28sf" event={"ID":"c0aa3260-9957-45db-ac06-736f6302647a","Type":"ContainerDied","Data":"debf43ea5bff1fcab66b45c2cd9eeee8e1ccbf7c8f0e0c86b147c86d0b6330be"}
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.374071 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-w28sf" event={"ID":"c0aa3260-9957-45db-ac06-736f6302647a","Type":"ContainerDied","Data":"5a66a89cb2fa2a3098ca64d7a1488def9b366b89b5c5912ad7e6fe5fd1fd1808"}
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.374089 4896 scope.go:117] "RemoveContainer" containerID="debf43ea5bff1fcab66b45c2cd9eeee8e1ccbf7c8f0e0c86b147c86d0b6330be"
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.374118 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-w28sf"
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.388174 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f20afffa-3480-40b7-a7b8-116bccafaffb","Type":"ContainerStarted","Data":"37931cdbd03d2e22cec7d40fc16a7e397f4e40881a1429b12e3ff38a9e6b816c"}
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.438031 4896 scope.go:117] "RemoveContainer" containerID="e3f0ba3c593cdb0ffae79b9991282a087ec9b0b0e9d39a721cf67782c80bdfb8"
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.441776 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0aa3260-9957-45db-ac06-736f6302647a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c0aa3260-9957-45db-ac06-736f6302647a" (UID: "c0aa3260-9957-45db-ac06-736f6302647a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.449879 4896 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c0aa3260-9957-45db-ac06-736f6302647a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.454081 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0aa3260-9957-45db-ac06-736f6302647a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c0aa3260-9957-45db-ac06-736f6302647a" (UID: "c0aa3260-9957-45db-ac06-736f6302647a"). InnerVolumeSpecName "ovsdbserver-sb".
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.455169 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.823116576 podStartE2EDuration="48.455152142s" podCreationTimestamp="2026-01-26 16:05:38 +0000 UTC" firstStartedPulling="2026-01-26 16:05:39.612622083 +0000 UTC m=+1897.394502476" lastFinishedPulling="2026-01-26 16:06:25.244657649 +0000 UTC m=+1943.026538042" observedRunningTime="2026-01-26 16:06:26.424740218 +0000 UTC m=+1944.206620621" watchObservedRunningTime="2026-01-26 16:06:26.455152142 +0000 UTC m=+1944.237032525"
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.458980 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0aa3260-9957-45db-ac06-736f6302647a-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "c0aa3260-9957-45db-ac06-736f6302647a" (UID: "c0aa3260-9957-45db-ac06-736f6302647a"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.459485 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0aa3260-9957-45db-ac06-736f6302647a-config" (OuterVolumeSpecName: "config") pod "c0aa3260-9957-45db-ac06-736f6302647a" (UID: "c0aa3260-9957-45db-ac06-736f6302647a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.467030 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0aa3260-9957-45db-ac06-736f6302647a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c0aa3260-9957-45db-ac06-736f6302647a" (UID: "c0aa3260-9957-45db-ac06-736f6302647a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.483557 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0aa3260-9957-45db-ac06-736f6302647a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c0aa3260-9957-45db-ac06-736f6302647a" (UID: "c0aa3260-9957-45db-ac06-736f6302647a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.550863 4896 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c0aa3260-9957-45db-ac06-736f6302647a-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.550891 4896 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0aa3260-9957-45db-ac06-736f6302647a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.550958 4896 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c0aa3260-9957-45db-ac06-736f6302647a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.550993 4896 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0aa3260-9957-45db-ac06-736f6302647a-config\") on node \"crc\" DevicePath \"\""
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.551006 4896 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0aa3260-9957-45db-ac06-736f6302647a-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.602110 4896 scope.go:117] "RemoveContainer" containerID="debf43ea5bff1fcab66b45c2cd9eeee8e1ccbf7c8f0e0c86b147c86d0b6330be"
Jan 26 16:06:26 crc kubenswrapper[4896]: E0126 16:06:26.602499 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"debf43ea5bff1fcab66b45c2cd9eeee8e1ccbf7c8f0e0c86b147c86d0b6330be\": container with ID starting with debf43ea5bff1fcab66b45c2cd9eeee8e1ccbf7c8f0e0c86b147c86d0b6330be not found: ID does not exist" containerID="debf43ea5bff1fcab66b45c2cd9eeee8e1ccbf7c8f0e0c86b147c86d0b6330be"
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.602540 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"debf43ea5bff1fcab66b45c2cd9eeee8e1ccbf7c8f0e0c86b147c86d0b6330be"} err="failed to get container status \"debf43ea5bff1fcab66b45c2cd9eeee8e1ccbf7c8f0e0c86b147c86d0b6330be\": rpc error: code = NotFound desc = could not find container \"debf43ea5bff1fcab66b45c2cd9eeee8e1ccbf7c8f0e0c86b147c86d0b6330be\": container with ID starting with debf43ea5bff1fcab66b45c2cd9eeee8e1ccbf7c8f0e0c86b147c86d0b6330be not found: ID does not exist"
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.602568 4896 scope.go:117] "RemoveContainer" containerID="e3f0ba3c593cdb0ffae79b9991282a087ec9b0b0e9d39a721cf67782c80bdfb8"
Jan 26 16:06:26 crc kubenswrapper[4896]: E0126 16:06:26.603021 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3f0ba3c593cdb0ffae79b9991282a087ec9b0b0e9d39a721cf67782c80bdfb8\": container with ID starting with e3f0ba3c593cdb0ffae79b9991282a087ec9b0b0e9d39a721cf67782c80bdfb8 not found: ID does not exist" containerID="e3f0ba3c593cdb0ffae79b9991282a087ec9b0b0e9d39a721cf67782c80bdfb8"
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.603075 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3f0ba3c593cdb0ffae79b9991282a087ec9b0b0e9d39a721cf67782c80bdfb8"} err="failed to get container status \"e3f0ba3c593cdb0ffae79b9991282a087ec9b0b0e9d39a721cf67782c80bdfb8\": rpc error: code = NotFound desc = could not find container \"e3f0ba3c593cdb0ffae79b9991282a087ec9b0b0e9d39a721cf67782c80bdfb8\": container with ID starting with e3f0ba3c593cdb0ffae79b9991282a087ec9b0b0e9d39a721cf67782c80bdfb8 not found: ID does not exist"
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.684693 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-677c6d4d55-s7fl2"]
Jan 26 16:06:26 crc kubenswrapper[4896]: E0126 16:06:26.685423 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0aa3260-9957-45db-ac06-736f6302647a" containerName="init"
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.685452 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0aa3260-9957-45db-ac06-736f6302647a" containerName="init"
Jan 26 16:06:26 crc kubenswrapper[4896]: E0126 16:06:26.685471 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcb644b1-07e5-4b54-9431-96c251d6875b" containerName="dnsmasq-dns"
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.685481 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcb644b1-07e5-4b54-9431-96c251d6875b" containerName="dnsmasq-dns"
Jan 26 16:06:26 crc kubenswrapper[4896]: E0126 16:06:26.685512 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0aa3260-9957-45db-ac06-736f6302647a" containerName="dnsmasq-dns"
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.685533 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0aa3260-9957-45db-ac06-736f6302647a" containerName="dnsmasq-dns"
Jan 26 16:06:26 crc kubenswrapper[4896]: E0126 16:06:26.685556 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcb644b1-07e5-4b54-9431-96c251d6875b" containerName="init"
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.685567 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcb644b1-07e5-4b54-9431-96c251d6875b" containerName="init"
Jan 26 16:06:26 crc kubenswrapper[4896]: E0126 16:06:26.685616 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="609ce882-8e94-4cbc-badf-fed5a521ec43" containerName="heat-db-sync"
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.685627 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="609ce882-8e94-4cbc-badf-fed5a521ec43" containerName="heat-db-sync"
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.686014 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="609ce882-8e94-4cbc-badf-fed5a521ec43" containerName="heat-db-sync"
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.686060 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcb644b1-07e5-4b54-9431-96c251d6875b" containerName="dnsmasq-dns"
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.686077 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0aa3260-9957-45db-ac06-736f6302647a" containerName="dnsmasq-dns"
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.687274 4896 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/heat-engine-677c6d4d55-s7fl2"
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.704383 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-677c6d4d55-s7fl2"]
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.847935 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-w28sf"]
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.860661 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-w28sf"]
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.863648 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/783f20d6-aa0a-4ecf-9dad-d33991c40591-config-data-custom\") pod \"heat-engine-677c6d4d55-s7fl2\" (UID: \"783f20d6-aa0a-4ecf-9dad-d33991c40591\") " pod="openstack/heat-engine-677c6d4d55-s7fl2"
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.863798 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/783f20d6-aa0a-4ecf-9dad-d33991c40591-config-data\") pod \"heat-engine-677c6d4d55-s7fl2\" (UID: \"783f20d6-aa0a-4ecf-9dad-d33991c40591\") " pod="openstack/heat-engine-677c6d4d55-s7fl2"
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.863900 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/783f20d6-aa0a-4ecf-9dad-d33991c40591-combined-ca-bundle\") pod \"heat-engine-677c6d4d55-s7fl2\" (UID: \"783f20d6-aa0a-4ecf-9dad-d33991c40591\") " pod="openstack/heat-engine-677c6d4d55-s7fl2"
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.864048 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q268\" (UniqueName: \"kubernetes.io/projected/783f20d6-aa0a-4ecf-9dad-d33991c40591-kube-api-access-9q268\") pod \"heat-engine-677c6d4d55-s7fl2\" (UID: \"783f20d6-aa0a-4ecf-9dad-d33991c40591\") " pod="openstack/heat-engine-677c6d4d55-s7fl2"
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.873591 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-59b9f8644d-s5wzv"]
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.876027 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-59b9f8644d-s5wzv"
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.885644 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-59b9f8644d-s5wzv"]
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.907644 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-5dfbd8bb4f-kl472"]
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.909803 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5dfbd8bb4f-kl472"
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.935244 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5dfbd8bb4f-kl472"]
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.966462 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q268\" (UniqueName: \"kubernetes.io/projected/783f20d6-aa0a-4ecf-9dad-d33991c40591-kube-api-access-9q268\") pod \"heat-engine-677c6d4d55-s7fl2\" (UID: \"783f20d6-aa0a-4ecf-9dad-d33991c40591\") " pod="openstack/heat-engine-677c6d4d55-s7fl2"
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.967237 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/783f20d6-aa0a-4ecf-9dad-d33991c40591-config-data-custom\") pod \"heat-engine-677c6d4d55-s7fl2\" (UID: \"783f20d6-aa0a-4ecf-9dad-d33991c40591\") " pod="openstack/heat-engine-677c6d4d55-s7fl2"
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.967446 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/783f20d6-aa0a-4ecf-9dad-d33991c40591-config-data\") pod \"heat-engine-677c6d4d55-s7fl2\" (UID: \"783f20d6-aa0a-4ecf-9dad-d33991c40591\") " pod="openstack/heat-engine-677c6d4d55-s7fl2"
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.968231 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/783f20d6-aa0a-4ecf-9dad-d33991c40591-combined-ca-bundle\") pod \"heat-engine-677c6d4d55-s7fl2\" (UID: \"783f20d6-aa0a-4ecf-9dad-d33991c40591\") " pod="openstack/heat-engine-677c6d4d55-s7fl2"
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.972502 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/783f20d6-aa0a-4ecf-9dad-d33991c40591-config-data-custom\") pod \"heat-engine-677c6d4d55-s7fl2\" (UID: \"783f20d6-aa0a-4ecf-9dad-d33991c40591\") " pod="openstack/heat-engine-677c6d4d55-s7fl2"
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.973028 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/783f20d6-aa0a-4ecf-9dad-d33991c40591-config-data\") pod \"heat-engine-677c6d4d55-s7fl2\" (UID: \"783f20d6-aa0a-4ecf-9dad-d33991c40591\") " pod="openstack/heat-engine-677c6d4d55-s7fl2"
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.980672 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/783f20d6-aa0a-4ecf-9dad-d33991c40591-combined-ca-bundle\") pod \"heat-engine-677c6d4d55-s7fl2\" (UID: \"783f20d6-aa0a-4ecf-9dad-d33991c40591\") " pod="openstack/heat-engine-677c6d4d55-s7fl2"
Jan 26 16:06:26 crc kubenswrapper[4896]: I0126 16:06:26.988269 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q268\" (UniqueName: \"kubernetes.io/projected/783f20d6-aa0a-4ecf-9dad-d33991c40591-kube-api-access-9q268\") pod \"heat-engine-677c6d4d55-s7fl2\" (UID: \"783f20d6-aa0a-4ecf-9dad-d33991c40591\") " pod="openstack/heat-engine-677c6d4d55-s7fl2"
Jan 26 16:06:27 crc kubenswrapper[4896]: I0126 16:06:27.024024 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-677c6d4d55-s7fl2"
Jan 26 16:06:27 crc kubenswrapper[4896]: I0126 16:06:27.070769 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec3b067e-a59e-43db-b8f6-435fc273b976-public-tls-certs\") pod \"heat-api-59b9f8644d-s5wzv\" (UID: \"ec3b067e-a59e-43db-b8f6-435fc273b976\") " pod="openstack/heat-api-59b9f8644d-s5wzv"
Jan 26 16:06:27 crc kubenswrapper[4896]: I0126 16:06:27.071133 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6184235f-aaa1-47cc-bc3b-a0a30698cc01-config-data\") pod \"heat-cfnapi-5dfbd8bb4f-kl472\" (UID: \"6184235f-aaa1-47cc-bc3b-a0a30698cc01\") " pod="openstack/heat-cfnapi-5dfbd8bb4f-kl472"
Jan 26 16:06:27 crc kubenswrapper[4896]: I0126 16:06:27.071423 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmcvg\" (UniqueName: \"kubernetes.io/projected/6184235f-aaa1-47cc-bc3b-a0a30698cc01-kube-api-access-fmcvg\") pod \"heat-cfnapi-5dfbd8bb4f-kl472\" (UID: \"6184235f-aaa1-47cc-bc3b-a0a30698cc01\") " pod="openstack/heat-cfnapi-5dfbd8bb4f-kl472"
Jan 26 16:06:27 crc kubenswrapper[4896]: I0126 16:06:27.071550 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6184235f-aaa1-47cc-bc3b-a0a30698cc01-internal-tls-certs\") pod \"heat-cfnapi-5dfbd8bb4f-kl472\" (UID: \"6184235f-aaa1-47cc-bc3b-a0a30698cc01\") " pod="openstack/heat-cfnapi-5dfbd8bb4f-kl472"
Jan 26 16:06:27 crc kubenswrapper[4896]: I0126 16:06:27.071680 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec3b067e-a59e-43db-b8f6-435fc273b976-internal-tls-certs\") pod \"heat-api-59b9f8644d-s5wzv\" (UID: \"ec3b067e-a59e-43db-b8f6-435fc273b976\") " pod="openstack/heat-api-59b9f8644d-s5wzv"
Jan 26 16:06:27 crc kubenswrapper[4896]: I0126 16:06:27.071906 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec3b067e-a59e-43db-b8f6-435fc273b976-config-data-custom\") pod \"heat-api-59b9f8644d-s5wzv\" (UID: \"ec3b067e-a59e-43db-b8f6-435fc273b976\") " pod="openstack/heat-api-59b9f8644d-s5wzv"
Jan 26 16:06:27 crc kubenswrapper[4896]: I0126 16:06:27.072088 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6184235f-aaa1-47cc-bc3b-a0a30698cc01-public-tls-certs\") pod \"heat-cfnapi-5dfbd8bb4f-kl472\" (UID: \"6184235f-aaa1-47cc-bc3b-a0a30698cc01\") " pod="openstack/heat-cfnapi-5dfbd8bb4f-kl472"
Jan 26 16:06:27 crc kubenswrapper[4896]: I0126 16:06:27.072418 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghd9b\" (UniqueName: \"kubernetes.io/projected/ec3b067e-a59e-43db-b8f6-435fc273b976-kube-api-access-ghd9b\") pod \"heat-api-59b9f8644d-s5wzv\" (UID: \"ec3b067e-a59e-43db-b8f6-435fc273b976\") " pod="openstack/heat-api-59b9f8644d-s5wzv"
Jan 26 16:06:27 crc kubenswrapper[4896]: I0126 16:06:27.072512 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec3b067e-a59e-43db-b8f6-435fc273b976-combined-ca-bundle\") pod \"heat-api-59b9f8644d-s5wzv\" (UID: \"ec3b067e-a59e-43db-b8f6-435fc273b976\") " pod="openstack/heat-api-59b9f8644d-s5wzv"
Jan 26 16:06:27 crc kubenswrapper[4896]: I0126 16:06:27.072674 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6184235f-aaa1-47cc-bc3b-a0a30698cc01-combined-ca-bundle\") pod \"heat-cfnapi-5dfbd8bb4f-kl472\" (UID: \"6184235f-aaa1-47cc-bc3b-a0a30698cc01\") " pod="openstack/heat-cfnapi-5dfbd8bb4f-kl472"
Jan 26 16:06:27 crc kubenswrapper[4896]: I0126 16:06:27.072784 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6184235f-aaa1-47cc-bc3b-a0a30698cc01-config-data-custom\") pod \"heat-cfnapi-5dfbd8bb4f-kl472\" (UID: \"6184235f-aaa1-47cc-bc3b-a0a30698cc01\") " pod="openstack/heat-cfnapi-5dfbd8bb4f-kl472"
Jan 26 16:06:27 crc kubenswrapper[4896]: I0126 16:06:27.072891 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec3b067e-a59e-43db-b8f6-435fc273b976-config-data\") pod \"heat-api-59b9f8644d-s5wzv\" (UID: \"ec3b067e-a59e-43db-b8f6-435fc273b976\") " pod="openstack/heat-api-59b9f8644d-s5wzv"
Jan 26 16:06:27 crc kubenswrapper[4896]: I0126 16:06:27.174886 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec3b067e-a59e-43db-b8f6-435fc273b976-config-data\") pod \"heat-api-59b9f8644d-s5wzv\" (UID: \"ec3b067e-a59e-43db-b8f6-435fc273b976\") " pod="openstack/heat-api-59b9f8644d-s5wzv"
Jan 26 16:06:27 crc kubenswrapper[4896]: I0126 16:06:27.180982 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec3b067e-a59e-43db-b8f6-435fc273b976-public-tls-certs\") pod \"heat-api-59b9f8644d-s5wzv\" (UID: \"ec3b067e-a59e-43db-b8f6-435fc273b976\") " pod="openstack/heat-api-59b9f8644d-s5wzv" Jan 26 16:06:27 crc kubenswrapper[4896]: I0126 16:06:27.181133 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6184235f-aaa1-47cc-bc3b-a0a30698cc01-config-data\") pod \"heat-cfnapi-5dfbd8bb4f-kl472\" (UID: \"6184235f-aaa1-47cc-bc3b-a0a30698cc01\") " pod="openstack/heat-cfnapi-5dfbd8bb4f-kl472" Jan 26 16:06:27 crc kubenswrapper[4896]: I0126 16:06:27.181164 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmcvg\" (UniqueName: \"kubernetes.io/projected/6184235f-aaa1-47cc-bc3b-a0a30698cc01-kube-api-access-fmcvg\") pod \"heat-cfnapi-5dfbd8bb4f-kl472\" (UID: \"6184235f-aaa1-47cc-bc3b-a0a30698cc01\") " pod="openstack/heat-cfnapi-5dfbd8bb4f-kl472" Jan 26 16:06:27 crc kubenswrapper[4896]: I0126 16:06:27.181198 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6184235f-aaa1-47cc-bc3b-a0a30698cc01-internal-tls-certs\") pod \"heat-cfnapi-5dfbd8bb4f-kl472\" (UID: \"6184235f-aaa1-47cc-bc3b-a0a30698cc01\") " pod="openstack/heat-cfnapi-5dfbd8bb4f-kl472" Jan 26 16:06:27 crc kubenswrapper[4896]: I0126 16:06:27.181237 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec3b067e-a59e-43db-b8f6-435fc273b976-internal-tls-certs\") pod \"heat-api-59b9f8644d-s5wzv\" (UID: \"ec3b067e-a59e-43db-b8f6-435fc273b976\") " pod="openstack/heat-api-59b9f8644d-s5wzv" Jan 26 16:06:27 crc kubenswrapper[4896]: I0126 16:06:27.181397 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/ec3b067e-a59e-43db-b8f6-435fc273b976-config-data-custom\") pod \"heat-api-59b9f8644d-s5wzv\" (UID: \"ec3b067e-a59e-43db-b8f6-435fc273b976\") " pod="openstack/heat-api-59b9f8644d-s5wzv" Jan 26 16:06:27 crc kubenswrapper[4896]: I0126 16:06:27.181431 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6184235f-aaa1-47cc-bc3b-a0a30698cc01-public-tls-certs\") pod \"heat-cfnapi-5dfbd8bb4f-kl472\" (UID: \"6184235f-aaa1-47cc-bc3b-a0a30698cc01\") " pod="openstack/heat-cfnapi-5dfbd8bb4f-kl472" Jan 26 16:06:27 crc kubenswrapper[4896]: I0126 16:06:27.181514 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghd9b\" (UniqueName: \"kubernetes.io/projected/ec3b067e-a59e-43db-b8f6-435fc273b976-kube-api-access-ghd9b\") pod \"heat-api-59b9f8644d-s5wzv\" (UID: \"ec3b067e-a59e-43db-b8f6-435fc273b976\") " pod="openstack/heat-api-59b9f8644d-s5wzv" Jan 26 16:06:27 crc kubenswrapper[4896]: I0126 16:06:27.181532 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec3b067e-a59e-43db-b8f6-435fc273b976-combined-ca-bundle\") pod \"heat-api-59b9f8644d-s5wzv\" (UID: \"ec3b067e-a59e-43db-b8f6-435fc273b976\") " pod="openstack/heat-api-59b9f8644d-s5wzv" Jan 26 16:06:27 crc kubenswrapper[4896]: I0126 16:06:27.181693 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6184235f-aaa1-47cc-bc3b-a0a30698cc01-combined-ca-bundle\") pod \"heat-cfnapi-5dfbd8bb4f-kl472\" (UID: \"6184235f-aaa1-47cc-bc3b-a0a30698cc01\") " pod="openstack/heat-cfnapi-5dfbd8bb4f-kl472" Jan 26 16:06:27 crc kubenswrapper[4896]: I0126 16:06:27.181738 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/6184235f-aaa1-47cc-bc3b-a0a30698cc01-config-data-custom\") pod \"heat-cfnapi-5dfbd8bb4f-kl472\" (UID: \"6184235f-aaa1-47cc-bc3b-a0a30698cc01\") " pod="openstack/heat-cfnapi-5dfbd8bb4f-kl472" Jan 26 16:06:27 crc kubenswrapper[4896]: I0126 16:06:27.183476 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec3b067e-a59e-43db-b8f6-435fc273b976-config-data\") pod \"heat-api-59b9f8644d-s5wzv\" (UID: \"ec3b067e-a59e-43db-b8f6-435fc273b976\") " pod="openstack/heat-api-59b9f8644d-s5wzv" Jan 26 16:06:27 crc kubenswrapper[4896]: I0126 16:06:27.186987 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6184235f-aaa1-47cc-bc3b-a0a30698cc01-config-data-custom\") pod \"heat-cfnapi-5dfbd8bb4f-kl472\" (UID: \"6184235f-aaa1-47cc-bc3b-a0a30698cc01\") " pod="openstack/heat-cfnapi-5dfbd8bb4f-kl472" Jan 26 16:06:27 crc kubenswrapper[4896]: I0126 16:06:27.188329 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6184235f-aaa1-47cc-bc3b-a0a30698cc01-config-data\") pod \"heat-cfnapi-5dfbd8bb4f-kl472\" (UID: \"6184235f-aaa1-47cc-bc3b-a0a30698cc01\") " pod="openstack/heat-cfnapi-5dfbd8bb4f-kl472" Jan 26 16:06:27 crc kubenswrapper[4896]: I0126 16:06:27.192785 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6184235f-aaa1-47cc-bc3b-a0a30698cc01-internal-tls-certs\") pod \"heat-cfnapi-5dfbd8bb4f-kl472\" (UID: \"6184235f-aaa1-47cc-bc3b-a0a30698cc01\") " pod="openstack/heat-cfnapi-5dfbd8bb4f-kl472" Jan 26 16:06:27 crc kubenswrapper[4896]: I0126 16:06:27.194444 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec3b067e-a59e-43db-b8f6-435fc273b976-combined-ca-bundle\") pod 
\"heat-api-59b9f8644d-s5wzv\" (UID: \"ec3b067e-a59e-43db-b8f6-435fc273b976\") " pod="openstack/heat-api-59b9f8644d-s5wzv" Jan 26 16:06:27 crc kubenswrapper[4896]: I0126 16:06:27.199315 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec3b067e-a59e-43db-b8f6-435fc273b976-public-tls-certs\") pod \"heat-api-59b9f8644d-s5wzv\" (UID: \"ec3b067e-a59e-43db-b8f6-435fc273b976\") " pod="openstack/heat-api-59b9f8644d-s5wzv" Jan 26 16:06:27 crc kubenswrapper[4896]: I0126 16:06:27.199358 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec3b067e-a59e-43db-b8f6-435fc273b976-internal-tls-certs\") pod \"heat-api-59b9f8644d-s5wzv\" (UID: \"ec3b067e-a59e-43db-b8f6-435fc273b976\") " pod="openstack/heat-api-59b9f8644d-s5wzv" Jan 26 16:06:27 crc kubenswrapper[4896]: I0126 16:06:27.200685 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6184235f-aaa1-47cc-bc3b-a0a30698cc01-public-tls-certs\") pod \"heat-cfnapi-5dfbd8bb4f-kl472\" (UID: \"6184235f-aaa1-47cc-bc3b-a0a30698cc01\") " pod="openstack/heat-cfnapi-5dfbd8bb4f-kl472" Jan 26 16:06:27 crc kubenswrapper[4896]: I0126 16:06:27.202678 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec3b067e-a59e-43db-b8f6-435fc273b976-config-data-custom\") pod \"heat-api-59b9f8644d-s5wzv\" (UID: \"ec3b067e-a59e-43db-b8f6-435fc273b976\") " pod="openstack/heat-api-59b9f8644d-s5wzv" Jan 26 16:06:27 crc kubenswrapper[4896]: I0126 16:06:27.203357 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6184235f-aaa1-47cc-bc3b-a0a30698cc01-combined-ca-bundle\") pod \"heat-cfnapi-5dfbd8bb4f-kl472\" (UID: \"6184235f-aaa1-47cc-bc3b-a0a30698cc01\") " 
pod="openstack/heat-cfnapi-5dfbd8bb4f-kl472" Jan 26 16:06:27 crc kubenswrapper[4896]: I0126 16:06:27.204280 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmcvg\" (UniqueName: \"kubernetes.io/projected/6184235f-aaa1-47cc-bc3b-a0a30698cc01-kube-api-access-fmcvg\") pod \"heat-cfnapi-5dfbd8bb4f-kl472\" (UID: \"6184235f-aaa1-47cc-bc3b-a0a30698cc01\") " pod="openstack/heat-cfnapi-5dfbd8bb4f-kl472" Jan 26 16:06:27 crc kubenswrapper[4896]: I0126 16:06:27.207333 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghd9b\" (UniqueName: \"kubernetes.io/projected/ec3b067e-a59e-43db-b8f6-435fc273b976-kube-api-access-ghd9b\") pod \"heat-api-59b9f8644d-s5wzv\" (UID: \"ec3b067e-a59e-43db-b8f6-435fc273b976\") " pod="openstack/heat-api-59b9f8644d-s5wzv" Jan 26 16:06:27 crc kubenswrapper[4896]: I0126 16:06:27.210565 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-59b9f8644d-s5wzv" Jan 26 16:06:27 crc kubenswrapper[4896]: I0126 16:06:27.251411 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-5dfbd8bb4f-kl472" Jan 26 16:06:27 crc kubenswrapper[4896]: W0126 16:06:27.867333 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod783f20d6_aa0a_4ecf_9dad_d33991c40591.slice/crio-ed4df83b832181806fe3a5e1f2fc43ce29e03f5ee31da192a2caf2013ca7da5a WatchSource:0}: Error finding container ed4df83b832181806fe3a5e1f2fc43ce29e03f5ee31da192a2caf2013ca7da5a: Status 404 returned error can't find the container with id ed4df83b832181806fe3a5e1f2fc43ce29e03f5ee31da192a2caf2013ca7da5a Jan 26 16:06:27 crc kubenswrapper[4896]: I0126 16:06:27.874484 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-677c6d4d55-s7fl2"] Jan 26 16:06:28 crc kubenswrapper[4896]: I0126 16:06:28.065763 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-59b9f8644d-s5wzv"] Jan 26 16:06:28 crc kubenswrapper[4896]: W0126 16:06:28.271928 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6184235f_aaa1_47cc_bc3b_a0a30698cc01.slice/crio-11f20d750bb3ddb2ec3e5475adae184d3cf3a5ed804cba08029bca75b101e56e WatchSource:0}: Error finding container 11f20d750bb3ddb2ec3e5475adae184d3cf3a5ed804cba08029bca75b101e56e: Status 404 returned error can't find the container with id 11f20d750bb3ddb2ec3e5475adae184d3cf3a5ed804cba08029bca75b101e56e Jan 26 16:06:28 crc kubenswrapper[4896]: I0126 16:06:28.274797 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5dfbd8bb4f-kl472"] Jan 26 16:06:28 crc kubenswrapper[4896]: I0126 16:06:28.715468 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5dfbd8bb4f-kl472" event={"ID":"6184235f-aaa1-47cc-bc3b-a0a30698cc01","Type":"ContainerStarted","Data":"11f20d750bb3ddb2ec3e5475adae184d3cf3a5ed804cba08029bca75b101e56e"} Jan 26 16:06:28 crc kubenswrapper[4896]: 
I0126 16:06:28.718628 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-677c6d4d55-s7fl2" event={"ID":"783f20d6-aa0a-4ecf-9dad-d33991c40591","Type":"ContainerStarted","Data":"8df8d50dbe33c48afdc4203b550dfeb876b6dfc98f007461f9dcb5bff76ed8a2"} Jan 26 16:06:28 crc kubenswrapper[4896]: I0126 16:06:28.718745 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-677c6d4d55-s7fl2" event={"ID":"783f20d6-aa0a-4ecf-9dad-d33991c40591","Type":"ContainerStarted","Data":"ed4df83b832181806fe3a5e1f2fc43ce29e03f5ee31da192a2caf2013ca7da5a"} Jan 26 16:06:28 crc kubenswrapper[4896]: I0126 16:06:28.719351 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-677c6d4d55-s7fl2" Jan 26 16:06:28 crc kubenswrapper[4896]: I0126 16:06:28.736559 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-59b9f8644d-s5wzv" event={"ID":"ec3b067e-a59e-43db-b8f6-435fc273b976","Type":"ContainerStarted","Data":"bca0f9ce250d99c4224715483b267039b1063408fd21a4495a1a5e2accf0a003"} Jan 26 16:06:28 crc kubenswrapper[4896]: I0126 16:06:28.765526 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-677c6d4d55-s7fl2" podStartSLOduration=2.765506369 podStartE2EDuration="2.765506369s" podCreationTimestamp="2026-01-26 16:06:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:06:28.746878231 +0000 UTC m=+1946.528758634" watchObservedRunningTime="2026-01-26 16:06:28.765506369 +0000 UTC m=+1946.547386762" Jan 26 16:06:28 crc kubenswrapper[4896]: I0126 16:06:28.798798 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0aa3260-9957-45db-ac06-736f6302647a" path="/var/lib/kubelet/pods/c0aa3260-9957-45db-ac06-736f6302647a/volumes" Jan 26 16:06:29 crc kubenswrapper[4896]: I0126 16:06:29.761760 4896 scope.go:117] "RemoveContainer" 
containerID="eef508224f0cdcfb0579b0234e72c3c5503ce5cf1713a9bee24c9feccf4983cb" Jan 26 16:06:29 crc kubenswrapper[4896]: E0126 16:06:29.762627 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:06:31 crc kubenswrapper[4896]: I0126 16:06:31.782517 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5dfbd8bb4f-kl472" event={"ID":"6184235f-aaa1-47cc-bc3b-a0a30698cc01","Type":"ContainerStarted","Data":"7e30aa4354a5c82f10f5255a7d67c1d1b131ca0007965cdf35235db1ab3c0db4"} Jan 26 16:06:31 crc kubenswrapper[4896]: I0126 16:06:31.783208 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-5dfbd8bb4f-kl472" Jan 26 16:06:31 crc kubenswrapper[4896]: I0126 16:06:31.785000 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-59b9f8644d-s5wzv" event={"ID":"ec3b067e-a59e-43db-b8f6-435fc273b976","Type":"ContainerStarted","Data":"089faab332704c8d37821417fc0e428cd7ca5bbe8a27f8d05c65e985e044124a"} Jan 26 16:06:31 crc kubenswrapper[4896]: I0126 16:06:31.785236 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-59b9f8644d-s5wzv" Jan 26 16:06:31 crc kubenswrapper[4896]: I0126 16:06:31.810260 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-5dfbd8bb4f-kl472" podStartSLOduration=2.875206451 podStartE2EDuration="5.810237858s" podCreationTimestamp="2026-01-26 16:06:26 +0000 UTC" firstStartedPulling="2026-01-26 16:06:28.275040348 +0000 UTC m=+1946.056920741" lastFinishedPulling="2026-01-26 16:06:31.210071755 +0000 UTC 
m=+1948.991952148" observedRunningTime="2026-01-26 16:06:31.801888018 +0000 UTC m=+1949.583768411" watchObservedRunningTime="2026-01-26 16:06:31.810237858 +0000 UTC m=+1949.592118251" Jan 26 16:06:31 crc kubenswrapper[4896]: I0126 16:06:31.840085 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-59b9f8644d-s5wzv" podStartSLOduration=2.706498148 podStartE2EDuration="5.840064097s" podCreationTimestamp="2026-01-26 16:06:26 +0000 UTC" firstStartedPulling="2026-01-26 16:06:28.074406503 +0000 UTC m=+1945.856286896" lastFinishedPulling="2026-01-26 16:06:31.207972451 +0000 UTC m=+1948.989852845" observedRunningTime="2026-01-26 16:06:31.817631774 +0000 UTC m=+1949.599512177" watchObservedRunningTime="2026-01-26 16:06:31.840064097 +0000 UTC m=+1949.621944480" Jan 26 16:06:36 crc kubenswrapper[4896]: I0126 16:06:36.255612 4896 scope.go:117] "RemoveContainer" containerID="457fd0f2ef23512434f704c800ba49efe9480908c29d2de0a9af1d9178f01f2d" Jan 26 16:06:36 crc kubenswrapper[4896]: I0126 16:06:36.320253 4896 scope.go:117] "RemoveContainer" containerID="38d6290f1f3c67badf8db1ac0706222e68bff33ff81d99ce0da725a82647b9ff" Jan 26 16:06:36 crc kubenswrapper[4896]: I0126 16:06:36.363313 4896 scope.go:117] "RemoveContainer" containerID="9009b8e5135e69cd5846269082a241e82ed68550fd9d820c2d5ee3c5dc4197f6" Jan 26 16:06:36 crc kubenswrapper[4896]: I0126 16:06:36.442479 4896 scope.go:117] "RemoveContainer" containerID="8ee4ce832a158d9875bc4598e8a7f21961800b70d3c724cc7128fbb12e1524fe" Jan 26 16:06:36 crc kubenswrapper[4896]: I0126 16:06:36.516963 4896 scope.go:117] "RemoveContainer" containerID="5285cbca490498b0756067eefe09f89d22391cf62f12a87dd3c066307f0e869f" Jan 26 16:06:36 crc kubenswrapper[4896]: I0126 16:06:36.549495 4896 scope.go:117] "RemoveContainer" containerID="88b89da700a3a44ad0f898c495f3baa0bd9e98b34b88753a421b9955801f7582" Jan 26 16:06:38 crc kubenswrapper[4896]: I0126 16:06:38.678172 4896 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/heat-api-59b9f8644d-s5wzv" Jan 26 16:06:38 crc kubenswrapper[4896]: I0126 16:06:38.736755 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-5dfbd8bb4f-kl472" Jan 26 16:06:38 crc kubenswrapper[4896]: I0126 16:06:38.800054 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7756b77457-pgv8j"] Jan 26 16:06:38 crc kubenswrapper[4896]: I0126 16:06:38.800336 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-7756b77457-pgv8j" podUID="19034db0-0d0e-4a65-b91d-890180a924f2" containerName="heat-api" containerID="cri-o://68f2e4b14c6a8445b80489d249bbbdd906fc152724b924165dff4e66aaf13944" gracePeriod=60 Jan 26 16:06:38 crc kubenswrapper[4896]: I0126 16:06:38.838355 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6f77b8c468-mpb4b"] Jan 26 16:06:38 crc kubenswrapper[4896]: I0126 16:06:38.838600 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-6f77b8c468-mpb4b" podUID="8e3ce827-294a-4320-9545-c01e6aa46bbb" containerName="heat-cfnapi" containerID="cri-o://f236bb30a02f8b23734b4228c03d6b1b400dfa9431f3e94e0491aedcc5c83b9d" gracePeriod=60 Jan 26 16:06:41 crc kubenswrapper[4896]: I0126 16:06:41.999275 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-7756b77457-pgv8j" podUID="19034db0-0d0e-4a65-b91d-890180a924f2" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.227:8004/healthcheck\": read tcp 10.217.0.2:45966->10.217.0.227:8004: read: connection reset by peer" Jan 26 16:06:42 crc kubenswrapper[4896]: I0126 16:06:42.328258 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-6f77b8c468-mpb4b" podUID="8e3ce827-294a-4320-9545-c01e6aa46bbb" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.228:8000/healthcheck\": 
read tcp 10.217.0.2:45480->10.217.0.228:8000: read: connection reset by peer" Jan 26 16:06:42 crc kubenswrapper[4896]: I0126 16:06:42.950633 4896 generic.go:334] "Generic (PLEG): container finished" podID="8e3ce827-294a-4320-9545-c01e6aa46bbb" containerID="f236bb30a02f8b23734b4228c03d6b1b400dfa9431f3e94e0491aedcc5c83b9d" exitCode=0 Jan 26 16:06:42 crc kubenswrapper[4896]: I0126 16:06:42.950957 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6f77b8c468-mpb4b" event={"ID":"8e3ce827-294a-4320-9545-c01e6aa46bbb","Type":"ContainerDied","Data":"f236bb30a02f8b23734b4228c03d6b1b400dfa9431f3e94e0491aedcc5c83b9d"} Jan 26 16:06:42 crc kubenswrapper[4896]: I0126 16:06:42.958097 4896 generic.go:334] "Generic (PLEG): container finished" podID="19034db0-0d0e-4a65-b91d-890180a924f2" containerID="68f2e4b14c6a8445b80489d249bbbdd906fc152724b924165dff4e66aaf13944" exitCode=0 Jan 26 16:06:42 crc kubenswrapper[4896]: I0126 16:06:42.958147 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7756b77457-pgv8j" event={"ID":"19034db0-0d0e-4a65-b91d-890180a924f2","Type":"ContainerDied","Data":"68f2e4b14c6a8445b80489d249bbbdd906fc152724b924165dff4e66aaf13944"} Jan 26 16:06:42 crc kubenswrapper[4896]: I0126 16:06:42.958185 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7756b77457-pgv8j" event={"ID":"19034db0-0d0e-4a65-b91d-890180a924f2","Type":"ContainerDied","Data":"4cb8a3ffb414880fb18c9c4659b87c32a9ef3ccbcbbfebcc67edf0e956808f4e"} Jan 26 16:06:42 crc kubenswrapper[4896]: I0126 16:06:42.958207 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cb8a3ffb414880fb18c9c4659b87c32a9ef3ccbcbbfebcc67edf0e956808f4e" Jan 26 16:06:42 crc kubenswrapper[4896]: I0126 16:06:42.958345 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-7756b77457-pgv8j" Jan 26 16:06:43 crc kubenswrapper[4896]: I0126 16:06:43.064249 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19034db0-0d0e-4a65-b91d-890180a924f2-config-data\") pod \"19034db0-0d0e-4a65-b91d-890180a924f2\" (UID: \"19034db0-0d0e-4a65-b91d-890180a924f2\") " Jan 26 16:06:43 crc kubenswrapper[4896]: I0126 16:06:43.064350 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/19034db0-0d0e-4a65-b91d-890180a924f2-public-tls-certs\") pod \"19034db0-0d0e-4a65-b91d-890180a924f2\" (UID: \"19034db0-0d0e-4a65-b91d-890180a924f2\") " Jan 26 16:06:43 crc kubenswrapper[4896]: I0126 16:06:43.064432 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19034db0-0d0e-4a65-b91d-890180a924f2-combined-ca-bundle\") pod \"19034db0-0d0e-4a65-b91d-890180a924f2\" (UID: \"19034db0-0d0e-4a65-b91d-890180a924f2\") " Jan 26 16:06:43 crc kubenswrapper[4896]: I0126 16:06:43.064490 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fgml\" (UniqueName: \"kubernetes.io/projected/19034db0-0d0e-4a65-b91d-890180a924f2-kube-api-access-4fgml\") pod \"19034db0-0d0e-4a65-b91d-890180a924f2\" (UID: \"19034db0-0d0e-4a65-b91d-890180a924f2\") " Jan 26 16:06:43 crc kubenswrapper[4896]: I0126 16:06:43.066087 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/19034db0-0d0e-4a65-b91d-890180a924f2-internal-tls-certs\") pod \"19034db0-0d0e-4a65-b91d-890180a924f2\" (UID: \"19034db0-0d0e-4a65-b91d-890180a924f2\") " Jan 26 16:06:43 crc kubenswrapper[4896]: I0126 16:06:43.066253 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/19034db0-0d0e-4a65-b91d-890180a924f2-config-data-custom\") pod \"19034db0-0d0e-4a65-b91d-890180a924f2\" (UID: \"19034db0-0d0e-4a65-b91d-890180a924f2\") " Jan 26 16:06:43 crc kubenswrapper[4896]: I0126 16:06:43.074930 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19034db0-0d0e-4a65-b91d-890180a924f2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "19034db0-0d0e-4a65-b91d-890180a924f2" (UID: "19034db0-0d0e-4a65-b91d-890180a924f2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:06:43 crc kubenswrapper[4896]: I0126 16:06:43.087118 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19034db0-0d0e-4a65-b91d-890180a924f2-kube-api-access-4fgml" (OuterVolumeSpecName: "kube-api-access-4fgml") pod "19034db0-0d0e-4a65-b91d-890180a924f2" (UID: "19034db0-0d0e-4a65-b91d-890180a924f2"). InnerVolumeSpecName "kube-api-access-4fgml". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:06:43 crc kubenswrapper[4896]: I0126 16:06:43.114014 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19034db0-0d0e-4a65-b91d-890180a924f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "19034db0-0d0e-4a65-b91d-890180a924f2" (UID: "19034db0-0d0e-4a65-b91d-890180a924f2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:06:43 crc kubenswrapper[4896]: I0126 16:06:43.172624 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19034db0-0d0e-4a65-b91d-890180a924f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:06:43 crc kubenswrapper[4896]: I0126 16:06:43.172664 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fgml\" (UniqueName: \"kubernetes.io/projected/19034db0-0d0e-4a65-b91d-890180a924f2-kube-api-access-4fgml\") on node \"crc\" DevicePath \"\"" Jan 26 16:06:43 crc kubenswrapper[4896]: I0126 16:06:43.172676 4896 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/19034db0-0d0e-4a65-b91d-890180a924f2-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 16:06:43 crc kubenswrapper[4896]: I0126 16:06:43.194023 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19034db0-0d0e-4a65-b91d-890180a924f2-config-data" (OuterVolumeSpecName: "config-data") pod "19034db0-0d0e-4a65-b91d-890180a924f2" (UID: "19034db0-0d0e-4a65-b91d-890180a924f2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:06:43 crc kubenswrapper[4896]: I0126 16:06:43.208545 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19034db0-0d0e-4a65-b91d-890180a924f2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "19034db0-0d0e-4a65-b91d-890180a924f2" (UID: "19034db0-0d0e-4a65-b91d-890180a924f2"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:06:43 crc kubenswrapper[4896]: I0126 16:06:43.282727 4896 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/19034db0-0d0e-4a65-b91d-890180a924f2-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 16:06:43 crc kubenswrapper[4896]: I0126 16:06:43.283172 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19034db0-0d0e-4a65-b91d-890180a924f2-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:06:43 crc kubenswrapper[4896]: I0126 16:06:43.311621 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19034db0-0d0e-4a65-b91d-890180a924f2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "19034db0-0d0e-4a65-b91d-890180a924f2" (UID: "19034db0-0d0e-4a65-b91d-890180a924f2"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:06:43 crc kubenswrapper[4896]: I0126 16:06:43.386443 4896 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/19034db0-0d0e-4a65-b91d-890180a924f2-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 16:06:43 crc kubenswrapper[4896]: I0126 16:06:43.476082 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6f77b8c468-mpb4b" Jan 26 16:06:43 crc kubenswrapper[4896]: I0126 16:06:43.488982 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqtd9\" (UniqueName: \"kubernetes.io/projected/8e3ce827-294a-4320-9545-c01e6aa46bbb-kube-api-access-xqtd9\") pod \"8e3ce827-294a-4320-9545-c01e6aa46bbb\" (UID: \"8e3ce827-294a-4320-9545-c01e6aa46bbb\") " Jan 26 16:06:43 crc kubenswrapper[4896]: I0126 16:06:43.489076 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e3ce827-294a-4320-9545-c01e6aa46bbb-combined-ca-bundle\") pod \"8e3ce827-294a-4320-9545-c01e6aa46bbb\" (UID: \"8e3ce827-294a-4320-9545-c01e6aa46bbb\") " Jan 26 16:06:43 crc kubenswrapper[4896]: I0126 16:06:43.489173 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e3ce827-294a-4320-9545-c01e6aa46bbb-internal-tls-certs\") pod \"8e3ce827-294a-4320-9545-c01e6aa46bbb\" (UID: \"8e3ce827-294a-4320-9545-c01e6aa46bbb\") " Jan 26 16:06:43 crc kubenswrapper[4896]: I0126 16:06:43.489336 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e3ce827-294a-4320-9545-c01e6aa46bbb-public-tls-certs\") pod \"8e3ce827-294a-4320-9545-c01e6aa46bbb\" (UID: \"8e3ce827-294a-4320-9545-c01e6aa46bbb\") " Jan 26 16:06:43 crc kubenswrapper[4896]: I0126 16:06:43.489598 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e3ce827-294a-4320-9545-c01e6aa46bbb-config-data\") pod \"8e3ce827-294a-4320-9545-c01e6aa46bbb\" (UID: \"8e3ce827-294a-4320-9545-c01e6aa46bbb\") " Jan 26 16:06:43 crc kubenswrapper[4896]: I0126 16:06:43.489682 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e3ce827-294a-4320-9545-c01e6aa46bbb-config-data-custom\") pod \"8e3ce827-294a-4320-9545-c01e6aa46bbb\" (UID: \"8e3ce827-294a-4320-9545-c01e6aa46bbb\") " Jan 26 16:06:43 crc kubenswrapper[4896]: I0126 16:06:43.615420 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e3ce827-294a-4320-9545-c01e6aa46bbb-kube-api-access-xqtd9" (OuterVolumeSpecName: "kube-api-access-xqtd9") pod "8e3ce827-294a-4320-9545-c01e6aa46bbb" (UID: "8e3ce827-294a-4320-9545-c01e6aa46bbb"). InnerVolumeSpecName "kube-api-access-xqtd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:06:43 crc kubenswrapper[4896]: I0126 16:06:43.624216 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e3ce827-294a-4320-9545-c01e6aa46bbb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8e3ce827-294a-4320-9545-c01e6aa46bbb" (UID: "8e3ce827-294a-4320-9545-c01e6aa46bbb"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:06:43 crc kubenswrapper[4896]: I0126 16:06:43.714997 4896 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e3ce827-294a-4320-9545-c01e6aa46bbb-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 16:06:43 crc kubenswrapper[4896]: I0126 16:06:43.715028 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqtd9\" (UniqueName: \"kubernetes.io/projected/8e3ce827-294a-4320-9545-c01e6aa46bbb-kube-api-access-xqtd9\") on node \"crc\" DevicePath \"\"" Jan 26 16:06:43 crc kubenswrapper[4896]: I0126 16:06:43.721539 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e3ce827-294a-4320-9545-c01e6aa46bbb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e3ce827-294a-4320-9545-c01e6aa46bbb" (UID: "8e3ce827-294a-4320-9545-c01e6aa46bbb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:06:43 crc kubenswrapper[4896]: I0126 16:06:43.724727 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e3ce827-294a-4320-9545-c01e6aa46bbb-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8e3ce827-294a-4320-9545-c01e6aa46bbb" (UID: "8e3ce827-294a-4320-9545-c01e6aa46bbb"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:06:43 crc kubenswrapper[4896]: E0126 16:06:43.750014 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e3ce827-294a-4320-9545-c01e6aa46bbb-config-data podName:8e3ce827-294a-4320-9545-c01e6aa46bbb nodeName:}" failed. No retries permitted until 2026-01-26 16:06:44.249962912 +0000 UTC m=+1962.031843305 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/8e3ce827-294a-4320-9545-c01e6aa46bbb-config-data") pod "8e3ce827-294a-4320-9545-c01e6aa46bbb" (UID: "8e3ce827-294a-4320-9545-c01e6aa46bbb") : error deleting /var/lib/kubelet/pods/8e3ce827-294a-4320-9545-c01e6aa46bbb/volume-subpaths: remove /var/lib/kubelet/pods/8e3ce827-294a-4320-9545-c01e6aa46bbb/volume-subpaths: no such file or directory Jan 26 16:06:43 crc kubenswrapper[4896]: I0126 16:06:43.752603 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e3ce827-294a-4320-9545-c01e6aa46bbb-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8e3ce827-294a-4320-9545-c01e6aa46bbb" (UID: "8e3ce827-294a-4320-9545-c01e6aa46bbb"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:06:43 crc kubenswrapper[4896]: I0126 16:06:43.819450 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e3ce827-294a-4320-9545-c01e6aa46bbb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:06:43 crc kubenswrapper[4896]: I0126 16:06:43.819490 4896 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e3ce827-294a-4320-9545-c01e6aa46bbb-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 16:06:43 crc kubenswrapper[4896]: I0126 16:06:43.819501 4896 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e3ce827-294a-4320-9545-c01e6aa46bbb-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 16:06:43 crc kubenswrapper[4896]: I0126 16:06:43.970794 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-7756b77457-pgv8j" Jan 26 16:06:43 crc kubenswrapper[4896]: I0126 16:06:43.970848 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6f77b8c468-mpb4b" Jan 26 16:06:43 crc kubenswrapper[4896]: I0126 16:06:43.970873 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6f77b8c468-mpb4b" event={"ID":"8e3ce827-294a-4320-9545-c01e6aa46bbb","Type":"ContainerDied","Data":"7cc60ad36b9e58b0eaaea234c69270176f5fb7da8a8f200aae7631dac5db3544"} Jan 26 16:06:43 crc kubenswrapper[4896]: I0126 16:06:43.970945 4896 scope.go:117] "RemoveContainer" containerID="f236bb30a02f8b23734b4228c03d6b1b400dfa9431f3e94e0491aedcc5c83b9d" Jan 26 16:06:44 crc kubenswrapper[4896]: I0126 16:06:44.018693 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7756b77457-pgv8j"] Jan 26 16:06:44 crc kubenswrapper[4896]: I0126 16:06:44.029408 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-7756b77457-pgv8j"] Jan 26 16:06:44 crc kubenswrapper[4896]: I0126 16:06:44.331770 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e3ce827-294a-4320-9545-c01e6aa46bbb-config-data\") pod \"8e3ce827-294a-4320-9545-c01e6aa46bbb\" (UID: \"8e3ce827-294a-4320-9545-c01e6aa46bbb\") " Jan 26 16:06:44 crc kubenswrapper[4896]: I0126 16:06:44.335080 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e3ce827-294a-4320-9545-c01e6aa46bbb-config-data" (OuterVolumeSpecName: "config-data") pod "8e3ce827-294a-4320-9545-c01e6aa46bbb" (UID: "8e3ce827-294a-4320-9545-c01e6aa46bbb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:06:44 crc kubenswrapper[4896]: I0126 16:06:44.436855 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e3ce827-294a-4320-9545-c01e6aa46bbb-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:06:44 crc kubenswrapper[4896]: I0126 16:06:44.759641 4896 scope.go:117] "RemoveContainer" containerID="eef508224f0cdcfb0579b0234e72c3c5503ce5cf1713a9bee24c9feccf4983cb" Jan 26 16:06:44 crc kubenswrapper[4896]: E0126 16:06:44.759932 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:06:44 crc kubenswrapper[4896]: I0126 16:06:44.776757 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19034db0-0d0e-4a65-b91d-890180a924f2" path="/var/lib/kubelet/pods/19034db0-0d0e-4a65-b91d-890180a924f2/volumes" Jan 26 16:06:44 crc kubenswrapper[4896]: I0126 16:06:44.804719 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6f77b8c468-mpb4b"] Jan 26 16:06:44 crc kubenswrapper[4896]: I0126 16:06:44.830626 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-6f77b8c468-mpb4b"] Jan 26 16:06:44 crc kubenswrapper[4896]: I0126 16:06:44.847848 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zzzkx"] Jan 26 16:06:44 crc kubenswrapper[4896]: E0126 16:06:44.848327 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19034db0-0d0e-4a65-b91d-890180a924f2" containerName="heat-api" Jan 26 16:06:44 crc kubenswrapper[4896]: I0126 
16:06:44.848340 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="19034db0-0d0e-4a65-b91d-890180a924f2" containerName="heat-api" Jan 26 16:06:44 crc kubenswrapper[4896]: E0126 16:06:44.848378 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e3ce827-294a-4320-9545-c01e6aa46bbb" containerName="heat-cfnapi" Jan 26 16:06:44 crc kubenswrapper[4896]: I0126 16:06:44.848384 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e3ce827-294a-4320-9545-c01e6aa46bbb" containerName="heat-cfnapi" Jan 26 16:06:44 crc kubenswrapper[4896]: I0126 16:06:44.848623 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e3ce827-294a-4320-9545-c01e6aa46bbb" containerName="heat-cfnapi" Jan 26 16:06:44 crc kubenswrapper[4896]: I0126 16:06:44.848642 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="19034db0-0d0e-4a65-b91d-890180a924f2" containerName="heat-api" Jan 26 16:06:44 crc kubenswrapper[4896]: I0126 16:06:44.849486 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zzzkx" Jan 26 16:06:44 crc kubenswrapper[4896]: I0126 16:06:44.852007 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-48n6x" Jan 26 16:06:44 crc kubenswrapper[4896]: I0126 16:06:44.852096 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 16:06:44 crc kubenswrapper[4896]: I0126 16:06:44.858101 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:06:44 crc kubenswrapper[4896]: I0126 16:06:44.859012 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 16:06:44 crc kubenswrapper[4896]: I0126 16:06:44.859994 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zzzkx"] Jan 26 16:06:44 crc kubenswrapper[4896]: I0126 16:06:44.942078 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/131acfa1-5305-42d1-9c00-6f0193f795a8-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zzzkx\" (UID: \"131acfa1-5305-42d1-9c00-6f0193f795a8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zzzkx" Jan 26 16:06:44 crc kubenswrapper[4896]: I0126 16:06:44.942338 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/131acfa1-5305-42d1-9c00-6f0193f795a8-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zzzkx\" (UID: \"131acfa1-5305-42d1-9c00-6f0193f795a8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zzzkx" Jan 26 16:06:44 crc 
kubenswrapper[4896]: I0126 16:06:44.942531 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/131acfa1-5305-42d1-9c00-6f0193f795a8-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zzzkx\" (UID: \"131acfa1-5305-42d1-9c00-6f0193f795a8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zzzkx" Jan 26 16:06:44 crc kubenswrapper[4896]: I0126 16:06:44.942634 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r4bz\" (UniqueName: \"kubernetes.io/projected/131acfa1-5305-42d1-9c00-6f0193f795a8-kube-api-access-5r4bz\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zzzkx\" (UID: \"131acfa1-5305-42d1-9c00-6f0193f795a8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zzzkx" Jan 26 16:06:45 crc kubenswrapper[4896]: I0126 16:06:45.045454 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/131acfa1-5305-42d1-9c00-6f0193f795a8-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zzzkx\" (UID: \"131acfa1-5305-42d1-9c00-6f0193f795a8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zzzkx" Jan 26 16:06:45 crc kubenswrapper[4896]: I0126 16:06:45.045602 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r4bz\" (UniqueName: \"kubernetes.io/projected/131acfa1-5305-42d1-9c00-6f0193f795a8-kube-api-access-5r4bz\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zzzkx\" (UID: \"131acfa1-5305-42d1-9c00-6f0193f795a8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zzzkx" Jan 26 16:06:45 crc kubenswrapper[4896]: I0126 16:06:45.045705 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/131acfa1-5305-42d1-9c00-6f0193f795a8-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zzzkx\" (UID: \"131acfa1-5305-42d1-9c00-6f0193f795a8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zzzkx" Jan 26 16:06:45 crc kubenswrapper[4896]: I0126 16:06:45.045739 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/131acfa1-5305-42d1-9c00-6f0193f795a8-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zzzkx\" (UID: \"131acfa1-5305-42d1-9c00-6f0193f795a8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zzzkx" Jan 26 16:06:45 crc kubenswrapper[4896]: I0126 16:06:45.050612 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/131acfa1-5305-42d1-9c00-6f0193f795a8-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zzzkx\" (UID: \"131acfa1-5305-42d1-9c00-6f0193f795a8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zzzkx" Jan 26 16:06:45 crc kubenswrapper[4896]: I0126 16:06:45.052311 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/131acfa1-5305-42d1-9c00-6f0193f795a8-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zzzkx\" (UID: \"131acfa1-5305-42d1-9c00-6f0193f795a8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zzzkx" Jan 26 16:06:45 crc kubenswrapper[4896]: I0126 16:06:45.055154 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/131acfa1-5305-42d1-9c00-6f0193f795a8-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zzzkx\" (UID: \"131acfa1-5305-42d1-9c00-6f0193f795a8\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zzzkx" Jan 26 16:06:45 crc kubenswrapper[4896]: I0126 16:06:45.062405 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r4bz\" (UniqueName: \"kubernetes.io/projected/131acfa1-5305-42d1-9c00-6f0193f795a8-kube-api-access-5r4bz\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zzzkx\" (UID: \"131acfa1-5305-42d1-9c00-6f0193f795a8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zzzkx" Jan 26 16:06:45 crc kubenswrapper[4896]: I0126 16:06:45.217702 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zzzkx" Jan 26 16:06:45 crc kubenswrapper[4896]: I0126 16:06:45.916536 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zzzkx"] Jan 26 16:06:45 crc kubenswrapper[4896]: I0126 16:06:45.997684 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zzzkx" event={"ID":"131acfa1-5305-42d1-9c00-6f0193f795a8","Type":"ContainerStarted","Data":"483a7992621f246adba891071f636f0a81107fce4d9b4fc35c368f1f7fdfefb9"} Jan 26 16:06:46 crc kubenswrapper[4896]: I0126 16:06:46.776993 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e3ce827-294a-4320-9545-c01e6aa46bbb" path="/var/lib/kubelet/pods/8e3ce827-294a-4320-9545-c01e6aa46bbb/volumes" Jan 26 16:06:47 crc kubenswrapper[4896]: I0126 16:06:47.066323 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-677c6d4d55-s7fl2" Jan 26 16:06:47 crc kubenswrapper[4896]: I0126 16:06:47.141784 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-665cc7757b-8rh2l"] Jan 26 16:06:47 crc kubenswrapper[4896]: I0126 16:06:47.142038 4896 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/heat-engine-665cc7757b-8rh2l" podUID="68cc8d9e-4d5a-4d26-8985-2fcbefbfb839" containerName="heat-engine" containerID="cri-o://c373043052e99793deeeb10225193eb1c010444d8674c953e738e4e0326e22ab" gracePeriod=60 Jan 26 16:06:49 crc kubenswrapper[4896]: I0126 16:06:49.057090 4896 generic.go:334] "Generic (PLEG): container finished" podID="0141a12a-f7a3-47cc-b0ac-7853a684fcf8" containerID="82de01889e9e129fb9ee6ff98546519ad851ffcd00a01fa16aa543f44b59dd49" exitCode=0 Jan 26 16:06:49 crc kubenswrapper[4896]: I0126 16:06:49.057138 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0141a12a-f7a3-47cc-b0ac-7853a684fcf8","Type":"ContainerDied","Data":"82de01889e9e129fb9ee6ff98546519ad851ffcd00a01fa16aa543f44b59dd49"} Jan 26 16:06:50 crc kubenswrapper[4896]: I0126 16:06:50.093864 4896 generic.go:334] "Generic (PLEG): container finished" podID="3cb4dd6a-0deb-4730-8b5d-590b8981433b" containerID="e3c978510f0fe5204be2b846a8b23158b05b2304d304facf327b895dbb742179" exitCode=0 Jan 26 16:06:50 crc kubenswrapper[4896]: I0126 16:06:50.094397 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"3cb4dd6a-0deb-4730-8b5d-590b8981433b","Type":"ContainerDied","Data":"e3c978510f0fe5204be2b846a8b23158b05b2304d304facf327b895dbb742179"} Jan 26 16:06:50 crc kubenswrapper[4896]: I0126 16:06:50.104486 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0141a12a-f7a3-47cc-b0ac-7853a684fcf8","Type":"ContainerStarted","Data":"a7166e6a83a27ba3ced809f91de0a9ec613a49224c724c2a00ee839be8af8ef6"} Jan 26 16:06:50 crc kubenswrapper[4896]: I0126 16:06:50.104844 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:06:50 crc kubenswrapper[4896]: I0126 16:06:50.173024 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" 
podStartSLOduration=47.172995693 podStartE2EDuration="47.172995693s" podCreationTimestamp="2026-01-26 16:06:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:06:50.167393083 +0000 UTC m=+1967.949273486" watchObservedRunningTime="2026-01-26 16:06:50.172995693 +0000 UTC m=+1967.954876086" Jan 26 16:06:51 crc kubenswrapper[4896]: I0126 16:06:51.121431 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"3cb4dd6a-0deb-4730-8b5d-590b8981433b","Type":"ContainerStarted","Data":"82fb49dd8893b387b0871d0cb756e59e4d24f0263460e494f579025292e636f0"} Jan 26 16:06:51 crc kubenswrapper[4896]: I0126 16:06:51.122125 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Jan 26 16:06:51 crc kubenswrapper[4896]: I0126 16:06:51.153926 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=48.153902454 podStartE2EDuration="48.153902454s" podCreationTimestamp="2026-01-26 16:06:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:06:51.146312823 +0000 UTC m=+1968.928193226" watchObservedRunningTime="2026-01-26 16:06:51.153902454 +0000 UTC m=+1968.935782847" Jan 26 16:06:52 crc kubenswrapper[4896]: E0126 16:06:52.187706 4896 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c373043052e99793deeeb10225193eb1c010444d8674c953e738e4e0326e22ab" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 26 16:06:52 crc kubenswrapper[4896]: E0126 16:06:52.236073 4896 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot 
register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c373043052e99793deeeb10225193eb1c010444d8674c953e738e4e0326e22ab" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 26 16:06:52 crc kubenswrapper[4896]: E0126 16:06:52.240216 4896 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c373043052e99793deeeb10225193eb1c010444d8674c953e738e4e0326e22ab" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 26 16:06:52 crc kubenswrapper[4896]: E0126 16:06:52.240342 4896 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-665cc7757b-8rh2l" podUID="68cc8d9e-4d5a-4d26-8985-2fcbefbfb839" containerName="heat-engine" Jan 26 16:06:53 crc kubenswrapper[4896]: I0126 16:06:53.231892 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-68g8k"] Jan 26 16:06:53 crc kubenswrapper[4896]: I0126 16:06:53.261078 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-68g8k"] Jan 26 16:06:53 crc kubenswrapper[4896]: I0126 16:06:53.325223 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-n4vfl"] Jan 26 16:06:53 crc kubenswrapper[4896]: I0126 16:06:53.326910 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-n4vfl" Jan 26 16:06:53 crc kubenswrapper[4896]: I0126 16:06:53.332843 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 26 16:06:53 crc kubenswrapper[4896]: I0126 16:06:53.350905 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-n4vfl"] Jan 26 16:06:53 crc kubenswrapper[4896]: I0126 16:06:53.477302 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h692\" (UniqueName: \"kubernetes.io/projected/9f8bfeeb-3335-42ef-8d6b-42a35ec463df-kube-api-access-7h692\") pod \"aodh-db-sync-n4vfl\" (UID: \"9f8bfeeb-3335-42ef-8d6b-42a35ec463df\") " pod="openstack/aodh-db-sync-n4vfl" Jan 26 16:06:53 crc kubenswrapper[4896]: I0126 16:06:53.477364 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f8bfeeb-3335-42ef-8d6b-42a35ec463df-combined-ca-bundle\") pod \"aodh-db-sync-n4vfl\" (UID: \"9f8bfeeb-3335-42ef-8d6b-42a35ec463df\") " pod="openstack/aodh-db-sync-n4vfl" Jan 26 16:06:53 crc kubenswrapper[4896]: I0126 16:06:53.477471 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f8bfeeb-3335-42ef-8d6b-42a35ec463df-scripts\") pod \"aodh-db-sync-n4vfl\" (UID: \"9f8bfeeb-3335-42ef-8d6b-42a35ec463df\") " pod="openstack/aodh-db-sync-n4vfl" Jan 26 16:06:53 crc kubenswrapper[4896]: I0126 16:06:53.477542 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f8bfeeb-3335-42ef-8d6b-42a35ec463df-config-data\") pod \"aodh-db-sync-n4vfl\" (UID: \"9f8bfeeb-3335-42ef-8d6b-42a35ec463df\") " pod="openstack/aodh-db-sync-n4vfl" Jan 26 16:06:53 crc kubenswrapper[4896]: I0126 16:06:53.579893 4896 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f8bfeeb-3335-42ef-8d6b-42a35ec463df-scripts\") pod \"aodh-db-sync-n4vfl\" (UID: \"9f8bfeeb-3335-42ef-8d6b-42a35ec463df\") " pod="openstack/aodh-db-sync-n4vfl" Jan 26 16:06:53 crc kubenswrapper[4896]: I0126 16:06:53.580075 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f8bfeeb-3335-42ef-8d6b-42a35ec463df-config-data\") pod \"aodh-db-sync-n4vfl\" (UID: \"9f8bfeeb-3335-42ef-8d6b-42a35ec463df\") " pod="openstack/aodh-db-sync-n4vfl" Jan 26 16:06:53 crc kubenswrapper[4896]: I0126 16:06:53.580477 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h692\" (UniqueName: \"kubernetes.io/projected/9f8bfeeb-3335-42ef-8d6b-42a35ec463df-kube-api-access-7h692\") pod \"aodh-db-sync-n4vfl\" (UID: \"9f8bfeeb-3335-42ef-8d6b-42a35ec463df\") " pod="openstack/aodh-db-sync-n4vfl" Jan 26 16:06:53 crc kubenswrapper[4896]: I0126 16:06:53.580541 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f8bfeeb-3335-42ef-8d6b-42a35ec463df-combined-ca-bundle\") pod \"aodh-db-sync-n4vfl\" (UID: \"9f8bfeeb-3335-42ef-8d6b-42a35ec463df\") " pod="openstack/aodh-db-sync-n4vfl" Jan 26 16:06:53 crc kubenswrapper[4896]: I0126 16:06:53.587570 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f8bfeeb-3335-42ef-8d6b-42a35ec463df-combined-ca-bundle\") pod \"aodh-db-sync-n4vfl\" (UID: \"9f8bfeeb-3335-42ef-8d6b-42a35ec463df\") " pod="openstack/aodh-db-sync-n4vfl" Jan 26 16:06:53 crc kubenswrapper[4896]: I0126 16:06:53.589113 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f8bfeeb-3335-42ef-8d6b-42a35ec463df-scripts\") 
pod \"aodh-db-sync-n4vfl\" (UID: \"9f8bfeeb-3335-42ef-8d6b-42a35ec463df\") " pod="openstack/aodh-db-sync-n4vfl" Jan 26 16:06:53 crc kubenswrapper[4896]: I0126 16:06:53.589333 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f8bfeeb-3335-42ef-8d6b-42a35ec463df-config-data\") pod \"aodh-db-sync-n4vfl\" (UID: \"9f8bfeeb-3335-42ef-8d6b-42a35ec463df\") " pod="openstack/aodh-db-sync-n4vfl" Jan 26 16:06:53 crc kubenswrapper[4896]: I0126 16:06:53.604024 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h692\" (UniqueName: \"kubernetes.io/projected/9f8bfeeb-3335-42ef-8d6b-42a35ec463df-kube-api-access-7h692\") pod \"aodh-db-sync-n4vfl\" (UID: \"9f8bfeeb-3335-42ef-8d6b-42a35ec463df\") " pod="openstack/aodh-db-sync-n4vfl" Jan 26 16:06:53 crc kubenswrapper[4896]: I0126 16:06:53.673889 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-n4vfl" Jan 26 16:06:54 crc kubenswrapper[4896]: I0126 16:06:54.780720 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce438fc7-3483-42e1-9230-ff161324b2a8" path="/var/lib/kubelet/pods/ce438fc7-3483-42e1-9230-ff161324b2a8/volumes" Jan 26 16:06:56 crc kubenswrapper[4896]: I0126 16:06:56.759009 4896 scope.go:117] "RemoveContainer" containerID="eef508224f0cdcfb0579b0234e72c3c5503ce5cf1713a9bee24c9feccf4983cb" Jan 26 16:06:56 crc kubenswrapper[4896]: E0126 16:06:56.759779 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:06:59 crc kubenswrapper[4896]: E0126 16:06:59.839331 
4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68cc8d9e_4d5a_4d26_8985_2fcbefbfb839.slice/crio-conmon-c373043052e99793deeeb10225193eb1c010444d8674c953e738e4e0326e22ab.scope\": RecentStats: unable to find data in memory cache]" Jan 26 16:07:00 crc kubenswrapper[4896]: I0126 16:07:00.338160 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-665cc7757b-8rh2l" Jan 26 16:07:00 crc kubenswrapper[4896]: I0126 16:07:00.369815 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68cc8d9e-4d5a-4d26-8985-2fcbefbfb839-config-data-custom\") pod \"68cc8d9e-4d5a-4d26-8985-2fcbefbfb839\" (UID: \"68cc8d9e-4d5a-4d26-8985-2fcbefbfb839\") " Jan 26 16:07:00 crc kubenswrapper[4896]: I0126 16:07:00.370272 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68cc8d9e-4d5a-4d26-8985-2fcbefbfb839-config-data\") pod \"68cc8d9e-4d5a-4d26-8985-2fcbefbfb839\" (UID: \"68cc8d9e-4d5a-4d26-8985-2fcbefbfb839\") " Jan 26 16:07:00 crc kubenswrapper[4896]: I0126 16:07:00.370320 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdgnf\" (UniqueName: \"kubernetes.io/projected/68cc8d9e-4d5a-4d26-8985-2fcbefbfb839-kube-api-access-hdgnf\") pod \"68cc8d9e-4d5a-4d26-8985-2fcbefbfb839\" (UID: \"68cc8d9e-4d5a-4d26-8985-2fcbefbfb839\") " Jan 26 16:07:00 crc kubenswrapper[4896]: I0126 16:07:00.370507 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68cc8d9e-4d5a-4d26-8985-2fcbefbfb839-combined-ca-bundle\") pod \"68cc8d9e-4d5a-4d26-8985-2fcbefbfb839\" (UID: \"68cc8d9e-4d5a-4d26-8985-2fcbefbfb839\") " Jan 26 16:07:00 crc 
kubenswrapper[4896]: I0126 16:07:00.382274 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68cc8d9e-4d5a-4d26-8985-2fcbefbfb839-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "68cc8d9e-4d5a-4d26-8985-2fcbefbfb839" (UID: "68cc8d9e-4d5a-4d26-8985-2fcbefbfb839"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:07:00 crc kubenswrapper[4896]: I0126 16:07:00.396774 4896 generic.go:334] "Generic (PLEG): container finished" podID="68cc8d9e-4d5a-4d26-8985-2fcbefbfb839" containerID="c373043052e99793deeeb10225193eb1c010444d8674c953e738e4e0326e22ab" exitCode=0 Jan 26 16:07:00 crc kubenswrapper[4896]: I0126 16:07:00.396819 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-665cc7757b-8rh2l" event={"ID":"68cc8d9e-4d5a-4d26-8985-2fcbefbfb839","Type":"ContainerDied","Data":"c373043052e99793deeeb10225193eb1c010444d8674c953e738e4e0326e22ab"} Jan 26 16:07:00 crc kubenswrapper[4896]: I0126 16:07:00.396821 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-665cc7757b-8rh2l" Jan 26 16:07:00 crc kubenswrapper[4896]: I0126 16:07:00.396859 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-665cc7757b-8rh2l" event={"ID":"68cc8d9e-4d5a-4d26-8985-2fcbefbfb839","Type":"ContainerDied","Data":"df06ef4b33d8a665d6eb1053ab3def221f28387c7990dc01363c510cf2a070c0"} Jan 26 16:07:00 crc kubenswrapper[4896]: I0126 16:07:00.396876 4896 scope.go:117] "RemoveContainer" containerID="c373043052e99793deeeb10225193eb1c010444d8674c953e738e4e0326e22ab" Jan 26 16:07:00 crc kubenswrapper[4896]: I0126 16:07:00.415086 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68cc8d9e-4d5a-4d26-8985-2fcbefbfb839-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "68cc8d9e-4d5a-4d26-8985-2fcbefbfb839" (UID: "68cc8d9e-4d5a-4d26-8985-2fcbefbfb839"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:07:00 crc kubenswrapper[4896]: I0126 16:07:00.441248 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68cc8d9e-4d5a-4d26-8985-2fcbefbfb839-kube-api-access-hdgnf" (OuterVolumeSpecName: "kube-api-access-hdgnf") pod "68cc8d9e-4d5a-4d26-8985-2fcbefbfb839" (UID: "68cc8d9e-4d5a-4d26-8985-2fcbefbfb839"). InnerVolumeSpecName "kube-api-access-hdgnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:07:00 crc kubenswrapper[4896]: I0126 16:07:00.449876 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68cc8d9e-4d5a-4d26-8985-2fcbefbfb839-config-data" (OuterVolumeSpecName: "config-data") pod "68cc8d9e-4d5a-4d26-8985-2fcbefbfb839" (UID: "68cc8d9e-4d5a-4d26-8985-2fcbefbfb839"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:07:00 crc kubenswrapper[4896]: I0126 16:07:00.456930 4896 scope.go:117] "RemoveContainer" containerID="c373043052e99793deeeb10225193eb1c010444d8674c953e738e4e0326e22ab" Jan 26 16:07:00 crc kubenswrapper[4896]: E0126 16:07:00.457447 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c373043052e99793deeeb10225193eb1c010444d8674c953e738e4e0326e22ab\": container with ID starting with c373043052e99793deeeb10225193eb1c010444d8674c953e738e4e0326e22ab not found: ID does not exist" containerID="c373043052e99793deeeb10225193eb1c010444d8674c953e738e4e0326e22ab" Jan 26 16:07:00 crc kubenswrapper[4896]: I0126 16:07:00.457478 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c373043052e99793deeeb10225193eb1c010444d8674c953e738e4e0326e22ab"} err="failed to get container status \"c373043052e99793deeeb10225193eb1c010444d8674c953e738e4e0326e22ab\": rpc error: code = NotFound desc = could not find container \"c373043052e99793deeeb10225193eb1c010444d8674c953e738e4e0326e22ab\": container with ID starting with c373043052e99793deeeb10225193eb1c010444d8674c953e738e4e0326e22ab not found: ID does not exist" Jan 26 16:07:00 crc kubenswrapper[4896]: I0126 16:07:00.478515 4896 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68cc8d9e-4d5a-4d26-8985-2fcbefbfb839-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 16:07:00 crc kubenswrapper[4896]: I0126 16:07:00.478560 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68cc8d9e-4d5a-4d26-8985-2fcbefbfb839-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:07:00 crc kubenswrapper[4896]: I0126 16:07:00.478608 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdgnf\" (UniqueName: 
\"kubernetes.io/projected/68cc8d9e-4d5a-4d26-8985-2fcbefbfb839-kube-api-access-hdgnf\") on node \"crc\" DevicePath \"\"" Jan 26 16:07:00 crc kubenswrapper[4896]: I0126 16:07:00.478627 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68cc8d9e-4d5a-4d26-8985-2fcbefbfb839-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:07:00 crc kubenswrapper[4896]: I0126 16:07:00.546945 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-n4vfl"] Jan 26 16:07:00 crc kubenswrapper[4896]: I0126 16:07:00.732388 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-665cc7757b-8rh2l"] Jan 26 16:07:00 crc kubenswrapper[4896]: I0126 16:07:00.745892 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-665cc7757b-8rh2l"] Jan 26 16:07:00 crc kubenswrapper[4896]: I0126 16:07:00.776959 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68cc8d9e-4d5a-4d26-8985-2fcbefbfb839" path="/var/lib/kubelet/pods/68cc8d9e-4d5a-4d26-8985-2fcbefbfb839/volumes" Jan 26 16:07:01 crc kubenswrapper[4896]: I0126 16:07:01.410923 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-n4vfl" event={"ID":"9f8bfeeb-3335-42ef-8d6b-42a35ec463df","Type":"ContainerStarted","Data":"fca8d6bd69f756b2afa9f5692f2e6af85c592fbf1f6d4a94ccd2b2b80f79e4d9"} Jan 26 16:07:01 crc kubenswrapper[4896]: I0126 16:07:01.415932 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zzzkx" event={"ID":"131acfa1-5305-42d1-9c00-6f0193f795a8","Type":"ContainerStarted","Data":"4970e1d458d6607f961a0d29ffcfff9a11b4deee66c3e5d902f46b0825d9af3c"} Jan 26 16:07:01 crc kubenswrapper[4896]: I0126 16:07:01.444353 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zzzkx" 
podStartSLOduration=3.329416422 podStartE2EDuration="17.444332042s" podCreationTimestamp="2026-01-26 16:06:44 +0000 UTC" firstStartedPulling="2026-01-26 16:06:45.923367222 +0000 UTC m=+1963.705247615" lastFinishedPulling="2026-01-26 16:07:00.038282842 +0000 UTC m=+1977.820163235" observedRunningTime="2026-01-26 16:07:01.434829215 +0000 UTC m=+1979.216709608" watchObservedRunningTime="2026-01-26 16:07:01.444332042 +0000 UTC m=+1979.226212435" Jan 26 16:07:04 crc kubenswrapper[4896]: I0126 16:07:04.564796 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 26 16:07:04 crc kubenswrapper[4896]: I0126 16:07:04.623731 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="3cb4dd6a-0deb-4730-8b5d-590b8981433b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.21:5671: connect: connection refused" Jan 26 16:07:06 crc kubenswrapper[4896]: I0126 16:07:06.665829 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 26 16:07:07 crc kubenswrapper[4896]: I0126 16:07:07.547394 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-n4vfl" event={"ID":"9f8bfeeb-3335-42ef-8d6b-42a35ec463df","Type":"ContainerStarted","Data":"e9da0176df8af3ef3c279e5d8979759ac705154689126369592e469aa3474092"} Jan 26 16:07:08 crc kubenswrapper[4896]: I0126 16:07:08.760441 4896 scope.go:117] "RemoveContainer" containerID="eef508224f0cdcfb0579b0234e72c3c5503ce5cf1713a9bee24c9feccf4983cb" Jan 26 16:07:08 crc kubenswrapper[4896]: E0126 16:07:08.760798 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:07:09 crc kubenswrapper[4896]: I0126 16:07:09.582678 4896 generic.go:334] "Generic (PLEG): container finished" podID="9f8bfeeb-3335-42ef-8d6b-42a35ec463df" containerID="e9da0176df8af3ef3c279e5d8979759ac705154689126369592e469aa3474092" exitCode=0 Jan 26 16:07:09 crc kubenswrapper[4896]: I0126 16:07:09.582762 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-n4vfl" event={"ID":"9f8bfeeb-3335-42ef-8d6b-42a35ec463df","Type":"ContainerDied","Data":"e9da0176df8af3ef3c279e5d8979759ac705154689126369592e469aa3474092"} Jan 26 16:07:11 crc kubenswrapper[4896]: I0126 16:07:11.098597 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-n4vfl" Jan 26 16:07:11 crc kubenswrapper[4896]: I0126 16:07:11.161092 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f8bfeeb-3335-42ef-8d6b-42a35ec463df-combined-ca-bundle\") pod \"9f8bfeeb-3335-42ef-8d6b-42a35ec463df\" (UID: \"9f8bfeeb-3335-42ef-8d6b-42a35ec463df\") " Jan 26 16:07:11 crc kubenswrapper[4896]: I0126 16:07:11.161140 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7h692\" (UniqueName: \"kubernetes.io/projected/9f8bfeeb-3335-42ef-8d6b-42a35ec463df-kube-api-access-7h692\") pod \"9f8bfeeb-3335-42ef-8d6b-42a35ec463df\" (UID: \"9f8bfeeb-3335-42ef-8d6b-42a35ec463df\") " Jan 26 16:07:11 crc kubenswrapper[4896]: I0126 16:07:11.161173 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f8bfeeb-3335-42ef-8d6b-42a35ec463df-scripts\") pod \"9f8bfeeb-3335-42ef-8d6b-42a35ec463df\" (UID: \"9f8bfeeb-3335-42ef-8d6b-42a35ec463df\") " Jan 26 16:07:11 crc kubenswrapper[4896]: I0126 16:07:11.161529 4896 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f8bfeeb-3335-42ef-8d6b-42a35ec463df-config-data\") pod \"9f8bfeeb-3335-42ef-8d6b-42a35ec463df\" (UID: \"9f8bfeeb-3335-42ef-8d6b-42a35ec463df\") " Jan 26 16:07:11 crc kubenswrapper[4896]: I0126 16:07:11.182171 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f8bfeeb-3335-42ef-8d6b-42a35ec463df-kube-api-access-7h692" (OuterVolumeSpecName: "kube-api-access-7h692") pod "9f8bfeeb-3335-42ef-8d6b-42a35ec463df" (UID: "9f8bfeeb-3335-42ef-8d6b-42a35ec463df"). InnerVolumeSpecName "kube-api-access-7h692". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:07:11 crc kubenswrapper[4896]: I0126 16:07:11.183749 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f8bfeeb-3335-42ef-8d6b-42a35ec463df-scripts" (OuterVolumeSpecName: "scripts") pod "9f8bfeeb-3335-42ef-8d6b-42a35ec463df" (UID: "9f8bfeeb-3335-42ef-8d6b-42a35ec463df"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:07:11 crc kubenswrapper[4896]: I0126 16:07:11.197825 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f8bfeeb-3335-42ef-8d6b-42a35ec463df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f8bfeeb-3335-42ef-8d6b-42a35ec463df" (UID: "9f8bfeeb-3335-42ef-8d6b-42a35ec463df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:07:11 crc kubenswrapper[4896]: I0126 16:07:11.205796 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f8bfeeb-3335-42ef-8d6b-42a35ec463df-config-data" (OuterVolumeSpecName: "config-data") pod "9f8bfeeb-3335-42ef-8d6b-42a35ec463df" (UID: "9f8bfeeb-3335-42ef-8d6b-42a35ec463df"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:07:11 crc kubenswrapper[4896]: I0126 16:07:11.265435 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f8bfeeb-3335-42ef-8d6b-42a35ec463df-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:07:11 crc kubenswrapper[4896]: I0126 16:07:11.265479 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7h692\" (UniqueName: \"kubernetes.io/projected/9f8bfeeb-3335-42ef-8d6b-42a35ec463df-kube-api-access-7h692\") on node \"crc\" DevicePath \"\"" Jan 26 16:07:11 crc kubenswrapper[4896]: I0126 16:07:11.265493 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f8bfeeb-3335-42ef-8d6b-42a35ec463df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:07:11 crc kubenswrapper[4896]: I0126 16:07:11.265505 4896 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f8bfeeb-3335-42ef-8d6b-42a35ec463df-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:07:11 crc kubenswrapper[4896]: I0126 16:07:11.609144 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-n4vfl" event={"ID":"9f8bfeeb-3335-42ef-8d6b-42a35ec463df","Type":"ContainerDied","Data":"fca8d6bd69f756b2afa9f5692f2e6af85c592fbf1f6d4a94ccd2b2b80f79e4d9"} Jan 26 16:07:11 crc kubenswrapper[4896]: I0126 16:07:11.609514 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fca8d6bd69f756b2afa9f5692f2e6af85c592fbf1f6d4a94ccd2b2b80f79e4d9" Jan 26 16:07:11 crc kubenswrapper[4896]: I0126 16:07:11.609196 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-n4vfl" Jan 26 16:07:13 crc kubenswrapper[4896]: I0126 16:07:13.334399 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 26 16:07:13 crc kubenswrapper[4896]: I0126 16:07:13.334792 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="bab7dd85-b4dc-45d8-ad5f-84dc75483edd" containerName="aodh-api" containerID="cri-o://46b90698aeaad5ccfaa3052b1a2a922007d3825f08e01c9b031585bd13139496" gracePeriod=30 Jan 26 16:07:13 crc kubenswrapper[4896]: I0126 16:07:13.334901 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="bab7dd85-b4dc-45d8-ad5f-84dc75483edd" containerName="aodh-listener" containerID="cri-o://f1b46c315967a4b2b8fa6bc438deefe9d0944cb30bef215257191c904e13da56" gracePeriod=30 Jan 26 16:07:13 crc kubenswrapper[4896]: I0126 16:07:13.334952 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="bab7dd85-b4dc-45d8-ad5f-84dc75483edd" containerName="aodh-evaluator" containerID="cri-o://bffd03ef6b8e3a7c39400945c3f4275c73981767223f4f0e00bb0c7a0e2b48f3" gracePeriod=30 Jan 26 16:07:13 crc kubenswrapper[4896]: I0126 16:07:13.334911 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="bab7dd85-b4dc-45d8-ad5f-84dc75483edd" containerName="aodh-notifier" containerID="cri-o://35ef6561eff1bbf24e7491cd1d2a51530c77e7f8e856aa1a7c80e2ef74a601c6" gracePeriod=30 Jan 26 16:07:13 crc kubenswrapper[4896]: I0126 16:07:13.636029 4896 generic.go:334] "Generic (PLEG): container finished" podID="bab7dd85-b4dc-45d8-ad5f-84dc75483edd" containerID="46b90698aeaad5ccfaa3052b1a2a922007d3825f08e01c9b031585bd13139496" exitCode=0 Jan 26 16:07:13 crc kubenswrapper[4896]: I0126 16:07:13.636097 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"bab7dd85-b4dc-45d8-ad5f-84dc75483edd","Type":"ContainerDied","Data":"46b90698aeaad5ccfaa3052b1a2a922007d3825f08e01c9b031585bd13139496"} Jan 26 16:07:14 crc kubenswrapper[4896]: I0126 16:07:14.621821 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Jan 26 16:07:14 crc kubenswrapper[4896]: I0126 16:07:14.649877 4896 generic.go:334] "Generic (PLEG): container finished" podID="bab7dd85-b4dc-45d8-ad5f-84dc75483edd" containerID="bffd03ef6b8e3a7c39400945c3f4275c73981767223f4f0e00bb0c7a0e2b48f3" exitCode=0 Jan 26 16:07:14 crc kubenswrapper[4896]: I0126 16:07:14.649926 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"bab7dd85-b4dc-45d8-ad5f-84dc75483edd","Type":"ContainerDied","Data":"bffd03ef6b8e3a7c39400945c3f4275c73981767223f4f0e00bb0c7a0e2b48f3"} Jan 26 16:07:14 crc kubenswrapper[4896]: I0126 16:07:14.694496 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 26 16:07:17 crc kubenswrapper[4896]: I0126 16:07:17.685895 4896 generic.go:334] "Generic (PLEG): container finished" podID="131acfa1-5305-42d1-9c00-6f0193f795a8" containerID="4970e1d458d6607f961a0d29ffcfff9a11b4deee66c3e5d902f46b0825d9af3c" exitCode=0 Jan 26 16:07:17 crc kubenswrapper[4896]: I0126 16:07:17.686112 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zzzkx" event={"ID":"131acfa1-5305-42d1-9c00-6f0193f795a8","Type":"ContainerDied","Data":"4970e1d458d6607f961a0d29ffcfff9a11b4deee66c3e5d902f46b0825d9af3c"} Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.513788 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.579191 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bab7dd85-b4dc-45d8-ad5f-84dc75483edd-config-data\") pod \"bab7dd85-b4dc-45d8-ad5f-84dc75483edd\" (UID: \"bab7dd85-b4dc-45d8-ad5f-84dc75483edd\") " Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.579730 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bab7dd85-b4dc-45d8-ad5f-84dc75483edd-internal-tls-certs\") pod \"bab7dd85-b4dc-45d8-ad5f-84dc75483edd\" (UID: \"bab7dd85-b4dc-45d8-ad5f-84dc75483edd\") " Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.579928 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8xl2\" (UniqueName: \"kubernetes.io/projected/bab7dd85-b4dc-45d8-ad5f-84dc75483edd-kube-api-access-s8xl2\") pod \"bab7dd85-b4dc-45d8-ad5f-84dc75483edd\" (UID: \"bab7dd85-b4dc-45d8-ad5f-84dc75483edd\") " Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.580050 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bab7dd85-b4dc-45d8-ad5f-84dc75483edd-public-tls-certs\") pod \"bab7dd85-b4dc-45d8-ad5f-84dc75483edd\" (UID: \"bab7dd85-b4dc-45d8-ad5f-84dc75483edd\") " Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.580314 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bab7dd85-b4dc-45d8-ad5f-84dc75483edd-combined-ca-bundle\") pod \"bab7dd85-b4dc-45d8-ad5f-84dc75483edd\" (UID: \"bab7dd85-b4dc-45d8-ad5f-84dc75483edd\") " Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.580401 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/bab7dd85-b4dc-45d8-ad5f-84dc75483edd-scripts\") pod \"bab7dd85-b4dc-45d8-ad5f-84dc75483edd\" (UID: \"bab7dd85-b4dc-45d8-ad5f-84dc75483edd\") " Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.586456 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bab7dd85-b4dc-45d8-ad5f-84dc75483edd-kube-api-access-s8xl2" (OuterVolumeSpecName: "kube-api-access-s8xl2") pod "bab7dd85-b4dc-45d8-ad5f-84dc75483edd" (UID: "bab7dd85-b4dc-45d8-ad5f-84dc75483edd"). InnerVolumeSpecName "kube-api-access-s8xl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.602221 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bab7dd85-b4dc-45d8-ad5f-84dc75483edd-scripts" (OuterVolumeSpecName: "scripts") pod "bab7dd85-b4dc-45d8-ad5f-84dc75483edd" (UID: "bab7dd85-b4dc-45d8-ad5f-84dc75483edd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.679878 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bab7dd85-b4dc-45d8-ad5f-84dc75483edd-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bab7dd85-b4dc-45d8-ad5f-84dc75483edd" (UID: "bab7dd85-b4dc-45d8-ad5f-84dc75483edd"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.685480 4896 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bab7dd85-b4dc-45d8-ad5f-84dc75483edd-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.685525 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8xl2\" (UniqueName: \"kubernetes.io/projected/bab7dd85-b4dc-45d8-ad5f-84dc75483edd-kube-api-access-s8xl2\") on node \"crc\" DevicePath \"\"" Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.685540 4896 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bab7dd85-b4dc-45d8-ad5f-84dc75483edd-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.690693 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bab7dd85-b4dc-45d8-ad5f-84dc75483edd-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bab7dd85-b4dc-45d8-ad5f-84dc75483edd" (UID: "bab7dd85-b4dc-45d8-ad5f-84dc75483edd"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.708455 4896 generic.go:334] "Generic (PLEG): container finished" podID="bab7dd85-b4dc-45d8-ad5f-84dc75483edd" containerID="f1b46c315967a4b2b8fa6bc438deefe9d0944cb30bef215257191c904e13da56" exitCode=0 Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.708486 4896 generic.go:334] "Generic (PLEG): container finished" podID="bab7dd85-b4dc-45d8-ad5f-84dc75483edd" containerID="35ef6561eff1bbf24e7491cd1d2a51530c77e7f8e856aa1a7c80e2ef74a601c6" exitCode=0 Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.708695 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.709817 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"bab7dd85-b4dc-45d8-ad5f-84dc75483edd","Type":"ContainerDied","Data":"f1b46c315967a4b2b8fa6bc438deefe9d0944cb30bef215257191c904e13da56"} Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.709859 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"bab7dd85-b4dc-45d8-ad5f-84dc75483edd","Type":"ContainerDied","Data":"35ef6561eff1bbf24e7491cd1d2a51530c77e7f8e856aa1a7c80e2ef74a601c6"} Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.709873 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"bab7dd85-b4dc-45d8-ad5f-84dc75483edd","Type":"ContainerDied","Data":"878641bfbd0450da6cece2b14e4fd463a5aaa495ec06fcce268ac61dfb480f46"} Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.709890 4896 scope.go:117] "RemoveContainer" containerID="f1b46c315967a4b2b8fa6bc438deefe9d0944cb30bef215257191c904e13da56" Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.784450 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bab7dd85-b4dc-45d8-ad5f-84dc75483edd-config-data" (OuterVolumeSpecName: "config-data") pod "bab7dd85-b4dc-45d8-ad5f-84dc75483edd" (UID: "bab7dd85-b4dc-45d8-ad5f-84dc75483edd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.786848 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bab7dd85-b4dc-45d8-ad5f-84dc75483edd-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.786872 4896 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bab7dd85-b4dc-45d8-ad5f-84dc75483edd-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.787693 4896 scope.go:117] "RemoveContainer" containerID="35ef6561eff1bbf24e7491cd1d2a51530c77e7f8e856aa1a7c80e2ef74a601c6" Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.806374 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bab7dd85-b4dc-45d8-ad5f-84dc75483edd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bab7dd85-b4dc-45d8-ad5f-84dc75483edd" (UID: "bab7dd85-b4dc-45d8-ad5f-84dc75483edd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.821766 4896 scope.go:117] "RemoveContainer" containerID="bffd03ef6b8e3a7c39400945c3f4275c73981767223f4f0e00bb0c7a0e2b48f3" Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.851021 4896 scope.go:117] "RemoveContainer" containerID="46b90698aeaad5ccfaa3052b1a2a922007d3825f08e01c9b031585bd13139496" Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.889020 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bab7dd85-b4dc-45d8-ad5f-84dc75483edd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.889280 4896 scope.go:117] "RemoveContainer" containerID="f1b46c315967a4b2b8fa6bc438deefe9d0944cb30bef215257191c904e13da56" Jan 26 16:07:18 crc kubenswrapper[4896]: E0126 16:07:18.889902 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1b46c315967a4b2b8fa6bc438deefe9d0944cb30bef215257191c904e13da56\": container with ID starting with f1b46c315967a4b2b8fa6bc438deefe9d0944cb30bef215257191c904e13da56 not found: ID does not exist" containerID="f1b46c315967a4b2b8fa6bc438deefe9d0944cb30bef215257191c904e13da56" Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.889930 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1b46c315967a4b2b8fa6bc438deefe9d0944cb30bef215257191c904e13da56"} err="failed to get container status \"f1b46c315967a4b2b8fa6bc438deefe9d0944cb30bef215257191c904e13da56\": rpc error: code = NotFound desc = could not find container \"f1b46c315967a4b2b8fa6bc438deefe9d0944cb30bef215257191c904e13da56\": container with ID starting with f1b46c315967a4b2b8fa6bc438deefe9d0944cb30bef215257191c904e13da56 not found: ID does not exist" Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.889953 4896 
scope.go:117] "RemoveContainer" containerID="35ef6561eff1bbf24e7491cd1d2a51530c77e7f8e856aa1a7c80e2ef74a601c6" Jan 26 16:07:18 crc kubenswrapper[4896]: E0126 16:07:18.890179 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35ef6561eff1bbf24e7491cd1d2a51530c77e7f8e856aa1a7c80e2ef74a601c6\": container with ID starting with 35ef6561eff1bbf24e7491cd1d2a51530c77e7f8e856aa1a7c80e2ef74a601c6 not found: ID does not exist" containerID="35ef6561eff1bbf24e7491cd1d2a51530c77e7f8e856aa1a7c80e2ef74a601c6" Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.890197 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35ef6561eff1bbf24e7491cd1d2a51530c77e7f8e856aa1a7c80e2ef74a601c6"} err="failed to get container status \"35ef6561eff1bbf24e7491cd1d2a51530c77e7f8e856aa1a7c80e2ef74a601c6\": rpc error: code = NotFound desc = could not find container \"35ef6561eff1bbf24e7491cd1d2a51530c77e7f8e856aa1a7c80e2ef74a601c6\": container with ID starting with 35ef6561eff1bbf24e7491cd1d2a51530c77e7f8e856aa1a7c80e2ef74a601c6 not found: ID does not exist" Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.890210 4896 scope.go:117] "RemoveContainer" containerID="bffd03ef6b8e3a7c39400945c3f4275c73981767223f4f0e00bb0c7a0e2b48f3" Jan 26 16:07:18 crc kubenswrapper[4896]: E0126 16:07:18.890413 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bffd03ef6b8e3a7c39400945c3f4275c73981767223f4f0e00bb0c7a0e2b48f3\": container with ID starting with bffd03ef6b8e3a7c39400945c3f4275c73981767223f4f0e00bb0c7a0e2b48f3 not found: ID does not exist" containerID="bffd03ef6b8e3a7c39400945c3f4275c73981767223f4f0e00bb0c7a0e2b48f3" Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.890433 4896 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"bffd03ef6b8e3a7c39400945c3f4275c73981767223f4f0e00bb0c7a0e2b48f3"} err="failed to get container status \"bffd03ef6b8e3a7c39400945c3f4275c73981767223f4f0e00bb0c7a0e2b48f3\": rpc error: code = NotFound desc = could not find container \"bffd03ef6b8e3a7c39400945c3f4275c73981767223f4f0e00bb0c7a0e2b48f3\": container with ID starting with bffd03ef6b8e3a7c39400945c3f4275c73981767223f4f0e00bb0c7a0e2b48f3 not found: ID does not exist"
Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.890447 4896 scope.go:117] "RemoveContainer" containerID="46b90698aeaad5ccfaa3052b1a2a922007d3825f08e01c9b031585bd13139496"
Jan 26 16:07:18 crc kubenswrapper[4896]: E0126 16:07:18.890745 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46b90698aeaad5ccfaa3052b1a2a922007d3825f08e01c9b031585bd13139496\": container with ID starting with 46b90698aeaad5ccfaa3052b1a2a922007d3825f08e01c9b031585bd13139496 not found: ID does not exist" containerID="46b90698aeaad5ccfaa3052b1a2a922007d3825f08e01c9b031585bd13139496"
Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.890764 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46b90698aeaad5ccfaa3052b1a2a922007d3825f08e01c9b031585bd13139496"} err="failed to get container status \"46b90698aeaad5ccfaa3052b1a2a922007d3825f08e01c9b031585bd13139496\": rpc error: code = NotFound desc = could not find container \"46b90698aeaad5ccfaa3052b1a2a922007d3825f08e01c9b031585bd13139496\": container with ID starting with 46b90698aeaad5ccfaa3052b1a2a922007d3825f08e01c9b031585bd13139496 not found: ID does not exist"
Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.890776 4896 scope.go:117] "RemoveContainer" containerID="f1b46c315967a4b2b8fa6bc438deefe9d0944cb30bef215257191c904e13da56"
Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.893277 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1b46c315967a4b2b8fa6bc438deefe9d0944cb30bef215257191c904e13da56"} err="failed to get container status \"f1b46c315967a4b2b8fa6bc438deefe9d0944cb30bef215257191c904e13da56\": rpc error: code = NotFound desc = could not find container \"f1b46c315967a4b2b8fa6bc438deefe9d0944cb30bef215257191c904e13da56\": container with ID starting with f1b46c315967a4b2b8fa6bc438deefe9d0944cb30bef215257191c904e13da56 not found: ID does not exist"
Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.893331 4896 scope.go:117] "RemoveContainer" containerID="35ef6561eff1bbf24e7491cd1d2a51530c77e7f8e856aa1a7c80e2ef74a601c6"
Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.893733 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35ef6561eff1bbf24e7491cd1d2a51530c77e7f8e856aa1a7c80e2ef74a601c6"} err="failed to get container status \"35ef6561eff1bbf24e7491cd1d2a51530c77e7f8e856aa1a7c80e2ef74a601c6\": rpc error: code = NotFound desc = could not find container \"35ef6561eff1bbf24e7491cd1d2a51530c77e7f8e856aa1a7c80e2ef74a601c6\": container with ID starting with 35ef6561eff1bbf24e7491cd1d2a51530c77e7f8e856aa1a7c80e2ef74a601c6 not found: ID does not exist"
Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.893759 4896 scope.go:117] "RemoveContainer" containerID="bffd03ef6b8e3a7c39400945c3f4275c73981767223f4f0e00bb0c7a0e2b48f3"
Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.894066 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bffd03ef6b8e3a7c39400945c3f4275c73981767223f4f0e00bb0c7a0e2b48f3"} err="failed to get container status \"bffd03ef6b8e3a7c39400945c3f4275c73981767223f4f0e00bb0c7a0e2b48f3\": rpc error: code = NotFound desc = could not find container \"bffd03ef6b8e3a7c39400945c3f4275c73981767223f4f0e00bb0c7a0e2b48f3\": container with ID starting with bffd03ef6b8e3a7c39400945c3f4275c73981767223f4f0e00bb0c7a0e2b48f3 not found: ID does not exist"
Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.894087 4896 scope.go:117] "RemoveContainer" containerID="46b90698aeaad5ccfaa3052b1a2a922007d3825f08e01c9b031585bd13139496"
Jan 26 16:07:18 crc kubenswrapper[4896]: I0126 16:07:18.894455 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46b90698aeaad5ccfaa3052b1a2a922007d3825f08e01c9b031585bd13139496"} err="failed to get container status \"46b90698aeaad5ccfaa3052b1a2a922007d3825f08e01c9b031585bd13139496\": rpc error: code = NotFound desc = could not find container \"46b90698aeaad5ccfaa3052b1a2a922007d3825f08e01c9b031585bd13139496\": container with ID starting with 46b90698aeaad5ccfaa3052b1a2a922007d3825f08e01c9b031585bd13139496 not found: ID does not exist"
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.069663 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"]
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.078536 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"]
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.124929 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"]
Jan 26 16:07:19 crc kubenswrapper[4896]: E0126 16:07:19.125532 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bab7dd85-b4dc-45d8-ad5f-84dc75483edd" containerName="aodh-evaluator"
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.125561 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="bab7dd85-b4dc-45d8-ad5f-84dc75483edd" containerName="aodh-evaluator"
Jan 26 16:07:19 crc kubenswrapper[4896]: E0126 16:07:19.125600 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bab7dd85-b4dc-45d8-ad5f-84dc75483edd" containerName="aodh-notifier"
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.125611 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="bab7dd85-b4dc-45d8-ad5f-84dc75483edd" containerName="aodh-notifier"
Jan 26 16:07:19 crc kubenswrapper[4896]: E0126 16:07:19.125650 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bab7dd85-b4dc-45d8-ad5f-84dc75483edd" containerName="aodh-listener"
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.125657 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="bab7dd85-b4dc-45d8-ad5f-84dc75483edd" containerName="aodh-listener"
Jan 26 16:07:19 crc kubenswrapper[4896]: E0126 16:07:19.125675 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68cc8d9e-4d5a-4d26-8985-2fcbefbfb839" containerName="heat-engine"
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.125681 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="68cc8d9e-4d5a-4d26-8985-2fcbefbfb839" containerName="heat-engine"
Jan 26 16:07:19 crc kubenswrapper[4896]: E0126 16:07:19.125700 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f8bfeeb-3335-42ef-8d6b-42a35ec463df" containerName="aodh-db-sync"
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.125706 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f8bfeeb-3335-42ef-8d6b-42a35ec463df" containerName="aodh-db-sync"
Jan 26 16:07:19 crc kubenswrapper[4896]: E0126 16:07:19.125715 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bab7dd85-b4dc-45d8-ad5f-84dc75483edd" containerName="aodh-api"
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.125721 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="bab7dd85-b4dc-45d8-ad5f-84dc75483edd" containerName="aodh-api"
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.125938 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f8bfeeb-3335-42ef-8d6b-42a35ec463df" containerName="aodh-db-sync"
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.125963 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="bab7dd85-b4dc-45d8-ad5f-84dc75483edd" containerName="aodh-api"
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.125972 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="68cc8d9e-4d5a-4d26-8985-2fcbefbfb839" containerName="heat-engine"
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.125988 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="bab7dd85-b4dc-45d8-ad5f-84dc75483edd" containerName="aodh-evaluator"
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.125998 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="bab7dd85-b4dc-45d8-ad5f-84dc75483edd" containerName="aodh-notifier"
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.126009 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="bab7dd85-b4dc-45d8-ad5f-84dc75483edd" containerName="aodh-listener"
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.128959 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.136596 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc"
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.136813 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts"
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.137086 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data"
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.137321 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-b2ntx"
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.137415 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc"
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.153437 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.217135 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65h8v\" (UniqueName: \"kubernetes.io/projected/8adb8fd6-0c45-4952-9f55-64937ba92998-kube-api-access-65h8v\") pod \"aodh-0\" (UID: \"8adb8fd6-0c45-4952-9f55-64937ba92998\") " pod="openstack/aodh-0"
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.217208 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8adb8fd6-0c45-4952-9f55-64937ba92998-config-data\") pod \"aodh-0\" (UID: \"8adb8fd6-0c45-4952-9f55-64937ba92998\") " pod="openstack/aodh-0"
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.217297 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8adb8fd6-0c45-4952-9f55-64937ba92998-scripts\") pod \"aodh-0\" (UID: \"8adb8fd6-0c45-4952-9f55-64937ba92998\") " pod="openstack/aodh-0"
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.217730 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8adb8fd6-0c45-4952-9f55-64937ba92998-combined-ca-bundle\") pod \"aodh-0\" (UID: \"8adb8fd6-0c45-4952-9f55-64937ba92998\") " pod="openstack/aodh-0"
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.217930 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8adb8fd6-0c45-4952-9f55-64937ba92998-internal-tls-certs\") pod \"aodh-0\" (UID: \"8adb8fd6-0c45-4952-9f55-64937ba92998\") " pod="openstack/aodh-0"
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.218114 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8adb8fd6-0c45-4952-9f55-64937ba92998-public-tls-certs\") pod \"aodh-0\" (UID: \"8adb8fd6-0c45-4952-9f55-64937ba92998\") " pod="openstack/aodh-0"
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.320407 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65h8v\" (UniqueName: \"kubernetes.io/projected/8adb8fd6-0c45-4952-9f55-64937ba92998-kube-api-access-65h8v\") pod \"aodh-0\" (UID: \"8adb8fd6-0c45-4952-9f55-64937ba92998\") " pod="openstack/aodh-0"
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.320694 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8adb8fd6-0c45-4952-9f55-64937ba92998-config-data\") pod \"aodh-0\" (UID: \"8adb8fd6-0c45-4952-9f55-64937ba92998\") " pod="openstack/aodh-0"
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.320832 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8adb8fd6-0c45-4952-9f55-64937ba92998-scripts\") pod \"aodh-0\" (UID: \"8adb8fd6-0c45-4952-9f55-64937ba92998\") " pod="openstack/aodh-0"
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.321117 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8adb8fd6-0c45-4952-9f55-64937ba92998-combined-ca-bundle\") pod \"aodh-0\" (UID: \"8adb8fd6-0c45-4952-9f55-64937ba92998\") " pod="openstack/aodh-0"
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.321240 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8adb8fd6-0c45-4952-9f55-64937ba92998-internal-tls-certs\") pod \"aodh-0\" (UID: \"8adb8fd6-0c45-4952-9f55-64937ba92998\") " pod="openstack/aodh-0"
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.321462 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8adb8fd6-0c45-4952-9f55-64937ba92998-public-tls-certs\") pod \"aodh-0\" (UID: \"8adb8fd6-0c45-4952-9f55-64937ba92998\") " pod="openstack/aodh-0"
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.327089 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8adb8fd6-0c45-4952-9f55-64937ba92998-config-data\") pod \"aodh-0\" (UID: \"8adb8fd6-0c45-4952-9f55-64937ba92998\") " pod="openstack/aodh-0"
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.327288 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8adb8fd6-0c45-4952-9f55-64937ba92998-internal-tls-certs\") pod \"aodh-0\" (UID: \"8adb8fd6-0c45-4952-9f55-64937ba92998\") " pod="openstack/aodh-0"
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.327534 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8adb8fd6-0c45-4952-9f55-64937ba92998-scripts\") pod \"aodh-0\" (UID: \"8adb8fd6-0c45-4952-9f55-64937ba92998\") " pod="openstack/aodh-0"
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.332129 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8adb8fd6-0c45-4952-9f55-64937ba92998-public-tls-certs\") pod \"aodh-0\" (UID: \"8adb8fd6-0c45-4952-9f55-64937ba92998\") " pod="openstack/aodh-0"
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.332524 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8adb8fd6-0c45-4952-9f55-64937ba92998-combined-ca-bundle\") pod \"aodh-0\" (UID: \"8adb8fd6-0c45-4952-9f55-64937ba92998\") " pod="openstack/aodh-0"
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.346096 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65h8v\" (UniqueName: \"kubernetes.io/projected/8adb8fd6-0c45-4952-9f55-64937ba92998-kube-api-access-65h8v\") pod \"aodh-0\" (UID: \"8adb8fd6-0c45-4952-9f55-64937ba92998\") " pod="openstack/aodh-0"
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.427393 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-1" podUID="22577788-39b3-431e-9a18-7a15b8f66045" containerName="rabbitmq" containerID="cri-o://28c56f09214c1a2ace92b8b5e3a030297230b701bbe3e07a9e099ecf52c7b1a2" gracePeriod=604796
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.503880 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.604800 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zzzkx"
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.736245 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/131acfa1-5305-42d1-9c00-6f0193f795a8-repo-setup-combined-ca-bundle\") pod \"131acfa1-5305-42d1-9c00-6f0193f795a8\" (UID: \"131acfa1-5305-42d1-9c00-6f0193f795a8\") "
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.736565 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5r4bz\" (UniqueName: \"kubernetes.io/projected/131acfa1-5305-42d1-9c00-6f0193f795a8-kube-api-access-5r4bz\") pod \"131acfa1-5305-42d1-9c00-6f0193f795a8\" (UID: \"131acfa1-5305-42d1-9c00-6f0193f795a8\") "
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.737268 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/131acfa1-5305-42d1-9c00-6f0193f795a8-ssh-key-openstack-edpm-ipam\") pod \"131acfa1-5305-42d1-9c00-6f0193f795a8\" (UID: \"131acfa1-5305-42d1-9c00-6f0193f795a8\") "
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.737335 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/131acfa1-5305-42d1-9c00-6f0193f795a8-inventory\") pod \"131acfa1-5305-42d1-9c00-6f0193f795a8\" (UID: \"131acfa1-5305-42d1-9c00-6f0193f795a8\") "
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.742676 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/131acfa1-5305-42d1-9c00-6f0193f795a8-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "131acfa1-5305-42d1-9c00-6f0193f795a8" (UID: "131acfa1-5305-42d1-9c00-6f0193f795a8"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.744643 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/131acfa1-5305-42d1-9c00-6f0193f795a8-kube-api-access-5r4bz" (OuterVolumeSpecName: "kube-api-access-5r4bz") pod "131acfa1-5305-42d1-9c00-6f0193f795a8" (UID: "131acfa1-5305-42d1-9c00-6f0193f795a8"). InnerVolumeSpecName "kube-api-access-5r4bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.754957 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zzzkx" event={"ID":"131acfa1-5305-42d1-9c00-6f0193f795a8","Type":"ContainerDied","Data":"483a7992621f246adba891071f636f0a81107fce4d9b4fc35c368f1f7fdfefb9"}
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.754999 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="483a7992621f246adba891071f636f0a81107fce4d9b4fc35c368f1f7fdfefb9"
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.755079 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zzzkx"
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.790412 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/131acfa1-5305-42d1-9c00-6f0193f795a8-inventory" (OuterVolumeSpecName: "inventory") pod "131acfa1-5305-42d1-9c00-6f0193f795a8" (UID: "131acfa1-5305-42d1-9c00-6f0193f795a8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.848152 4896 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/131acfa1-5305-42d1-9c00-6f0193f795a8-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.848518 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5r4bz\" (UniqueName: \"kubernetes.io/projected/131acfa1-5305-42d1-9c00-6f0193f795a8-kube-api-access-5r4bz\") on node \"crc\" DevicePath \"\""
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.848535 4896 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/131acfa1-5305-42d1-9c00-6f0193f795a8-inventory\") on node \"crc\" DevicePath \"\""
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.902556 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/131acfa1-5305-42d1-9c00-6f0193f795a8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "131acfa1-5305-42d1-9c00-6f0193f795a8" (UID: "131acfa1-5305-42d1-9c00-6f0193f795a8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:07:19 crc kubenswrapper[4896]: I0126 16:07:19.952486 4896 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/131acfa1-5305-42d1-9c00-6f0193f795a8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 26 16:07:20 crc kubenswrapper[4896]: I0126 16:07:20.099185 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Jan 26 16:07:20 crc kubenswrapper[4896]: I0126 16:07:20.099571 4896 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 16:07:20 crc kubenswrapper[4896]: I0126 16:07:20.495103 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-vrhqm"]
Jan 26 16:07:20 crc kubenswrapper[4896]: E0126 16:07:20.496284 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="131acfa1-5305-42d1-9c00-6f0193f795a8" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 26 16:07:20 crc kubenswrapper[4896]: I0126 16:07:20.496313 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="131acfa1-5305-42d1-9c00-6f0193f795a8" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 26 16:07:20 crc kubenswrapper[4896]: I0126 16:07:20.496658 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="131acfa1-5305-42d1-9c00-6f0193f795a8" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 26 16:07:20 crc kubenswrapper[4896]: I0126 16:07:20.497869 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vrhqm"
Jan 26 16:07:20 crc kubenswrapper[4896]: I0126 16:07:20.500771 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-48n6x"
Jan 26 16:07:20 crc kubenswrapper[4896]: I0126 16:07:20.502688 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 26 16:07:20 crc kubenswrapper[4896]: I0126 16:07:20.502992 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 26 16:07:20 crc kubenswrapper[4896]: I0126 16:07:20.503157 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 26 16:07:20 crc kubenswrapper[4896]: I0126 16:07:20.513610 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-vrhqm"]
Jan 26 16:07:20 crc kubenswrapper[4896]: I0126 16:07:20.568141 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8d223a17-39b6-4e7b-b09b-ff398113a048-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vrhqm\" (UID: \"8d223a17-39b6-4e7b-b09b-ff398113a048\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vrhqm"
Jan 26 16:07:20 crc kubenswrapper[4896]: I0126 16:07:20.568297 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxvd9\" (UniqueName: \"kubernetes.io/projected/8d223a17-39b6-4e7b-b09b-ff398113a048-kube-api-access-hxvd9\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vrhqm\" (UID: \"8d223a17-39b6-4e7b-b09b-ff398113a048\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vrhqm"
Jan 26 16:07:20 crc kubenswrapper[4896]: I0126 16:07:20.568413 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8d223a17-39b6-4e7b-b09b-ff398113a048-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vrhqm\" (UID: \"8d223a17-39b6-4e7b-b09b-ff398113a048\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vrhqm"
Jan 26 16:07:20 crc kubenswrapper[4896]: I0126 16:07:20.671917 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8d223a17-39b6-4e7b-b09b-ff398113a048-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vrhqm\" (UID: \"8d223a17-39b6-4e7b-b09b-ff398113a048\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vrhqm"
Jan 26 16:07:20 crc kubenswrapper[4896]: I0126 16:07:20.672094 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxvd9\" (UniqueName: \"kubernetes.io/projected/8d223a17-39b6-4e7b-b09b-ff398113a048-kube-api-access-hxvd9\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vrhqm\" (UID: \"8d223a17-39b6-4e7b-b09b-ff398113a048\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vrhqm"
Jan 26 16:07:20 crc kubenswrapper[4896]: I0126 16:07:20.672202 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8d223a17-39b6-4e7b-b09b-ff398113a048-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vrhqm\" (UID: \"8d223a17-39b6-4e7b-b09b-ff398113a048\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vrhqm"
Jan 26 16:07:20 crc kubenswrapper[4896]: I0126 16:07:20.677424 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8d223a17-39b6-4e7b-b09b-ff398113a048-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vrhqm\" (UID: \"8d223a17-39b6-4e7b-b09b-ff398113a048\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vrhqm"
Jan 26 16:07:20 crc kubenswrapper[4896]: I0126 16:07:20.677680 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8d223a17-39b6-4e7b-b09b-ff398113a048-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vrhqm\" (UID: \"8d223a17-39b6-4e7b-b09b-ff398113a048\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vrhqm"
Jan 26 16:07:20 crc kubenswrapper[4896]: I0126 16:07:20.694379 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxvd9\" (UniqueName: \"kubernetes.io/projected/8d223a17-39b6-4e7b-b09b-ff398113a048-kube-api-access-hxvd9\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vrhqm\" (UID: \"8d223a17-39b6-4e7b-b09b-ff398113a048\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vrhqm"
Jan 26 16:07:20 crc kubenswrapper[4896]: I0126 16:07:20.762661 4896 scope.go:117] "RemoveContainer" containerID="eef508224f0cdcfb0579b0234e72c3c5503ce5cf1713a9bee24c9feccf4983cb"
Jan 26 16:07:20 crc kubenswrapper[4896]: I0126 16:07:20.799527 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bab7dd85-b4dc-45d8-ad5f-84dc75483edd" path="/var/lib/kubelet/pods/bab7dd85-b4dc-45d8-ad5f-84dc75483edd/volumes"
Jan 26 16:07:20 crc kubenswrapper[4896]: I0126 16:07:20.813323 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8adb8fd6-0c45-4952-9f55-64937ba92998","Type":"ContainerStarted","Data":"9a2cb68e09028e2b1e4a1700f62705b833299844ccc781bbb540b2bd83de6cfc"}
Jan 26 16:07:20 crc kubenswrapper[4896]: I0126 16:07:20.835736 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vrhqm"
Jan 26 16:07:21 crc kubenswrapper[4896]: I0126 16:07:21.613736 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-vrhqm"]
Jan 26 16:07:21 crc kubenswrapper[4896]: I0126 16:07:21.827149 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vrhqm" event={"ID":"8d223a17-39b6-4e7b-b09b-ff398113a048","Type":"ContainerStarted","Data":"5af28ee056323916a4e4e53e75f3b24a9c80bd646529e7df16f4d8a79a62eab1"}
Jan 26 16:07:21 crc kubenswrapper[4896]: I0126 16:07:21.830997 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerStarted","Data":"34018192ce9be7ec2fb8dce54a3f8597501bed5661ca078c83367c7d8b68b65e"}
Jan 26 16:07:21 crc kubenswrapper[4896]: I0126 16:07:21.837420 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8adb8fd6-0c45-4952-9f55-64937ba92998","Type":"ContainerStarted","Data":"b9aa7107753311a3a3a72206d8f924141bb913ea2731c443e729e0e415577fc0"}
Jan 26 16:07:22 crc kubenswrapper[4896]: I0126 16:07:22.853839 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8adb8fd6-0c45-4952-9f55-64937ba92998","Type":"ContainerStarted","Data":"466dd5baa1bef85de5c781fda6b3c635961fcf02fe47fca165e18c19fad2481e"}
Jan 26 16:07:22 crc kubenswrapper[4896]: I0126 16:07:22.856275 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vrhqm" event={"ID":"8d223a17-39b6-4e7b-b09b-ff398113a048","Type":"ContainerStarted","Data":"7c67293f67a95b23ee6f9a3a0490b305114ac1bb56afdd9130c40a9ba906d387"}
Jan 26 16:07:22 crc kubenswrapper[4896]: I0126 16:07:22.889774 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vrhqm" podStartSLOduration=3.167878591 podStartE2EDuration="3.88975404s" podCreationTimestamp="2026-01-26 16:07:19 +0000 UTC" firstStartedPulling="2026-01-26 16:07:21.619083227 +0000 UTC m=+1999.400963620" lastFinishedPulling="2026-01-26 16:07:22.340958676 +0000 UTC m=+2000.122839069" observedRunningTime="2026-01-26 16:07:22.877525283 +0000 UTC m=+2000.659405696" watchObservedRunningTime="2026-01-26 16:07:22.88975404 +0000 UTC m=+2000.671634433"
Jan 26 16:07:24 crc kubenswrapper[4896]: I0126 16:07:24.900025 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8adb8fd6-0c45-4952-9f55-64937ba92998","Type":"ContainerStarted","Data":"973396ad6406c53f584a956ff4ce28ceedeff5bf717f7740509f539bbe3fbfcb"}
Jan 26 16:07:25 crc kubenswrapper[4896]: I0126 16:07:25.917265 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8adb8fd6-0c45-4952-9f55-64937ba92998","Type":"ContainerStarted","Data":"f0721b614620c242c87d7e7afc9b830b0431d0168484a58aeeb584592458cc43"}
Jan 26 16:07:25 crc kubenswrapper[4896]: I0126 16:07:25.961653 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.2399999839999998 podStartE2EDuration="6.961627191s" podCreationTimestamp="2026-01-26 16:07:19 +0000 UTC" firstStartedPulling="2026-01-26 16:07:20.099339974 +0000 UTC m=+1997.881220367" lastFinishedPulling="2026-01-26 16:07:24.820967181 +0000 UTC m=+2002.602847574" observedRunningTime="2026-01-26 16:07:25.940387078 +0000 UTC m=+2003.722267471" watchObservedRunningTime="2026-01-26 16:07:25.961627191 +0000 UTC m=+2003.743507584"
Jan 26 16:07:26 crc kubenswrapper[4896]: I0126 16:07:26.930212 4896 generic.go:334] "Generic (PLEG): container finished" podID="8d223a17-39b6-4e7b-b09b-ff398113a048" containerID="7c67293f67a95b23ee6f9a3a0490b305114ac1bb56afdd9130c40a9ba906d387" exitCode=0
Jan 26 16:07:26 crc kubenswrapper[4896]: I0126 16:07:26.930312 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vrhqm" event={"ID":"8d223a17-39b6-4e7b-b09b-ff398113a048","Type":"ContainerDied","Data":"7c67293f67a95b23ee6f9a3a0490b305114ac1bb56afdd9130c40a9ba906d387"}
Jan 26 16:07:26 crc kubenswrapper[4896]: I0126 16:07:26.935307 4896 generic.go:334] "Generic (PLEG): container finished" podID="22577788-39b3-431e-9a18-7a15b8f66045" containerID="28c56f09214c1a2ace92b8b5e3a030297230b701bbe3e07a9e099ecf52c7b1a2" exitCode=0
Jan 26 16:07:26 crc kubenswrapper[4896]: I0126 16:07:26.935434 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"22577788-39b3-431e-9a18-7a15b8f66045","Type":"ContainerDied","Data":"28c56f09214c1a2ace92b8b5e3a030297230b701bbe3e07a9e099ecf52c7b1a2"}
Jan 26 16:07:27 crc kubenswrapper[4896]: I0126 16:07:27.304200 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Jan 26 16:07:27 crc kubenswrapper[4896]: I0126 16:07:27.318507 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/22577788-39b3-431e-9a18-7a15b8f66045-rabbitmq-plugins\") pod \"22577788-39b3-431e-9a18-7a15b8f66045\" (UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") "
Jan 26 16:07:27 crc kubenswrapper[4896]: I0126 16:07:27.318670 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/22577788-39b3-431e-9a18-7a15b8f66045-rabbitmq-confd\") pod \"22577788-39b3-431e-9a18-7a15b8f66045\" (UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") "
Jan 26 16:07:27 crc kubenswrapper[4896]: I0126 16:07:27.318707 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/22577788-39b3-431e-9a18-7a15b8f66045-server-conf\") pod \"22577788-39b3-431e-9a18-7a15b8f66045\" (UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") "
Jan 26 16:07:27 crc kubenswrapper[4896]: I0126 16:07:27.318733 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/22577788-39b3-431e-9a18-7a15b8f66045-config-data\") pod \"22577788-39b3-431e-9a18-7a15b8f66045\" (UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") "
Jan 26 16:07:27 crc kubenswrapper[4896]: I0126 16:07:27.319049 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d9d38b9-a651-47fa-a427-0c890b9beaa3\") pod \"22577788-39b3-431e-9a18-7a15b8f66045\" (UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") "
Jan 26 16:07:27 crc kubenswrapper[4896]: I0126 16:07:27.319144 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/22577788-39b3-431e-9a18-7a15b8f66045-erlang-cookie-secret\") pod \"22577788-39b3-431e-9a18-7a15b8f66045\" (UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") "
Jan 26 16:07:27 crc kubenswrapper[4896]: I0126 16:07:27.319188 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/22577788-39b3-431e-9a18-7a15b8f66045-rabbitmq-tls\") pod \"22577788-39b3-431e-9a18-7a15b8f66045\" (UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") "
Jan 26 16:07:27 crc kubenswrapper[4896]: I0126 16:07:27.319266 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8xhd\" (UniqueName: \"kubernetes.io/projected/22577788-39b3-431e-9a18-7a15b8f66045-kube-api-access-k8xhd\") pod \"22577788-39b3-431e-9a18-7a15b8f66045\" (UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") "
Jan 26 16:07:27 crc kubenswrapper[4896]: I0126 16:07:27.319296 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/22577788-39b3-431e-9a18-7a15b8f66045-pod-info\") pod \"22577788-39b3-431e-9a18-7a15b8f66045\" (UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") "
Jan 26 16:07:27 crc kubenswrapper[4896]: I0126 16:07:27.319355 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/22577788-39b3-431e-9a18-7a15b8f66045-plugins-conf\") pod \"22577788-39b3-431e-9a18-7a15b8f66045\" (UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") "
Jan 26 16:07:27 crc kubenswrapper[4896]: I0126 16:07:27.319407 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/22577788-39b3-431e-9a18-7a15b8f66045-rabbitmq-erlang-cookie\") pod \"22577788-39b3-431e-9a18-7a15b8f66045\" (UID: \"22577788-39b3-431e-9a18-7a15b8f66045\") "
Jan 26 16:07:27 crc kubenswrapper[4896]: I0126 16:07:27.319892 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22577788-39b3-431e-9a18-7a15b8f66045-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "22577788-39b3-431e-9a18-7a15b8f66045" (UID: "22577788-39b3-431e-9a18-7a15b8f66045"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 16:07:27 crc kubenswrapper[4896]: I0126 16:07:27.320259 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22577788-39b3-431e-9a18-7a15b8f66045-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "22577788-39b3-431e-9a18-7a15b8f66045" (UID: "22577788-39b3-431e-9a18-7a15b8f66045"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 16:07:27 crc kubenswrapper[4896]: I0126 16:07:27.320625 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22577788-39b3-431e-9a18-7a15b8f66045-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "22577788-39b3-431e-9a18-7a15b8f66045" (UID: "22577788-39b3-431e-9a18-7a15b8f66045"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 16:07:27 crc kubenswrapper[4896]: I0126 16:07:27.320976 4896 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/22577788-39b3-431e-9a18-7a15b8f66045-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 26 16:07:27 crc kubenswrapper[4896]: I0126 16:07:27.321003 4896 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/22577788-39b3-431e-9a18-7a15b8f66045-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 26 16:07:27 crc kubenswrapper[4896]: I0126 16:07:27.321020 4896 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/22577788-39b3-431e-9a18-7a15b8f66045-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 26 16:07:27 crc kubenswrapper[4896]: I0126 16:07:27.325526 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22577788-39b3-431e-9a18-7a15b8f66045-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "22577788-39b3-431e-9a18-7a15b8f66045" (UID: "22577788-39b3-431e-9a18-7a15b8f66045"). InnerVolumeSpecName "erlang-cookie-secret".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:07:27 crc kubenswrapper[4896]: I0126 16:07:27.327461 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/22577788-39b3-431e-9a18-7a15b8f66045-pod-info" (OuterVolumeSpecName: "pod-info") pod "22577788-39b3-431e-9a18-7a15b8f66045" (UID: "22577788-39b3-431e-9a18-7a15b8f66045"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 26 16:07:27 crc kubenswrapper[4896]: I0126 16:07:27.327976 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22577788-39b3-431e-9a18-7a15b8f66045-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "22577788-39b3-431e-9a18-7a15b8f66045" (UID: "22577788-39b3-431e-9a18-7a15b8f66045"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:07:27 crc kubenswrapper[4896]: I0126 16:07:27.331641 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22577788-39b3-431e-9a18-7a15b8f66045-kube-api-access-k8xhd" (OuterVolumeSpecName: "kube-api-access-k8xhd") pod "22577788-39b3-431e-9a18-7a15b8f66045" (UID: "22577788-39b3-431e-9a18-7a15b8f66045"). InnerVolumeSpecName "kube-api-access-k8xhd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:07:27 crc kubenswrapper[4896]: I0126 16:07:27.390772 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d9d38b9-a651-47fa-a427-0c890b9beaa3" (OuterVolumeSpecName: "persistence") pod "22577788-39b3-431e-9a18-7a15b8f66045" (UID: "22577788-39b3-431e-9a18-7a15b8f66045"). InnerVolumeSpecName "pvc-4d9d38b9-a651-47fa-a427-0c890b9beaa3". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 16:07:27 crc kubenswrapper[4896]: I0126 16:07:27.402776 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22577788-39b3-431e-9a18-7a15b8f66045-config-data" (OuterVolumeSpecName: "config-data") pod "22577788-39b3-431e-9a18-7a15b8f66045" (UID: "22577788-39b3-431e-9a18-7a15b8f66045"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:07:27 crc kubenswrapper[4896]: I0126 16:07:27.491157 4896 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-4d9d38b9-a651-47fa-a427-0c890b9beaa3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d9d38b9-a651-47fa-a427-0c890b9beaa3\") on node \"crc\" " Jan 26 16:07:27 crc kubenswrapper[4896]: I0126 16:07:27.492020 4896 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/22577788-39b3-431e-9a18-7a15b8f66045-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 26 16:07:27 crc kubenswrapper[4896]: I0126 16:07:27.503757 4896 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/22577788-39b3-431e-9a18-7a15b8f66045-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 26 16:07:27 crc kubenswrapper[4896]: I0126 16:07:27.503778 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8xhd\" (UniqueName: \"kubernetes.io/projected/22577788-39b3-431e-9a18-7a15b8f66045-kube-api-access-k8xhd\") on node \"crc\" DevicePath \"\"" Jan 26 16:07:27 crc kubenswrapper[4896]: I0126 16:07:27.503793 4896 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/22577788-39b3-431e-9a18-7a15b8f66045-pod-info\") on node \"crc\" DevicePath \"\"" Jan 26 16:07:27 crc kubenswrapper[4896]: I0126 16:07:27.584710 4896 csi_attacher.go:630] kubernetes.io/csi: 
attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 26 16:07:27 crc kubenswrapper[4896]: I0126 16:07:27.584879 4896 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-4d9d38b9-a651-47fa-a427-0c890b9beaa3" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d9d38b9-a651-47fa-a427-0c890b9beaa3") on node "crc" Jan 26 16:07:27 crc kubenswrapper[4896]: I0126 16:07:27.606075 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/22577788-39b3-431e-9a18-7a15b8f66045-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:07:27 crc kubenswrapper[4896]: I0126 16:07:27.606109 4896 reconciler_common.go:293] "Volume detached for volume \"pvc-4d9d38b9-a651-47fa-a427-0c890b9beaa3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d9d38b9-a651-47fa-a427-0c890b9beaa3\") on node \"crc\" DevicePath \"\"" Jan 26 16:07:27 crc kubenswrapper[4896]: I0126 16:07:27.671491 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22577788-39b3-431e-9a18-7a15b8f66045-server-conf" (OuterVolumeSpecName: "server-conf") pod "22577788-39b3-431e-9a18-7a15b8f66045" (UID: "22577788-39b3-431e-9a18-7a15b8f66045"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:07:27 crc kubenswrapper[4896]: I0126 16:07:27.697341 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22577788-39b3-431e-9a18-7a15b8f66045-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "22577788-39b3-431e-9a18-7a15b8f66045" (UID: "22577788-39b3-431e-9a18-7a15b8f66045"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:07:27 crc kubenswrapper[4896]: I0126 16:07:27.708070 4896 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/22577788-39b3-431e-9a18-7a15b8f66045-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 26 16:07:27 crc kubenswrapper[4896]: I0126 16:07:27.708123 4896 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/22577788-39b3-431e-9a18-7a15b8f66045-server-conf\") on node \"crc\" DevicePath \"\"" Jan 26 16:07:27 crc kubenswrapper[4896]: I0126 16:07:27.948120 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 26 16:07:27 crc kubenswrapper[4896]: I0126 16:07:27.948821 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"22577788-39b3-431e-9a18-7a15b8f66045","Type":"ContainerDied","Data":"eff34ca4bca5a2e8820146029990ef0c3f00add20843d555cab93bba335cc87f"} Jan 26 16:07:27 crc kubenswrapper[4896]: I0126 16:07:27.948895 4896 scope.go:117] "RemoveContainer" containerID="28c56f09214c1a2ace92b8b5e3a030297230b701bbe3e07a9e099ecf52c7b1a2" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.012986 4896 scope.go:117] "RemoveContainer" containerID="866e250b3dc594a32f2d37390a2c3e08821f48734dcc9202ca6c3e16478395fd" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.018535 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.056367 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.082539 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Jan 26 16:07:28 crc kubenswrapper[4896]: E0126 16:07:28.085136 4896 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="22577788-39b3-431e-9a18-7a15b8f66045" containerName="rabbitmq" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.085161 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="22577788-39b3-431e-9a18-7a15b8f66045" containerName="rabbitmq" Jan 26 16:07:28 crc kubenswrapper[4896]: E0126 16:07:28.085179 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22577788-39b3-431e-9a18-7a15b8f66045" containerName="setup-container" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.085187 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="22577788-39b3-431e-9a18-7a15b8f66045" containerName="setup-container" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.085546 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="22577788-39b3-431e-9a18-7a15b8f66045" containerName="rabbitmq" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.090135 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.113312 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.126918 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6f7a7630-9c4f-4ff5-94c5-faa1cef560d0-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"6f7a7630-9c4f-4ff5-94c5-faa1cef560d0\") " pod="openstack/rabbitmq-server-1" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.126982 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6f7a7630-9c4f-4ff5-94c5-faa1cef560d0-server-conf\") pod \"rabbitmq-server-1\" (UID: \"6f7a7630-9c4f-4ff5-94c5-faa1cef560d0\") " pod="openstack/rabbitmq-server-1" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 
16:07:28.127152 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4d9d38b9-a651-47fa-a427-0c890b9beaa3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d9d38b9-a651-47fa-a427-0c890b9beaa3\") pod \"rabbitmq-server-1\" (UID: \"6f7a7630-9c4f-4ff5-94c5-faa1cef560d0\") " pod="openstack/rabbitmq-server-1" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.127194 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6f7a7630-9c4f-4ff5-94c5-faa1cef560d0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"6f7a7630-9c4f-4ff5-94c5-faa1cef560d0\") " pod="openstack/rabbitmq-server-1" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.127326 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcchk\" (UniqueName: \"kubernetes.io/projected/6f7a7630-9c4f-4ff5-94c5-faa1cef560d0-kube-api-access-dcchk\") pod \"rabbitmq-server-1\" (UID: \"6f7a7630-9c4f-4ff5-94c5-faa1cef560d0\") " pod="openstack/rabbitmq-server-1" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.127569 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6f7a7630-9c4f-4ff5-94c5-faa1cef560d0-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"6f7a7630-9c4f-4ff5-94c5-faa1cef560d0\") " pod="openstack/rabbitmq-server-1" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.127746 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6f7a7630-9c4f-4ff5-94c5-faa1cef560d0-pod-info\") pod \"rabbitmq-server-1\" (UID: \"6f7a7630-9c4f-4ff5-94c5-faa1cef560d0\") " pod="openstack/rabbitmq-server-1" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 
16:07:28.128156 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6f7a7630-9c4f-4ff5-94c5-faa1cef560d0-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"6f7a7630-9c4f-4ff5-94c5-faa1cef560d0\") " pod="openstack/rabbitmq-server-1" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.128430 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6f7a7630-9c4f-4ff5-94c5-faa1cef560d0-config-data\") pod \"rabbitmq-server-1\" (UID: \"6f7a7630-9c4f-4ff5-94c5-faa1cef560d0\") " pod="openstack/rabbitmq-server-1" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.128624 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6f7a7630-9c4f-4ff5-94c5-faa1cef560d0-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"6f7a7630-9c4f-4ff5-94c5-faa1cef560d0\") " pod="openstack/rabbitmq-server-1" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.128677 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6f7a7630-9c4f-4ff5-94c5-faa1cef560d0-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"6f7a7630-9c4f-4ff5-94c5-faa1cef560d0\") " pod="openstack/rabbitmq-server-1" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.230554 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcchk\" (UniqueName: \"kubernetes.io/projected/6f7a7630-9c4f-4ff5-94c5-faa1cef560d0-kube-api-access-dcchk\") pod \"rabbitmq-server-1\" (UID: \"6f7a7630-9c4f-4ff5-94c5-faa1cef560d0\") " pod="openstack/rabbitmq-server-1" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.231016 4896 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6f7a7630-9c4f-4ff5-94c5-faa1cef560d0-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"6f7a7630-9c4f-4ff5-94c5-faa1cef560d0\") " pod="openstack/rabbitmq-server-1" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.231056 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6f7a7630-9c4f-4ff5-94c5-faa1cef560d0-pod-info\") pod \"rabbitmq-server-1\" (UID: \"6f7a7630-9c4f-4ff5-94c5-faa1cef560d0\") " pod="openstack/rabbitmq-server-1" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.231134 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6f7a7630-9c4f-4ff5-94c5-faa1cef560d0-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"6f7a7630-9c4f-4ff5-94c5-faa1cef560d0\") " pod="openstack/rabbitmq-server-1" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.231218 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6f7a7630-9c4f-4ff5-94c5-faa1cef560d0-config-data\") pod \"rabbitmq-server-1\" (UID: \"6f7a7630-9c4f-4ff5-94c5-faa1cef560d0\") " pod="openstack/rabbitmq-server-1" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.231296 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6f7a7630-9c4f-4ff5-94c5-faa1cef560d0-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"6f7a7630-9c4f-4ff5-94c5-faa1cef560d0\") " pod="openstack/rabbitmq-server-1" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.231336 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6f7a7630-9c4f-4ff5-94c5-faa1cef560d0-plugins-conf\") pod 
\"rabbitmq-server-1\" (UID: \"6f7a7630-9c4f-4ff5-94c5-faa1cef560d0\") " pod="openstack/rabbitmq-server-1" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.231388 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6f7a7630-9c4f-4ff5-94c5-faa1cef560d0-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"6f7a7630-9c4f-4ff5-94c5-faa1cef560d0\") " pod="openstack/rabbitmq-server-1" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.231472 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6f7a7630-9c4f-4ff5-94c5-faa1cef560d0-server-conf\") pod \"rabbitmq-server-1\" (UID: \"6f7a7630-9c4f-4ff5-94c5-faa1cef560d0\") " pod="openstack/rabbitmq-server-1" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.231534 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4d9d38b9-a651-47fa-a427-0c890b9beaa3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d9d38b9-a651-47fa-a427-0c890b9beaa3\") pod \"rabbitmq-server-1\" (UID: \"6f7a7630-9c4f-4ff5-94c5-faa1cef560d0\") " pod="openstack/rabbitmq-server-1" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.231553 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6f7a7630-9c4f-4ff5-94c5-faa1cef560d0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"6f7a7630-9c4f-4ff5-94c5-faa1cef560d0\") " pod="openstack/rabbitmq-server-1" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.232067 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6f7a7630-9c4f-4ff5-94c5-faa1cef560d0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"6f7a7630-9c4f-4ff5-94c5-faa1cef560d0\") " 
pod="openstack/rabbitmq-server-1" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.232742 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6f7a7630-9c4f-4ff5-94c5-faa1cef560d0-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"6f7a7630-9c4f-4ff5-94c5-faa1cef560d0\") " pod="openstack/rabbitmq-server-1" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.235992 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6f7a7630-9c4f-4ff5-94c5-faa1cef560d0-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"6f7a7630-9c4f-4ff5-94c5-faa1cef560d0\") " pod="openstack/rabbitmq-server-1" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.236349 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6f7a7630-9c4f-4ff5-94c5-faa1cef560d0-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"6f7a7630-9c4f-4ff5-94c5-faa1cef560d0\") " pod="openstack/rabbitmq-server-1" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.236989 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6f7a7630-9c4f-4ff5-94c5-faa1cef560d0-config-data\") pod \"rabbitmq-server-1\" (UID: \"6f7a7630-9c4f-4ff5-94c5-faa1cef560d0\") " pod="openstack/rabbitmq-server-1" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.237860 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6f7a7630-9c4f-4ff5-94c5-faa1cef560d0-server-conf\") pod \"rabbitmq-server-1\" (UID: \"6f7a7630-9c4f-4ff5-94c5-faa1cef560d0\") " pod="openstack/rabbitmq-server-1" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.242290 4896 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. 
Skipping MountDevice... Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.242344 4896 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4d9d38b9-a651-47fa-a427-0c890b9beaa3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d9d38b9-a651-47fa-a427-0c890b9beaa3\") pod \"rabbitmq-server-1\" (UID: \"6f7a7630-9c4f-4ff5-94c5-faa1cef560d0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/94c85d97425b250affc0ea1c678ad89fb07fe2d358f8324bcb930f17f72e2721/globalmount\"" pod="openstack/rabbitmq-server-1" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.251110 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6f7a7630-9c4f-4ff5-94c5-faa1cef560d0-pod-info\") pod \"rabbitmq-server-1\" (UID: \"6f7a7630-9c4f-4ff5-94c5-faa1cef560d0\") " pod="openstack/rabbitmq-server-1" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.251627 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6f7a7630-9c4f-4ff5-94c5-faa1cef560d0-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"6f7a7630-9c4f-4ff5-94c5-faa1cef560d0\") " pod="openstack/rabbitmq-server-1" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.251804 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6f7a7630-9c4f-4ff5-94c5-faa1cef560d0-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"6f7a7630-9c4f-4ff5-94c5-faa1cef560d0\") " pod="openstack/rabbitmq-server-1" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.252310 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcchk\" (UniqueName: \"kubernetes.io/projected/6f7a7630-9c4f-4ff5-94c5-faa1cef560d0-kube-api-access-dcchk\") pod \"rabbitmq-server-1\" (UID: 
\"6f7a7630-9c4f-4ff5-94c5-faa1cef560d0\") " pod="openstack/rabbitmq-server-1" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.349502 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4d9d38b9-a651-47fa-a427-0c890b9beaa3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d9d38b9-a651-47fa-a427-0c890b9beaa3\") pod \"rabbitmq-server-1\" (UID: \"6f7a7630-9c4f-4ff5-94c5-faa1cef560d0\") " pod="openstack/rabbitmq-server-1" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.446617 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.630205 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vrhqm" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.652265 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8d223a17-39b6-4e7b-b09b-ff398113a048-inventory\") pod \"8d223a17-39b6-4e7b-b09b-ff398113a048\" (UID: \"8d223a17-39b6-4e7b-b09b-ff398113a048\") " Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.652411 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxvd9\" (UniqueName: \"kubernetes.io/projected/8d223a17-39b6-4e7b-b09b-ff398113a048-kube-api-access-hxvd9\") pod \"8d223a17-39b6-4e7b-b09b-ff398113a048\" (UID: \"8d223a17-39b6-4e7b-b09b-ff398113a048\") " Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.652674 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8d223a17-39b6-4e7b-b09b-ff398113a048-ssh-key-openstack-edpm-ipam\") pod \"8d223a17-39b6-4e7b-b09b-ff398113a048\" (UID: \"8d223a17-39b6-4e7b-b09b-ff398113a048\") " Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 
16:07:28.666085 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d223a17-39b6-4e7b-b09b-ff398113a048-kube-api-access-hxvd9" (OuterVolumeSpecName: "kube-api-access-hxvd9") pod "8d223a17-39b6-4e7b-b09b-ff398113a048" (UID: "8d223a17-39b6-4e7b-b09b-ff398113a048"). InnerVolumeSpecName "kube-api-access-hxvd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.705190 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d223a17-39b6-4e7b-b09b-ff398113a048-inventory" (OuterVolumeSpecName: "inventory") pod "8d223a17-39b6-4e7b-b09b-ff398113a048" (UID: "8d223a17-39b6-4e7b-b09b-ff398113a048"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.749639 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d223a17-39b6-4e7b-b09b-ff398113a048-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8d223a17-39b6-4e7b-b09b-ff398113a048" (UID: "8d223a17-39b6-4e7b-b09b-ff398113a048"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.758687 4896 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8d223a17-39b6-4e7b-b09b-ff398113a048-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.758763 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxvd9\" (UniqueName: \"kubernetes.io/projected/8d223a17-39b6-4e7b-b09b-ff398113a048-kube-api-access-hxvd9\") on node \"crc\" DevicePath \"\"" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.758779 4896 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8d223a17-39b6-4e7b-b09b-ff398113a048-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.782496 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22577788-39b3-431e-9a18-7a15b8f66045" path="/var/lib/kubelet/pods/22577788-39b3-431e-9a18-7a15b8f66045/volumes" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.965230 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vrhqm" event={"ID":"8d223a17-39b6-4e7b-b09b-ff398113a048","Type":"ContainerDied","Data":"5af28ee056323916a4e4e53e75f3b24a9c80bd646529e7df16f4d8a79a62eab1"} Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.965279 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5af28ee056323916a4e4e53e75f3b24a9c80bd646529e7df16f4d8a79a62eab1" Jan 26 16:07:28 crc kubenswrapper[4896]: I0126 16:07:28.965346 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vrhqm" Jan 26 16:07:29 crc kubenswrapper[4896]: I0126 16:07:29.049906 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 26 16:07:29 crc kubenswrapper[4896]: W0126 16:07:29.057517 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f7a7630_9c4f_4ff5_94c5_faa1cef560d0.slice/crio-3fd542439ddd8aca02b154fba805e4243199a062995083eed5e79ed3b838f773 WatchSource:0}: Error finding container 3fd542439ddd8aca02b154fba805e4243199a062995083eed5e79ed3b838f773: Status 404 returned error can't find the container with id 3fd542439ddd8aca02b154fba805e4243199a062995083eed5e79ed3b838f773 Jan 26 16:07:29 crc kubenswrapper[4896]: I0126 16:07:29.067107 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-86p4s"] Jan 26 16:07:29 crc kubenswrapper[4896]: E0126 16:07:29.068046 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d223a17-39b6-4e7b-b09b-ff398113a048" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 26 16:07:29 crc kubenswrapper[4896]: I0126 16:07:29.068069 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d223a17-39b6-4e7b-b09b-ff398113a048" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 26 16:07:29 crc kubenswrapper[4896]: I0126 16:07:29.068545 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d223a17-39b6-4e7b-b09b-ff398113a048" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 26 16:07:29 crc kubenswrapper[4896]: I0126 16:07:29.070009 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-86p4s" Jan 26 16:07:29 crc kubenswrapper[4896]: I0126 16:07:29.073769 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 16:07:29 crc kubenswrapper[4896]: I0126 16:07:29.074047 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 16:07:29 crc kubenswrapper[4896]: I0126 16:07:29.074710 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:07:29 crc kubenswrapper[4896]: I0126 16:07:29.075366 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-48n6x" Jan 26 16:07:29 crc kubenswrapper[4896]: I0126 16:07:29.085718 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-86p4s"] Jan 26 16:07:29 crc kubenswrapper[4896]: I0126 16:07:29.168856 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/53e07773-3354-4826-bcf0-41909ecb1a20-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-86p4s\" (UID: \"53e07773-3354-4826-bcf0-41909ecb1a20\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-86p4s" Jan 26 16:07:29 crc kubenswrapper[4896]: I0126 16:07:29.168921 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53e07773-3354-4826-bcf0-41909ecb1a20-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-86p4s\" (UID: \"53e07773-3354-4826-bcf0-41909ecb1a20\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-86p4s" Jan 26 16:07:29 crc kubenswrapper[4896]: 
I0126 16:07:29.169433 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/53e07773-3354-4826-bcf0-41909ecb1a20-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-86p4s\" (UID: \"53e07773-3354-4826-bcf0-41909ecb1a20\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-86p4s" Jan 26 16:07:29 crc kubenswrapper[4896]: I0126 16:07:29.169823 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26k7r\" (UniqueName: \"kubernetes.io/projected/53e07773-3354-4826-bcf0-41909ecb1a20-kube-api-access-26k7r\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-86p4s\" (UID: \"53e07773-3354-4826-bcf0-41909ecb1a20\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-86p4s" Jan 26 16:07:29 crc kubenswrapper[4896]: I0126 16:07:29.273250 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/53e07773-3354-4826-bcf0-41909ecb1a20-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-86p4s\" (UID: \"53e07773-3354-4826-bcf0-41909ecb1a20\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-86p4s" Jan 26 16:07:29 crc kubenswrapper[4896]: I0126 16:07:29.273663 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26k7r\" (UniqueName: \"kubernetes.io/projected/53e07773-3354-4826-bcf0-41909ecb1a20-kube-api-access-26k7r\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-86p4s\" (UID: \"53e07773-3354-4826-bcf0-41909ecb1a20\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-86p4s" Jan 26 16:07:29 crc kubenswrapper[4896]: I0126 16:07:29.273903 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/53e07773-3354-4826-bcf0-41909ecb1a20-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-86p4s\" (UID: \"53e07773-3354-4826-bcf0-41909ecb1a20\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-86p4s" Jan 26 16:07:29 crc kubenswrapper[4896]: I0126 16:07:29.273950 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53e07773-3354-4826-bcf0-41909ecb1a20-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-86p4s\" (UID: \"53e07773-3354-4826-bcf0-41909ecb1a20\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-86p4s" Jan 26 16:07:29 crc kubenswrapper[4896]: I0126 16:07:29.279335 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53e07773-3354-4826-bcf0-41909ecb1a20-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-86p4s\" (UID: \"53e07773-3354-4826-bcf0-41909ecb1a20\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-86p4s" Jan 26 16:07:29 crc kubenswrapper[4896]: I0126 16:07:29.283913 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/53e07773-3354-4826-bcf0-41909ecb1a20-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-86p4s\" (UID: \"53e07773-3354-4826-bcf0-41909ecb1a20\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-86p4s" Jan 26 16:07:29 crc kubenswrapper[4896]: I0126 16:07:29.289350 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/53e07773-3354-4826-bcf0-41909ecb1a20-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-86p4s\" (UID: \"53e07773-3354-4826-bcf0-41909ecb1a20\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-86p4s" Jan 26 16:07:29 crc kubenswrapper[4896]: I0126 16:07:29.294206 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26k7r\" (UniqueName: \"kubernetes.io/projected/53e07773-3354-4826-bcf0-41909ecb1a20-kube-api-access-26k7r\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-86p4s\" (UID: \"53e07773-3354-4826-bcf0-41909ecb1a20\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-86p4s" Jan 26 16:07:29 crc kubenswrapper[4896]: I0126 16:07:29.403025 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-86p4s" Jan 26 16:07:29 crc kubenswrapper[4896]: W0126 16:07:29.962098 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53e07773_3354_4826_bcf0_41909ecb1a20.slice/crio-10cfb75139fac3948c258b44c46ae4a2caa92cdcfee9a549062ed861f0705f4c WatchSource:0}: Error finding container 10cfb75139fac3948c258b44c46ae4a2caa92cdcfee9a549062ed861f0705f4c: Status 404 returned error can't find the container with id 10cfb75139fac3948c258b44c46ae4a2caa92cdcfee9a549062ed861f0705f4c Jan 26 16:07:29 crc kubenswrapper[4896]: I0126 16:07:29.965701 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-86p4s"] Jan 26 16:07:29 crc kubenswrapper[4896]: I0126 16:07:29.979250 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-86p4s" event={"ID":"53e07773-3354-4826-bcf0-41909ecb1a20","Type":"ContainerStarted","Data":"10cfb75139fac3948c258b44c46ae4a2caa92cdcfee9a549062ed861f0705f4c"} Jan 26 16:07:29 crc kubenswrapper[4896]: I0126 16:07:29.981013 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" 
event={"ID":"6f7a7630-9c4f-4ff5-94c5-faa1cef560d0","Type":"ContainerStarted","Data":"3fd542439ddd8aca02b154fba805e4243199a062995083eed5e79ed3b838f773"} Jan 26 16:07:32 crc kubenswrapper[4896]: I0126 16:07:32.006683 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-86p4s" event={"ID":"53e07773-3354-4826-bcf0-41909ecb1a20","Type":"ContainerStarted","Data":"9ba36b3ccd144853c053226f3c97ac480674c24c3ceada85f484cad01c0b82c9"} Jan 26 16:07:32 crc kubenswrapper[4896]: I0126 16:07:32.010140 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"6f7a7630-9c4f-4ff5-94c5-faa1cef560d0","Type":"ContainerStarted","Data":"1385df5cfbccf4e09ba75944c709e0dd191ee9534d60a823695684af63550072"} Jan 26 16:07:32 crc kubenswrapper[4896]: I0126 16:07:32.031085 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-86p4s" podStartSLOduration=2.111074608 podStartE2EDuration="3.031059169s" podCreationTimestamp="2026-01-26 16:07:29 +0000 UTC" firstStartedPulling="2026-01-26 16:07:29.966260177 +0000 UTC m=+2007.748140560" lastFinishedPulling="2026-01-26 16:07:30.886244718 +0000 UTC m=+2008.668125121" observedRunningTime="2026-01-26 16:07:32.02186456 +0000 UTC m=+2009.803744953" watchObservedRunningTime="2026-01-26 16:07:32.031059169 +0000 UTC m=+2009.812939562" Jan 26 16:07:36 crc kubenswrapper[4896]: I0126 16:07:36.794129 4896 scope.go:117] "RemoveContainer" containerID="8440e67ab95867c0d7df93ad7f9ec8896697f06c063a76b386b8ee6c2e356319" Jan 26 16:07:36 crc kubenswrapper[4896]: I0126 16:07:36.827773 4896 scope.go:117] "RemoveContainer" containerID="920c46c1a87b39a70834c4fe0e1c2a403592a7ead2395483281a55fafdcdd729" Jan 26 16:07:36 crc kubenswrapper[4896]: I0126 16:07:36.852508 4896 scope.go:117] "RemoveContainer" containerID="f3b6acb35a7782688ba62d8a2815b1e68bcec9d750cf217a7aa2cbb4bc0e7f90" Jan 26 16:07:36 crc 
kubenswrapper[4896]: I0126 16:07:36.878386 4896 scope.go:117] "RemoveContainer" containerID="9e0b0bf388e3869092f6535c2e906143f41dd31e5b5c5d03222d8c3f0ae4654e" Jan 26 16:07:36 crc kubenswrapper[4896]: I0126 16:07:36.910808 4896 scope.go:117] "RemoveContainer" containerID="3f55a80ef515b8848dd1812c1d377cc9066c9e86c7581abef97d343be1a42c4f" Jan 26 16:07:36 crc kubenswrapper[4896]: I0126 16:07:36.969261 4896 scope.go:117] "RemoveContainer" containerID="2062042a5e34295a16e9261ee5602c003cb198f6c68330944d0b5b1e061b11f9" Jan 26 16:07:37 crc kubenswrapper[4896]: I0126 16:07:37.006358 4896 scope.go:117] "RemoveContainer" containerID="7a6b1609775c9d916058bff704f5dcf8bbb6a7d1dcf0cfa730d62001255b6deb" Jan 26 16:08:03 crc kubenswrapper[4896]: I0126 16:08:03.395345 4896 generic.go:334] "Generic (PLEG): container finished" podID="6f7a7630-9c4f-4ff5-94c5-faa1cef560d0" containerID="1385df5cfbccf4e09ba75944c709e0dd191ee9534d60a823695684af63550072" exitCode=0 Jan 26 16:08:03 crc kubenswrapper[4896]: I0126 16:08:03.395423 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"6f7a7630-9c4f-4ff5-94c5-faa1cef560d0","Type":"ContainerDied","Data":"1385df5cfbccf4e09ba75944c709e0dd191ee9534d60a823695684af63550072"} Jan 26 16:08:04 crc kubenswrapper[4896]: I0126 16:08:04.410535 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"6f7a7630-9c4f-4ff5-94c5-faa1cef560d0","Type":"ContainerStarted","Data":"6f1ed097625d3b5378b7381bf4ecee0b139a67f7bcc7d511bc037eff4da97208"} Jan 26 16:08:04 crc kubenswrapper[4896]: I0126 16:08:04.412569 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Jan 26 16:08:04 crc kubenswrapper[4896]: I0126 16:08:04.439005 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=36.438977713 podStartE2EDuration="36.438977713s" podCreationTimestamp="2026-01-26 
16:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:08:04.436296497 +0000 UTC m=+2042.218176890" watchObservedRunningTime="2026-01-26 16:08:04.438977713 +0000 UTC m=+2042.220858106" Jan 26 16:08:18 crc kubenswrapper[4896]: I0126 16:08:18.451833 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Jan 26 16:08:18 crc kubenswrapper[4896]: I0126 16:08:18.507989 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 16:08:23 crc kubenswrapper[4896]: I0126 16:08:23.137516 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="45b5821a-5c82-485e-ade4-f6de2aea62d7" containerName="rabbitmq" containerID="cri-o://73ceff3dcd971d772975c5b6851f460cf95e1dc4c841ea2d7e11d18e8255150a" gracePeriod=604796 Jan 26 16:08:27 crc kubenswrapper[4896]: I0126 16:08:27.908067 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="45b5821a-5c82-485e-ade4-f6de2aea62d7" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.128:5671: connect: connection refused" Jan 26 16:08:29 crc kubenswrapper[4896]: I0126 16:08:29.769191 4896 generic.go:334] "Generic (PLEG): container finished" podID="45b5821a-5c82-485e-ade4-f6de2aea62d7" containerID="73ceff3dcd971d772975c5b6851f460cf95e1dc4c841ea2d7e11d18e8255150a" exitCode=0 Jan 26 16:08:29 crc kubenswrapper[4896]: I0126 16:08:29.769243 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"45b5821a-5c82-485e-ade4-f6de2aea62d7","Type":"ContainerDied","Data":"73ceff3dcd971d772975c5b6851f460cf95e1dc4c841ea2d7e11d18e8255150a"} Jan 26 16:08:29 crc kubenswrapper[4896]: I0126 16:08:29.769713 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"45b5821a-5c82-485e-ade4-f6de2aea62d7","Type":"ContainerDied","Data":"7cf0271e8cecc0204ac73bf78d8fc3806c86d2871e6c813a3be726bb58cfa955"} Jan 26 16:08:29 crc kubenswrapper[4896]: I0126 16:08:29.769730 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cf0271e8cecc0204ac73bf78d8fc3806c86d2871e6c813a3be726bb58cfa955" Jan 26 16:08:30 crc kubenswrapper[4896]: I0126 16:08:30.128560 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 16:08:30 crc kubenswrapper[4896]: I0126 16:08:30.254100 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/45b5821a-5c82-485e-ade4-f6de2aea62d7-rabbitmq-plugins\") pod \"45b5821a-5c82-485e-ade4-f6de2aea62d7\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") " Jan 26 16:08:30 crc kubenswrapper[4896]: I0126 16:08:30.254297 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/45b5821a-5c82-485e-ade4-f6de2aea62d7-erlang-cookie-secret\") pod \"45b5821a-5c82-485e-ade4-f6de2aea62d7\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") " Jan 26 16:08:30 crc kubenswrapper[4896]: I0126 16:08:30.254339 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/45b5821a-5c82-485e-ade4-f6de2aea62d7-rabbitmq-erlang-cookie\") pod \"45b5821a-5c82-485e-ade4-f6de2aea62d7\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") " Jan 26 16:08:30 crc kubenswrapper[4896]: I0126 16:08:30.254424 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/45b5821a-5c82-485e-ade4-f6de2aea62d7-rabbitmq-tls\") pod \"45b5821a-5c82-485e-ade4-f6de2aea62d7\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") " Jan 26 
16:08:30 crc kubenswrapper[4896]: I0126 16:08:30.254448 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/45b5821a-5c82-485e-ade4-f6de2aea62d7-plugins-conf\") pod \"45b5821a-5c82-485e-ade4-f6de2aea62d7\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") " Jan 26 16:08:30 crc kubenswrapper[4896]: I0126 16:08:30.254480 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghd4l\" (UniqueName: \"kubernetes.io/projected/45b5821a-5c82-485e-ade4-f6de2aea62d7-kube-api-access-ghd4l\") pod \"45b5821a-5c82-485e-ade4-f6de2aea62d7\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") " Jan 26 16:08:30 crc kubenswrapper[4896]: I0126 16:08:30.254494 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/45b5821a-5c82-485e-ade4-f6de2aea62d7-server-conf\") pod \"45b5821a-5c82-485e-ade4-f6de2aea62d7\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") " Jan 26 16:08:30 crc kubenswrapper[4896]: I0126 16:08:30.254673 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45b5821a-5c82-485e-ade4-f6de2aea62d7-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "45b5821a-5c82-485e-ade4-f6de2aea62d7" (UID: "45b5821a-5c82-485e-ade4-f6de2aea62d7"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:08:30 crc kubenswrapper[4896]: I0126 16:08:30.255208 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c39c5a7e-a72b-4177-b0c9-2e1f9fea36c1\") pod \"45b5821a-5c82-485e-ade4-f6de2aea62d7\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") " Jan 26 16:08:30 crc kubenswrapper[4896]: I0126 16:08:30.255258 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/45b5821a-5c82-485e-ade4-f6de2aea62d7-rabbitmq-confd\") pod \"45b5821a-5c82-485e-ade4-f6de2aea62d7\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") " Jan 26 16:08:30 crc kubenswrapper[4896]: I0126 16:08:30.255281 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/45b5821a-5c82-485e-ade4-f6de2aea62d7-pod-info\") pod \"45b5821a-5c82-485e-ade4-f6de2aea62d7\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") " Jan 26 16:08:30 crc kubenswrapper[4896]: I0126 16:08:30.255303 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/45b5821a-5c82-485e-ade4-f6de2aea62d7-config-data\") pod \"45b5821a-5c82-485e-ade4-f6de2aea62d7\" (UID: \"45b5821a-5c82-485e-ade4-f6de2aea62d7\") " Jan 26 16:08:30 crc kubenswrapper[4896]: I0126 16:08:30.256027 4896 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/45b5821a-5c82-485e-ade4-f6de2aea62d7-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 26 16:08:30 crc kubenswrapper[4896]: I0126 16:08:30.256048 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45b5821a-5c82-485e-ade4-f6de2aea62d7-rabbitmq-erlang-cookie" (OuterVolumeSpecName: 
"rabbitmq-erlang-cookie") pod "45b5821a-5c82-485e-ade4-f6de2aea62d7" (UID: "45b5821a-5c82-485e-ade4-f6de2aea62d7"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:08:30 crc kubenswrapper[4896]: I0126 16:08:30.264784 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45b5821a-5c82-485e-ade4-f6de2aea62d7-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "45b5821a-5c82-485e-ade4-f6de2aea62d7" (UID: "45b5821a-5c82-485e-ade4-f6de2aea62d7"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:08:30 crc kubenswrapper[4896]: I0126 16:08:30.265061 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45b5821a-5c82-485e-ade4-f6de2aea62d7-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "45b5821a-5c82-485e-ade4-f6de2aea62d7" (UID: "45b5821a-5c82-485e-ade4-f6de2aea62d7"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:08:30 crc kubenswrapper[4896]: I0126 16:08:30.269971 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45b5821a-5c82-485e-ade4-f6de2aea62d7-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "45b5821a-5c82-485e-ade4-f6de2aea62d7" (UID: "45b5821a-5c82-485e-ade4-f6de2aea62d7"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:08:30 crc kubenswrapper[4896]: I0126 16:08:30.274010 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45b5821a-5c82-485e-ade4-f6de2aea62d7-kube-api-access-ghd4l" (OuterVolumeSpecName: "kube-api-access-ghd4l") pod "45b5821a-5c82-485e-ade4-f6de2aea62d7" (UID: "45b5821a-5c82-485e-ade4-f6de2aea62d7"). InnerVolumeSpecName "kube-api-access-ghd4l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:08:30 crc kubenswrapper[4896]: I0126 16:08:30.282736 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/45b5821a-5c82-485e-ade4-f6de2aea62d7-pod-info" (OuterVolumeSpecName: "pod-info") pod "45b5821a-5c82-485e-ade4-f6de2aea62d7" (UID: "45b5821a-5c82-485e-ade4-f6de2aea62d7"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 26 16:08:30 crc kubenswrapper[4896]: I0126 16:08:30.340772 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45b5821a-5c82-485e-ade4-f6de2aea62d7-config-data" (OuterVolumeSpecName: "config-data") pod "45b5821a-5c82-485e-ade4-f6de2aea62d7" (UID: "45b5821a-5c82-485e-ade4-f6de2aea62d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:08:30 crc kubenswrapper[4896]: I0126 16:08:30.352048 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c39c5a7e-a72b-4177-b0c9-2e1f9fea36c1" (OuterVolumeSpecName: "persistence") pod "45b5821a-5c82-485e-ade4-f6de2aea62d7" (UID: "45b5821a-5c82-485e-ade4-f6de2aea62d7"). InnerVolumeSpecName "pvc-c39c5a7e-a72b-4177-b0c9-2e1f9fea36c1". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 16:08:30 crc kubenswrapper[4896]: I0126 16:08:30.359438 4896 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/45b5821a-5c82-485e-ade4-f6de2aea62d7-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 26 16:08:30 crc kubenswrapper[4896]: I0126 16:08:30.359489 4896 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/45b5821a-5c82-485e-ade4-f6de2aea62d7-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 26 16:08:30 crc kubenswrapper[4896]: I0126 16:08:30.359515 4896 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/45b5821a-5c82-485e-ade4-f6de2aea62d7-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 26 16:08:30 crc kubenswrapper[4896]: I0126 16:08:30.359525 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghd4l\" (UniqueName: \"kubernetes.io/projected/45b5821a-5c82-485e-ade4-f6de2aea62d7-kube-api-access-ghd4l\") on node \"crc\" DevicePath \"\"" Jan 26 16:08:30 crc kubenswrapper[4896]: I0126 16:08:30.359555 4896 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-c39c5a7e-a72b-4177-b0c9-2e1f9fea36c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c39c5a7e-a72b-4177-b0c9-2e1f9fea36c1\") on node \"crc\" " Jan 26 16:08:30 crc kubenswrapper[4896]: I0126 16:08:30.359565 4896 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/45b5821a-5c82-485e-ade4-f6de2aea62d7-pod-info\") on node \"crc\" DevicePath \"\"" Jan 26 16:08:30 crc kubenswrapper[4896]: I0126 16:08:30.359589 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/45b5821a-5c82-485e-ade4-f6de2aea62d7-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:08:30 
crc kubenswrapper[4896]: I0126 16:08:30.359600 4896 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/45b5821a-5c82-485e-ade4-f6de2aea62d7-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 26 16:08:30 crc kubenswrapper[4896]: I0126 16:08:30.516907 4896 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 26 16:08:30 crc kubenswrapper[4896]: I0126 16:08:30.528145 4896 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-c39c5a7e-a72b-4177-b0c9-2e1f9fea36c1" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c39c5a7e-a72b-4177-b0c9-2e1f9fea36c1") on node "crc" Jan 26 16:08:30 crc kubenswrapper[4896]: I0126 16:08:30.554986 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45b5821a-5c82-485e-ade4-f6de2aea62d7-server-conf" (OuterVolumeSpecName: "server-conf") pod "45b5821a-5c82-485e-ade4-f6de2aea62d7" (UID: "45b5821a-5c82-485e-ade4-f6de2aea62d7"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:08:30 crc kubenswrapper[4896]: I0126 16:08:30.606012 4896 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/45b5821a-5c82-485e-ade4-f6de2aea62d7-server-conf\") on node \"crc\" DevicePath \"\"" Jan 26 16:08:30 crc kubenswrapper[4896]: I0126 16:08:30.606048 4896 reconciler_common.go:293] "Volume detached for volume \"pvc-c39c5a7e-a72b-4177-b0c9-2e1f9fea36c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c39c5a7e-a72b-4177-b0c9-2e1f9fea36c1\") on node \"crc\" DevicePath \"\"" Jan 26 16:08:30 crc kubenswrapper[4896]: I0126 16:08:30.728868 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45b5821a-5c82-485e-ade4-f6de2aea62d7-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "45b5821a-5c82-485e-ade4-f6de2aea62d7" (UID: "45b5821a-5c82-485e-ade4-f6de2aea62d7"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:08:30 crc kubenswrapper[4896]: I0126 16:08:30.794400 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 16:08:30 crc kubenswrapper[4896]: I0126 16:08:30.811294 4896 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/45b5821a-5c82-485e-ade4-f6de2aea62d7-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 26 16:08:30 crc kubenswrapper[4896]: I0126 16:08:30.858508 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 16:08:31 crc kubenswrapper[4896]: I0126 16:08:31.056747 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 16:08:31 crc kubenswrapper[4896]: I0126 16:08:31.102732 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 16:08:31 crc kubenswrapper[4896]: E0126 16:08:31.103361 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45b5821a-5c82-485e-ade4-f6de2aea62d7" containerName="setup-container" Jan 26 16:08:31 crc kubenswrapper[4896]: I0126 16:08:31.103374 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="45b5821a-5c82-485e-ade4-f6de2aea62d7" containerName="setup-container" Jan 26 16:08:31 crc kubenswrapper[4896]: E0126 16:08:31.103430 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45b5821a-5c82-485e-ade4-f6de2aea62d7" containerName="rabbitmq" Jan 26 16:08:31 crc kubenswrapper[4896]: I0126 16:08:31.103437 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="45b5821a-5c82-485e-ade4-f6de2aea62d7" containerName="rabbitmq" Jan 26 16:08:31 crc kubenswrapper[4896]: I0126 16:08:31.103687 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="45b5821a-5c82-485e-ade4-f6de2aea62d7" containerName="rabbitmq" Jan 26 16:08:31 crc kubenswrapper[4896]: I0126 16:08:31.105114 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 16:08:31 crc kubenswrapper[4896]: I0126 16:08:31.141242 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 16:08:31 crc kubenswrapper[4896]: I0126 16:08:31.261792 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/13e6d746-38b7-4bbe-b01c-33ebe89f4195-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"13e6d746-38b7-4bbe-b01c-33ebe89f4195\") " pod="openstack/rabbitmq-server-0" Jan 26 16:08:31 crc kubenswrapper[4896]: I0126 16:08:31.261853 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/13e6d746-38b7-4bbe-b01c-33ebe89f4195-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"13e6d746-38b7-4bbe-b01c-33ebe89f4195\") " pod="openstack/rabbitmq-server-0" Jan 26 16:08:31 crc kubenswrapper[4896]: I0126 16:08:31.262044 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13e6d746-38b7-4bbe-b01c-33ebe89f4195-config-data\") pod \"rabbitmq-server-0\" (UID: \"13e6d746-38b7-4bbe-b01c-33ebe89f4195\") " pod="openstack/rabbitmq-server-0" Jan 26 16:08:31 crc kubenswrapper[4896]: I0126 16:08:31.262246 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/13e6d746-38b7-4bbe-b01c-33ebe89f4195-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"13e6d746-38b7-4bbe-b01c-33ebe89f4195\") " pod="openstack/rabbitmq-server-0" Jan 26 16:08:31 crc kubenswrapper[4896]: I0126 16:08:31.262289 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/13e6d746-38b7-4bbe-b01c-33ebe89f4195-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"13e6d746-38b7-4bbe-b01c-33ebe89f4195\") " pod="openstack/rabbitmq-server-0" Jan 26 16:08:31 crc kubenswrapper[4896]: I0126 16:08:31.262524 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx9zl\" (UniqueName: \"kubernetes.io/projected/13e6d746-38b7-4bbe-b01c-33ebe89f4195-kube-api-access-wx9zl\") pod \"rabbitmq-server-0\" (UID: \"13e6d746-38b7-4bbe-b01c-33ebe89f4195\") " pod="openstack/rabbitmq-server-0" Jan 26 16:08:31 crc kubenswrapper[4896]: I0126 16:08:31.262656 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/13e6d746-38b7-4bbe-b01c-33ebe89f4195-pod-info\") pod \"rabbitmq-server-0\" (UID: \"13e6d746-38b7-4bbe-b01c-33ebe89f4195\") " pod="openstack/rabbitmq-server-0" Jan 26 16:08:31 crc kubenswrapper[4896]: I0126 16:08:31.262741 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/13e6d746-38b7-4bbe-b01c-33ebe89f4195-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"13e6d746-38b7-4bbe-b01c-33ebe89f4195\") " pod="openstack/rabbitmq-server-0" Jan 26 16:08:31 crc kubenswrapper[4896]: I0126 16:08:31.263072 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c39c5a7e-a72b-4177-b0c9-2e1f9fea36c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c39c5a7e-a72b-4177-b0c9-2e1f9fea36c1\") pod \"rabbitmq-server-0\" (UID: \"13e6d746-38b7-4bbe-b01c-33ebe89f4195\") " pod="openstack/rabbitmq-server-0" Jan 26 16:08:31 crc kubenswrapper[4896]: I0126 16:08:31.263137 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/13e6d746-38b7-4bbe-b01c-33ebe89f4195-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"13e6d746-38b7-4bbe-b01c-33ebe89f4195\") " pod="openstack/rabbitmq-server-0" Jan 26 16:08:31 crc kubenswrapper[4896]: I0126 16:08:31.263242 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/13e6d746-38b7-4bbe-b01c-33ebe89f4195-server-conf\") pod \"rabbitmq-server-0\" (UID: \"13e6d746-38b7-4bbe-b01c-33ebe89f4195\") " pod="openstack/rabbitmq-server-0" Jan 26 16:08:31 crc kubenswrapper[4896]: I0126 16:08:31.365311 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c39c5a7e-a72b-4177-b0c9-2e1f9fea36c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c39c5a7e-a72b-4177-b0c9-2e1f9fea36c1\") pod \"rabbitmq-server-0\" (UID: \"13e6d746-38b7-4bbe-b01c-33ebe89f4195\") " pod="openstack/rabbitmq-server-0" Jan 26 16:08:31 crc kubenswrapper[4896]: I0126 16:08:31.365381 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/13e6d746-38b7-4bbe-b01c-33ebe89f4195-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"13e6d746-38b7-4bbe-b01c-33ebe89f4195\") " pod="openstack/rabbitmq-server-0" Jan 26 16:08:31 crc kubenswrapper[4896]: I0126 16:08:31.365450 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/13e6d746-38b7-4bbe-b01c-33ebe89f4195-server-conf\") pod \"rabbitmq-server-0\" (UID: \"13e6d746-38b7-4bbe-b01c-33ebe89f4195\") " pod="openstack/rabbitmq-server-0" Jan 26 16:08:31 crc kubenswrapper[4896]: I0126 16:08:31.365498 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/13e6d746-38b7-4bbe-b01c-33ebe89f4195-rabbitmq-tls\") pod \"rabbitmq-server-0\" 
(UID: \"13e6d746-38b7-4bbe-b01c-33ebe89f4195\") " pod="openstack/rabbitmq-server-0" Jan 26 16:08:31 crc kubenswrapper[4896]: I0126 16:08:31.365529 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/13e6d746-38b7-4bbe-b01c-33ebe89f4195-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"13e6d746-38b7-4bbe-b01c-33ebe89f4195\") " pod="openstack/rabbitmq-server-0" Jan 26 16:08:31 crc kubenswrapper[4896]: I0126 16:08:31.365615 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13e6d746-38b7-4bbe-b01c-33ebe89f4195-config-data\") pod \"rabbitmq-server-0\" (UID: \"13e6d746-38b7-4bbe-b01c-33ebe89f4195\") " pod="openstack/rabbitmq-server-0" Jan 26 16:08:31 crc kubenswrapper[4896]: I0126 16:08:31.365685 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/13e6d746-38b7-4bbe-b01c-33ebe89f4195-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"13e6d746-38b7-4bbe-b01c-33ebe89f4195\") " pod="openstack/rabbitmq-server-0" Jan 26 16:08:31 crc kubenswrapper[4896]: I0126 16:08:31.365716 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/13e6d746-38b7-4bbe-b01c-33ebe89f4195-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"13e6d746-38b7-4bbe-b01c-33ebe89f4195\") " pod="openstack/rabbitmq-server-0" Jan 26 16:08:31 crc kubenswrapper[4896]: I0126 16:08:31.365778 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx9zl\" (UniqueName: \"kubernetes.io/projected/13e6d746-38b7-4bbe-b01c-33ebe89f4195-kube-api-access-wx9zl\") pod \"rabbitmq-server-0\" (UID: \"13e6d746-38b7-4bbe-b01c-33ebe89f4195\") " pod="openstack/rabbitmq-server-0" Jan 26 16:08:31 crc kubenswrapper[4896]: I0126 
16:08:31.365821 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/13e6d746-38b7-4bbe-b01c-33ebe89f4195-pod-info\") pod \"rabbitmq-server-0\" (UID: \"13e6d746-38b7-4bbe-b01c-33ebe89f4195\") " pod="openstack/rabbitmq-server-0" Jan 26 16:08:31 crc kubenswrapper[4896]: I0126 16:08:31.365850 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/13e6d746-38b7-4bbe-b01c-33ebe89f4195-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"13e6d746-38b7-4bbe-b01c-33ebe89f4195\") " pod="openstack/rabbitmq-server-0" Jan 26 16:08:31 crc kubenswrapper[4896]: I0126 16:08:31.366794 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/13e6d746-38b7-4bbe-b01c-33ebe89f4195-server-conf\") pod \"rabbitmq-server-0\" (UID: \"13e6d746-38b7-4bbe-b01c-33ebe89f4195\") " pod="openstack/rabbitmq-server-0" Jan 26 16:08:31 crc kubenswrapper[4896]: I0126 16:08:31.369980 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/13e6d746-38b7-4bbe-b01c-33ebe89f4195-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"13e6d746-38b7-4bbe-b01c-33ebe89f4195\") " pod="openstack/rabbitmq-server-0" Jan 26 16:08:31 crc kubenswrapper[4896]: I0126 16:08:31.373458 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/13e6d746-38b7-4bbe-b01c-33ebe89f4195-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"13e6d746-38b7-4bbe-b01c-33ebe89f4195\") " pod="openstack/rabbitmq-server-0" Jan 26 16:08:31 crc kubenswrapper[4896]: I0126 16:08:31.374051 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/13e6d746-38b7-4bbe-b01c-33ebe89f4195-config-data\") pod \"rabbitmq-server-0\" (UID: \"13e6d746-38b7-4bbe-b01c-33ebe89f4195\") " pod="openstack/rabbitmq-server-0" Jan 26 16:08:31 crc kubenswrapper[4896]: I0126 16:08:31.374733 4896 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 16:08:31 crc kubenswrapper[4896]: I0126 16:08:31.374761 4896 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c39c5a7e-a72b-4177-b0c9-2e1f9fea36c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c39c5a7e-a72b-4177-b0c9-2e1f9fea36c1\") pod \"rabbitmq-server-0\" (UID: \"13e6d746-38b7-4bbe-b01c-33ebe89f4195\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/cdadbc0a71065fa541cba4d5492c6b8b726454b664d989b39a275b7996e6333b/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 26 16:08:31 crc kubenswrapper[4896]: I0126 16:08:31.375739 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/13e6d746-38b7-4bbe-b01c-33ebe89f4195-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"13e6d746-38b7-4bbe-b01c-33ebe89f4195\") " pod="openstack/rabbitmq-server-0" Jan 26 16:08:31 crc kubenswrapper[4896]: I0126 16:08:31.376974 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/13e6d746-38b7-4bbe-b01c-33ebe89f4195-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"13e6d746-38b7-4bbe-b01c-33ebe89f4195\") " pod="openstack/rabbitmq-server-0" Jan 26 16:08:31 crc kubenswrapper[4896]: I0126 16:08:31.378325 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/13e6d746-38b7-4bbe-b01c-33ebe89f4195-pod-info\") pod \"rabbitmq-server-0\" (UID: \"13e6d746-38b7-4bbe-b01c-33ebe89f4195\") 
" pod="openstack/rabbitmq-server-0" Jan 26 16:08:31 crc kubenswrapper[4896]: I0126 16:08:31.380990 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/13e6d746-38b7-4bbe-b01c-33ebe89f4195-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"13e6d746-38b7-4bbe-b01c-33ebe89f4195\") " pod="openstack/rabbitmq-server-0" Jan 26 16:08:31 crc kubenswrapper[4896]: I0126 16:08:31.385426 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/13e6d746-38b7-4bbe-b01c-33ebe89f4195-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"13e6d746-38b7-4bbe-b01c-33ebe89f4195\") " pod="openstack/rabbitmq-server-0" Jan 26 16:08:31 crc kubenswrapper[4896]: I0126 16:08:31.389423 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx9zl\" (UniqueName: \"kubernetes.io/projected/13e6d746-38b7-4bbe-b01c-33ebe89f4195-kube-api-access-wx9zl\") pod \"rabbitmq-server-0\" (UID: \"13e6d746-38b7-4bbe-b01c-33ebe89f4195\") " pod="openstack/rabbitmq-server-0" Jan 26 16:08:31 crc kubenswrapper[4896]: I0126 16:08:31.501031 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c39c5a7e-a72b-4177-b0c9-2e1f9fea36c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c39c5a7e-a72b-4177-b0c9-2e1f9fea36c1\") pod \"rabbitmq-server-0\" (UID: \"13e6d746-38b7-4bbe-b01c-33ebe89f4195\") " pod="openstack/rabbitmq-server-0" Jan 26 16:08:31 crc kubenswrapper[4896]: I0126 16:08:31.729121 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 16:08:32 crc kubenswrapper[4896]: I0126 16:08:32.782318 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45b5821a-5c82-485e-ade4-f6de2aea62d7" path="/var/lib/kubelet/pods/45b5821a-5c82-485e-ade4-f6de2aea62d7/volumes" Jan 26 16:08:32 crc kubenswrapper[4896]: I0126 16:08:32.789447 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 16:08:32 crc kubenswrapper[4896]: I0126 16:08:32.852835 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"13e6d746-38b7-4bbe-b01c-33ebe89f4195","Type":"ContainerStarted","Data":"b030ba3d35264c7862e12c27caa9779e96f2623a7bc6eded3d3d33c44df810ec"} Jan 26 16:08:34 crc kubenswrapper[4896]: I0126 16:08:34.878592 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"13e6d746-38b7-4bbe-b01c-33ebe89f4195","Type":"ContainerStarted","Data":"14229a447c38fe6f4caca93c341edcd068bf78d94b70c640cdf441c0684a61e4"} Jan 26 16:08:37 crc kubenswrapper[4896]: I0126 16:08:37.270315 4896 scope.go:117] "RemoveContainer" containerID="a4ba90cf5a8bca5df0ca77c84774e8bf53c77765127b50d6c64486d1cbbbbbe9" Jan 26 16:08:37 crc kubenswrapper[4896]: I0126 16:08:37.299907 4896 scope.go:117] "RemoveContainer" containerID="68f2e4b14c6a8445b80489d249bbbdd906fc152724b924165dff4e66aaf13944" Jan 26 16:08:37 crc kubenswrapper[4896]: I0126 16:08:37.329269 4896 scope.go:117] "RemoveContainer" containerID="1df237d1f89e97b4531d2cc854266c1a957d19fa7e42c3afddb28672df38fb57" Jan 26 16:08:37 crc kubenswrapper[4896]: I0126 16:08:37.350356 4896 scope.go:117] "RemoveContainer" containerID="97da7c66fb1cefd52174b393ae0e840fd46b93e14fbd5856a8ec788441018366" Jan 26 16:08:37 crc kubenswrapper[4896]: I0126 16:08:37.374133 4896 scope.go:117] "RemoveContainer" containerID="e0ca641dcca7c18bda9e0a084fa4137604f8a7d74af6b3f26b5c751a9b44ff23" Jan 26 16:08:37 crc 
kubenswrapper[4896]: I0126 16:08:37.409463 4896 scope.go:117] "RemoveContainer" containerID="758ae79e3f6e71ff84c487f54b312867f854bbbb4949ec8d0a1f4ca6a56ee855" Jan 26 16:08:37 crc kubenswrapper[4896]: I0126 16:08:37.437361 4896 scope.go:117] "RemoveContainer" containerID="73ceff3dcd971d772975c5b6851f460cf95e1dc4c841ea2d7e11d18e8255150a" Jan 26 16:08:48 crc kubenswrapper[4896]: I0126 16:08:48.048182 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-996c-account-create-update-qfp2k"] Jan 26 16:08:48 crc kubenswrapper[4896]: I0126 16:08:48.061934 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-5m2nv"] Jan 26 16:08:48 crc kubenswrapper[4896]: I0126 16:08:48.073136 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-5m2nv"] Jan 26 16:08:48 crc kubenswrapper[4896]: I0126 16:08:48.083684 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-996c-account-create-update-qfp2k"] Jan 26 16:08:48 crc kubenswrapper[4896]: I0126 16:08:48.774158 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="960a0ea6-5b48-4b87-9253-ca2c6d153b02" path="/var/lib/kubelet/pods/960a0ea6-5b48-4b87-9253-ca2c6d153b02/volumes" Jan 26 16:08:48 crc kubenswrapper[4896]: I0126 16:08:48.776263 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96a6a95e-2fb8-4a8c-80ca-a6fe38c4120c" path="/var/lib/kubelet/pods/96a6a95e-2fb8-4a8c-80ca-a6fe38c4120c/volumes" Jan 26 16:08:52 crc kubenswrapper[4896]: I0126 16:08:52.162364 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-vzv62"] Jan 26 16:08:52 crc kubenswrapper[4896]: I0126 16:08:52.175057 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-vzv62"] Jan 26 16:08:52 crc kubenswrapper[4896]: I0126 16:08:52.772569 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="49d9681a-9fc0-4e0e-9d65-637d402f4c30" path="/var/lib/kubelet/pods/49d9681a-9fc0-4e0e-9d65-637d402f4c30/volumes" Jan 26 16:08:53 crc kubenswrapper[4896]: I0126 16:08:53.031677 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-3dea-account-create-update-8rw6p"] Jan 26 16:08:53 crc kubenswrapper[4896]: I0126 16:08:53.044468 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-3dea-account-create-update-8rw6p"] Jan 26 16:08:54 crc kubenswrapper[4896]: I0126 16:08:54.774519 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23a5ea00-af46-46fb-a058-05504ad72b95" path="/var/lib/kubelet/pods/23a5ea00-af46-46fb-a058-05504ad72b95/volumes" Jan 26 16:08:57 crc kubenswrapper[4896]: I0126 16:08:57.036535 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-b50a-account-create-update-dlv24"] Jan 26 16:08:57 crc kubenswrapper[4896]: I0126 16:08:57.051387 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-b50a-account-create-update-dlv24"] Jan 26 16:08:58 crc kubenswrapper[4896]: I0126 16:08:58.045261 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-242n2"] Jan 26 16:08:58 crc kubenswrapper[4896]: I0126 16:08:58.065202 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-242n2"] Jan 26 16:08:58 crc kubenswrapper[4896]: I0126 16:08:58.078956 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-nmtbk"] Jan 26 16:08:58 crc kubenswrapper[4896]: I0126 16:08:58.091140 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-27d9-account-create-update-52v2t"] Jan 26 16:08:58 crc kubenswrapper[4896]: I0126 16:08:58.103466 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-nmtbk"] Jan 26 16:08:58 crc kubenswrapper[4896]: I0126 16:08:58.119062 4896 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/placement-27d9-account-create-update-52v2t"] Jan 26 16:08:58 crc kubenswrapper[4896]: I0126 16:08:58.773815 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45b87b14-02cf-4779-b5ad-964c37378a78" path="/var/lib/kubelet/pods/45b87b14-02cf-4779-b5ad-964c37378a78/volumes" Jan 26 16:08:58 crc kubenswrapper[4896]: I0126 16:08:58.775920 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="972e84d7-224b-4b26-b685-9f822ba2d13e" path="/var/lib/kubelet/pods/972e84d7-224b-4b26-b685-9f822ba2d13e/volumes" Jan 26 16:08:58 crc kubenswrapper[4896]: I0126 16:08:58.776792 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="977332a1-d90d-45ee-b202-c8cbfa2b5ab9" path="/var/lib/kubelet/pods/977332a1-d90d-45ee-b202-c8cbfa2b5ab9/volumes" Jan 26 16:08:58 crc kubenswrapper[4896]: I0126 16:08:58.777959 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0b4fb8a-de2a-43d2-b233-6ff17febcd58" path="/var/lib/kubelet/pods/d0b4fb8a-de2a-43d2-b233-6ff17febcd58/volumes" Jan 26 16:08:59 crc kubenswrapper[4896]: I0126 16:08:59.049193 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-954b-account-create-update-wwrsc"] Jan 26 16:08:59 crc kubenswrapper[4896]: I0126 16:08:59.063349 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-fsqhd"] Jan 26 16:08:59 crc kubenswrapper[4896]: I0126 16:08:59.078425 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-954b-account-create-update-wwrsc"] Jan 26 16:08:59 crc kubenswrapper[4896]: I0126 16:08:59.095047 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-fsqhd"] Jan 26 16:09:00 crc kubenswrapper[4896]: I0126 16:09:00.788887 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="4fb04cb5-685c-45a2-aa8e-4af430329a31" path="/var/lib/kubelet/pods/4fb04cb5-685c-45a2-aa8e-4af430329a31/volumes" Jan 26 16:09:00 crc kubenswrapper[4896]: I0126 16:09:00.790093 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc7fc4b8-f53b-4377-9023-76db2caec959" path="/var/lib/kubelet/pods/cc7fc4b8-f53b-4377-9023-76db2caec959/volumes" Jan 26 16:09:07 crc kubenswrapper[4896]: I0126 16:09:07.813188 4896 generic.go:334] "Generic (PLEG): container finished" podID="13e6d746-38b7-4bbe-b01c-33ebe89f4195" containerID="14229a447c38fe6f4caca93c341edcd068bf78d94b70c640cdf441c0684a61e4" exitCode=0 Jan 26 16:09:07 crc kubenswrapper[4896]: I0126 16:09:07.813248 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"13e6d746-38b7-4bbe-b01c-33ebe89f4195","Type":"ContainerDied","Data":"14229a447c38fe6f4caca93c341edcd068bf78d94b70c640cdf441c0684a61e4"} Jan 26 16:09:08 crc kubenswrapper[4896]: I0126 16:09:08.313971 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vhq8d"] Jan 26 16:09:08 crc kubenswrapper[4896]: I0126 16:09:08.317277 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vhq8d" Jan 26 16:09:08 crc kubenswrapper[4896]: I0126 16:09:08.328571 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vhq8d"] Jan 26 16:09:08 crc kubenswrapper[4896]: I0126 16:09:08.425336 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6780d311-ad6c-4655-bedf-1f2347132d6d-catalog-content\") pod \"redhat-operators-vhq8d\" (UID: \"6780d311-ad6c-4655-bedf-1f2347132d6d\") " pod="openshift-marketplace/redhat-operators-vhq8d" Jan 26 16:09:08 crc kubenswrapper[4896]: I0126 16:09:08.426362 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p29bc\" (UniqueName: \"kubernetes.io/projected/6780d311-ad6c-4655-bedf-1f2347132d6d-kube-api-access-p29bc\") pod \"redhat-operators-vhq8d\" (UID: \"6780d311-ad6c-4655-bedf-1f2347132d6d\") " pod="openshift-marketplace/redhat-operators-vhq8d" Jan 26 16:09:08 crc kubenswrapper[4896]: I0126 16:09:08.426446 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6780d311-ad6c-4655-bedf-1f2347132d6d-utilities\") pod \"redhat-operators-vhq8d\" (UID: \"6780d311-ad6c-4655-bedf-1f2347132d6d\") " pod="openshift-marketplace/redhat-operators-vhq8d" Jan 26 16:09:08 crc kubenswrapper[4896]: I0126 16:09:08.529136 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6780d311-ad6c-4655-bedf-1f2347132d6d-catalog-content\") pod \"redhat-operators-vhq8d\" (UID: \"6780d311-ad6c-4655-bedf-1f2347132d6d\") " pod="openshift-marketplace/redhat-operators-vhq8d" Jan 26 16:09:08 crc kubenswrapper[4896]: I0126 16:09:08.529495 4896 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-p29bc\" (UniqueName: \"kubernetes.io/projected/6780d311-ad6c-4655-bedf-1f2347132d6d-kube-api-access-p29bc\") pod \"redhat-operators-vhq8d\" (UID: \"6780d311-ad6c-4655-bedf-1f2347132d6d\") " pod="openshift-marketplace/redhat-operators-vhq8d" Jan 26 16:09:08 crc kubenswrapper[4896]: I0126 16:09:08.529625 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6780d311-ad6c-4655-bedf-1f2347132d6d-catalog-content\") pod \"redhat-operators-vhq8d\" (UID: \"6780d311-ad6c-4655-bedf-1f2347132d6d\") " pod="openshift-marketplace/redhat-operators-vhq8d" Jan 26 16:09:08 crc kubenswrapper[4896]: I0126 16:09:08.529743 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6780d311-ad6c-4655-bedf-1f2347132d6d-utilities\") pod \"redhat-operators-vhq8d\" (UID: \"6780d311-ad6c-4655-bedf-1f2347132d6d\") " pod="openshift-marketplace/redhat-operators-vhq8d" Jan 26 16:09:08 crc kubenswrapper[4896]: I0126 16:09:08.529967 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6780d311-ad6c-4655-bedf-1f2347132d6d-utilities\") pod \"redhat-operators-vhq8d\" (UID: \"6780d311-ad6c-4655-bedf-1f2347132d6d\") " pod="openshift-marketplace/redhat-operators-vhq8d" Jan 26 16:09:08 crc kubenswrapper[4896]: I0126 16:09:08.552288 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p29bc\" (UniqueName: \"kubernetes.io/projected/6780d311-ad6c-4655-bedf-1f2347132d6d-kube-api-access-p29bc\") pod \"redhat-operators-vhq8d\" (UID: \"6780d311-ad6c-4655-bedf-1f2347132d6d\") " pod="openshift-marketplace/redhat-operators-vhq8d" Jan 26 16:09:08 crc kubenswrapper[4896]: I0126 16:09:08.645109 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vhq8d" Jan 26 16:09:08 crc kubenswrapper[4896]: I0126 16:09:08.941001 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"13e6d746-38b7-4bbe-b01c-33ebe89f4195","Type":"ContainerStarted","Data":"21cb4e123f5f689e5753c713158c95c07c2845ca2e8fb3ce3593f3de8a3a6711"} Jan 26 16:09:08 crc kubenswrapper[4896]: I0126 16:09:08.941676 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 26 16:09:08 crc kubenswrapper[4896]: I0126 16:09:08.985147 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.985116901 podStartE2EDuration="38.985116901s" podCreationTimestamp="2026-01-26 16:08:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:09:08.982821085 +0000 UTC m=+2106.764701498" watchObservedRunningTime="2026-01-26 16:09:08.985116901 +0000 UTC m=+2106.766997294" Jan 26 16:09:09 crc kubenswrapper[4896]: W0126 16:09:09.431629 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6780d311_ad6c_4655_bedf_1f2347132d6d.slice/crio-df8505776aafa97e8726b1de0be5defab3f115ee6719386b7fef7265694b8468 WatchSource:0}: Error finding container df8505776aafa97e8726b1de0be5defab3f115ee6719386b7fef7265694b8468: Status 404 returned error can't find the container with id df8505776aafa97e8726b1de0be5defab3f115ee6719386b7fef7265694b8468 Jan 26 16:09:09 crc kubenswrapper[4896]: I0126 16:09:09.435365 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vhq8d"] Jan 26 16:09:10 crc kubenswrapper[4896]: I0126 16:09:10.092458 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vhq8d" 
event={"ID":"6780d311-ad6c-4655-bedf-1f2347132d6d","Type":"ContainerStarted","Data":"117de7262eb45bec6a175a764d28096b347b3ec8b88e8648f0b2b927f99f4660"} Jan 26 16:09:10 crc kubenswrapper[4896]: I0126 16:09:10.092873 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vhq8d" event={"ID":"6780d311-ad6c-4655-bedf-1f2347132d6d","Type":"ContainerStarted","Data":"df8505776aafa97e8726b1de0be5defab3f115ee6719386b7fef7265694b8468"} Jan 26 16:09:11 crc kubenswrapper[4896]: I0126 16:09:11.102360 4896 generic.go:334] "Generic (PLEG): container finished" podID="6780d311-ad6c-4655-bedf-1f2347132d6d" containerID="117de7262eb45bec6a175a764d28096b347b3ec8b88e8648f0b2b927f99f4660" exitCode=0 Jan 26 16:09:11 crc kubenswrapper[4896]: I0126 16:09:11.102759 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vhq8d" event={"ID":"6780d311-ad6c-4655-bedf-1f2347132d6d","Type":"ContainerDied","Data":"117de7262eb45bec6a175a764d28096b347b3ec8b88e8648f0b2b927f99f4660"} Jan 26 16:09:13 crc kubenswrapper[4896]: I0126 16:09:13.140927 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vhq8d" event={"ID":"6780d311-ad6c-4655-bedf-1f2347132d6d","Type":"ContainerStarted","Data":"1957f718d5dca518f47c5df45222ad145253a5d805f8ccfc44ee402f8af4c445"} Jan 26 16:09:18 crc kubenswrapper[4896]: I0126 16:09:18.201674 4896 generic.go:334] "Generic (PLEG): container finished" podID="6780d311-ad6c-4655-bedf-1f2347132d6d" containerID="1957f718d5dca518f47c5df45222ad145253a5d805f8ccfc44ee402f8af4c445" exitCode=0 Jan 26 16:09:18 crc kubenswrapper[4896]: I0126 16:09:18.201779 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vhq8d" event={"ID":"6780d311-ad6c-4655-bedf-1f2347132d6d","Type":"ContainerDied","Data":"1957f718d5dca518f47c5df45222ad145253a5d805f8ccfc44ee402f8af4c445"} Jan 26 16:09:19 crc kubenswrapper[4896]: I0126 
16:09:19.215091 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vhq8d" event={"ID":"6780d311-ad6c-4655-bedf-1f2347132d6d","Type":"ContainerStarted","Data":"2f62b00aa60e15687448865235d7ed4ba6ac02a76417d6a34fc804111b6b6b0a"} Jan 26 16:09:19 crc kubenswrapper[4896]: I0126 16:09:19.246126 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vhq8d" podStartSLOduration=3.4545498009999998 podStartE2EDuration="11.246101849s" podCreationTimestamp="2026-01-26 16:09:08 +0000 UTC" firstStartedPulling="2026-01-26 16:09:11.105782278 +0000 UTC m=+2108.887662671" lastFinishedPulling="2026-01-26 16:09:18.897334326 +0000 UTC m=+2116.679214719" observedRunningTime="2026-01-26 16:09:19.237045728 +0000 UTC m=+2117.018926141" watchObservedRunningTime="2026-01-26 16:09:19.246101849 +0000 UTC m=+2117.027982242" Jan 26 16:09:21 crc kubenswrapper[4896]: I0126 16:09:21.732785 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 26 16:09:28 crc kubenswrapper[4896]: I0126 16:09:28.645550 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vhq8d" Jan 26 16:09:28 crc kubenswrapper[4896]: I0126 16:09:28.646210 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vhq8d" Jan 26 16:09:29 crc kubenswrapper[4896]: I0126 16:09:29.717480 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vhq8d" podUID="6780d311-ad6c-4655-bedf-1f2347132d6d" containerName="registry-server" probeResult="failure" output=< Jan 26 16:09:29 crc kubenswrapper[4896]: timeout: failed to connect service ":50051" within 1s Jan 26 16:09:29 crc kubenswrapper[4896]: > Jan 26 16:09:37 crc kubenswrapper[4896]: I0126 16:09:37.571818 4896 scope.go:117] "RemoveContainer" 
containerID="2b2a1864daa92e7d43f92e51a3e9bdfd27d6b26439f5420a9fbb9c6506ce0257" Jan 26 16:09:37 crc kubenswrapper[4896]: I0126 16:09:37.606214 4896 scope.go:117] "RemoveContainer" containerID="852b75e712e81bb34979c97a363b414bf1021386a80d18adb28c121360350bc3" Jan 26 16:09:37 crc kubenswrapper[4896]: I0126 16:09:37.681082 4896 scope.go:117] "RemoveContainer" containerID="c56ca6abba1d5d0eeb0b641db384f57386c67ad84f2b8ede39fd92eb954a6c72" Jan 26 16:09:37 crc kubenswrapper[4896]: I0126 16:09:37.751918 4896 scope.go:117] "RemoveContainer" containerID="bde3ae5ca80eef2f5df83101503947e49a2ef6a2a51e4b79dcc366f92be0790e" Jan 26 16:09:37 crc kubenswrapper[4896]: I0126 16:09:37.812348 4896 scope.go:117] "RemoveContainer" containerID="61001a25fea435b71b3c3679461a1f58e71b3944f65b322c53e125d6d33580c2" Jan 26 16:09:37 crc kubenswrapper[4896]: I0126 16:09:37.879159 4896 scope.go:117] "RemoveContainer" containerID="f91eea4c57e7b50c8a9729d9c3fa9fa94c2b186af2f11809088e7ec59518842b" Jan 26 16:09:37 crc kubenswrapper[4896]: I0126 16:09:37.951123 4896 scope.go:117] "RemoveContainer" containerID="acecf0179040ec537f822df704883ead9f17c823a898c9c0e776878380f580a5" Jan 26 16:09:37 crc kubenswrapper[4896]: I0126 16:09:37.976611 4896 scope.go:117] "RemoveContainer" containerID="03be9ffa1da36563734bea6e02840027fc2d5a9be20a1c822a4916150d0ef59a" Jan 26 16:09:38 crc kubenswrapper[4896]: I0126 16:09:38.026656 4896 scope.go:117] "RemoveContainer" containerID="48da29867d18e455f26382ec66a50212d92a4f42b472a927957177e965b1e9b6" Jan 26 16:09:38 crc kubenswrapper[4896]: I0126 16:09:38.082295 4896 scope.go:117] "RemoveContainer" containerID="8724dcfe44d9456fb09eda6a5c4ef5cfd8f08fcdf21528937126bd9990dacd1d" Jan 26 16:09:38 crc kubenswrapper[4896]: I0126 16:09:38.083095 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-61e0-account-create-update-5kcft"] Jan 26 16:09:38 crc kubenswrapper[4896]: I0126 16:09:38.118334 4896 scope.go:117] "RemoveContainer" 
containerID="bbd4d0c3c4cf96143c5770748dfd3d05a9e82985a5de9a599ffbeb532d588a30"
Jan 26 16:09:38 crc kubenswrapper[4896]: I0126 16:09:38.119832 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-f638-account-create-update-7nblt"]
Jan 26 16:09:38 crc kubenswrapper[4896]: I0126 16:09:38.135628 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-jbq9k"]
Jan 26 16:09:38 crc kubenswrapper[4896]: I0126 16:09:38.151354 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-6f58-account-create-update-w9947"]
Jan 26 16:09:38 crc kubenswrapper[4896]: I0126 16:09:38.162406 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-fbf0-account-create-update-2mczz"]
Jan 26 16:09:38 crc kubenswrapper[4896]: I0126 16:09:38.175681 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-ksf8v"]
Jan 26 16:09:38 crc kubenswrapper[4896]: I0126 16:09:38.187502 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-f638-account-create-update-7nblt"]
Jan 26 16:09:38 crc kubenswrapper[4896]: I0126 16:09:38.199739 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-fbf0-account-create-update-2mczz"]
Jan 26 16:09:38 crc kubenswrapper[4896]: I0126 16:09:38.213242 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-6f58-account-create-update-w9947"]
Jan 26 16:09:38 crc kubenswrapper[4896]: I0126 16:09:38.226881 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-jbq9k"]
Jan 26 16:09:38 crc kubenswrapper[4896]: I0126 16:09:38.239285 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-61e0-account-create-update-5kcft"]
Jan 26 16:09:38 crc kubenswrapper[4896]: I0126 16:09:38.251359 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-ksf8v"]
Jan 26 16:09:38 crc kubenswrapper[4896]: I0126 16:09:38.271145 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-ztrfr"]
Jan 26 16:09:38 crc kubenswrapper[4896]: I0126 16:09:38.284674 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-jnwlw"]
Jan 26 16:09:38 crc kubenswrapper[4896]: I0126 16:09:38.296876 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-jnwlw"]
Jan 26 16:09:38 crc kubenswrapper[4896]: I0126 16:09:38.311317 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-ztrfr"]
Jan 26 16:09:38 crc kubenswrapper[4896]: I0126 16:09:38.693663 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vhq8d"
Jan 26 16:09:38 crc kubenswrapper[4896]: I0126 16:09:38.749103 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vhq8d"
Jan 26 16:09:38 crc kubenswrapper[4896]: I0126 16:09:38.774283 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="627735ca-7a3a-436c-a3fe-5634fc742384" path="/var/lib/kubelet/pods/627735ca-7a3a-436c-a3fe-5634fc742384/volumes"
Jan 26 16:09:38 crc kubenswrapper[4896]: I0126 16:09:38.775547 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abe41022-a923-4089-9328-25839bc6bc7e" path="/var/lib/kubelet/pods/abe41022-a923-4089-9328-25839bc6bc7e/volumes"
Jan 26 16:09:38 crc kubenswrapper[4896]: I0126 16:09:38.776908 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9bf55ae-99a9-403d-951d-51085bd87019" path="/var/lib/kubelet/pods/c9bf55ae-99a9-403d-951d-51085bd87019/volumes"
Jan 26 16:09:38 crc kubenswrapper[4896]: I0126 16:09:38.779900 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfa68ec3-9d3b-4584-a25c-e7682bfda2f9" path="/var/lib/kubelet/pods/cfa68ec3-9d3b-4584-a25c-e7682bfda2f9/volumes"
Jan 26 16:09:38 crc kubenswrapper[4896]: I0126 16:09:38.781700 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db906256-70e6-4b0b-b691-9e8958a9ae3b" path="/var/lib/kubelet/pods/db906256-70e6-4b0b-b691-9e8958a9ae3b/volumes"
Jan 26 16:09:38 crc kubenswrapper[4896]: I0126 16:09:38.782413 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7a69683-ae94-4df8-bb58-83dd08d62052" path="/var/lib/kubelet/pods/e7a69683-ae94-4df8-bb58-83dd08d62052/volumes"
Jan 26 16:09:38 crc kubenswrapper[4896]: I0126 16:09:38.783776 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f16b1b94-6103-4f26-b71f-070ea624c017" path="/var/lib/kubelet/pods/f16b1b94-6103-4f26-b71f-070ea624c017/volumes"
Jan 26 16:09:38 crc kubenswrapper[4896]: I0126 16:09:38.784550 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faa2e119-517e-47d8-b26e-424f2de5df1f" path="/var/lib/kubelet/pods/faa2e119-517e-47d8-b26e-424f2de5df1f/volumes"
Jan 26 16:09:39 crc kubenswrapper[4896]: I0126 16:09:39.522821 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vhq8d"]
Jan 26 16:09:39 crc kubenswrapper[4896]: I0126 16:09:39.989674 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vhq8d" podUID="6780d311-ad6c-4655-bedf-1f2347132d6d" containerName="registry-server" containerID="cri-o://2f62b00aa60e15687448865235d7ed4ba6ac02a76417d6a34fc804111b6b6b0a" gracePeriod=2
Jan 26 16:09:40 crc kubenswrapper[4896]: I0126 16:09:40.588353 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vhq8d"
Jan 26 16:09:40 crc kubenswrapper[4896]: I0126 16:09:40.605874 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p29bc\" (UniqueName: \"kubernetes.io/projected/6780d311-ad6c-4655-bedf-1f2347132d6d-kube-api-access-p29bc\") pod \"6780d311-ad6c-4655-bedf-1f2347132d6d\" (UID: \"6780d311-ad6c-4655-bedf-1f2347132d6d\") "
Jan 26 16:09:40 crc kubenswrapper[4896]: I0126 16:09:40.606100 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6780d311-ad6c-4655-bedf-1f2347132d6d-utilities\") pod \"6780d311-ad6c-4655-bedf-1f2347132d6d\" (UID: \"6780d311-ad6c-4655-bedf-1f2347132d6d\") "
Jan 26 16:09:40 crc kubenswrapper[4896]: I0126 16:09:40.606235 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6780d311-ad6c-4655-bedf-1f2347132d6d-catalog-content\") pod \"6780d311-ad6c-4655-bedf-1f2347132d6d\" (UID: \"6780d311-ad6c-4655-bedf-1f2347132d6d\") "
Jan 26 16:09:40 crc kubenswrapper[4896]: I0126 16:09:40.607016 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6780d311-ad6c-4655-bedf-1f2347132d6d-utilities" (OuterVolumeSpecName: "utilities") pod "6780d311-ad6c-4655-bedf-1f2347132d6d" (UID: "6780d311-ad6c-4655-bedf-1f2347132d6d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 16:09:40 crc kubenswrapper[4896]: I0126 16:09:40.645653 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6780d311-ad6c-4655-bedf-1f2347132d6d-kube-api-access-p29bc" (OuterVolumeSpecName: "kube-api-access-p29bc") pod "6780d311-ad6c-4655-bedf-1f2347132d6d" (UID: "6780d311-ad6c-4655-bedf-1f2347132d6d"). InnerVolumeSpecName "kube-api-access-p29bc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:09:40 crc kubenswrapper[4896]: I0126 16:09:40.712226 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p29bc\" (UniqueName: \"kubernetes.io/projected/6780d311-ad6c-4655-bedf-1f2347132d6d-kube-api-access-p29bc\") on node \"crc\" DevicePath \"\""
Jan 26 16:09:40 crc kubenswrapper[4896]: I0126 16:09:40.712270 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6780d311-ad6c-4655-bedf-1f2347132d6d-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 16:09:40 crc kubenswrapper[4896]: I0126 16:09:40.740204 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6780d311-ad6c-4655-bedf-1f2347132d6d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6780d311-ad6c-4655-bedf-1f2347132d6d" (UID: "6780d311-ad6c-4655-bedf-1f2347132d6d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 16:09:40 crc kubenswrapper[4896]: I0126 16:09:40.815418 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6780d311-ad6c-4655-bedf-1f2347132d6d-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 16:09:41 crc kubenswrapper[4896]: I0126 16:09:41.001100 4896 generic.go:334] "Generic (PLEG): container finished" podID="6780d311-ad6c-4655-bedf-1f2347132d6d" containerID="2f62b00aa60e15687448865235d7ed4ba6ac02a76417d6a34fc804111b6b6b0a" exitCode=0
Jan 26 16:09:41 crc kubenswrapper[4896]: I0126 16:09:41.001158 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vhq8d" event={"ID":"6780d311-ad6c-4655-bedf-1f2347132d6d","Type":"ContainerDied","Data":"2f62b00aa60e15687448865235d7ed4ba6ac02a76417d6a34fc804111b6b6b0a"}
Jan 26 16:09:41 crc kubenswrapper[4896]: I0126 16:09:41.001195 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vhq8d" event={"ID":"6780d311-ad6c-4655-bedf-1f2347132d6d","Type":"ContainerDied","Data":"df8505776aafa97e8726b1de0be5defab3f115ee6719386b7fef7265694b8468"}
Jan 26 16:09:41 crc kubenswrapper[4896]: I0126 16:09:41.001212 4896 scope.go:117] "RemoveContainer" containerID="2f62b00aa60e15687448865235d7ed4ba6ac02a76417d6a34fc804111b6b6b0a"
Jan 26 16:09:41 crc kubenswrapper[4896]: I0126 16:09:41.001163 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vhq8d"
Jan 26 16:09:41 crc kubenswrapper[4896]: I0126 16:09:41.031736 4896 scope.go:117] "RemoveContainer" containerID="1957f718d5dca518f47c5df45222ad145253a5d805f8ccfc44ee402f8af4c445"
Jan 26 16:09:41 crc kubenswrapper[4896]: I0126 16:09:41.033787 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vhq8d"]
Jan 26 16:09:41 crc kubenswrapper[4896]: I0126 16:09:41.056439 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vhq8d"]
Jan 26 16:09:41 crc kubenswrapper[4896]: I0126 16:09:41.062277 4896 scope.go:117] "RemoveContainer" containerID="117de7262eb45bec6a175a764d28096b347b3ec8b88e8648f0b2b927f99f4660"
Jan 26 16:09:41 crc kubenswrapper[4896]: I0126 16:09:41.126254 4896 scope.go:117] "RemoveContainer" containerID="2f62b00aa60e15687448865235d7ed4ba6ac02a76417d6a34fc804111b6b6b0a"
Jan 26 16:09:41 crc kubenswrapper[4896]: E0126 16:09:41.126839 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f62b00aa60e15687448865235d7ed4ba6ac02a76417d6a34fc804111b6b6b0a\": container with ID starting with 2f62b00aa60e15687448865235d7ed4ba6ac02a76417d6a34fc804111b6b6b0a not found: ID does not exist" containerID="2f62b00aa60e15687448865235d7ed4ba6ac02a76417d6a34fc804111b6b6b0a"
Jan 26 16:09:41 crc kubenswrapper[4896]: I0126 16:09:41.126911 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f62b00aa60e15687448865235d7ed4ba6ac02a76417d6a34fc804111b6b6b0a"} err="failed to get container status \"2f62b00aa60e15687448865235d7ed4ba6ac02a76417d6a34fc804111b6b6b0a\": rpc error: code = NotFound desc = could not find container \"2f62b00aa60e15687448865235d7ed4ba6ac02a76417d6a34fc804111b6b6b0a\": container with ID starting with 2f62b00aa60e15687448865235d7ed4ba6ac02a76417d6a34fc804111b6b6b0a not found: ID does not exist"
Jan 26 16:09:41 crc kubenswrapper[4896]: I0126 16:09:41.126950 4896 scope.go:117] "RemoveContainer" containerID="1957f718d5dca518f47c5df45222ad145253a5d805f8ccfc44ee402f8af4c445"
Jan 26 16:09:41 crc kubenswrapper[4896]: E0126 16:09:41.127458 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1957f718d5dca518f47c5df45222ad145253a5d805f8ccfc44ee402f8af4c445\": container with ID starting with 1957f718d5dca518f47c5df45222ad145253a5d805f8ccfc44ee402f8af4c445 not found: ID does not exist" containerID="1957f718d5dca518f47c5df45222ad145253a5d805f8ccfc44ee402f8af4c445"
Jan 26 16:09:41 crc kubenswrapper[4896]: I0126 16:09:41.127670 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1957f718d5dca518f47c5df45222ad145253a5d805f8ccfc44ee402f8af4c445"} err="failed to get container status \"1957f718d5dca518f47c5df45222ad145253a5d805f8ccfc44ee402f8af4c445\": rpc error: code = NotFound desc = could not find container \"1957f718d5dca518f47c5df45222ad145253a5d805f8ccfc44ee402f8af4c445\": container with ID starting with 1957f718d5dca518f47c5df45222ad145253a5d805f8ccfc44ee402f8af4c445 not found: ID does not exist"
Jan 26 16:09:41 crc kubenswrapper[4896]: I0126 16:09:41.127827 4896 scope.go:117] "RemoveContainer" containerID="117de7262eb45bec6a175a764d28096b347b3ec8b88e8648f0b2b927f99f4660"
Jan 26 16:09:41 crc kubenswrapper[4896]: E0126 16:09:41.128433 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"117de7262eb45bec6a175a764d28096b347b3ec8b88e8648f0b2b927f99f4660\": container with ID starting with 117de7262eb45bec6a175a764d28096b347b3ec8b88e8648f0b2b927f99f4660 not found: ID does not exist" containerID="117de7262eb45bec6a175a764d28096b347b3ec8b88e8648f0b2b927f99f4660"
Jan 26 16:09:41 crc kubenswrapper[4896]: I0126 16:09:41.128503 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"117de7262eb45bec6a175a764d28096b347b3ec8b88e8648f0b2b927f99f4660"} err="failed to get container status \"117de7262eb45bec6a175a764d28096b347b3ec8b88e8648f0b2b927f99f4660\": rpc error: code = NotFound desc = could not find container \"117de7262eb45bec6a175a764d28096b347b3ec8b88e8648f0b2b927f99f4660\": container with ID starting with 117de7262eb45bec6a175a764d28096b347b3ec8b88e8648f0b2b927f99f4660 not found: ID does not exist"
Jan 26 16:09:42 crc kubenswrapper[4896]: I0126 16:09:42.772143 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6780d311-ad6c-4655-bedf-1f2347132d6d" path="/var/lib/kubelet/pods/6780d311-ad6c-4655-bedf-1f2347132d6d/volumes"
Jan 26 16:09:45 crc kubenswrapper[4896]: I0126 16:09:45.035102 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-6czxg"]
Jan 26 16:09:45 crc kubenswrapper[4896]: I0126 16:09:45.049775 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-6czxg"]
Jan 26 16:09:46 crc kubenswrapper[4896]: I0126 16:09:46.776227 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38a9be81-a6ab-4b04-9796-2f923678d8a9" path="/var/lib/kubelet/pods/38a9be81-a6ab-4b04-9796-2f923678d8a9/volumes"
Jan 26 16:09:48 crc kubenswrapper[4896]: I0126 16:09:48.814327 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 16:09:48 crc kubenswrapper[4896]: I0126 16:09:48.814799 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 16:09:58 crc kubenswrapper[4896]: I0126 16:09:58.053732 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-vbx2q"]
Jan 26 16:09:58 crc kubenswrapper[4896]: I0126 16:09:58.067817 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-vbx2q"]
Jan 26 16:09:58 crc kubenswrapper[4896]: I0126 16:09:58.775835 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aab8ea78-3869-4a6c-a4e7-42d593d53756" path="/var/lib/kubelet/pods/aab8ea78-3869-4a6c-a4e7-42d593d53756/volumes"
Jan 26 16:10:18 crc kubenswrapper[4896]: I0126 16:10:18.814329 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 16:10:18 crc kubenswrapper[4896]: I0126 16:10:18.814971 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 16:10:30 crc kubenswrapper[4896]: I0126 16:10:30.069377 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-c6rlw"]
Jan 26 16:10:30 crc kubenswrapper[4896]: I0126 16:10:30.092994 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-c6rlw"]
Jan 26 16:10:30 crc kubenswrapper[4896]: I0126 16:10:30.776796 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ec7e263-3178-47c9-934b-7e0f4d72aec7" path="/var/lib/kubelet/pods/1ec7e263-3178-47c9-934b-7e0f4d72aec7/volumes"
Jan 26 16:10:33 crc kubenswrapper[4896]: I0126 16:10:33.032789 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-94m9x"]
Jan 26 16:10:33 crc kubenswrapper[4896]: I0126 16:10:33.041387 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-94m9x"]
Jan 26 16:10:34 crc kubenswrapper[4896]: I0126 16:10:34.775436 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcb36972-ce84-471b-92b5-7be45e7e2d1a" path="/var/lib/kubelet/pods/bcb36972-ce84-471b-92b5-7be45e7e2d1a/volumes"
Jan 26 16:10:38 crc kubenswrapper[4896]: I0126 16:10:38.375952 4896 scope.go:117] "RemoveContainer" containerID="2ff3fd8248bfa0e904da442fc579ca7cc674ffca199f97639e89b0121ecb2715"
Jan 26 16:10:38 crc kubenswrapper[4896]: I0126 16:10:38.404976 4896 scope.go:117] "RemoveContainer" containerID="4a528bd9055ebac26e422c4a86e30be4c686cd01aeb5836d4073582ead139cd2"
Jan 26 16:10:38 crc kubenswrapper[4896]: I0126 16:10:38.480765 4896 scope.go:117] "RemoveContainer" containerID="2972d6ce907361b279f028ba3a878b8accb0cc285b8b2c611e86d88dc9e62f4b"
Jan 26 16:10:38 crc kubenswrapper[4896]: I0126 16:10:38.542916 4896 scope.go:117] "RemoveContainer" containerID="9f90d6b559368109402b7718334fdea7cd556bdd1c16efdd3300877f2013ef0f"
Jan 26 16:10:38 crc kubenswrapper[4896]: I0126 16:10:38.599349 4896 scope.go:117] "RemoveContainer" containerID="623c709324027da09940de6723172759564f011b2d492f913cc3c8db905c5918"
Jan 26 16:10:38 crc kubenswrapper[4896]: I0126 16:10:38.653789 4896 scope.go:117] "RemoveContainer" containerID="3fe2ff3e6b0a09ebfc8a38214bfe86e38a06b884bec01a19e545752c281c9bca"
Jan 26 16:10:38 crc kubenswrapper[4896]: I0126 16:10:38.706509 4896 scope.go:117] "RemoveContainer" containerID="84df3adbbd9a03cc9493b1b302f0ba68c90f5a46e3976b640396850cac6157f9"
Jan 26 16:10:38 crc kubenswrapper[4896]: I0126 16:10:38.748791 4896 scope.go:117] "RemoveContainer" containerID="6e2ad292f379b86c689ec6d09168d47a103be93f5eb891fcf01c47fd95994d1a"
Jan 26 16:10:38 crc kubenswrapper[4896]: I0126 16:10:38.808232 4896 scope.go:117] "RemoveContainer" containerID="431899ec495623069ca44f5644f7f47296835832fee090b890fb0e63f1501d4b"
Jan 26 16:10:38 crc kubenswrapper[4896]: I0126 16:10:38.833543 4896 scope.go:117] "RemoveContainer" containerID="a5cc9de5d83c164b47a5ceef5ba83a2d89c1f2967304f9aa3d3d82617f1ec216"
Jan 26 16:10:38 crc kubenswrapper[4896]: I0126 16:10:38.852965 4896 scope.go:117] "RemoveContainer" containerID="42df16647bf0d946d84ec5c636b589b9b01e7d2e18399b94d2dbcf604e1123f6"
Jan 26 16:10:38 crc kubenswrapper[4896]: I0126 16:10:38.880620 4896 scope.go:117] "RemoveContainer" containerID="9ea15688e5247aa9e527dc474f396b579bce378beb95f2293789e4b7285b35de"
Jan 26 16:10:44 crc kubenswrapper[4896]: E0126 16:10:44.017672 4896 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.259s"
Jan 26 16:10:47 crc kubenswrapper[4896]: I0126 16:10:47.038197 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-bhpbc"]
Jan 26 16:10:47 crc kubenswrapper[4896]: I0126 16:10:47.050676 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-bhpbc"]
Jan 26 16:10:48 crc kubenswrapper[4896]: I0126 16:10:48.036296 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-rd44b"]
Jan 26 16:10:48 crc kubenswrapper[4896]: I0126 16:10:48.050320 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-rd44b"]
Jan 26 16:10:48 crc kubenswrapper[4896]: I0126 16:10:48.773094 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d8e811a-f565-4dbb-846a-1a80d4832d44" path="/var/lib/kubelet/pods/0d8e811a-f565-4dbb-846a-1a80d4832d44/volumes"
Jan 26 16:10:48 crc kubenswrapper[4896]: I0126 16:10:48.774362 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b4a2eac-2950-4747-bc43-f287adafb4e2" path="/var/lib/kubelet/pods/9b4a2eac-2950-4747-bc43-f287adafb4e2/volumes"
Jan 26 16:10:48 crc kubenswrapper[4896]: I0126 16:10:48.814341 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 16:10:48 crc kubenswrapper[4896]: I0126 16:10:48.814408 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 16:10:48 crc kubenswrapper[4896]: I0126 16:10:48.814454 4896 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw"
Jan 26 16:10:48 crc kubenswrapper[4896]: I0126 16:10:48.815370 4896 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"34018192ce9be7ec2fb8dce54a3f8597501bed5661ca078c83367c7d8b68b65e"} pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 16:10:48 crc kubenswrapper[4896]: I0126 16:10:48.815429 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" containerID="cri-o://34018192ce9be7ec2fb8dce54a3f8597501bed5661ca078c83367c7d8b68b65e" gracePeriod=600
Jan 26 16:10:49 crc kubenswrapper[4896]: I0126 16:10:49.076731 4896 generic.go:334] "Generic (PLEG): container finished" podID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerID="34018192ce9be7ec2fb8dce54a3f8597501bed5661ca078c83367c7d8b68b65e" exitCode=0
Jan 26 16:10:49 crc kubenswrapper[4896]: I0126 16:10:49.076820 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerDied","Data":"34018192ce9be7ec2fb8dce54a3f8597501bed5661ca078c83367c7d8b68b65e"}
Jan 26 16:10:49 crc kubenswrapper[4896]: I0126 16:10:49.077125 4896 scope.go:117] "RemoveContainer" containerID="eef508224f0cdcfb0579b0234e72c3c5503ce5cf1713a9bee24c9feccf4983cb"
Jan 26 16:10:50 crc kubenswrapper[4896]: I0126 16:10:50.090009 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerStarted","Data":"0d1274f7e5735274b6d5b903bed2daa59306320464a2682c9af9bb84c5aace86"}
Jan 26 16:10:51 crc kubenswrapper[4896]: I0126 16:10:51.047874 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-rv5xj"]
Jan 26 16:10:51 crc kubenswrapper[4896]: I0126 16:10:51.062215 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-rv5xj"]
Jan 26 16:10:52 crc kubenswrapper[4896]: I0126 16:10:52.773561 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0eef199-8f69-4f92-9435-ff0fd74dd854" path="/var/lib/kubelet/pods/d0eef199-8f69-4f92-9435-ff0fd74dd854/volumes"
Jan 26 16:10:59 crc kubenswrapper[4896]: I0126 16:10:59.199303 4896 generic.go:334] "Generic (PLEG): container finished" podID="53e07773-3354-4826-bcf0-41909ecb1a20" containerID="9ba36b3ccd144853c053226f3c97ac480674c24c3ceada85f484cad01c0b82c9" exitCode=0
Jan 26 16:10:59 crc kubenswrapper[4896]: I0126 16:10:59.199396 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-86p4s" event={"ID":"53e07773-3354-4826-bcf0-41909ecb1a20","Type":"ContainerDied","Data":"9ba36b3ccd144853c053226f3c97ac480674c24c3ceada85f484cad01c0b82c9"}
Jan 26 16:11:00 crc kubenswrapper[4896]: I0126 16:11:00.906375 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-86p4s"
Jan 26 16:11:01 crc kubenswrapper[4896]: I0126 16:11:01.080911 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/53e07773-3354-4826-bcf0-41909ecb1a20-inventory\") pod \"53e07773-3354-4826-bcf0-41909ecb1a20\" (UID: \"53e07773-3354-4826-bcf0-41909ecb1a20\") "
Jan 26 16:11:01 crc kubenswrapper[4896]: I0126 16:11:01.081079 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53e07773-3354-4826-bcf0-41909ecb1a20-bootstrap-combined-ca-bundle\") pod \"53e07773-3354-4826-bcf0-41909ecb1a20\" (UID: \"53e07773-3354-4826-bcf0-41909ecb1a20\") "
Jan 26 16:11:01 crc kubenswrapper[4896]: I0126 16:11:01.081216 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/53e07773-3354-4826-bcf0-41909ecb1a20-ssh-key-openstack-edpm-ipam\") pod \"53e07773-3354-4826-bcf0-41909ecb1a20\" (UID: \"53e07773-3354-4826-bcf0-41909ecb1a20\") "
Jan 26 16:11:01 crc kubenswrapper[4896]: I0126 16:11:01.081281 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26k7r\" (UniqueName: \"kubernetes.io/projected/53e07773-3354-4826-bcf0-41909ecb1a20-kube-api-access-26k7r\") pod \"53e07773-3354-4826-bcf0-41909ecb1a20\" (UID: \"53e07773-3354-4826-bcf0-41909ecb1a20\") "
Jan 26 16:11:01 crc kubenswrapper[4896]: I0126 16:11:01.088523 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53e07773-3354-4826-bcf0-41909ecb1a20-kube-api-access-26k7r" (OuterVolumeSpecName: "kube-api-access-26k7r") pod "53e07773-3354-4826-bcf0-41909ecb1a20" (UID: "53e07773-3354-4826-bcf0-41909ecb1a20"). InnerVolumeSpecName "kube-api-access-26k7r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:11:01 crc kubenswrapper[4896]: I0126 16:11:01.088808 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53e07773-3354-4826-bcf0-41909ecb1a20-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "53e07773-3354-4826-bcf0-41909ecb1a20" (UID: "53e07773-3354-4826-bcf0-41909ecb1a20"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:11:01 crc kubenswrapper[4896]: I0126 16:11:01.122513 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53e07773-3354-4826-bcf0-41909ecb1a20-inventory" (OuterVolumeSpecName: "inventory") pod "53e07773-3354-4826-bcf0-41909ecb1a20" (UID: "53e07773-3354-4826-bcf0-41909ecb1a20"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:11:01 crc kubenswrapper[4896]: I0126 16:11:01.153627 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53e07773-3354-4826-bcf0-41909ecb1a20-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "53e07773-3354-4826-bcf0-41909ecb1a20" (UID: "53e07773-3354-4826-bcf0-41909ecb1a20"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:11:01 crc kubenswrapper[4896]: I0126 16:11:01.186268 4896 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53e07773-3354-4826-bcf0-41909ecb1a20-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 16:11:01 crc kubenswrapper[4896]: I0126 16:11:01.186317 4896 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/53e07773-3354-4826-bcf0-41909ecb1a20-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 26 16:11:01 crc kubenswrapper[4896]: I0126 16:11:01.186327 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26k7r\" (UniqueName: \"kubernetes.io/projected/53e07773-3354-4826-bcf0-41909ecb1a20-kube-api-access-26k7r\") on node \"crc\" DevicePath \"\""
Jan 26 16:11:01 crc kubenswrapper[4896]: I0126 16:11:01.186342 4896 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/53e07773-3354-4826-bcf0-41909ecb1a20-inventory\") on node \"crc\" DevicePath \"\""
Jan 26 16:11:01 crc kubenswrapper[4896]: I0126 16:11:01.221879 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-86p4s" event={"ID":"53e07773-3354-4826-bcf0-41909ecb1a20","Type":"ContainerDied","Data":"10cfb75139fac3948c258b44c46ae4a2caa92cdcfee9a549062ed861f0705f4c"}
Jan 26 16:11:01 crc kubenswrapper[4896]: I0126 16:11:01.221928 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10cfb75139fac3948c258b44c46ae4a2caa92cdcfee9a549062ed861f0705f4c"
Jan 26 16:11:01 crc kubenswrapper[4896]: I0126 16:11:01.221965 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-86p4s"
Jan 26 16:11:01 crc kubenswrapper[4896]: I0126 16:11:01.320093 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8gq7"]
Jan 26 16:11:01 crc kubenswrapper[4896]: E0126 16:11:01.320776 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53e07773-3354-4826-bcf0-41909ecb1a20" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 26 16:11:01 crc kubenswrapper[4896]: I0126 16:11:01.320806 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="53e07773-3354-4826-bcf0-41909ecb1a20" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 26 16:11:01 crc kubenswrapper[4896]: E0126 16:11:01.320823 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6780d311-ad6c-4655-bedf-1f2347132d6d" containerName="extract-content"
Jan 26 16:11:01 crc kubenswrapper[4896]: I0126 16:11:01.320832 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="6780d311-ad6c-4655-bedf-1f2347132d6d" containerName="extract-content"
Jan 26 16:11:01 crc kubenswrapper[4896]: E0126 16:11:01.320863 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6780d311-ad6c-4655-bedf-1f2347132d6d" containerName="registry-server"
Jan 26 16:11:01 crc kubenswrapper[4896]: I0126 16:11:01.320873 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="6780d311-ad6c-4655-bedf-1f2347132d6d" containerName="registry-server"
Jan 26 16:11:01 crc kubenswrapper[4896]: E0126 16:11:01.320892 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6780d311-ad6c-4655-bedf-1f2347132d6d" containerName="extract-utilities"
Jan 26 16:11:01 crc kubenswrapper[4896]: I0126 16:11:01.320901 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="6780d311-ad6c-4655-bedf-1f2347132d6d" containerName="extract-utilities"
Jan 26 16:11:01 crc kubenswrapper[4896]: I0126 16:11:01.321206 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="53e07773-3354-4826-bcf0-41909ecb1a20" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 26 16:11:01 crc kubenswrapper[4896]: I0126 16:11:01.321270 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="6780d311-ad6c-4655-bedf-1f2347132d6d" containerName="registry-server"
Jan 26 16:11:01 crc kubenswrapper[4896]: I0126 16:11:01.322290 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8gq7"
Jan 26 16:11:01 crc kubenswrapper[4896]: I0126 16:11:01.324511 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 26 16:11:01 crc kubenswrapper[4896]: I0126 16:11:01.324802 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 26 16:11:01 crc kubenswrapper[4896]: I0126 16:11:01.324882 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 26 16:11:01 crc kubenswrapper[4896]: I0126 16:11:01.328968 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-48n6x"
Jan 26 16:11:01 crc kubenswrapper[4896]: I0126 16:11:01.332946 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8gq7"]
Jan 26 16:11:01 crc kubenswrapper[4896]: I0126 16:11:01.520760 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/12856652-2e85-477a-aea9-3a0c04fd7b52-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n8gq7\" (UID: \"12856652-2e85-477a-aea9-3a0c04fd7b52\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8gq7"
Jan 26 16:11:01 crc kubenswrapper[4896]: I0126 16:11:01.520839 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2jsg\" (UniqueName: \"kubernetes.io/projected/12856652-2e85-477a-aea9-3a0c04fd7b52-kube-api-access-q2jsg\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n8gq7\" (UID: \"12856652-2e85-477a-aea9-3a0c04fd7b52\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8gq7"
Jan 26 16:11:01 crc kubenswrapper[4896]: I0126 16:11:01.520972 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12856652-2e85-477a-aea9-3a0c04fd7b52-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n8gq7\" (UID: \"12856652-2e85-477a-aea9-3a0c04fd7b52\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8gq7"
Jan 26 16:11:01 crc kubenswrapper[4896]: I0126 16:11:01.624558 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12856652-2e85-477a-aea9-3a0c04fd7b52-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n8gq7\" (UID: \"12856652-2e85-477a-aea9-3a0c04fd7b52\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8gq7"
Jan 26 16:11:01 crc kubenswrapper[4896]: I0126 16:11:01.625425 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/12856652-2e85-477a-aea9-3a0c04fd7b52-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n8gq7\" (UID: \"12856652-2e85-477a-aea9-3a0c04fd7b52\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8gq7"
Jan 26 16:11:01 crc kubenswrapper[4896]: I0126 16:11:01.625645 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2jsg\" (UniqueName: \"kubernetes.io/projected/12856652-2e85-477a-aea9-3a0c04fd7b52-kube-api-access-q2jsg\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n8gq7\" (UID: \"12856652-2e85-477a-aea9-3a0c04fd7b52\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8gq7"
Jan 26 16:11:01 crc kubenswrapper[4896]: I0126 16:11:01.630041 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12856652-2e85-477a-aea9-3a0c04fd7b52-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n8gq7\" (UID: \"12856652-2e85-477a-aea9-3a0c04fd7b52\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8gq7"
Jan 26 16:11:01 crc kubenswrapper[4896]: I0126 16:11:01.631461 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/12856652-2e85-477a-aea9-3a0c04fd7b52-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n8gq7\" (UID: \"12856652-2e85-477a-aea9-3a0c04fd7b52\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8gq7"
Jan 26 16:11:01 crc kubenswrapper[4896]: I0126 16:11:01.648023 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2jsg\" (UniqueName: \"kubernetes.io/projected/12856652-2e85-477a-aea9-3a0c04fd7b52-kube-api-access-q2jsg\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n8gq7\" (UID: \"12856652-2e85-477a-aea9-3a0c04fd7b52\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8gq7"
Jan 26 16:11:01 crc kubenswrapper[4896]: I0126 16:11:01.943460 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8gq7"
Jan 26 16:11:02 crc kubenswrapper[4896]: I0126 16:11:02.479061 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8gq7"]
Jan 26 16:11:02 crc kubenswrapper[4896]: W0126 16:11:02.482024 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12856652_2e85_477a_aea9_3a0c04fd7b52.slice/crio-8a27f402b4e3f028e31694cae37e003f6883a7600a55ec642b9d983dcdb423ed WatchSource:0}: Error finding container 8a27f402b4e3f028e31694cae37e003f6883a7600a55ec642b9d983dcdb423ed: Status 404 returned error can't find the container with id 8a27f402b4e3f028e31694cae37e003f6883a7600a55ec642b9d983dcdb423ed
Jan 26 16:11:03 crc kubenswrapper[4896]: I0126 16:11:03.246257 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8gq7" event={"ID":"12856652-2e85-477a-aea9-3a0c04fd7b52","Type":"ContainerStarted","Data":"bbdd61570bc5c04a2a0e5b8f13609d3537be700911181cffef605fe07ac77067"}
Jan 26 16:11:03 crc kubenswrapper[4896]: I0126 16:11:03.246510 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8gq7" event={"ID":"12856652-2e85-477a-aea9-3a0c04fd7b52","Type":"ContainerStarted","Data":"8a27f402b4e3f028e31694cae37e003f6883a7600a55ec642b9d983dcdb423ed"}
Jan 26 16:11:03 crc kubenswrapper[4896]: I0126 16:11:03.277304 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8gq7" podStartSLOduration=1.8846069650000001 podStartE2EDuration="2.277280974s" podCreationTimestamp="2026-01-26 16:11:01 +0000 UTC" firstStartedPulling="2026-01-26 16:11:02.48482307
+0000 UTC m=+2220.266703463" lastFinishedPulling="2026-01-26 16:11:02.877497079 +0000 UTC m=+2220.659377472" observedRunningTime="2026-01-26 16:11:03.266691255 +0000 UTC m=+2221.048571648" watchObservedRunningTime="2026-01-26 16:11:03.277280974 +0000 UTC m=+2221.059161367" Jan 26 16:11:15 crc kubenswrapper[4896]: I0126 16:11:15.039427 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-l5784"] Jan 26 16:11:15 crc kubenswrapper[4896]: I0126 16:11:15.053224 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-l5784"] Jan 26 16:11:16 crc kubenswrapper[4896]: I0126 16:11:16.775185 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="590e8b81-a793-4143-9b0e-f2afb348dd91" path="/var/lib/kubelet/pods/590e8b81-a793-4143-9b0e-f2afb348dd91/volumes" Jan 26 16:11:39 crc kubenswrapper[4896]: I0126 16:11:39.185399 4896 scope.go:117] "RemoveContainer" containerID="8cf0b42e2aafdba5acdfad74ab5208e84b9c347fb2cf1d8075479332526cbb50" Jan 26 16:11:39 crc kubenswrapper[4896]: I0126 16:11:39.229730 4896 scope.go:117] "RemoveContainer" containerID="bc0f10791d815fdaa77458041583c06938dd033c508916aeb8a1a3783634eeac" Jan 26 16:11:39 crc kubenswrapper[4896]: I0126 16:11:39.303839 4896 scope.go:117] "RemoveContainer" containerID="08423b45173db189e8eec3cd9fdbb559bfacaa9533cf24942af9735e6ea79cc8" Jan 26 16:11:39 crc kubenswrapper[4896]: I0126 16:11:39.351244 4896 scope.go:117] "RemoveContainer" containerID="aeb3ca13e42994f58c03408cbfac03b951d5ff3efa906e5ea149f45402f5efd8" Jan 26 16:11:55 crc kubenswrapper[4896]: I0126 16:11:55.456989 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r72ng"] Jan 26 16:11:55 crc kubenswrapper[4896]: I0126 16:11:55.461509 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r72ng" Jan 26 16:11:55 crc kubenswrapper[4896]: I0126 16:11:55.511507 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r72ng"] Jan 26 16:11:55 crc kubenswrapper[4896]: I0126 16:11:55.620104 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbe97aee-3342-4159-a2dc-1227b2e60457-utilities\") pod \"redhat-marketplace-r72ng\" (UID: \"bbe97aee-3342-4159-a2dc-1227b2e60457\") " pod="openshift-marketplace/redhat-marketplace-r72ng" Jan 26 16:11:55 crc kubenswrapper[4896]: I0126 16:11:55.620184 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8ltb\" (UniqueName: \"kubernetes.io/projected/bbe97aee-3342-4159-a2dc-1227b2e60457-kube-api-access-g8ltb\") pod \"redhat-marketplace-r72ng\" (UID: \"bbe97aee-3342-4159-a2dc-1227b2e60457\") " pod="openshift-marketplace/redhat-marketplace-r72ng" Jan 26 16:11:55 crc kubenswrapper[4896]: I0126 16:11:55.620260 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbe97aee-3342-4159-a2dc-1227b2e60457-catalog-content\") pod \"redhat-marketplace-r72ng\" (UID: \"bbe97aee-3342-4159-a2dc-1227b2e60457\") " pod="openshift-marketplace/redhat-marketplace-r72ng" Jan 26 16:11:55 crc kubenswrapper[4896]: I0126 16:11:55.722701 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbe97aee-3342-4159-a2dc-1227b2e60457-utilities\") pod \"redhat-marketplace-r72ng\" (UID: \"bbe97aee-3342-4159-a2dc-1227b2e60457\") " pod="openshift-marketplace/redhat-marketplace-r72ng" Jan 26 16:11:55 crc kubenswrapper[4896]: I0126 16:11:55.722811 4896 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-g8ltb\" (UniqueName: \"kubernetes.io/projected/bbe97aee-3342-4159-a2dc-1227b2e60457-kube-api-access-g8ltb\") pod \"redhat-marketplace-r72ng\" (UID: \"bbe97aee-3342-4159-a2dc-1227b2e60457\") " pod="openshift-marketplace/redhat-marketplace-r72ng" Jan 26 16:11:55 crc kubenswrapper[4896]: I0126 16:11:55.722917 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbe97aee-3342-4159-a2dc-1227b2e60457-catalog-content\") pod \"redhat-marketplace-r72ng\" (UID: \"bbe97aee-3342-4159-a2dc-1227b2e60457\") " pod="openshift-marketplace/redhat-marketplace-r72ng" Jan 26 16:11:55 crc kubenswrapper[4896]: I0126 16:11:55.723453 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbe97aee-3342-4159-a2dc-1227b2e60457-utilities\") pod \"redhat-marketplace-r72ng\" (UID: \"bbe97aee-3342-4159-a2dc-1227b2e60457\") " pod="openshift-marketplace/redhat-marketplace-r72ng" Jan 26 16:11:55 crc kubenswrapper[4896]: I0126 16:11:55.723486 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbe97aee-3342-4159-a2dc-1227b2e60457-catalog-content\") pod \"redhat-marketplace-r72ng\" (UID: \"bbe97aee-3342-4159-a2dc-1227b2e60457\") " pod="openshift-marketplace/redhat-marketplace-r72ng" Jan 26 16:11:55 crc kubenswrapper[4896]: I0126 16:11:55.745447 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8ltb\" (UniqueName: \"kubernetes.io/projected/bbe97aee-3342-4159-a2dc-1227b2e60457-kube-api-access-g8ltb\") pod \"redhat-marketplace-r72ng\" (UID: \"bbe97aee-3342-4159-a2dc-1227b2e60457\") " pod="openshift-marketplace/redhat-marketplace-r72ng" Jan 26 16:11:55 crc kubenswrapper[4896]: I0126 16:11:55.797654 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r72ng" Jan 26 16:11:56 crc kubenswrapper[4896]: I0126 16:11:56.584888 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r72ng"] Jan 26 16:11:57 crc kubenswrapper[4896]: E0126 16:11:57.113753 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbbe97aee_3342_4159_a2dc_1227b2e60457.slice/crio-conmon-937a1485c317f72ff05990c65b58c948bcd28a713aa2122d498d305cafec54f8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbbe97aee_3342_4159_a2dc_1227b2e60457.slice/crio-937a1485c317f72ff05990c65b58c948bcd28a713aa2122d498d305cafec54f8.scope\": RecentStats: unable to find data in memory cache]" Jan 26 16:11:57 crc kubenswrapper[4896]: I0126 16:11:57.317119 4896 generic.go:334] "Generic (PLEG): container finished" podID="bbe97aee-3342-4159-a2dc-1227b2e60457" containerID="937a1485c317f72ff05990c65b58c948bcd28a713aa2122d498d305cafec54f8" exitCode=0 Jan 26 16:11:57 crc kubenswrapper[4896]: I0126 16:11:57.317171 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r72ng" event={"ID":"bbe97aee-3342-4159-a2dc-1227b2e60457","Type":"ContainerDied","Data":"937a1485c317f72ff05990c65b58c948bcd28a713aa2122d498d305cafec54f8"} Jan 26 16:11:57 crc kubenswrapper[4896]: I0126 16:11:57.317201 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r72ng" event={"ID":"bbe97aee-3342-4159-a2dc-1227b2e60457","Type":"ContainerStarted","Data":"471bcb4ad6bec94b6ad9c4b12c1c40c92d9c81ad4c9317fa82b605e052ac7186"} Jan 26 16:11:59 crc kubenswrapper[4896]: I0126 16:11:59.338777 4896 generic.go:334] "Generic (PLEG): container finished" podID="bbe97aee-3342-4159-a2dc-1227b2e60457" 
containerID="3078baae79cf93d28663bf14dfd94cd552fd1e09f30d01aca618e219418a8a96" exitCode=0 Jan 26 16:11:59 crc kubenswrapper[4896]: I0126 16:11:59.338880 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r72ng" event={"ID":"bbe97aee-3342-4159-a2dc-1227b2e60457","Type":"ContainerDied","Data":"3078baae79cf93d28663bf14dfd94cd552fd1e09f30d01aca618e219418a8a96"} Jan 26 16:12:00 crc kubenswrapper[4896]: I0126 16:12:00.812204 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-62kc7"] Jan 26 16:12:00 crc kubenswrapper[4896]: I0126 16:12:00.816560 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-62kc7" Jan 26 16:12:00 crc kubenswrapper[4896]: I0126 16:12:00.842541 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-62kc7"] Jan 26 16:12:00 crc kubenswrapper[4896]: I0126 16:12:00.857035 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3b66f66-5866-4213-9f90-6cbe27f7357e-catalog-content\") pod \"certified-operators-62kc7\" (UID: \"e3b66f66-5866-4213-9f90-6cbe27f7357e\") " pod="openshift-marketplace/certified-operators-62kc7" Jan 26 16:12:00 crc kubenswrapper[4896]: I0126 16:12:00.857107 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3b66f66-5866-4213-9f90-6cbe27f7357e-utilities\") pod \"certified-operators-62kc7\" (UID: \"e3b66f66-5866-4213-9f90-6cbe27f7357e\") " pod="openshift-marketplace/certified-operators-62kc7" Jan 26 16:12:00 crc kubenswrapper[4896]: I0126 16:12:00.857221 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjlrc\" (UniqueName: 
\"kubernetes.io/projected/e3b66f66-5866-4213-9f90-6cbe27f7357e-kube-api-access-rjlrc\") pod \"certified-operators-62kc7\" (UID: \"e3b66f66-5866-4213-9f90-6cbe27f7357e\") " pod="openshift-marketplace/certified-operators-62kc7" Jan 26 16:12:00 crc kubenswrapper[4896]: I0126 16:12:00.959115 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3b66f66-5866-4213-9f90-6cbe27f7357e-catalog-content\") pod \"certified-operators-62kc7\" (UID: \"e3b66f66-5866-4213-9f90-6cbe27f7357e\") " pod="openshift-marketplace/certified-operators-62kc7" Jan 26 16:12:00 crc kubenswrapper[4896]: I0126 16:12:00.959173 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3b66f66-5866-4213-9f90-6cbe27f7357e-utilities\") pod \"certified-operators-62kc7\" (UID: \"e3b66f66-5866-4213-9f90-6cbe27f7357e\") " pod="openshift-marketplace/certified-operators-62kc7" Jan 26 16:12:00 crc kubenswrapper[4896]: I0126 16:12:00.959216 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjlrc\" (UniqueName: \"kubernetes.io/projected/e3b66f66-5866-4213-9f90-6cbe27f7357e-kube-api-access-rjlrc\") pod \"certified-operators-62kc7\" (UID: \"e3b66f66-5866-4213-9f90-6cbe27f7357e\") " pod="openshift-marketplace/certified-operators-62kc7" Jan 26 16:12:00 crc kubenswrapper[4896]: I0126 16:12:00.959880 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3b66f66-5866-4213-9f90-6cbe27f7357e-utilities\") pod \"certified-operators-62kc7\" (UID: \"e3b66f66-5866-4213-9f90-6cbe27f7357e\") " pod="openshift-marketplace/certified-operators-62kc7" Jan 26 16:12:00 crc kubenswrapper[4896]: I0126 16:12:00.959898 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/e3b66f66-5866-4213-9f90-6cbe27f7357e-catalog-content\") pod \"certified-operators-62kc7\" (UID: \"e3b66f66-5866-4213-9f90-6cbe27f7357e\") " pod="openshift-marketplace/certified-operators-62kc7" Jan 26 16:12:00 crc kubenswrapper[4896]: I0126 16:12:00.979391 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjlrc\" (UniqueName: \"kubernetes.io/projected/e3b66f66-5866-4213-9f90-6cbe27f7357e-kube-api-access-rjlrc\") pod \"certified-operators-62kc7\" (UID: \"e3b66f66-5866-4213-9f90-6cbe27f7357e\") " pod="openshift-marketplace/certified-operators-62kc7" Jan 26 16:12:01 crc kubenswrapper[4896]: I0126 16:12:01.138215 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-62kc7" Jan 26 16:12:01 crc kubenswrapper[4896]: I0126 16:12:01.386648 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r72ng" event={"ID":"bbe97aee-3342-4159-a2dc-1227b2e60457","Type":"ContainerStarted","Data":"a5f659904c9dc9f504c96c6d44f71057ba2b2df571f8c97693221759d1c81fbf"} Jan 26 16:12:01 crc kubenswrapper[4896]: I0126 16:12:01.693231 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r72ng" podStartSLOduration=4.018104844 podStartE2EDuration="6.693188235s" podCreationTimestamp="2026-01-26 16:11:55 +0000 UTC" firstStartedPulling="2026-01-26 16:11:57.319264701 +0000 UTC m=+2275.101145094" lastFinishedPulling="2026-01-26 16:11:59.994348072 +0000 UTC m=+2277.776228485" observedRunningTime="2026-01-26 16:12:01.412980978 +0000 UTC m=+2279.194861381" watchObservedRunningTime="2026-01-26 16:12:01.693188235 +0000 UTC m=+2279.475068638" Jan 26 16:12:01 crc kubenswrapper[4896]: I0126 16:12:01.988167 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-62kc7"] Jan 26 16:12:02 crc kubenswrapper[4896]: I0126 
16:12:02.398134 4896 generic.go:334] "Generic (PLEG): container finished" podID="e3b66f66-5866-4213-9f90-6cbe27f7357e" containerID="9d89a9ef52c647a8b683ba816157e799ab8af81ca0c44bd91c057673e5a4df09" exitCode=0 Jan 26 16:12:02 crc kubenswrapper[4896]: I0126 16:12:02.398239 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62kc7" event={"ID":"e3b66f66-5866-4213-9f90-6cbe27f7357e","Type":"ContainerDied","Data":"9d89a9ef52c647a8b683ba816157e799ab8af81ca0c44bd91c057673e5a4df09"} Jan 26 16:12:02 crc kubenswrapper[4896]: I0126 16:12:02.398510 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62kc7" event={"ID":"e3b66f66-5866-4213-9f90-6cbe27f7357e","Type":"ContainerStarted","Data":"fe32f4009b78395a66b4284a21d27fb51599433155d10696d0bd9eedbcf1518e"} Jan 26 16:12:03 crc kubenswrapper[4896]: I0126 16:12:03.410361 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62kc7" event={"ID":"e3b66f66-5866-4213-9f90-6cbe27f7357e","Type":"ContainerStarted","Data":"219c1de2d08d48f25fffc4a6acf54e1e3bc85b8c9fe5e919b84fc5c9451282ae"} Jan 26 16:12:05 crc kubenswrapper[4896]: I0126 16:12:05.435234 4896 generic.go:334] "Generic (PLEG): container finished" podID="e3b66f66-5866-4213-9f90-6cbe27f7357e" containerID="219c1de2d08d48f25fffc4a6acf54e1e3bc85b8c9fe5e919b84fc5c9451282ae" exitCode=0 Jan 26 16:12:05 crc kubenswrapper[4896]: I0126 16:12:05.435319 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62kc7" event={"ID":"e3b66f66-5866-4213-9f90-6cbe27f7357e","Type":"ContainerDied","Data":"219c1de2d08d48f25fffc4a6acf54e1e3bc85b8c9fe5e919b84fc5c9451282ae"} Jan 26 16:12:05 crc kubenswrapper[4896]: I0126 16:12:05.799045 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r72ng" Jan 26 16:12:05 crc kubenswrapper[4896]: I0126 
16:12:05.799356 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r72ng" Jan 26 16:12:05 crc kubenswrapper[4896]: I0126 16:12:05.864566 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r72ng" Jan 26 16:12:06 crc kubenswrapper[4896]: I0126 16:12:06.453183 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62kc7" event={"ID":"e3b66f66-5866-4213-9f90-6cbe27f7357e","Type":"ContainerStarted","Data":"102754ae2cbef9911b619926391adbfb6de4e8eab30f230a62b51d701f799e75"} Jan 26 16:12:06 crc kubenswrapper[4896]: I0126 16:12:06.516847 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r72ng" Jan 26 16:12:06 crc kubenswrapper[4896]: I0126 16:12:06.543479 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-62kc7" podStartSLOduration=3.032459586 podStartE2EDuration="6.543456684s" podCreationTimestamp="2026-01-26 16:12:00 +0000 UTC" firstStartedPulling="2026-01-26 16:12:02.400061715 +0000 UTC m=+2280.181942108" lastFinishedPulling="2026-01-26 16:12:05.911058813 +0000 UTC m=+2283.692939206" observedRunningTime="2026-01-26 16:12:06.476390004 +0000 UTC m=+2284.258270407" watchObservedRunningTime="2026-01-26 16:12:06.543456684 +0000 UTC m=+2284.325337087" Jan 26 16:12:07 crc kubenswrapper[4896]: I0126 16:12:07.403380 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r72ng"] Jan 26 16:12:08 crc kubenswrapper[4896]: I0126 16:12:08.476869 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-r72ng" podUID="bbe97aee-3342-4159-a2dc-1227b2e60457" containerName="registry-server" 
containerID="cri-o://a5f659904c9dc9f504c96c6d44f71057ba2b2df571f8c97693221759d1c81fbf" gracePeriod=2 Jan 26 16:12:09 crc kubenswrapper[4896]: I0126 16:12:09.488063 4896 generic.go:334] "Generic (PLEG): container finished" podID="bbe97aee-3342-4159-a2dc-1227b2e60457" containerID="a5f659904c9dc9f504c96c6d44f71057ba2b2df571f8c97693221759d1c81fbf" exitCode=0 Jan 26 16:12:09 crc kubenswrapper[4896]: I0126 16:12:09.488117 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r72ng" event={"ID":"bbe97aee-3342-4159-a2dc-1227b2e60457","Type":"ContainerDied","Data":"a5f659904c9dc9f504c96c6d44f71057ba2b2df571f8c97693221759d1c81fbf"} Jan 26 16:12:10 crc kubenswrapper[4896]: I0126 16:12:10.157698 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r72ng" Jan 26 16:12:10 crc kubenswrapper[4896]: I0126 16:12:10.227108 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8ltb\" (UniqueName: \"kubernetes.io/projected/bbe97aee-3342-4159-a2dc-1227b2e60457-kube-api-access-g8ltb\") pod \"bbe97aee-3342-4159-a2dc-1227b2e60457\" (UID: \"bbe97aee-3342-4159-a2dc-1227b2e60457\") " Jan 26 16:12:10 crc kubenswrapper[4896]: I0126 16:12:10.227437 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbe97aee-3342-4159-a2dc-1227b2e60457-utilities\") pod \"bbe97aee-3342-4159-a2dc-1227b2e60457\" (UID: \"bbe97aee-3342-4159-a2dc-1227b2e60457\") " Jan 26 16:12:10 crc kubenswrapper[4896]: I0126 16:12:10.227638 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbe97aee-3342-4159-a2dc-1227b2e60457-catalog-content\") pod \"bbe97aee-3342-4159-a2dc-1227b2e60457\" (UID: \"bbe97aee-3342-4159-a2dc-1227b2e60457\") " Jan 26 16:12:10 crc kubenswrapper[4896]: I0126 
16:12:10.228667 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbe97aee-3342-4159-a2dc-1227b2e60457-utilities" (OuterVolumeSpecName: "utilities") pod "bbe97aee-3342-4159-a2dc-1227b2e60457" (UID: "bbe97aee-3342-4159-a2dc-1227b2e60457"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:12:10 crc kubenswrapper[4896]: I0126 16:12:10.233520 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbe97aee-3342-4159-a2dc-1227b2e60457-kube-api-access-g8ltb" (OuterVolumeSpecName: "kube-api-access-g8ltb") pod "bbe97aee-3342-4159-a2dc-1227b2e60457" (UID: "bbe97aee-3342-4159-a2dc-1227b2e60457"). InnerVolumeSpecName "kube-api-access-g8ltb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:12:10 crc kubenswrapper[4896]: I0126 16:12:10.256246 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbe97aee-3342-4159-a2dc-1227b2e60457-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bbe97aee-3342-4159-a2dc-1227b2e60457" (UID: "bbe97aee-3342-4159-a2dc-1227b2e60457"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:12:10 crc kubenswrapper[4896]: I0126 16:12:10.331097 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8ltb\" (UniqueName: \"kubernetes.io/projected/bbe97aee-3342-4159-a2dc-1227b2e60457-kube-api-access-g8ltb\") on node \"crc\" DevicePath \"\"" Jan 26 16:12:10 crc kubenswrapper[4896]: I0126 16:12:10.331124 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbe97aee-3342-4159-a2dc-1227b2e60457-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:12:10 crc kubenswrapper[4896]: I0126 16:12:10.331133 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbe97aee-3342-4159-a2dc-1227b2e60457-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:12:10 crc kubenswrapper[4896]: I0126 16:12:10.506196 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r72ng" event={"ID":"bbe97aee-3342-4159-a2dc-1227b2e60457","Type":"ContainerDied","Data":"471bcb4ad6bec94b6ad9c4b12c1c40c92d9c81ad4c9317fa82b605e052ac7186"} Jan 26 16:12:10 crc kubenswrapper[4896]: I0126 16:12:10.506569 4896 scope.go:117] "RemoveContainer" containerID="a5f659904c9dc9f504c96c6d44f71057ba2b2df571f8c97693221759d1c81fbf" Jan 26 16:12:10 crc kubenswrapper[4896]: I0126 16:12:10.506252 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r72ng" Jan 26 16:12:10 crc kubenswrapper[4896]: I0126 16:12:10.547078 4896 scope.go:117] "RemoveContainer" containerID="3078baae79cf93d28663bf14dfd94cd552fd1e09f30d01aca618e219418a8a96" Jan 26 16:12:10 crc kubenswrapper[4896]: I0126 16:12:10.547913 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r72ng"] Jan 26 16:12:10 crc kubenswrapper[4896]: I0126 16:12:10.559805 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-r72ng"] Jan 26 16:12:10 crc kubenswrapper[4896]: I0126 16:12:10.570825 4896 scope.go:117] "RemoveContainer" containerID="937a1485c317f72ff05990c65b58c948bcd28a713aa2122d498d305cafec54f8" Jan 26 16:12:10 crc kubenswrapper[4896]: I0126 16:12:10.790968 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbe97aee-3342-4159-a2dc-1227b2e60457" path="/var/lib/kubelet/pods/bbe97aee-3342-4159-a2dc-1227b2e60457/volumes" Jan 26 16:12:11 crc kubenswrapper[4896]: I0126 16:12:11.138632 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-62kc7" Jan 26 16:12:11 crc kubenswrapper[4896]: I0126 16:12:11.138694 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-62kc7" Jan 26 16:12:11 crc kubenswrapper[4896]: I0126 16:12:11.192420 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-62kc7" Jan 26 16:12:11 crc kubenswrapper[4896]: I0126 16:12:11.568705 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-62kc7" Jan 26 16:12:13 crc kubenswrapper[4896]: I0126 16:12:13.404478 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-62kc7"] Jan 26 16:12:13 crc 
kubenswrapper[4896]: I0126 16:12:13.541173 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-62kc7" podUID="e3b66f66-5866-4213-9f90-6cbe27f7357e" containerName="registry-server" containerID="cri-o://102754ae2cbef9911b619926391adbfb6de4e8eab30f230a62b51d701f799e75" gracePeriod=2 Jan 26 16:12:14 crc kubenswrapper[4896]: I0126 16:12:14.559631 4896 generic.go:334] "Generic (PLEG): container finished" podID="e3b66f66-5866-4213-9f90-6cbe27f7357e" containerID="102754ae2cbef9911b619926391adbfb6de4e8eab30f230a62b51d701f799e75" exitCode=0 Jan 26 16:12:14 crc kubenswrapper[4896]: I0126 16:12:14.559666 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62kc7" event={"ID":"e3b66f66-5866-4213-9f90-6cbe27f7357e","Type":"ContainerDied","Data":"102754ae2cbef9911b619926391adbfb6de4e8eab30f230a62b51d701f799e75"} Jan 26 16:12:14 crc kubenswrapper[4896]: I0126 16:12:14.856600 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-62kc7"
Jan 26 16:12:14 crc kubenswrapper[4896]: I0126 16:12:14.947333 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3b66f66-5866-4213-9f90-6cbe27f7357e-utilities\") pod \"e3b66f66-5866-4213-9f90-6cbe27f7357e\" (UID: \"e3b66f66-5866-4213-9f90-6cbe27f7357e\") "
Jan 26 16:12:14 crc kubenswrapper[4896]: I0126 16:12:14.947762 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3b66f66-5866-4213-9f90-6cbe27f7357e-catalog-content\") pod \"e3b66f66-5866-4213-9f90-6cbe27f7357e\" (UID: \"e3b66f66-5866-4213-9f90-6cbe27f7357e\") "
Jan 26 16:12:14 crc kubenswrapper[4896]: I0126 16:12:14.947793 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjlrc\" (UniqueName: \"kubernetes.io/projected/e3b66f66-5866-4213-9f90-6cbe27f7357e-kube-api-access-rjlrc\") pod \"e3b66f66-5866-4213-9f90-6cbe27f7357e\" (UID: \"e3b66f66-5866-4213-9f90-6cbe27f7357e\") "
Jan 26 16:12:14 crc kubenswrapper[4896]: I0126 16:12:14.948357 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3b66f66-5866-4213-9f90-6cbe27f7357e-utilities" (OuterVolumeSpecName: "utilities") pod "e3b66f66-5866-4213-9f90-6cbe27f7357e" (UID: "e3b66f66-5866-4213-9f90-6cbe27f7357e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 16:12:14 crc kubenswrapper[4896]: I0126 16:12:14.949035 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3b66f66-5866-4213-9f90-6cbe27f7357e-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 16:12:14 crc kubenswrapper[4896]: I0126 16:12:14.953790 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3b66f66-5866-4213-9f90-6cbe27f7357e-kube-api-access-rjlrc" (OuterVolumeSpecName: "kube-api-access-rjlrc") pod "e3b66f66-5866-4213-9f90-6cbe27f7357e" (UID: "e3b66f66-5866-4213-9f90-6cbe27f7357e"). InnerVolumeSpecName "kube-api-access-rjlrc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:12:14 crc kubenswrapper[4896]: I0126 16:12:14.997565 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3b66f66-5866-4213-9f90-6cbe27f7357e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e3b66f66-5866-4213-9f90-6cbe27f7357e" (UID: "e3b66f66-5866-4213-9f90-6cbe27f7357e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 16:12:15 crc kubenswrapper[4896]: I0126 16:12:15.050730 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3b66f66-5866-4213-9f90-6cbe27f7357e-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 16:12:15 crc kubenswrapper[4896]: I0126 16:12:15.050762 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjlrc\" (UniqueName: \"kubernetes.io/projected/e3b66f66-5866-4213-9f90-6cbe27f7357e-kube-api-access-rjlrc\") on node \"crc\" DevicePath \"\""
Jan 26 16:12:15 crc kubenswrapper[4896]: I0126 16:12:15.576398 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62kc7" event={"ID":"e3b66f66-5866-4213-9f90-6cbe27f7357e","Type":"ContainerDied","Data":"fe32f4009b78395a66b4284a21d27fb51599433155d10696d0bd9eedbcf1518e"}
Jan 26 16:12:15 crc kubenswrapper[4896]: I0126 16:12:15.577015 4896 scope.go:117] "RemoveContainer" containerID="102754ae2cbef9911b619926391adbfb6de4e8eab30f230a62b51d701f799e75"
Jan 26 16:12:15 crc kubenswrapper[4896]: I0126 16:12:15.576488 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-62kc7"
Jan 26 16:12:15 crc kubenswrapper[4896]: I0126 16:12:15.604931 4896 scope.go:117] "RemoveContainer" containerID="219c1de2d08d48f25fffc4a6acf54e1e3bc85b8c9fe5e919b84fc5c9451282ae"
Jan 26 16:12:15 crc kubenswrapper[4896]: I0126 16:12:15.628407 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-62kc7"]
Jan 26 16:12:15 crc kubenswrapper[4896]: I0126 16:12:15.641465 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-62kc7"]
Jan 26 16:12:15 crc kubenswrapper[4896]: I0126 16:12:15.655038 4896 scope.go:117] "RemoveContainer" containerID="9d89a9ef52c647a8b683ba816157e799ab8af81ca0c44bd91c057673e5a4df09"
Jan 26 16:12:16 crc kubenswrapper[4896]: I0126 16:12:16.772088 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3b66f66-5866-4213-9f90-6cbe27f7357e" path="/var/lib/kubelet/pods/e3b66f66-5866-4213-9f90-6cbe27f7357e/volumes"
Jan 26 16:12:22 crc kubenswrapper[4896]: I0126 16:12:22.050232 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-bb9fc"]
Jan 26 16:12:22 crc kubenswrapper[4896]: I0126 16:12:22.062875 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-bb9fc"]
Jan 26 16:12:22 crc kubenswrapper[4896]: I0126 16:12:22.777001 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e6e5a4d-74d6-414a-ace8-b322f12e7e4c" path="/var/lib/kubelet/pods/6e6e5a4d-74d6-414a-ace8-b322f12e7e4c/volumes"
Jan 26 16:12:25 crc kubenswrapper[4896]: I0126 16:12:25.036252 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-qw854"]
Jan 26 16:12:25 crc kubenswrapper[4896]: I0126 16:12:25.051473 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-df69-account-create-update-rns9n"]
Jan 26 16:12:25 crc kubenswrapper[4896]: I0126 16:12:25.063977 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-zmxcl"]
Jan 26 16:12:25 crc kubenswrapper[4896]: I0126 16:12:25.076473 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-93c3-account-create-update-rpzrm"]
Jan 26 16:12:25 crc kubenswrapper[4896]: I0126 16:12:25.088145 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-df69-account-create-update-rns9n"]
Jan 26 16:12:25 crc kubenswrapper[4896]: I0126 16:12:25.099247 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-qw854"]
Jan 26 16:12:25 crc kubenswrapper[4896]: I0126 16:12:25.113702 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-zmxcl"]
Jan 26 16:12:25 crc kubenswrapper[4896]: I0126 16:12:25.128249 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-93c3-account-create-update-rpzrm"]
Jan 26 16:12:26 crc kubenswrapper[4896]: I0126 16:12:26.037808 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-0553-account-create-update-swsct"]
Jan 26 16:12:26 crc kubenswrapper[4896]: I0126 16:12:26.049000 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-0553-account-create-update-swsct"]
Jan 26 16:12:26 crc kubenswrapper[4896]: I0126 16:12:26.774000 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a5714bb-8439-4dbc-974b-bf04c6537695" path="/var/lib/kubelet/pods/0a5714bb-8439-4dbc-974b-bf04c6537695/volumes"
Jan 26 16:12:26 crc kubenswrapper[4896]: I0126 16:12:26.775508 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3db41110-40d9-49ad-9ba5-a7e94e433693" path="/var/lib/kubelet/pods/3db41110-40d9-49ad-9ba5-a7e94e433693/volumes"
Jan 26 16:12:26 crc kubenswrapper[4896]: I0126 16:12:26.776504 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5925428c-5669-44dc-92f7-e182c113fb11" path="/var/lib/kubelet/pods/5925428c-5669-44dc-92f7-e182c113fb11/volumes"
Jan 26 16:12:26 crc kubenswrapper[4896]: I0126 16:12:26.777896 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8531512a-bfdc-47ff-ae60-182536cad417" path="/var/lib/kubelet/pods/8531512a-bfdc-47ff-ae60-182536cad417/volumes"
Jan 26 16:12:26 crc kubenswrapper[4896]: I0126 16:12:26.779227 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c385b05e-d56d-4b07-8d3a-f96399936528" path="/var/lib/kubelet/pods/c385b05e-d56d-4b07-8d3a-f96399936528/volumes"
Jan 26 16:12:39 crc kubenswrapper[4896]: I0126 16:12:39.507834 4896 scope.go:117] "RemoveContainer" containerID="b5a99a991f6040c9e0cac7f27ec3b92595a19e6aa8097a696c59e5370b3aeb5b"
Jan 26 16:12:39 crc kubenswrapper[4896]: I0126 16:12:39.538034 4896 scope.go:117] "RemoveContainer" containerID="f9a58465d086247e3ce30aac15fa075fdc17bf15810a29a7c425c83a2099413e"
Jan 26 16:12:39 crc kubenswrapper[4896]: I0126 16:12:39.601171 4896 scope.go:117] "RemoveContainer" containerID="5944fcedd2dd4c50c8358ea854e84f386c5952064b900a4576dd43bab0b3adc7"
Jan 26 16:12:39 crc kubenswrapper[4896]: I0126 16:12:39.671258 4896 scope.go:117] "RemoveContainer" containerID="8b084230e4b5aa388cb0ae8cbf9e5ad60b3e9fed9dc4e0723faf2e96507b9172"
Jan 26 16:12:39 crc kubenswrapper[4896]: I0126 16:12:39.744332 4896 scope.go:117] "RemoveContainer" containerID="68e13b2d9bb3451f72cc0ec487ebe7fcb141cf8af1be009a85aad16281f1e20b"
Jan 26 16:12:39 crc kubenswrapper[4896]: I0126 16:12:39.787496 4896 scope.go:117] "RemoveContainer" containerID="a40c694d41194e0ff6ef2f4c4f88bd562f14e6c57b9ee7d7257e7bcd9e41c2bb"
Jan 26 16:13:00 crc kubenswrapper[4896]: I0126 16:13:00.052511 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-m5nn6"]
Jan 26 16:13:00 crc kubenswrapper[4896]: I0126 16:13:00.077833 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-m5nn6"]
Jan 26 16:13:00 crc kubenswrapper[4896]: I0126 16:13:00.775747 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aff0b31b-d778-4b0c-aa5a-c8b42d08e462" path="/var/lib/kubelet/pods/aff0b31b-d778-4b0c-aa5a-c8b42d08e462/volumes"
Jan 26 16:13:08 crc kubenswrapper[4896]: I0126 16:13:08.471004 4896 generic.go:334] "Generic (PLEG): container finished" podID="12856652-2e85-477a-aea9-3a0c04fd7b52" containerID="bbdd61570bc5c04a2a0e5b8f13609d3537be700911181cffef605fe07ac77067" exitCode=0
Jan 26 16:13:08 crc kubenswrapper[4896]: I0126 16:13:08.471139 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8gq7" event={"ID":"12856652-2e85-477a-aea9-3a0c04fd7b52","Type":"ContainerDied","Data":"bbdd61570bc5c04a2a0e5b8f13609d3537be700911181cffef605fe07ac77067"}
Jan 26 16:13:09 crc kubenswrapper[4896]: I0126 16:13:09.996880 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8gq7"
Jan 26 16:13:10 crc kubenswrapper[4896]: I0126 16:13:10.136033 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12856652-2e85-477a-aea9-3a0c04fd7b52-inventory\") pod \"12856652-2e85-477a-aea9-3a0c04fd7b52\" (UID: \"12856652-2e85-477a-aea9-3a0c04fd7b52\") "
Jan 26 16:13:10 crc kubenswrapper[4896]: I0126 16:13:10.136119 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/12856652-2e85-477a-aea9-3a0c04fd7b52-ssh-key-openstack-edpm-ipam\") pod \"12856652-2e85-477a-aea9-3a0c04fd7b52\" (UID: \"12856652-2e85-477a-aea9-3a0c04fd7b52\") "
Jan 26 16:13:10 crc kubenswrapper[4896]: I0126 16:13:10.136392 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2jsg\" (UniqueName: \"kubernetes.io/projected/12856652-2e85-477a-aea9-3a0c04fd7b52-kube-api-access-q2jsg\") pod \"12856652-2e85-477a-aea9-3a0c04fd7b52\" (UID: \"12856652-2e85-477a-aea9-3a0c04fd7b52\") "
Jan 26 16:13:10 crc kubenswrapper[4896]: I0126 16:13:10.142197 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12856652-2e85-477a-aea9-3a0c04fd7b52-kube-api-access-q2jsg" (OuterVolumeSpecName: "kube-api-access-q2jsg") pod "12856652-2e85-477a-aea9-3a0c04fd7b52" (UID: "12856652-2e85-477a-aea9-3a0c04fd7b52"). InnerVolumeSpecName "kube-api-access-q2jsg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:13:10 crc kubenswrapper[4896]: I0126 16:13:10.170104 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12856652-2e85-477a-aea9-3a0c04fd7b52-inventory" (OuterVolumeSpecName: "inventory") pod "12856652-2e85-477a-aea9-3a0c04fd7b52" (UID: "12856652-2e85-477a-aea9-3a0c04fd7b52"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:13:10 crc kubenswrapper[4896]: I0126 16:13:10.175373 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12856652-2e85-477a-aea9-3a0c04fd7b52-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "12856652-2e85-477a-aea9-3a0c04fd7b52" (UID: "12856652-2e85-477a-aea9-3a0c04fd7b52"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:13:10 crc kubenswrapper[4896]: I0126 16:13:10.239439 4896 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12856652-2e85-477a-aea9-3a0c04fd7b52-inventory\") on node \"crc\" DevicePath \"\""
Jan 26 16:13:10 crc kubenswrapper[4896]: I0126 16:13:10.239478 4896 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/12856652-2e85-477a-aea9-3a0c04fd7b52-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 26 16:13:10 crc kubenswrapper[4896]: I0126 16:13:10.239492 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2jsg\" (UniqueName: \"kubernetes.io/projected/12856652-2e85-477a-aea9-3a0c04fd7b52-kube-api-access-q2jsg\") on node \"crc\" DevicePath \"\""
Jan 26 16:13:10 crc kubenswrapper[4896]: I0126 16:13:10.493980 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8gq7" event={"ID":"12856652-2e85-477a-aea9-3a0c04fd7b52","Type":"ContainerDied","Data":"8a27f402b4e3f028e31694cae37e003f6883a7600a55ec642b9d983dcdb423ed"}
Jan 26 16:13:10 crc kubenswrapper[4896]: I0126 16:13:10.494311 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a27f402b4e3f028e31694cae37e003f6883a7600a55ec642b9d983dcdb423ed"
Jan 26 16:13:10 crc kubenswrapper[4896]: I0126 16:13:10.494073 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8gq7"
Jan 26 16:13:10 crc kubenswrapper[4896]: I0126 16:13:10.606345 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rjz7r"]
Jan 26 16:13:10 crc kubenswrapper[4896]: E0126 16:13:10.606918 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3b66f66-5866-4213-9f90-6cbe27f7357e" containerName="extract-utilities"
Jan 26 16:13:10 crc kubenswrapper[4896]: I0126 16:13:10.606936 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3b66f66-5866-4213-9f90-6cbe27f7357e" containerName="extract-utilities"
Jan 26 16:13:10 crc kubenswrapper[4896]: E0126 16:13:10.606959 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3b66f66-5866-4213-9f90-6cbe27f7357e" containerName="extract-content"
Jan 26 16:13:10 crc kubenswrapper[4896]: I0126 16:13:10.606968 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3b66f66-5866-4213-9f90-6cbe27f7357e" containerName="extract-content"
Jan 26 16:13:10 crc kubenswrapper[4896]: E0126 16:13:10.606989 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbe97aee-3342-4159-a2dc-1227b2e60457" containerName="extract-content"
Jan 26 16:13:10 crc kubenswrapper[4896]: I0126 16:13:10.606995 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbe97aee-3342-4159-a2dc-1227b2e60457" containerName="extract-content"
Jan 26 16:13:10 crc kubenswrapper[4896]: E0126 16:13:10.607014 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12856652-2e85-477a-aea9-3a0c04fd7b52" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Jan 26 16:13:10 crc kubenswrapper[4896]: I0126 16:13:10.607020 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="12856652-2e85-477a-aea9-3a0c04fd7b52" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Jan 26 16:13:10 crc kubenswrapper[4896]: E0126 16:13:10.607035 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3b66f66-5866-4213-9f90-6cbe27f7357e" containerName="registry-server"
Jan 26 16:13:10 crc kubenswrapper[4896]: I0126 16:13:10.607042 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3b66f66-5866-4213-9f90-6cbe27f7357e" containerName="registry-server"
Jan 26 16:13:10 crc kubenswrapper[4896]: E0126 16:13:10.607051 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbe97aee-3342-4159-a2dc-1227b2e60457" containerName="registry-server"
Jan 26 16:13:10 crc kubenswrapper[4896]: I0126 16:13:10.607059 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbe97aee-3342-4159-a2dc-1227b2e60457" containerName="registry-server"
Jan 26 16:13:10 crc kubenswrapper[4896]: E0126 16:13:10.607078 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbe97aee-3342-4159-a2dc-1227b2e60457" containerName="extract-utilities"
Jan 26 16:13:10 crc kubenswrapper[4896]: I0126 16:13:10.607084 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbe97aee-3342-4159-a2dc-1227b2e60457" containerName="extract-utilities"
Jan 26 16:13:10 crc kubenswrapper[4896]: I0126 16:13:10.607337 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbe97aee-3342-4159-a2dc-1227b2e60457" containerName="registry-server"
Jan 26 16:13:10 crc kubenswrapper[4896]: I0126 16:13:10.607355 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3b66f66-5866-4213-9f90-6cbe27f7357e" containerName="registry-server"
Jan 26 16:13:10 crc kubenswrapper[4896]: I0126 16:13:10.607371 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="12856652-2e85-477a-aea9-3a0c04fd7b52" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Jan 26 16:13:10 crc kubenswrapper[4896]: I0126 16:13:10.608272 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rjz7r"
Jan 26 16:13:10 crc kubenswrapper[4896]: I0126 16:13:10.614611 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 26 16:13:10 crc kubenswrapper[4896]: I0126 16:13:10.614615 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 26 16:13:10 crc kubenswrapper[4896]: I0126 16:13:10.617318 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 26 16:13:10 crc kubenswrapper[4896]: I0126 16:13:10.617712 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-48n6x"
Jan 26 16:13:10 crc kubenswrapper[4896]: I0126 16:13:10.626225 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rjz7r"]
Jan 26 16:13:10 crc kubenswrapper[4896]: I0126 16:13:10.753332 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b80b385e-edbf-441a-af52-a5a03f29d78c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rjz7r\" (UID: \"b80b385e-edbf-441a-af52-a5a03f29d78c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rjz7r"
Jan 26 16:13:10 crc kubenswrapper[4896]: I0126 16:13:10.753458 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxltx\" (UniqueName: \"kubernetes.io/projected/b80b385e-edbf-441a-af52-a5a03f29d78c-kube-api-access-cxltx\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rjz7r\" (UID: \"b80b385e-edbf-441a-af52-a5a03f29d78c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rjz7r"
Jan 26 16:13:10 crc kubenswrapper[4896]: I0126 16:13:10.753727 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b80b385e-edbf-441a-af52-a5a03f29d78c-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rjz7r\" (UID: \"b80b385e-edbf-441a-af52-a5a03f29d78c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rjz7r"
Jan 26 16:13:10 crc kubenswrapper[4896]: I0126 16:13:10.856002 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b80b385e-edbf-441a-af52-a5a03f29d78c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rjz7r\" (UID: \"b80b385e-edbf-441a-af52-a5a03f29d78c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rjz7r"
Jan 26 16:13:10 crc kubenswrapper[4896]: I0126 16:13:10.856149 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxltx\" (UniqueName: \"kubernetes.io/projected/b80b385e-edbf-441a-af52-a5a03f29d78c-kube-api-access-cxltx\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rjz7r\" (UID: \"b80b385e-edbf-441a-af52-a5a03f29d78c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rjz7r"
Jan 26 16:13:10 crc kubenswrapper[4896]: I0126 16:13:10.857464 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b80b385e-edbf-441a-af52-a5a03f29d78c-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rjz7r\" (UID: \"b80b385e-edbf-441a-af52-a5a03f29d78c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rjz7r"
Jan 26 16:13:10 crc kubenswrapper[4896]: I0126 16:13:10.863201 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b80b385e-edbf-441a-af52-a5a03f29d78c-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rjz7r\" (UID: \"b80b385e-edbf-441a-af52-a5a03f29d78c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rjz7r"
Jan 26 16:13:10 crc kubenswrapper[4896]: I0126 16:13:10.872171 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b80b385e-edbf-441a-af52-a5a03f29d78c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rjz7r\" (UID: \"b80b385e-edbf-441a-af52-a5a03f29d78c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rjz7r"
Jan 26 16:13:10 crc kubenswrapper[4896]: I0126 16:13:10.873488 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxltx\" (UniqueName: \"kubernetes.io/projected/b80b385e-edbf-441a-af52-a5a03f29d78c-kube-api-access-cxltx\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rjz7r\" (UID: \"b80b385e-edbf-441a-af52-a5a03f29d78c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rjz7r"
Jan 26 16:13:10 crc kubenswrapper[4896]: I0126 16:13:10.929980 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rjz7r"
Jan 26 16:13:11 crc kubenswrapper[4896]: I0126 16:13:11.628267 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rjz7r"]
Jan 26 16:13:11 crc kubenswrapper[4896]: I0126 16:13:11.639116 4896 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 16:13:12 crc kubenswrapper[4896]: I0126 16:13:12.520159 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rjz7r" event={"ID":"b80b385e-edbf-441a-af52-a5a03f29d78c","Type":"ContainerStarted","Data":"ac004a1e6629879e6fa639887b43ff72bec76704f337ed5d51b6dcca0b6a6f58"}
Jan 26 16:13:13 crc kubenswrapper[4896]: I0126 16:13:13.552181 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rjz7r" event={"ID":"b80b385e-edbf-441a-af52-a5a03f29d78c","Type":"ContainerStarted","Data":"2c86fad0e22d67a0a6203fa709e6fc710cc61112baa0d389bcf6ef8d337335a8"}
Jan 26 16:13:13 crc kubenswrapper[4896]: I0126 16:13:13.591200 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rjz7r" podStartSLOduration=2.9560092940000002 podStartE2EDuration="3.591179285s" podCreationTimestamp="2026-01-26 16:13:10 +0000 UTC" firstStartedPulling="2026-01-26 16:13:11.638839312 +0000 UTC m=+2349.420719705" lastFinishedPulling="2026-01-26 16:13:12.274009303 +0000 UTC m=+2350.055889696" observedRunningTime="2026-01-26 16:13:13.573968267 +0000 UTC m=+2351.355848660" watchObservedRunningTime="2026-01-26 16:13:13.591179285 +0000 UTC m=+2351.373059678"
Jan 26 16:13:16 crc kubenswrapper[4896]: I0126 16:13:16.032915 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-gf8rp"]
Jan 26 16:13:16 crc kubenswrapper[4896]: I0126 16:13:16.046547 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-gf8rp"]
Jan 26 16:13:16 crc kubenswrapper[4896]: I0126 16:13:16.774440 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ba1a60c-8108-4ef6-a04c-c30d77f58d51" path="/var/lib/kubelet/pods/2ba1a60c-8108-4ef6-a04c-c30d77f58d51/volumes"
Jan 26 16:13:18 crc kubenswrapper[4896]: I0126 16:13:18.027844 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-30c4-account-create-update-s2cd2"]
Jan 26 16:13:18 crc kubenswrapper[4896]: I0126 16:13:18.039336 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-30c4-account-create-update-s2cd2"]
Jan 26 16:13:18 crc kubenswrapper[4896]: I0126 16:13:18.773922 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9ce9a3a-5be5-4fa9-85ce-530c8f8cb801" path="/var/lib/kubelet/pods/f9ce9a3a-5be5-4fa9-85ce-530c8f8cb801/volumes"
Jan 26 16:13:18 crc kubenswrapper[4896]: I0126 16:13:18.813464 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 16:13:18 crc kubenswrapper[4896]: I0126 16:13:18.813540 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 16:13:24 crc kubenswrapper[4896]: I0126 16:13:24.051690 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-g22d4"]
Jan 26 16:13:24 crc kubenswrapper[4896]: I0126 16:13:24.063619 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-g22d4"]
Jan 26 16:13:24 crc kubenswrapper[4896]: I0126 16:13:24.771762 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab180a60-67ff-4295-8120-a9abca520ee8" path="/var/lib/kubelet/pods/ab180a60-67ff-4295-8120-a9abca520ee8/volumes"
Jan 26 16:13:30 crc kubenswrapper[4896]: I0126 16:13:30.036024 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4r45x"]
Jan 26 16:13:30 crc kubenswrapper[4896]: I0126 16:13:30.047927 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4r45x"]
Jan 26 16:13:30 crc kubenswrapper[4896]: I0126 16:13:30.778371 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68a4442a-a1a0-4827-bba5-7c8a3ea1e80a" path="/var/lib/kubelet/pods/68a4442a-a1a0-4827-bba5-7c8a3ea1e80a/volumes"
Jan 26 16:13:39 crc kubenswrapper[4896]: I0126 16:13:39.992856 4896 scope.go:117] "RemoveContainer" containerID="85efda1bab1bcebe3dde1ed1984a3d6aaf3b8b090dadceebcc9561c50676df3c"
Jan 26 16:13:40 crc kubenswrapper[4896]: I0126 16:13:40.028281 4896 scope.go:117] "RemoveContainer" containerID="02260fe602edd838d509d51a7a0433b3c4ad3f1cc6d647aaa436ca5960419349"
Jan 26 16:13:40 crc kubenswrapper[4896]: I0126 16:13:40.092387 4896 scope.go:117] "RemoveContainer" containerID="d5de8a82fdf3faccd6e468c25242c16e1dbc155fd5c98b21006b7d646a92cd38"
Jan 26 16:13:40 crc kubenswrapper[4896]: I0126 16:13:40.148687 4896 scope.go:117] "RemoveContainer" containerID="2a965a7e2ded6a062041eef6de0aee5fd40607e569ed099149be39e866fc5dff"
Jan 26 16:13:40 crc kubenswrapper[4896]: I0126 16:13:40.233898 4896 scope.go:117] "RemoveContainer" containerID="044689026bb29f35af87ff22dcb3b205ef9ab1cd408d4aa40301a391b4d6aa16"
Jan 26 16:13:48 crc kubenswrapper[4896]: I0126 16:13:48.813833 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 16:13:48 crc kubenswrapper[4896]: I0126 16:13:48.814317 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 16:14:14 crc kubenswrapper[4896]: I0126 16:14:14.052683 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-bg2nz"]
Jan 26 16:14:14 crc kubenswrapper[4896]: I0126 16:14:14.068751 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-bg2nz"]
Jan 26 16:14:14 crc kubenswrapper[4896]: I0126 16:14:14.819820 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a3debd8-ed65-47af-a900-21fd61003d8b" path="/var/lib/kubelet/pods/9a3debd8-ed65-47af-a900-21fd61003d8b/volumes"
Jan 26 16:14:18 crc kubenswrapper[4896]: I0126 16:14:18.813933 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 16:14:18 crc kubenswrapper[4896]: I0126 16:14:18.814603 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 16:14:18 crc kubenswrapper[4896]: I0126 16:14:18.814664 4896 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw"
Jan 26 16:14:18 crc kubenswrapper[4896]: I0126 16:14:18.816053 4896 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0d1274f7e5735274b6d5b903bed2daa59306320464a2682c9af9bb84c5aace86"} pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 16:14:18 crc kubenswrapper[4896]: I0126 16:14:18.816115 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" containerID="cri-o://0d1274f7e5735274b6d5b903bed2daa59306320464a2682c9af9bb84c5aace86" gracePeriod=600
Jan 26 16:14:18 crc kubenswrapper[4896]: E0126 16:14:18.945173 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 16:14:19 crc kubenswrapper[4896]: I0126 16:14:19.283279 4896 generic.go:334] "Generic (PLEG): container finished" podID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerID="0d1274f7e5735274b6d5b903bed2daa59306320464a2682c9af9bb84c5aace86" exitCode=0
Jan 26 16:14:19 crc kubenswrapper[4896]: I0126 16:14:19.283349 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerDied","Data":"0d1274f7e5735274b6d5b903bed2daa59306320464a2682c9af9bb84c5aace86"}
Jan 26 16:14:19 crc kubenswrapper[4896]: I0126 16:14:19.283398 4896 scope.go:117] "RemoveContainer" containerID="34018192ce9be7ec2fb8dce54a3f8597501bed5661ca078c83367c7d8b68b65e"
Jan 26 16:14:19 crc kubenswrapper[4896]: I0126 16:14:19.284334 4896 scope.go:117] "RemoveContainer" containerID="0d1274f7e5735274b6d5b903bed2daa59306320464a2682c9af9bb84c5aace86"
Jan 26 16:14:19 crc kubenswrapper[4896]: E0126 16:14:19.284778 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 16:14:28 crc kubenswrapper[4896]: I0126 16:14:28.385700 4896 generic.go:334] "Generic (PLEG): container finished" podID="b80b385e-edbf-441a-af52-a5a03f29d78c" containerID="2c86fad0e22d67a0a6203fa709e6fc710cc61112baa0d389bcf6ef8d337335a8" exitCode=0
Jan 26 16:14:28 crc kubenswrapper[4896]: I0126 16:14:28.385801 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rjz7r" event={"ID":"b80b385e-edbf-441a-af52-a5a03f29d78c","Type":"ContainerDied","Data":"2c86fad0e22d67a0a6203fa709e6fc710cc61112baa0d389bcf6ef8d337335a8"}
Jan 26 16:14:30 crc kubenswrapper[4896]: I0126 16:14:30.156160 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rjz7r"
Jan 26 16:14:30 crc kubenswrapper[4896]: I0126 16:14:30.209216 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxltx\" (UniqueName: \"kubernetes.io/projected/b80b385e-edbf-441a-af52-a5a03f29d78c-kube-api-access-cxltx\") pod \"b80b385e-edbf-441a-af52-a5a03f29d78c\" (UID: \"b80b385e-edbf-441a-af52-a5a03f29d78c\") "
Jan 26 16:14:30 crc kubenswrapper[4896]: I0126 16:14:30.209314 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b80b385e-edbf-441a-af52-a5a03f29d78c-inventory\") pod \"b80b385e-edbf-441a-af52-a5a03f29d78c\" (UID: \"b80b385e-edbf-441a-af52-a5a03f29d78c\") "
Jan 26 16:14:30 crc kubenswrapper[4896]: I0126 16:14:30.209527 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b80b385e-edbf-441a-af52-a5a03f29d78c-ssh-key-openstack-edpm-ipam\") pod \"b80b385e-edbf-441a-af52-a5a03f29d78c\" (UID: \"b80b385e-edbf-441a-af52-a5a03f29d78c\") "
Jan 26 16:14:30 crc kubenswrapper[4896]: I0126 16:14:30.244954 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b80b385e-edbf-441a-af52-a5a03f29d78c-kube-api-access-cxltx" (OuterVolumeSpecName: "kube-api-access-cxltx") pod "b80b385e-edbf-441a-af52-a5a03f29d78c" (UID: "b80b385e-edbf-441a-af52-a5a03f29d78c"). InnerVolumeSpecName "kube-api-access-cxltx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:14:30 crc kubenswrapper[4896]: I0126 16:14:30.253493 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b80b385e-edbf-441a-af52-a5a03f29d78c-inventory" (OuterVolumeSpecName: "inventory") pod "b80b385e-edbf-441a-af52-a5a03f29d78c" (UID: "b80b385e-edbf-441a-af52-a5a03f29d78c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:14:30 crc kubenswrapper[4896]: I0126 16:14:30.262424 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b80b385e-edbf-441a-af52-a5a03f29d78c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b80b385e-edbf-441a-af52-a5a03f29d78c" (UID: "b80b385e-edbf-441a-af52-a5a03f29d78c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:14:30 crc kubenswrapper[4896]: I0126 16:14:30.312263 4896 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b80b385e-edbf-441a-af52-a5a03f29d78c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 26 16:14:30 crc kubenswrapper[4896]: I0126 16:14:30.312302 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxltx\" (UniqueName: \"kubernetes.io/projected/b80b385e-edbf-441a-af52-a5a03f29d78c-kube-api-access-cxltx\") on node \"crc\" DevicePath \"\""
Jan 26 16:14:30 crc kubenswrapper[4896]: I0126 16:14:30.312317 4896 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b80b385e-edbf-441a-af52-a5a03f29d78c-inventory\") on node \"crc\" DevicePath \"\""
Jan 26 16:14:30 crc kubenswrapper[4896]: I0126 16:14:30.412117 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rjz7r" event={"ID":"b80b385e-edbf-441a-af52-a5a03f29d78c","Type":"ContainerDied","Data":"ac004a1e6629879e6fa639887b43ff72bec76704f337ed5d51b6dcca0b6a6f58"}
Jan 26 16:14:30 crc kubenswrapper[4896]: I0126 16:14:30.412165 4896 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rjz7r" Jan 26 16:14:30 crc kubenswrapper[4896]: I0126 16:14:30.412169 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac004a1e6629879e6fa639887b43ff72bec76704f337ed5d51b6dcca0b6a6f58" Jan 26 16:14:30 crc kubenswrapper[4896]: I0126 16:14:30.508795 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rwtj7"] Jan 26 16:14:30 crc kubenswrapper[4896]: E0126 16:14:30.509344 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b80b385e-edbf-441a-af52-a5a03f29d78c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 26 16:14:30 crc kubenswrapper[4896]: I0126 16:14:30.509370 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="b80b385e-edbf-441a-af52-a5a03f29d78c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 26 16:14:30 crc kubenswrapper[4896]: I0126 16:14:30.509677 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="b80b385e-edbf-441a-af52-a5a03f29d78c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 26 16:14:30 crc kubenswrapper[4896]: I0126 16:14:30.510729 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rwtj7" Jan 26 16:14:30 crc kubenswrapper[4896]: I0126 16:14:30.514697 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:14:30 crc kubenswrapper[4896]: I0126 16:14:30.514724 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 16:14:30 crc kubenswrapper[4896]: I0126 16:14:30.514993 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 16:14:30 crc kubenswrapper[4896]: I0126 16:14:30.515195 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-48n6x" Jan 26 16:14:30 crc kubenswrapper[4896]: I0126 16:14:30.521498 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rwtj7"] Jan 26 16:14:30 crc kubenswrapper[4896]: I0126 16:14:30.618269 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttwlt\" (UniqueName: \"kubernetes.io/projected/a5325724-2408-4ea5-a21b-7208a9d8a1c8-kube-api-access-ttwlt\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-rwtj7\" (UID: \"a5325724-2408-4ea5-a21b-7208a9d8a1c8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rwtj7" Jan 26 16:14:30 crc kubenswrapper[4896]: I0126 16:14:30.618490 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a5325724-2408-4ea5-a21b-7208a9d8a1c8-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-rwtj7\" (UID: \"a5325724-2408-4ea5-a21b-7208a9d8a1c8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rwtj7" Jan 26 
16:14:30 crc kubenswrapper[4896]: I0126 16:14:30.618538 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a5325724-2408-4ea5-a21b-7208a9d8a1c8-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-rwtj7\" (UID: \"a5325724-2408-4ea5-a21b-7208a9d8a1c8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rwtj7" Jan 26 16:14:30 crc kubenswrapper[4896]: I0126 16:14:30.721056 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a5325724-2408-4ea5-a21b-7208a9d8a1c8-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-rwtj7\" (UID: \"a5325724-2408-4ea5-a21b-7208a9d8a1c8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rwtj7" Jan 26 16:14:30 crc kubenswrapper[4896]: I0126 16:14:30.721148 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a5325724-2408-4ea5-a21b-7208a9d8a1c8-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-rwtj7\" (UID: \"a5325724-2408-4ea5-a21b-7208a9d8a1c8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rwtj7" Jan 26 16:14:30 crc kubenswrapper[4896]: I0126 16:14:30.721233 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttwlt\" (UniqueName: \"kubernetes.io/projected/a5325724-2408-4ea5-a21b-7208a9d8a1c8-kube-api-access-ttwlt\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-rwtj7\" (UID: \"a5325724-2408-4ea5-a21b-7208a9d8a1c8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rwtj7" Jan 26 16:14:30 crc kubenswrapper[4896]: I0126 16:14:30.727019 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/a5325724-2408-4ea5-a21b-7208a9d8a1c8-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-rwtj7\" (UID: \"a5325724-2408-4ea5-a21b-7208a9d8a1c8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rwtj7" Jan 26 16:14:30 crc kubenswrapper[4896]: I0126 16:14:30.853110 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a5325724-2408-4ea5-a21b-7208a9d8a1c8-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-rwtj7\" (UID: \"a5325724-2408-4ea5-a21b-7208a9d8a1c8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rwtj7" Jan 26 16:14:30 crc kubenswrapper[4896]: I0126 16:14:30.859176 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttwlt\" (UniqueName: \"kubernetes.io/projected/a5325724-2408-4ea5-a21b-7208a9d8a1c8-kube-api-access-ttwlt\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-rwtj7\" (UID: \"a5325724-2408-4ea5-a21b-7208a9d8a1c8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rwtj7" Jan 26 16:14:31 crc kubenswrapper[4896]: I0126 16:14:31.130560 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rwtj7" Jan 26 16:14:31 crc kubenswrapper[4896]: I0126 16:14:31.760978 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rwtj7"] Jan 26 16:14:32 crc kubenswrapper[4896]: I0126 16:14:32.438053 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rwtj7" event={"ID":"a5325724-2408-4ea5-a21b-7208a9d8a1c8","Type":"ContainerStarted","Data":"6c5610dcbe883adc1d01e4b945c2dfdf3dabc57615f110dc2b715d036100e1bf"} Jan 26 16:14:32 crc kubenswrapper[4896]: I0126 16:14:32.775926 4896 scope.go:117] "RemoveContainer" containerID="0d1274f7e5735274b6d5b903bed2daa59306320464a2682c9af9bb84c5aace86" Jan 26 16:14:32 crc kubenswrapper[4896]: E0126 16:14:32.776639 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:14:33 crc kubenswrapper[4896]: I0126 16:14:33.450483 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rwtj7" event={"ID":"a5325724-2408-4ea5-a21b-7208a9d8a1c8","Type":"ContainerStarted","Data":"37ff00c3eab145a267daae24db89738f0541919f750322c23fc3acf223832656"} Jan 26 16:14:38 crc kubenswrapper[4896]: I0126 16:14:38.509220 4896 generic.go:334] "Generic (PLEG): container finished" podID="a5325724-2408-4ea5-a21b-7208a9d8a1c8" containerID="37ff00c3eab145a267daae24db89738f0541919f750322c23fc3acf223832656" exitCode=0 Jan 26 16:14:38 crc kubenswrapper[4896]: I0126 16:14:38.509291 4896 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rwtj7" event={"ID":"a5325724-2408-4ea5-a21b-7208a9d8a1c8","Type":"ContainerDied","Data":"37ff00c3eab145a267daae24db89738f0541919f750322c23fc3acf223832656"} Jan 26 16:14:40 crc kubenswrapper[4896]: I0126 16:14:40.130948 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rwtj7" Jan 26 16:14:40 crc kubenswrapper[4896]: I0126 16:14:40.164714 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttwlt\" (UniqueName: \"kubernetes.io/projected/a5325724-2408-4ea5-a21b-7208a9d8a1c8-kube-api-access-ttwlt\") pod \"a5325724-2408-4ea5-a21b-7208a9d8a1c8\" (UID: \"a5325724-2408-4ea5-a21b-7208a9d8a1c8\") " Jan 26 16:14:40 crc kubenswrapper[4896]: I0126 16:14:40.164808 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a5325724-2408-4ea5-a21b-7208a9d8a1c8-ssh-key-openstack-edpm-ipam\") pod \"a5325724-2408-4ea5-a21b-7208a9d8a1c8\" (UID: \"a5325724-2408-4ea5-a21b-7208a9d8a1c8\") " Jan 26 16:14:40 crc kubenswrapper[4896]: I0126 16:14:40.164931 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a5325724-2408-4ea5-a21b-7208a9d8a1c8-inventory\") pod \"a5325724-2408-4ea5-a21b-7208a9d8a1c8\" (UID: \"a5325724-2408-4ea5-a21b-7208a9d8a1c8\") " Jan 26 16:14:40 crc kubenswrapper[4896]: I0126 16:14:40.171945 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5325724-2408-4ea5-a21b-7208a9d8a1c8-kube-api-access-ttwlt" (OuterVolumeSpecName: "kube-api-access-ttwlt") pod "a5325724-2408-4ea5-a21b-7208a9d8a1c8" (UID: "a5325724-2408-4ea5-a21b-7208a9d8a1c8"). InnerVolumeSpecName "kube-api-access-ttwlt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:14:40 crc kubenswrapper[4896]: E0126 16:14:40.209906 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5325724-2408-4ea5-a21b-7208a9d8a1c8-inventory podName:a5325724-2408-4ea5-a21b-7208a9d8a1c8 nodeName:}" failed. No retries permitted until 2026-01-26 16:14:40.709857836 +0000 UTC m=+2438.491738229 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "inventory" (UniqueName: "kubernetes.io/secret/a5325724-2408-4ea5-a21b-7208a9d8a1c8-inventory") pod "a5325724-2408-4ea5-a21b-7208a9d8a1c8" (UID: "a5325724-2408-4ea5-a21b-7208a9d8a1c8") : error deleting /var/lib/kubelet/pods/a5325724-2408-4ea5-a21b-7208a9d8a1c8/volume-subpaths: remove /var/lib/kubelet/pods/a5325724-2408-4ea5-a21b-7208a9d8a1c8/volume-subpaths: no such file or directory Jan 26 16:14:40 crc kubenswrapper[4896]: I0126 16:14:40.213705 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5325724-2408-4ea5-a21b-7208a9d8a1c8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a5325724-2408-4ea5-a21b-7208a9d8a1c8" (UID: "a5325724-2408-4ea5-a21b-7208a9d8a1c8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:14:40 crc kubenswrapper[4896]: I0126 16:14:40.268106 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttwlt\" (UniqueName: \"kubernetes.io/projected/a5325724-2408-4ea5-a21b-7208a9d8a1c8-kube-api-access-ttwlt\") on node \"crc\" DevicePath \"\"" Jan 26 16:14:40 crc kubenswrapper[4896]: I0126 16:14:40.268144 4896 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a5325724-2408-4ea5-a21b-7208a9d8a1c8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:14:40 crc kubenswrapper[4896]: I0126 16:14:40.397760 4896 scope.go:117] "RemoveContainer" containerID="cf297041519e868a88a3b452f177b093d699aa5e1b3b77b6837c3bee79bba189" Jan 26 16:14:40 crc kubenswrapper[4896]: I0126 16:14:40.558971 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rwtj7" event={"ID":"a5325724-2408-4ea5-a21b-7208a9d8a1c8","Type":"ContainerDied","Data":"6c5610dcbe883adc1d01e4b945c2dfdf3dabc57615f110dc2b715d036100e1bf"} Jan 26 16:14:40 crc kubenswrapper[4896]: I0126 16:14:40.559041 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rwtj7" Jan 26 16:14:40 crc kubenswrapper[4896]: I0126 16:14:40.559087 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c5610dcbe883adc1d01e4b945c2dfdf3dabc57615f110dc2b715d036100e1bf" Jan 26 16:14:40 crc kubenswrapper[4896]: I0126 16:14:40.881284 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a5325724-2408-4ea5-a21b-7208a9d8a1c8-inventory\") pod \"a5325724-2408-4ea5-a21b-7208a9d8a1c8\" (UID: \"a5325724-2408-4ea5-a21b-7208a9d8a1c8\") " Jan 26 16:14:40 crc kubenswrapper[4896]: I0126 16:14:40.893916 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5325724-2408-4ea5-a21b-7208a9d8a1c8-inventory" (OuterVolumeSpecName: "inventory") pod "a5325724-2408-4ea5-a21b-7208a9d8a1c8" (UID: "a5325724-2408-4ea5-a21b-7208a9d8a1c8"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:14:40 crc kubenswrapper[4896]: I0126 16:14:40.932353 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-2w2vj"] Jan 26 16:14:40 crc kubenswrapper[4896]: E0126 16:14:40.933091 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5325724-2408-4ea5-a21b-7208a9d8a1c8" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 26 16:14:40 crc kubenswrapper[4896]: I0126 16:14:40.933204 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5325724-2408-4ea5-a21b-7208a9d8a1c8" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 26 16:14:40 crc kubenswrapper[4896]: I0126 16:14:40.933523 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5325724-2408-4ea5-a21b-7208a9d8a1c8" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 26 16:14:40 crc kubenswrapper[4896]: I0126 16:14:40.934629 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2w2vj" Jan 26 16:14:40 crc kubenswrapper[4896]: I0126 16:14:40.970311 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-2w2vj"] Jan 26 16:14:40 crc kubenswrapper[4896]: I0126 16:14:40.991240 4896 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a5325724-2408-4ea5-a21b-7208a9d8a1c8-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 16:14:41 crc kubenswrapper[4896]: I0126 16:14:41.093339 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjhpj\" (UniqueName: \"kubernetes.io/projected/69f568f0-3460-4b8e-8ffa-1f73312e7696-kube-api-access-vjhpj\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2w2vj\" (UID: \"69f568f0-3460-4b8e-8ffa-1f73312e7696\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2w2vj" Jan 26 16:14:41 crc kubenswrapper[4896]: I0126 16:14:41.093647 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69f568f0-3460-4b8e-8ffa-1f73312e7696-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2w2vj\" (UID: \"69f568f0-3460-4b8e-8ffa-1f73312e7696\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2w2vj" Jan 26 16:14:41 crc kubenswrapper[4896]: I0126 16:14:41.093854 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/69f568f0-3460-4b8e-8ffa-1f73312e7696-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2w2vj\" (UID: \"69f568f0-3460-4b8e-8ffa-1f73312e7696\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2w2vj" Jan 26 16:14:41 crc kubenswrapper[4896]: I0126 16:14:41.196127 4896 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/69f568f0-3460-4b8e-8ffa-1f73312e7696-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2w2vj\" (UID: \"69f568f0-3460-4b8e-8ffa-1f73312e7696\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2w2vj" Jan 26 16:14:41 crc kubenswrapper[4896]: I0126 16:14:41.196314 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjhpj\" (UniqueName: \"kubernetes.io/projected/69f568f0-3460-4b8e-8ffa-1f73312e7696-kube-api-access-vjhpj\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2w2vj\" (UID: \"69f568f0-3460-4b8e-8ffa-1f73312e7696\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2w2vj" Jan 26 16:14:41 crc kubenswrapper[4896]: I0126 16:14:41.196401 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69f568f0-3460-4b8e-8ffa-1f73312e7696-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2w2vj\" (UID: \"69f568f0-3460-4b8e-8ffa-1f73312e7696\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2w2vj" Jan 26 16:14:41 crc kubenswrapper[4896]: I0126 16:14:41.201841 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69f568f0-3460-4b8e-8ffa-1f73312e7696-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2w2vj\" (UID: \"69f568f0-3460-4b8e-8ffa-1f73312e7696\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2w2vj" Jan 26 16:14:41 crc kubenswrapper[4896]: I0126 16:14:41.210230 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/69f568f0-3460-4b8e-8ffa-1f73312e7696-ssh-key-openstack-edpm-ipam\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-2w2vj\" (UID: \"69f568f0-3460-4b8e-8ffa-1f73312e7696\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2w2vj" Jan 26 16:14:41 crc kubenswrapper[4896]: I0126 16:14:41.218061 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjhpj\" (UniqueName: \"kubernetes.io/projected/69f568f0-3460-4b8e-8ffa-1f73312e7696-kube-api-access-vjhpj\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2w2vj\" (UID: \"69f568f0-3460-4b8e-8ffa-1f73312e7696\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2w2vj" Jan 26 16:14:41 crc kubenswrapper[4896]: I0126 16:14:41.321888 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2w2vj" Jan 26 16:14:42 crc kubenswrapper[4896]: I0126 16:14:42.036843 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-2w2vj"] Jan 26 16:14:42 crc kubenswrapper[4896]: I0126 16:14:42.596683 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2w2vj" event={"ID":"69f568f0-3460-4b8e-8ffa-1f73312e7696","Type":"ContainerStarted","Data":"d9f643552e42505dfe623d05245142c3ea862b1ea340c66dbfdd6814e357cd4e"} Jan 26 16:14:43 crc kubenswrapper[4896]: I0126 16:14:43.611056 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2w2vj" event={"ID":"69f568f0-3460-4b8e-8ffa-1f73312e7696","Type":"ContainerStarted","Data":"2e1265f6833947ee388cf83a22b04f37c4e04734b8d3f62e82c103170cd93679"} Jan 26 16:14:43 crc kubenswrapper[4896]: I0126 16:14:43.633916 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2w2vj" podStartSLOduration=2.5071256760000002 podStartE2EDuration="3.633883209s" 
podCreationTimestamp="2026-01-26 16:14:40 +0000 UTC" firstStartedPulling="2026-01-26 16:14:42.037886254 +0000 UTC m=+2439.819766647" lastFinishedPulling="2026-01-26 16:14:43.164643787 +0000 UTC m=+2440.946524180" observedRunningTime="2026-01-26 16:14:43.63141865 +0000 UTC m=+2441.413299043" watchObservedRunningTime="2026-01-26 16:14:43.633883209 +0000 UTC m=+2441.415763602" Jan 26 16:14:46 crc kubenswrapper[4896]: I0126 16:14:46.759905 4896 scope.go:117] "RemoveContainer" containerID="0d1274f7e5735274b6d5b903bed2daa59306320464a2682c9af9bb84c5aace86" Jan 26 16:14:46 crc kubenswrapper[4896]: E0126 16:14:46.761964 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:14:58 crc kubenswrapper[4896]: I0126 16:14:58.760162 4896 scope.go:117] "RemoveContainer" containerID="0d1274f7e5735274b6d5b903bed2daa59306320464a2682c9af9bb84c5aace86" Jan 26 16:14:58 crc kubenswrapper[4896]: E0126 16:14:58.760987 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:15:00 crc kubenswrapper[4896]: I0126 16:15:00.168751 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490735-cs5ck"] Jan 26 16:15:00 crc kubenswrapper[4896]: I0126 16:15:00.170767 4896 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-cs5ck" Jan 26 16:15:00 crc kubenswrapper[4896]: I0126 16:15:00.175058 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 16:15:00 crc kubenswrapper[4896]: I0126 16:15:00.175258 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 16:15:00 crc kubenswrapper[4896]: I0126 16:15:00.181392 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490735-cs5ck"] Jan 26 16:15:00 crc kubenswrapper[4896]: I0126 16:15:00.319491 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45bld\" (UniqueName: \"kubernetes.io/projected/07f8abe2-9470-4da3-9a2e-b0d73355d416-kube-api-access-45bld\") pod \"collect-profiles-29490735-cs5ck\" (UID: \"07f8abe2-9470-4da3-9a2e-b0d73355d416\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-cs5ck" Jan 26 16:15:00 crc kubenswrapper[4896]: I0126 16:15:00.319534 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/07f8abe2-9470-4da3-9a2e-b0d73355d416-config-volume\") pod \"collect-profiles-29490735-cs5ck\" (UID: \"07f8abe2-9470-4da3-9a2e-b0d73355d416\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-cs5ck" Jan 26 16:15:00 crc kubenswrapper[4896]: I0126 16:15:00.320426 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/07f8abe2-9470-4da3-9a2e-b0d73355d416-secret-volume\") pod \"collect-profiles-29490735-cs5ck\" (UID: \"07f8abe2-9470-4da3-9a2e-b0d73355d416\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-cs5ck"
Jan 26 16:15:00 crc kubenswrapper[4896]: I0126 16:15:00.422407 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45bld\" (UniqueName: \"kubernetes.io/projected/07f8abe2-9470-4da3-9a2e-b0d73355d416-kube-api-access-45bld\") pod \"collect-profiles-29490735-cs5ck\" (UID: \"07f8abe2-9470-4da3-9a2e-b0d73355d416\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-cs5ck"
Jan 26 16:15:00 crc kubenswrapper[4896]: I0126 16:15:00.422474 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/07f8abe2-9470-4da3-9a2e-b0d73355d416-config-volume\") pod \"collect-profiles-29490735-cs5ck\" (UID: \"07f8abe2-9470-4da3-9a2e-b0d73355d416\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-cs5ck"
Jan 26 16:15:00 crc kubenswrapper[4896]: I0126 16:15:00.422512 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/07f8abe2-9470-4da3-9a2e-b0d73355d416-secret-volume\") pod \"collect-profiles-29490735-cs5ck\" (UID: \"07f8abe2-9470-4da3-9a2e-b0d73355d416\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-cs5ck"
Jan 26 16:15:00 crc kubenswrapper[4896]: I0126 16:15:00.423471 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/07f8abe2-9470-4da3-9a2e-b0d73355d416-config-volume\") pod \"collect-profiles-29490735-cs5ck\" (UID: \"07f8abe2-9470-4da3-9a2e-b0d73355d416\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-cs5ck"
Jan 26 16:15:00 crc kubenswrapper[4896]: I0126 16:15:00.430260 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/07f8abe2-9470-4da3-9a2e-b0d73355d416-secret-volume\") pod \"collect-profiles-29490735-cs5ck\" (UID: \"07f8abe2-9470-4da3-9a2e-b0d73355d416\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-cs5ck"
Jan 26 16:15:00 crc kubenswrapper[4896]: I0126 16:15:00.439696 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45bld\" (UniqueName: \"kubernetes.io/projected/07f8abe2-9470-4da3-9a2e-b0d73355d416-kube-api-access-45bld\") pod \"collect-profiles-29490735-cs5ck\" (UID: \"07f8abe2-9470-4da3-9a2e-b0d73355d416\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-cs5ck"
Jan 26 16:15:00 crc kubenswrapper[4896]: I0126 16:15:00.497206 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-cs5ck"
Jan 26 16:15:01 crc kubenswrapper[4896]: I0126 16:15:01.062713 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490735-cs5ck"]
Jan 26 16:15:01 crc kubenswrapper[4896]: W0126 16:15:01.067937 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07f8abe2_9470_4da3_9a2e_b0d73355d416.slice/crio-270135d0f79f765c7d9f3341341a85d86fab89e33ef1525dd25102458f8538dd WatchSource:0}: Error finding container 270135d0f79f765c7d9f3341341a85d86fab89e33ef1525dd25102458f8538dd: Status 404 returned error can't find the container with id 270135d0f79f765c7d9f3341341a85d86fab89e33ef1525dd25102458f8538dd
Jan 26 16:15:01 crc kubenswrapper[4896]: I0126 16:15:01.139339 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-cs5ck" event={"ID":"07f8abe2-9470-4da3-9a2e-b0d73355d416","Type":"ContainerStarted","Data":"270135d0f79f765c7d9f3341341a85d86fab89e33ef1525dd25102458f8538dd"}
Jan 26 16:15:02 crc kubenswrapper[4896]: I0126 16:15:02.153601 4896 generic.go:334] "Generic (PLEG): container finished" podID="07f8abe2-9470-4da3-9a2e-b0d73355d416" containerID="c2a7b37c49a1c7d9a1de5c402ecba7c2fb9d975ab723abae6ad0687616eb36aa" exitCode=0
Jan 26 16:15:02 crc kubenswrapper[4896]: I0126 16:15:02.153888 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-cs5ck" event={"ID":"07f8abe2-9470-4da3-9a2e-b0d73355d416","Type":"ContainerDied","Data":"c2a7b37c49a1c7d9a1de5c402ecba7c2fb9d975ab723abae6ad0687616eb36aa"}
Jan 26 16:15:03 crc kubenswrapper[4896]: I0126 16:15:03.673017 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-cs5ck"
Jan 26 16:15:03 crc kubenswrapper[4896]: I0126 16:15:03.786521 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45bld\" (UniqueName: \"kubernetes.io/projected/07f8abe2-9470-4da3-9a2e-b0d73355d416-kube-api-access-45bld\") pod \"07f8abe2-9470-4da3-9a2e-b0d73355d416\" (UID: \"07f8abe2-9470-4da3-9a2e-b0d73355d416\") "
Jan 26 16:15:03 crc kubenswrapper[4896]: I0126 16:15:03.786653 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/07f8abe2-9470-4da3-9a2e-b0d73355d416-config-volume\") pod \"07f8abe2-9470-4da3-9a2e-b0d73355d416\" (UID: \"07f8abe2-9470-4da3-9a2e-b0d73355d416\") "
Jan 26 16:15:03 crc kubenswrapper[4896]: I0126 16:15:03.786703 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/07f8abe2-9470-4da3-9a2e-b0d73355d416-secret-volume\") pod \"07f8abe2-9470-4da3-9a2e-b0d73355d416\" (UID: \"07f8abe2-9470-4da3-9a2e-b0d73355d416\") "
Jan 26 16:15:03 crc kubenswrapper[4896]: I0126 16:15:03.787342 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07f8abe2-9470-4da3-9a2e-b0d73355d416-config-volume" (OuterVolumeSpecName: "config-volume") pod "07f8abe2-9470-4da3-9a2e-b0d73355d416" (UID: "07f8abe2-9470-4da3-9a2e-b0d73355d416"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 16:15:03 crc kubenswrapper[4896]: I0126 16:15:03.787510 4896 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/07f8abe2-9470-4da3-9a2e-b0d73355d416-config-volume\") on node \"crc\" DevicePath \"\""
Jan 26 16:15:03 crc kubenswrapper[4896]: I0126 16:15:03.792861 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07f8abe2-9470-4da3-9a2e-b0d73355d416-kube-api-access-45bld" (OuterVolumeSpecName: "kube-api-access-45bld") pod "07f8abe2-9470-4da3-9a2e-b0d73355d416" (UID: "07f8abe2-9470-4da3-9a2e-b0d73355d416"). InnerVolumeSpecName "kube-api-access-45bld". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:15:03 crc kubenswrapper[4896]: I0126 16:15:03.796715 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07f8abe2-9470-4da3-9a2e-b0d73355d416-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "07f8abe2-9470-4da3-9a2e-b0d73355d416" (UID: "07f8abe2-9470-4da3-9a2e-b0d73355d416"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:15:03 crc kubenswrapper[4896]: I0126 16:15:03.889699 4896 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/07f8abe2-9470-4da3-9a2e-b0d73355d416-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 26 16:15:03 crc kubenswrapper[4896]: I0126 16:15:03.889730 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45bld\" (UniqueName: \"kubernetes.io/projected/07f8abe2-9470-4da3-9a2e-b0d73355d416-kube-api-access-45bld\") on node \"crc\" DevicePath \"\""
Jan 26 16:15:04 crc kubenswrapper[4896]: I0126 16:15:04.201173 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-cs5ck" event={"ID":"07f8abe2-9470-4da3-9a2e-b0d73355d416","Type":"ContainerDied","Data":"270135d0f79f765c7d9f3341341a85d86fab89e33ef1525dd25102458f8538dd"}
Jan 26 16:15:04 crc kubenswrapper[4896]: I0126 16:15:04.201236 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="270135d0f79f765c7d9f3341341a85d86fab89e33ef1525dd25102458f8538dd"
Jan 26 16:15:04 crc kubenswrapper[4896]: I0126 16:15:04.201278 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-cs5ck"
Jan 26 16:15:04 crc kubenswrapper[4896]: I0126 16:15:04.772435 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490690-2zp5m"]
Jan 26 16:15:04 crc kubenswrapper[4896]: I0126 16:15:04.786048 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490690-2zp5m"]
Jan 26 16:15:06 crc kubenswrapper[4896]: I0126 16:15:06.778393 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="298f103b-bf7b-40db-ace2-2780e91fde2c" path="/var/lib/kubelet/pods/298f103b-bf7b-40db-ace2-2780e91fde2c/volumes"
Jan 26 16:15:09 crc kubenswrapper[4896]: I0126 16:15:09.853795 4896 scope.go:117] "RemoveContainer" containerID="0d1274f7e5735274b6d5b903bed2daa59306320464a2682c9af9bb84c5aace86"
Jan 26 16:15:09 crc kubenswrapper[4896]: E0126 16:15:09.854344 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 16:15:22 crc kubenswrapper[4896]: I0126 16:15:22.756443 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rrswn"]
Jan 26 16:15:22 crc kubenswrapper[4896]: E0126 16:15:22.757411 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07f8abe2-9470-4da3-9a2e-b0d73355d416" containerName="collect-profiles"
Jan 26 16:15:22 crc kubenswrapper[4896]: I0126 16:15:22.757423 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="07f8abe2-9470-4da3-9a2e-b0d73355d416" containerName="collect-profiles"
Jan 26 16:15:22 crc kubenswrapper[4896]: I0126 16:15:22.757682 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="07f8abe2-9470-4da3-9a2e-b0d73355d416" containerName="collect-profiles"
Jan 26 16:15:22 crc kubenswrapper[4896]: I0126 16:15:22.782291 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rrswn"
Jan 26 16:15:22 crc kubenswrapper[4896]: I0126 16:15:22.783426 4896 scope.go:117] "RemoveContainer" containerID="0d1274f7e5735274b6d5b903bed2daa59306320464a2682c9af9bb84c5aace86"
Jan 26 16:15:22 crc kubenswrapper[4896]: E0126 16:15:22.783871 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 16:15:22 crc kubenswrapper[4896]: I0126 16:15:22.806221 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rrswn"]
Jan 26 16:15:22 crc kubenswrapper[4896]: I0126 16:15:22.817112 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15718be9-96a4-4d2b-8753-de19de81148c-catalog-content\") pod \"community-operators-rrswn\" (UID: \"15718be9-96a4-4d2b-8753-de19de81148c\") " pod="openshift-marketplace/community-operators-rrswn"
Jan 26 16:15:22 crc kubenswrapper[4896]: I0126 16:15:22.817726 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15718be9-96a4-4d2b-8753-de19de81148c-utilities\") pod \"community-operators-rrswn\" (UID: \"15718be9-96a4-4d2b-8753-de19de81148c\") " pod="openshift-marketplace/community-operators-rrswn"
Jan 26 16:15:22 crc kubenswrapper[4896]: I0126 16:15:22.817793 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr4sn\" (UniqueName: \"kubernetes.io/projected/15718be9-96a4-4d2b-8753-de19de81148c-kube-api-access-mr4sn\") pod \"community-operators-rrswn\" (UID: \"15718be9-96a4-4d2b-8753-de19de81148c\") " pod="openshift-marketplace/community-operators-rrswn"
Jan 26 16:15:22 crc kubenswrapper[4896]: I0126 16:15:22.919394 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr4sn\" (UniqueName: \"kubernetes.io/projected/15718be9-96a4-4d2b-8753-de19de81148c-kube-api-access-mr4sn\") pod \"community-operators-rrswn\" (UID: \"15718be9-96a4-4d2b-8753-de19de81148c\") " pod="openshift-marketplace/community-operators-rrswn"
Jan 26 16:15:22 crc kubenswrapper[4896]: I0126 16:15:22.919564 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15718be9-96a4-4d2b-8753-de19de81148c-catalog-content\") pod \"community-operators-rrswn\" (UID: \"15718be9-96a4-4d2b-8753-de19de81148c\") " pod="openshift-marketplace/community-operators-rrswn"
Jan 26 16:15:22 crc kubenswrapper[4896]: I0126 16:15:22.919774 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15718be9-96a4-4d2b-8753-de19de81148c-utilities\") pod \"community-operators-rrswn\" (UID: \"15718be9-96a4-4d2b-8753-de19de81148c\") " pod="openshift-marketplace/community-operators-rrswn"
Jan 26 16:15:22 crc kubenswrapper[4896]: I0126 16:15:22.920355 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15718be9-96a4-4d2b-8753-de19de81148c-catalog-content\") pod \"community-operators-rrswn\" (UID: \"15718be9-96a4-4d2b-8753-de19de81148c\") " pod="openshift-marketplace/community-operators-rrswn"
Jan 26 16:15:22 crc kubenswrapper[4896]: I0126 16:15:22.920368 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15718be9-96a4-4d2b-8753-de19de81148c-utilities\") pod \"community-operators-rrswn\" (UID: \"15718be9-96a4-4d2b-8753-de19de81148c\") " pod="openshift-marketplace/community-operators-rrswn"
Jan 26 16:15:22 crc kubenswrapper[4896]: I0126 16:15:22.943694 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr4sn\" (UniqueName: \"kubernetes.io/projected/15718be9-96a4-4d2b-8753-de19de81148c-kube-api-access-mr4sn\") pod \"community-operators-rrswn\" (UID: \"15718be9-96a4-4d2b-8753-de19de81148c\") " pod="openshift-marketplace/community-operators-rrswn"
Jan 26 16:15:23 crc kubenswrapper[4896]: I0126 16:15:23.113921 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rrswn"
Jan 26 16:15:23 crc kubenswrapper[4896]: I0126 16:15:23.699690 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rrswn"]
Jan 26 16:15:24 crc kubenswrapper[4896]: I0126 16:15:24.551214 4896 generic.go:334] "Generic (PLEG): container finished" podID="69f568f0-3460-4b8e-8ffa-1f73312e7696" containerID="2e1265f6833947ee388cf83a22b04f37c4e04734b8d3f62e82c103170cd93679" exitCode=0
Jan 26 16:15:24 crc kubenswrapper[4896]: I0126 16:15:24.551307 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2w2vj" event={"ID":"69f568f0-3460-4b8e-8ffa-1f73312e7696","Type":"ContainerDied","Data":"2e1265f6833947ee388cf83a22b04f37c4e04734b8d3f62e82c103170cd93679"}
Jan 26 16:15:24 crc kubenswrapper[4896]: I0126 16:15:24.553356 4896 generic.go:334] "Generic (PLEG): container finished" podID="15718be9-96a4-4d2b-8753-de19de81148c" containerID="53d12133e25b0440da64873cfe49dc5e7c3b1cd4482818d79e30b4adafa5b75e" exitCode=0
Jan 26 16:15:24 crc kubenswrapper[4896]: I0126 16:15:24.553386 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rrswn" event={"ID":"15718be9-96a4-4d2b-8753-de19de81148c","Type":"ContainerDied","Data":"53d12133e25b0440da64873cfe49dc5e7c3b1cd4482818d79e30b4adafa5b75e"}
Jan 26 16:15:24 crc kubenswrapper[4896]: I0126 16:15:24.553411 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rrswn" event={"ID":"15718be9-96a4-4d2b-8753-de19de81148c","Type":"ContainerStarted","Data":"385fb16d42ad4d3cdfe3170620b894e39026862f33f2ec6cd388440e79af1512"}
Jan 26 16:15:26 crc kubenswrapper[4896]: I0126 16:15:26.092417 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2w2vj"
Jan 26 16:15:26 crc kubenswrapper[4896]: I0126 16:15:26.208675 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/69f568f0-3460-4b8e-8ffa-1f73312e7696-ssh-key-openstack-edpm-ipam\") pod \"69f568f0-3460-4b8e-8ffa-1f73312e7696\" (UID: \"69f568f0-3460-4b8e-8ffa-1f73312e7696\") "
Jan 26 16:15:26 crc kubenswrapper[4896]: I0126 16:15:26.208844 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjhpj\" (UniqueName: \"kubernetes.io/projected/69f568f0-3460-4b8e-8ffa-1f73312e7696-kube-api-access-vjhpj\") pod \"69f568f0-3460-4b8e-8ffa-1f73312e7696\" (UID: \"69f568f0-3460-4b8e-8ffa-1f73312e7696\") "
Jan 26 16:15:26 crc kubenswrapper[4896]: I0126 16:15:26.208959 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69f568f0-3460-4b8e-8ffa-1f73312e7696-inventory\") pod \"69f568f0-3460-4b8e-8ffa-1f73312e7696\" (UID: \"69f568f0-3460-4b8e-8ffa-1f73312e7696\") "
Jan 26 16:15:26 crc kubenswrapper[4896]: I0126 16:15:26.360500 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69f568f0-3460-4b8e-8ffa-1f73312e7696-kube-api-access-vjhpj" (OuterVolumeSpecName: "kube-api-access-vjhpj") pod "69f568f0-3460-4b8e-8ffa-1f73312e7696" (UID: "69f568f0-3460-4b8e-8ffa-1f73312e7696"). InnerVolumeSpecName "kube-api-access-vjhpj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:15:26 crc kubenswrapper[4896]: I0126 16:15:26.394356 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69f568f0-3460-4b8e-8ffa-1f73312e7696-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "69f568f0-3460-4b8e-8ffa-1f73312e7696" (UID: "69f568f0-3460-4b8e-8ffa-1f73312e7696"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:15:26 crc kubenswrapper[4896]: I0126 16:15:26.395842 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69f568f0-3460-4b8e-8ffa-1f73312e7696-inventory" (OuterVolumeSpecName: "inventory") pod "69f568f0-3460-4b8e-8ffa-1f73312e7696" (UID: "69f568f0-3460-4b8e-8ffa-1f73312e7696"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:15:26 crc kubenswrapper[4896]: I0126 16:15:26.417062 4896 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/69f568f0-3460-4b8e-8ffa-1f73312e7696-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 26 16:15:26 crc kubenswrapper[4896]: I0126 16:15:26.417146 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjhpj\" (UniqueName: \"kubernetes.io/projected/69f568f0-3460-4b8e-8ffa-1f73312e7696-kube-api-access-vjhpj\") on node \"crc\" DevicePath \"\""
Jan 26 16:15:26 crc kubenswrapper[4896]: I0126 16:15:26.417166 4896 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69f568f0-3460-4b8e-8ffa-1f73312e7696-inventory\") on node \"crc\" DevicePath \"\""
Jan 26 16:15:26 crc kubenswrapper[4896]: I0126 16:15:26.576804 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2w2vj" event={"ID":"69f568f0-3460-4b8e-8ffa-1f73312e7696","Type":"ContainerDied","Data":"d9f643552e42505dfe623d05245142c3ea862b1ea340c66dbfdd6814e357cd4e"}
Jan 26 16:15:26 crc kubenswrapper[4896]: I0126 16:15:26.577311 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9f643552e42505dfe623d05245142c3ea862b1ea340c66dbfdd6814e357cd4e"
Jan 26 16:15:26 crc kubenswrapper[4896]: I0126 16:15:26.576822 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2w2vj"
Jan 26 16:15:26 crc kubenswrapper[4896]: I0126 16:15:26.579468 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rrswn" event={"ID":"15718be9-96a4-4d2b-8753-de19de81148c","Type":"ContainerStarted","Data":"c6408c5054ce3a2a798df7d433ca562c969424519aefb6fff56e16fec200480b"}
Jan 26 16:15:26 crc kubenswrapper[4896]: I0126 16:15:26.669716 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gp6cs"]
Jan 26 16:15:26 crc kubenswrapper[4896]: E0126 16:15:26.670424 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69f568f0-3460-4b8e-8ffa-1f73312e7696" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Jan 26 16:15:26 crc kubenswrapper[4896]: I0126 16:15:26.670455 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="69f568f0-3460-4b8e-8ffa-1f73312e7696" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Jan 26 16:15:26 crc kubenswrapper[4896]: I0126 16:15:26.670823 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="69f568f0-3460-4b8e-8ffa-1f73312e7696" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Jan 26 16:15:26 crc kubenswrapper[4896]: I0126 16:15:26.672023 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gp6cs"
Jan 26 16:15:26 crc kubenswrapper[4896]: I0126 16:15:26.676414 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 26 16:15:26 crc kubenswrapper[4896]: I0126 16:15:26.676834 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-48n6x"
Jan 26 16:15:26 crc kubenswrapper[4896]: I0126 16:15:26.677056 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 26 16:15:26 crc kubenswrapper[4896]: I0126 16:15:26.677225 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 26 16:15:26 crc kubenswrapper[4896]: I0126 16:15:26.707170 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gp6cs"]
Jan 26 16:15:26 crc kubenswrapper[4896]: I0126 16:15:26.724830 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5d99662-063d-4731-8c5d-a805dc69e348-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-gp6cs\" (UID: \"b5d99662-063d-4731-8c5d-a805dc69e348\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gp6cs"
Jan 26 16:15:26 crc kubenswrapper[4896]: I0126 16:15:26.724921 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5d99662-063d-4731-8c5d-a805dc69e348-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-gp6cs\" (UID: \"b5d99662-063d-4731-8c5d-a805dc69e348\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gp6cs"
Jan 26 16:15:26 crc kubenswrapper[4896]: I0126 16:15:26.724966 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzjsp\" (UniqueName: \"kubernetes.io/projected/b5d99662-063d-4731-8c5d-a805dc69e348-kube-api-access-mzjsp\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-gp6cs\" (UID: \"b5d99662-063d-4731-8c5d-a805dc69e348\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gp6cs"
Jan 26 16:15:26 crc kubenswrapper[4896]: I0126 16:15:26.830181 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5d99662-063d-4731-8c5d-a805dc69e348-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-gp6cs\" (UID: \"b5d99662-063d-4731-8c5d-a805dc69e348\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gp6cs"
Jan 26 16:15:26 crc kubenswrapper[4896]: I0126 16:15:26.830366 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5d99662-063d-4731-8c5d-a805dc69e348-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-gp6cs\" (UID: \"b5d99662-063d-4731-8c5d-a805dc69e348\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gp6cs"
Jan 26 16:15:26 crc kubenswrapper[4896]: I0126 16:15:26.830482 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzjsp\" (UniqueName: \"kubernetes.io/projected/b5d99662-063d-4731-8c5d-a805dc69e348-kube-api-access-mzjsp\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-gp6cs\" (UID: \"b5d99662-063d-4731-8c5d-a805dc69e348\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gp6cs"
Jan 26 16:15:26 crc kubenswrapper[4896]: I0126 16:15:26.837385 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5d99662-063d-4731-8c5d-a805dc69e348-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-gp6cs\" (UID: \"b5d99662-063d-4731-8c5d-a805dc69e348\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gp6cs"
Jan 26 16:15:26 crc kubenswrapper[4896]: I0126 16:15:26.859045 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5d99662-063d-4731-8c5d-a805dc69e348-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-gp6cs\" (UID: \"b5d99662-063d-4731-8c5d-a805dc69e348\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gp6cs"
Jan 26 16:15:26 crc kubenswrapper[4896]: I0126 16:15:26.863808 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzjsp\" (UniqueName: \"kubernetes.io/projected/b5d99662-063d-4731-8c5d-a805dc69e348-kube-api-access-mzjsp\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-gp6cs\" (UID: \"b5d99662-063d-4731-8c5d-a805dc69e348\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gp6cs"
Jan 26 16:15:26 crc kubenswrapper[4896]: I0126 16:15:26.998344 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gp6cs"
Jan 26 16:15:27 crc kubenswrapper[4896]: I0126 16:15:27.574876 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gp6cs"]
Jan 26 16:15:27 crc kubenswrapper[4896]: W0126 16:15:27.578991 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5d99662_063d_4731_8c5d_a805dc69e348.slice/crio-77ec8b832625da9f66e400f5680a4598ecdf43aaa2658c2f09a019206fdf8ecd WatchSource:0}: Error finding container 77ec8b832625da9f66e400f5680a4598ecdf43aaa2658c2f09a019206fdf8ecd: Status 404 returned error can't find the container with id 77ec8b832625da9f66e400f5680a4598ecdf43aaa2658c2f09a019206fdf8ecd
Jan 26 16:15:27 crc kubenswrapper[4896]: I0126 16:15:27.595416 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gp6cs" event={"ID":"b5d99662-063d-4731-8c5d-a805dc69e348","Type":"ContainerStarted","Data":"77ec8b832625da9f66e400f5680a4598ecdf43aaa2658c2f09a019206fdf8ecd"}
Jan 26 16:15:27 crc kubenswrapper[4896]: I0126 16:15:27.598411 4896 generic.go:334] "Generic (PLEG): container finished" podID="15718be9-96a4-4d2b-8753-de19de81148c" containerID="c6408c5054ce3a2a798df7d433ca562c969424519aefb6fff56e16fec200480b" exitCode=0
Jan 26 16:15:27 crc kubenswrapper[4896]: I0126 16:15:27.598455 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rrswn" event={"ID":"15718be9-96a4-4d2b-8753-de19de81148c","Type":"ContainerDied","Data":"c6408c5054ce3a2a798df7d433ca562c969424519aefb6fff56e16fec200480b"}
Jan 26 16:15:28 crc kubenswrapper[4896]: I0126 16:15:28.610542 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gp6cs" event={"ID":"b5d99662-063d-4731-8c5d-a805dc69e348","Type":"ContainerStarted","Data":"1c54bd7a0bc523ef8773d75305ca12f1f39e1bbff9b63b705a70ad6d90aea349"}
Jan 26 16:15:28 crc kubenswrapper[4896]: I0126 16:15:28.613949 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rrswn" event={"ID":"15718be9-96a4-4d2b-8753-de19de81148c","Type":"ContainerStarted","Data":"3d0651dfbbe20bbc445e03dd2007101b8f0a09c3d82175435897926ca953d498"}
Jan 26 16:15:28 crc kubenswrapper[4896]: I0126 16:15:28.634039 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gp6cs" podStartSLOduration=2.101342472 podStartE2EDuration="2.634012922s" podCreationTimestamp="2026-01-26 16:15:26 +0000 UTC" firstStartedPulling="2026-01-26 16:15:27.581983304 +0000 UTC m=+2485.363863697" lastFinishedPulling="2026-01-26 16:15:28.114653754 +0000 UTC m=+2485.896534147" observedRunningTime="2026-01-26 16:15:28.627312491 +0000 UTC m=+2486.409192884" watchObservedRunningTime="2026-01-26 16:15:28.634012922 +0000 UTC m=+2486.415893335"
Jan 26 16:15:28 crc kubenswrapper[4896]: I0126 16:15:28.658053 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rrswn" podStartSLOduration=3.03199941 podStartE2EDuration="6.658031594s" podCreationTimestamp="2026-01-26 16:15:22 +0000 UTC" firstStartedPulling="2026-01-26 16:15:24.555813688 +0000 UTC m=+2482.337694081" lastFinishedPulling="2026-01-26 16:15:28.181845872 +0000 UTC m=+2485.963726265" observedRunningTime="2026-01-26 16:15:28.652008419 +0000 UTC m=+2486.433888832" watchObservedRunningTime="2026-01-26 16:15:28.658031594 +0000 UTC m=+2486.439911987"
Jan 26 16:15:33 crc kubenswrapper[4896]: I0126 16:15:33.145191 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rrswn"
Jan 26 16:15:33 crc kubenswrapper[4896]: I0126 16:15:33.145765 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rrswn"
Jan 26 16:15:33 crc kubenswrapper[4896]: I0126 16:15:33.202028 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rrswn"
Jan 26 16:15:34 crc kubenswrapper[4896]: I0126 16:15:34.233437 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rrswn"
Jan 26 16:15:34 crc kubenswrapper[4896]: I0126 16:15:34.311305 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rrswn"]
Jan 26 16:15:35 crc kubenswrapper[4896]: I0126 16:15:35.765341 4896 scope.go:117] "RemoveContainer" containerID="0d1274f7e5735274b6d5b903bed2daa59306320464a2682c9af9bb84c5aace86"
Jan 26 16:15:35 crc kubenswrapper[4896]: E0126 16:15:35.766316 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 16:15:36 crc kubenswrapper[4896]: I0126 16:15:36.201147 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rrswn" podUID="15718be9-96a4-4d2b-8753-de19de81148c" containerName="registry-server" containerID="cri-o://3d0651dfbbe20bbc445e03dd2007101b8f0a09c3d82175435897926ca953d498" gracePeriod=2
Jan 26 16:15:36 crc kubenswrapper[4896]: I0126 16:15:36.695410 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rrswn"
Jan 26 16:15:36 crc kubenswrapper[4896]: I0126 16:15:36.836830 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15718be9-96a4-4d2b-8753-de19de81148c-utilities\") pod \"15718be9-96a4-4d2b-8753-de19de81148c\" (UID: \"15718be9-96a4-4d2b-8753-de19de81148c\") "
Jan 26 16:15:36 crc kubenswrapper[4896]: I0126 16:15:36.836876 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mr4sn\" (UniqueName: \"kubernetes.io/projected/15718be9-96a4-4d2b-8753-de19de81148c-kube-api-access-mr4sn\") pod \"15718be9-96a4-4d2b-8753-de19de81148c\" (UID: \"15718be9-96a4-4d2b-8753-de19de81148c\") "
Jan 26 16:15:36 crc kubenswrapper[4896]: I0126 16:15:36.837204 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15718be9-96a4-4d2b-8753-de19de81148c-catalog-content\") pod \"15718be9-96a4-4d2b-8753-de19de81148c\" (UID: \"15718be9-96a4-4d2b-8753-de19de81148c\") "
Jan 26 16:15:36 crc kubenswrapper[4896]: I0126 16:15:36.837847 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15718be9-96a4-4d2b-8753-de19de81148c-utilities" (OuterVolumeSpecName: "utilities") pod "15718be9-96a4-4d2b-8753-de19de81148c" (UID: "15718be9-96a4-4d2b-8753-de19de81148c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 16:15:36 crc kubenswrapper[4896]: I0126 16:15:36.846877 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15718be9-96a4-4d2b-8753-de19de81148c-kube-api-access-mr4sn" (OuterVolumeSpecName: "kube-api-access-mr4sn") pod "15718be9-96a4-4d2b-8753-de19de81148c" (UID: "15718be9-96a4-4d2b-8753-de19de81148c"). InnerVolumeSpecName "kube-api-access-mr4sn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:15:36 crc kubenswrapper[4896]: I0126 16:15:36.892945 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15718be9-96a4-4d2b-8753-de19de81148c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "15718be9-96a4-4d2b-8753-de19de81148c" (UID: "15718be9-96a4-4d2b-8753-de19de81148c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 16:15:36 crc kubenswrapper[4896]: I0126 16:15:36.942013 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mr4sn\" (UniqueName: \"kubernetes.io/projected/15718be9-96a4-4d2b-8753-de19de81148c-kube-api-access-mr4sn\") on node \"crc\" DevicePath \"\""
Jan 26 16:15:36 crc kubenswrapper[4896]: I0126 16:15:36.942053 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15718be9-96a4-4d2b-8753-de19de81148c-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 16:15:36 crc kubenswrapper[4896]: I0126 16:15:36.942062 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15718be9-96a4-4d2b-8753-de19de81148c-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 16:15:37 crc kubenswrapper[4896]: I0126 16:15:37.224020 4896 generic.go:334] "Generic (PLEG): container finished" podID="15718be9-96a4-4d2b-8753-de19de81148c" containerID="3d0651dfbbe20bbc445e03dd2007101b8f0a09c3d82175435897926ca953d498" exitCode=0
Jan 26 16:15:37 crc kubenswrapper[4896]: I0126 16:15:37.224083 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rrswn"
Jan 26 16:15:37 crc kubenswrapper[4896]: I0126 16:15:37.224100 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rrswn" event={"ID":"15718be9-96a4-4d2b-8753-de19de81148c","Type":"ContainerDied","Data":"3d0651dfbbe20bbc445e03dd2007101b8f0a09c3d82175435897926ca953d498"}
Jan 26 16:15:37 crc kubenswrapper[4896]: I0126 16:15:37.224150 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rrswn" event={"ID":"15718be9-96a4-4d2b-8753-de19de81148c","Type":"ContainerDied","Data":"385fb16d42ad4d3cdfe3170620b894e39026862f33f2ec6cd388440e79af1512"}
Jan 26 16:15:37 crc kubenswrapper[4896]: I0126 16:15:37.224174 4896 scope.go:117] "RemoveContainer" containerID="3d0651dfbbe20bbc445e03dd2007101b8f0a09c3d82175435897926ca953d498"
Jan 26 16:15:37 crc kubenswrapper[4896]: I0126 16:15:37.266725 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rrswn"]
Jan 26 16:15:37 crc kubenswrapper[4896]: I0126 16:15:37.278794 4896 scope.go:117] "RemoveContainer" containerID="c6408c5054ce3a2a798df7d433ca562c969424519aefb6fff56e16fec200480b"
Jan 26 16:15:37 crc kubenswrapper[4896]: I0126 16:15:37.279868 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rrswn"]
Jan 26 16:15:37 crc kubenswrapper[4896]: I0126 16:15:37.304645 4896 scope.go:117] "RemoveContainer" containerID="53d12133e25b0440da64873cfe49dc5e7c3b1cd4482818d79e30b4adafa5b75e"
Jan 26 16:15:37 crc kubenswrapper[4896]: I0126 16:15:37.353426 4896 scope.go:117] "RemoveContainer" containerID="3d0651dfbbe20bbc445e03dd2007101b8f0a09c3d82175435897926ca953d498"
Jan 26 16:15:37 crc kubenswrapper[4896]: E0126 16:15:37.354077 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d0651dfbbe20bbc445e03dd2007101b8f0a09c3d82175435897926ca953d498\": container with ID starting with 3d0651dfbbe20bbc445e03dd2007101b8f0a09c3d82175435897926ca953d498 not found: ID does not exist" containerID="3d0651dfbbe20bbc445e03dd2007101b8f0a09c3d82175435897926ca953d498"
Jan 26 16:15:37 crc kubenswrapper[4896]: I0126 16:15:37.354129 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d0651dfbbe20bbc445e03dd2007101b8f0a09c3d82175435897926ca953d498"} err="failed to get container status \"3d0651dfbbe20bbc445e03dd2007101b8f0a09c3d82175435897926ca953d498\": rpc error: code = NotFound desc = could not find container \"3d0651dfbbe20bbc445e03dd2007101b8f0a09c3d82175435897926ca953d498\": container with ID starting with 3d0651dfbbe20bbc445e03dd2007101b8f0a09c3d82175435897926ca953d498 not found: ID does not exist"
Jan 26 16:15:37 crc kubenswrapper[4896]: I0126 16:15:37.354164 4896 scope.go:117] "RemoveContainer" containerID="c6408c5054ce3a2a798df7d433ca562c969424519aefb6fff56e16fec200480b"
Jan 26 16:15:37 crc kubenswrapper[4896]: E0126 16:15:37.354563 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6408c5054ce3a2a798df7d433ca562c969424519aefb6fff56e16fec200480b\": container with ID starting with c6408c5054ce3a2a798df7d433ca562c969424519aefb6fff56e16fec200480b not found: ID does not exist" containerID="c6408c5054ce3a2a798df7d433ca562c969424519aefb6fff56e16fec200480b"
Jan 26 16:15:37 crc kubenswrapper[4896]: I0126 16:15:37.354613 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6408c5054ce3a2a798df7d433ca562c969424519aefb6fff56e16fec200480b"} err="failed to get container status \"c6408c5054ce3a2a798df7d433ca562c969424519aefb6fff56e16fec200480b\": rpc error: code = NotFound desc = could not find container \"c6408c5054ce3a2a798df7d433ca562c969424519aefb6fff56e16fec200480b\": container with ID
starting with c6408c5054ce3a2a798df7d433ca562c969424519aefb6fff56e16fec200480b not found: ID does not exist" Jan 26 16:15:37 crc kubenswrapper[4896]: I0126 16:15:37.354631 4896 scope.go:117] "RemoveContainer" containerID="53d12133e25b0440da64873cfe49dc5e7c3b1cd4482818d79e30b4adafa5b75e" Jan 26 16:15:37 crc kubenswrapper[4896]: E0126 16:15:37.355483 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53d12133e25b0440da64873cfe49dc5e7c3b1cd4482818d79e30b4adafa5b75e\": container with ID starting with 53d12133e25b0440da64873cfe49dc5e7c3b1cd4482818d79e30b4adafa5b75e not found: ID does not exist" containerID="53d12133e25b0440da64873cfe49dc5e7c3b1cd4482818d79e30b4adafa5b75e" Jan 26 16:15:37 crc kubenswrapper[4896]: I0126 16:15:37.355516 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53d12133e25b0440da64873cfe49dc5e7c3b1cd4482818d79e30b4adafa5b75e"} err="failed to get container status \"53d12133e25b0440da64873cfe49dc5e7c3b1cd4482818d79e30b4adafa5b75e\": rpc error: code = NotFound desc = could not find container \"53d12133e25b0440da64873cfe49dc5e7c3b1cd4482818d79e30b4adafa5b75e\": container with ID starting with 53d12133e25b0440da64873cfe49dc5e7c3b1cd4482818d79e30b4adafa5b75e not found: ID does not exist" Jan 26 16:15:38 crc kubenswrapper[4896]: I0126 16:15:38.778389 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15718be9-96a4-4d2b-8753-de19de81148c" path="/var/lib/kubelet/pods/15718be9-96a4-4d2b-8753-de19de81148c/volumes" Jan 26 16:15:40 crc kubenswrapper[4896]: I0126 16:15:40.928078 4896 scope.go:117] "RemoveContainer" containerID="dc06d91128c85ab2035f39222d157e79e096f8858b75878b1c1d81145392357a" Jan 26 16:15:46 crc kubenswrapper[4896]: I0126 16:15:46.760674 4896 scope.go:117] "RemoveContainer" containerID="0d1274f7e5735274b6d5b903bed2daa59306320464a2682c9af9bb84c5aace86" Jan 26 16:15:46 crc kubenswrapper[4896]: 
E0126 16:15:46.761984 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:15:57 crc kubenswrapper[4896]: I0126 16:15:57.759899 4896 scope.go:117] "RemoveContainer" containerID="0d1274f7e5735274b6d5b903bed2daa59306320464a2682c9af9bb84c5aace86" Jan 26 16:15:57 crc kubenswrapper[4896]: E0126 16:15:57.760759 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:16:11 crc kubenswrapper[4896]: I0126 16:16:11.760088 4896 scope.go:117] "RemoveContainer" containerID="0d1274f7e5735274b6d5b903bed2daa59306320464a2682c9af9bb84c5aace86" Jan 26 16:16:11 crc kubenswrapper[4896]: E0126 16:16:11.760861 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:16:21 crc kubenswrapper[4896]: I0126 16:16:21.783408 4896 generic.go:334] "Generic (PLEG): container finished" podID="b5d99662-063d-4731-8c5d-a805dc69e348" 
containerID="1c54bd7a0bc523ef8773d75305ca12f1f39e1bbff9b63b705a70ad6d90aea349" exitCode=0 Jan 26 16:16:21 crc kubenswrapper[4896]: I0126 16:16:21.783479 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gp6cs" event={"ID":"b5d99662-063d-4731-8c5d-a805dc69e348","Type":"ContainerDied","Data":"1c54bd7a0bc523ef8773d75305ca12f1f39e1bbff9b63b705a70ad6d90aea349"} Jan 26 16:16:23 crc kubenswrapper[4896]: I0126 16:16:23.803623 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gp6cs" event={"ID":"b5d99662-063d-4731-8c5d-a805dc69e348","Type":"ContainerDied","Data":"77ec8b832625da9f66e400f5680a4598ecdf43aaa2658c2f09a019206fdf8ecd"} Jan 26 16:16:23 crc kubenswrapper[4896]: I0126 16:16:23.804142 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77ec8b832625da9f66e400f5680a4598ecdf43aaa2658c2f09a019206fdf8ecd" Jan 26 16:16:23 crc kubenswrapper[4896]: I0126 16:16:23.811563 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gp6cs" Jan 26 16:16:23 crc kubenswrapper[4896]: I0126 16:16:23.963141 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzjsp\" (UniqueName: \"kubernetes.io/projected/b5d99662-063d-4731-8c5d-a805dc69e348-kube-api-access-mzjsp\") pod \"b5d99662-063d-4731-8c5d-a805dc69e348\" (UID: \"b5d99662-063d-4731-8c5d-a805dc69e348\") " Jan 26 16:16:23 crc kubenswrapper[4896]: I0126 16:16:23.963473 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5d99662-063d-4731-8c5d-a805dc69e348-inventory\") pod \"b5d99662-063d-4731-8c5d-a805dc69e348\" (UID: \"b5d99662-063d-4731-8c5d-a805dc69e348\") " Jan 26 16:16:23 crc kubenswrapper[4896]: I0126 16:16:23.963513 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5d99662-063d-4731-8c5d-a805dc69e348-ssh-key-openstack-edpm-ipam\") pod \"b5d99662-063d-4731-8c5d-a805dc69e348\" (UID: \"b5d99662-063d-4731-8c5d-a805dc69e348\") " Jan 26 16:16:23 crc kubenswrapper[4896]: I0126 16:16:23.970302 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5d99662-063d-4731-8c5d-a805dc69e348-kube-api-access-mzjsp" (OuterVolumeSpecName: "kube-api-access-mzjsp") pod "b5d99662-063d-4731-8c5d-a805dc69e348" (UID: "b5d99662-063d-4731-8c5d-a805dc69e348"). InnerVolumeSpecName "kube-api-access-mzjsp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:16:23 crc kubenswrapper[4896]: I0126 16:16:23.993475 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5d99662-063d-4731-8c5d-a805dc69e348-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b5d99662-063d-4731-8c5d-a805dc69e348" (UID: "b5d99662-063d-4731-8c5d-a805dc69e348"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:16:24 crc kubenswrapper[4896]: I0126 16:16:24.011885 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5d99662-063d-4731-8c5d-a805dc69e348-inventory" (OuterVolumeSpecName: "inventory") pod "b5d99662-063d-4731-8c5d-a805dc69e348" (UID: "b5d99662-063d-4731-8c5d-a805dc69e348"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:16:24 crc kubenswrapper[4896]: I0126 16:16:24.068509 4896 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5d99662-063d-4731-8c5d-a805dc69e348-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:16:24 crc kubenswrapper[4896]: I0126 16:16:24.068930 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzjsp\" (UniqueName: \"kubernetes.io/projected/b5d99662-063d-4731-8c5d-a805dc69e348-kube-api-access-mzjsp\") on node \"crc\" DevicePath \"\"" Jan 26 16:16:24 crc kubenswrapper[4896]: I0126 16:16:24.069046 4896 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5d99662-063d-4731-8c5d-a805dc69e348-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 16:16:24 crc kubenswrapper[4896]: I0126 16:16:24.816281 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gp6cs" Jan 26 16:16:24 crc kubenswrapper[4896]: I0126 16:16:24.947629 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-sf5rj"] Jan 26 16:16:24 crc kubenswrapper[4896]: E0126 16:16:24.948671 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15718be9-96a4-4d2b-8753-de19de81148c" containerName="registry-server" Jan 26 16:16:24 crc kubenswrapper[4896]: I0126 16:16:24.948690 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="15718be9-96a4-4d2b-8753-de19de81148c" containerName="registry-server" Jan 26 16:16:24 crc kubenswrapper[4896]: E0126 16:16:24.948738 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5d99662-063d-4731-8c5d-a805dc69e348" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 26 16:16:24 crc kubenswrapper[4896]: I0126 16:16:24.948747 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5d99662-063d-4731-8c5d-a805dc69e348" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 26 16:16:24 crc kubenswrapper[4896]: E0126 16:16:24.948772 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15718be9-96a4-4d2b-8753-de19de81148c" containerName="extract-utilities" Jan 26 16:16:24 crc kubenswrapper[4896]: I0126 16:16:24.948779 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="15718be9-96a4-4d2b-8753-de19de81148c" containerName="extract-utilities" Jan 26 16:16:24 crc kubenswrapper[4896]: E0126 16:16:24.948785 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15718be9-96a4-4d2b-8753-de19de81148c" containerName="extract-content" Jan 26 16:16:24 crc kubenswrapper[4896]: I0126 16:16:24.948791 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="15718be9-96a4-4d2b-8753-de19de81148c" containerName="extract-content" Jan 26 16:16:24 crc kubenswrapper[4896]: I0126 16:16:24.949194 4896 
memory_manager.go:354] "RemoveStaleState removing state" podUID="15718be9-96a4-4d2b-8753-de19de81148c" containerName="registry-server" Jan 26 16:16:24 crc kubenswrapper[4896]: I0126 16:16:24.949246 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5d99662-063d-4731-8c5d-a805dc69e348" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 26 16:16:24 crc kubenswrapper[4896]: I0126 16:16:24.951118 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-sf5rj" Jan 26 16:16:24 crc kubenswrapper[4896]: I0126 16:16:24.954823 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:16:24 crc kubenswrapper[4896]: I0126 16:16:24.954892 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-48n6x" Jan 26 16:16:24 crc kubenswrapper[4896]: I0126 16:16:24.955279 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 16:16:24 crc kubenswrapper[4896]: I0126 16:16:24.955322 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 16:16:24 crc kubenswrapper[4896]: I0126 16:16:24.970038 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-sf5rj"] Jan 26 16:16:25 crc kubenswrapper[4896]: I0126 16:16:25.043941 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-csgwp"] Jan 26 16:16:25 crc kubenswrapper[4896]: I0126 16:16:25.055472 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-csgwp"] Jan 26 16:16:25 crc kubenswrapper[4896]: I0126 16:16:25.092728 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: 
\"kubernetes.io/secret/682cfce4-854f-4f10-99fd-a92236ede1fb-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-sf5rj\" (UID: \"682cfce4-854f-4f10-99fd-a92236ede1fb\") " pod="openstack/ssh-known-hosts-edpm-deployment-sf5rj" Jan 26 16:16:25 crc kubenswrapper[4896]: I0126 16:16:25.092773 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfcrw\" (UniqueName: \"kubernetes.io/projected/682cfce4-854f-4f10-99fd-a92236ede1fb-kube-api-access-qfcrw\") pod \"ssh-known-hosts-edpm-deployment-sf5rj\" (UID: \"682cfce4-854f-4f10-99fd-a92236ede1fb\") " pod="openstack/ssh-known-hosts-edpm-deployment-sf5rj" Jan 26 16:16:25 crc kubenswrapper[4896]: I0126 16:16:25.092830 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/682cfce4-854f-4f10-99fd-a92236ede1fb-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-sf5rj\" (UID: \"682cfce4-854f-4f10-99fd-a92236ede1fb\") " pod="openstack/ssh-known-hosts-edpm-deployment-sf5rj" Jan 26 16:16:25 crc kubenswrapper[4896]: I0126 16:16:25.196099 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/682cfce4-854f-4f10-99fd-a92236ede1fb-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-sf5rj\" (UID: \"682cfce4-854f-4f10-99fd-a92236ede1fb\") " pod="openstack/ssh-known-hosts-edpm-deployment-sf5rj" Jan 26 16:16:25 crc kubenswrapper[4896]: I0126 16:16:25.196144 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfcrw\" (UniqueName: \"kubernetes.io/projected/682cfce4-854f-4f10-99fd-a92236ede1fb-kube-api-access-qfcrw\") pod \"ssh-known-hosts-edpm-deployment-sf5rj\" (UID: \"682cfce4-854f-4f10-99fd-a92236ede1fb\") " pod="openstack/ssh-known-hosts-edpm-deployment-sf5rj" Jan 26 16:16:25 crc kubenswrapper[4896]: I0126 16:16:25.196202 
4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/682cfce4-854f-4f10-99fd-a92236ede1fb-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-sf5rj\" (UID: \"682cfce4-854f-4f10-99fd-a92236ede1fb\") " pod="openstack/ssh-known-hosts-edpm-deployment-sf5rj" Jan 26 16:16:25 crc kubenswrapper[4896]: I0126 16:16:25.202029 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/682cfce4-854f-4f10-99fd-a92236ede1fb-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-sf5rj\" (UID: \"682cfce4-854f-4f10-99fd-a92236ede1fb\") " pod="openstack/ssh-known-hosts-edpm-deployment-sf5rj" Jan 26 16:16:25 crc kubenswrapper[4896]: I0126 16:16:25.205347 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/682cfce4-854f-4f10-99fd-a92236ede1fb-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-sf5rj\" (UID: \"682cfce4-854f-4f10-99fd-a92236ede1fb\") " pod="openstack/ssh-known-hosts-edpm-deployment-sf5rj" Jan 26 16:16:25 crc kubenswrapper[4896]: I0126 16:16:25.222712 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfcrw\" (UniqueName: \"kubernetes.io/projected/682cfce4-854f-4f10-99fd-a92236ede1fb-kube-api-access-qfcrw\") pod \"ssh-known-hosts-edpm-deployment-sf5rj\" (UID: \"682cfce4-854f-4f10-99fd-a92236ede1fb\") " pod="openstack/ssh-known-hosts-edpm-deployment-sf5rj" Jan 26 16:16:25 crc kubenswrapper[4896]: I0126 16:16:25.293404 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-sf5rj" Jan 26 16:16:25 crc kubenswrapper[4896]: I0126 16:16:25.761246 4896 scope.go:117] "RemoveContainer" containerID="0d1274f7e5735274b6d5b903bed2daa59306320464a2682c9af9bb84c5aace86" Jan 26 16:16:25 crc kubenswrapper[4896]: E0126 16:16:25.761811 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:16:25 crc kubenswrapper[4896]: I0126 16:16:25.914220 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-sf5rj"] Jan 26 16:16:26 crc kubenswrapper[4896]: I0126 16:16:26.773848 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="609ce882-8e94-4cbc-badf-fed5a521ec43" path="/var/lib/kubelet/pods/609ce882-8e94-4cbc-badf-fed5a521ec43/volumes" Jan 26 16:16:26 crc kubenswrapper[4896]: I0126 16:16:26.838058 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-sf5rj" event={"ID":"682cfce4-854f-4f10-99fd-a92236ede1fb","Type":"ContainerStarted","Data":"e080e893200da1c3554847bf53ae624d8b1ac823e7d81a88fa9e059c2a167019"} Jan 26 16:16:26 crc kubenswrapper[4896]: I0126 16:16:26.838102 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-sf5rj" event={"ID":"682cfce4-854f-4f10-99fd-a92236ede1fb","Type":"ContainerStarted","Data":"1d06e5a589f905f9b4f00ce3949236075940293e9adaf0046cad22b73152eb0b"} Jan 26 16:16:26 crc kubenswrapper[4896]: I0126 16:16:26.858025 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ssh-known-hosts-edpm-deployment-sf5rj" podStartSLOduration=2.391434673 podStartE2EDuration="2.857988115s" podCreationTimestamp="2026-01-26 16:16:24 +0000 UTC" firstStartedPulling="2026-01-26 16:16:25.926003657 +0000 UTC m=+2543.707884050" lastFinishedPulling="2026-01-26 16:16:26.392557089 +0000 UTC m=+2544.174437492" observedRunningTime="2026-01-26 16:16:26.855600977 +0000 UTC m=+2544.637481400" watchObservedRunningTime="2026-01-26 16:16:26.857988115 +0000 UTC m=+2544.639868518" Jan 26 16:16:33 crc kubenswrapper[4896]: I0126 16:16:33.920071 4896 generic.go:334] "Generic (PLEG): container finished" podID="682cfce4-854f-4f10-99fd-a92236ede1fb" containerID="e080e893200da1c3554847bf53ae624d8b1ac823e7d81a88fa9e059c2a167019" exitCode=0 Jan 26 16:16:33 crc kubenswrapper[4896]: I0126 16:16:33.920622 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-sf5rj" event={"ID":"682cfce4-854f-4f10-99fd-a92236ede1fb","Type":"ContainerDied","Data":"e080e893200da1c3554847bf53ae624d8b1ac823e7d81a88fa9e059c2a167019"} Jan 26 16:16:35 crc kubenswrapper[4896]: I0126 16:16:35.474977 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-sf5rj" Jan 26 16:16:35 crc kubenswrapper[4896]: I0126 16:16:35.510448 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/682cfce4-854f-4f10-99fd-a92236ede1fb-ssh-key-openstack-edpm-ipam\") pod \"682cfce4-854f-4f10-99fd-a92236ede1fb\" (UID: \"682cfce4-854f-4f10-99fd-a92236ede1fb\") " Jan 26 16:16:35 crc kubenswrapper[4896]: I0126 16:16:35.511021 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfcrw\" (UniqueName: \"kubernetes.io/projected/682cfce4-854f-4f10-99fd-a92236ede1fb-kube-api-access-qfcrw\") pod \"682cfce4-854f-4f10-99fd-a92236ede1fb\" (UID: \"682cfce4-854f-4f10-99fd-a92236ede1fb\") " Jan 26 16:16:35 crc kubenswrapper[4896]: I0126 16:16:35.511128 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/682cfce4-854f-4f10-99fd-a92236ede1fb-inventory-0\") pod \"682cfce4-854f-4f10-99fd-a92236ede1fb\" (UID: \"682cfce4-854f-4f10-99fd-a92236ede1fb\") " Jan 26 16:16:35 crc kubenswrapper[4896]: I0126 16:16:35.528980 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/682cfce4-854f-4f10-99fd-a92236ede1fb-kube-api-access-qfcrw" (OuterVolumeSpecName: "kube-api-access-qfcrw") pod "682cfce4-854f-4f10-99fd-a92236ede1fb" (UID: "682cfce4-854f-4f10-99fd-a92236ede1fb"). InnerVolumeSpecName "kube-api-access-qfcrw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:16:35 crc kubenswrapper[4896]: I0126 16:16:35.544158 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/682cfce4-854f-4f10-99fd-a92236ede1fb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "682cfce4-854f-4f10-99fd-a92236ede1fb" (UID: "682cfce4-854f-4f10-99fd-a92236ede1fb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:16:35 crc kubenswrapper[4896]: I0126 16:16:35.551982 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/682cfce4-854f-4f10-99fd-a92236ede1fb-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "682cfce4-854f-4f10-99fd-a92236ede1fb" (UID: "682cfce4-854f-4f10-99fd-a92236ede1fb"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:16:35 crc kubenswrapper[4896]: I0126 16:16:35.614839 4896 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/682cfce4-854f-4f10-99fd-a92236ede1fb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:16:35 crc kubenswrapper[4896]: I0126 16:16:35.614884 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfcrw\" (UniqueName: \"kubernetes.io/projected/682cfce4-854f-4f10-99fd-a92236ede1fb-kube-api-access-qfcrw\") on node \"crc\" DevicePath \"\"" Jan 26 16:16:35 crc kubenswrapper[4896]: I0126 16:16:35.614894 4896 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/682cfce4-854f-4f10-99fd-a92236ede1fb-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:16:35 crc kubenswrapper[4896]: I0126 16:16:35.944796 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-sf5rj" 
event={"ID":"682cfce4-854f-4f10-99fd-a92236ede1fb","Type":"ContainerDied","Data":"1d06e5a589f905f9b4f00ce3949236075940293e9adaf0046cad22b73152eb0b"} Jan 26 16:16:35 crc kubenswrapper[4896]: I0126 16:16:35.944842 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d06e5a589f905f9b4f00ce3949236075940293e9adaf0046cad22b73152eb0b" Jan 26 16:16:35 crc kubenswrapper[4896]: I0126 16:16:35.944906 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-sf5rj" Jan 26 16:16:36 crc kubenswrapper[4896]: I0126 16:16:36.032078 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-f9vsx"] Jan 26 16:16:36 crc kubenswrapper[4896]: E0126 16:16:36.032719 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="682cfce4-854f-4f10-99fd-a92236ede1fb" containerName="ssh-known-hosts-edpm-deployment" Jan 26 16:16:36 crc kubenswrapper[4896]: I0126 16:16:36.032740 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="682cfce4-854f-4f10-99fd-a92236ede1fb" containerName="ssh-known-hosts-edpm-deployment" Jan 26 16:16:36 crc kubenswrapper[4896]: I0126 16:16:36.033019 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="682cfce4-854f-4f10-99fd-a92236ede1fb" containerName="ssh-known-hosts-edpm-deployment" Jan 26 16:16:36 crc kubenswrapper[4896]: I0126 16:16:36.034006 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f9vsx" Jan 26 16:16:36 crc kubenswrapper[4896]: I0126 16:16:36.036643 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 16:16:36 crc kubenswrapper[4896]: I0126 16:16:36.036701 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-48n6x" Jan 26 16:16:36 crc kubenswrapper[4896]: I0126 16:16:36.037034 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:16:36 crc kubenswrapper[4896]: I0126 16:16:36.037621 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 16:16:36 crc kubenswrapper[4896]: I0126 16:16:36.044564 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-f9vsx"] Jan 26 16:16:36 crc kubenswrapper[4896]: I0126 16:16:36.126565 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtrq6\" (UniqueName: \"kubernetes.io/projected/f135c2c3-6301-42b2-a4f6-134b93bd65be-kube-api-access-jtrq6\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-f9vsx\" (UID: \"f135c2c3-6301-42b2-a4f6-134b93bd65be\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f9vsx" Jan 26 16:16:36 crc kubenswrapper[4896]: I0126 16:16:36.126639 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f135c2c3-6301-42b2-a4f6-134b93bd65be-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-f9vsx\" (UID: \"f135c2c3-6301-42b2-a4f6-134b93bd65be\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f9vsx" Jan 26 16:16:36 crc kubenswrapper[4896]: I0126 16:16:36.126687 4896 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f135c2c3-6301-42b2-a4f6-134b93bd65be-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-f9vsx\" (UID: \"f135c2c3-6301-42b2-a4f6-134b93bd65be\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f9vsx" Jan 26 16:16:36 crc kubenswrapper[4896]: I0126 16:16:36.229251 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtrq6\" (UniqueName: \"kubernetes.io/projected/f135c2c3-6301-42b2-a4f6-134b93bd65be-kube-api-access-jtrq6\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-f9vsx\" (UID: \"f135c2c3-6301-42b2-a4f6-134b93bd65be\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f9vsx" Jan 26 16:16:36 crc kubenswrapper[4896]: I0126 16:16:36.229313 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f135c2c3-6301-42b2-a4f6-134b93bd65be-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-f9vsx\" (UID: \"f135c2c3-6301-42b2-a4f6-134b93bd65be\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f9vsx" Jan 26 16:16:36 crc kubenswrapper[4896]: I0126 16:16:36.229364 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f135c2c3-6301-42b2-a4f6-134b93bd65be-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-f9vsx\" (UID: \"f135c2c3-6301-42b2-a4f6-134b93bd65be\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f9vsx" Jan 26 16:16:36 crc kubenswrapper[4896]: I0126 16:16:36.232963 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f135c2c3-6301-42b2-a4f6-134b93bd65be-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-f9vsx\" (UID: 
\"f135c2c3-6301-42b2-a4f6-134b93bd65be\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f9vsx" Jan 26 16:16:36 crc kubenswrapper[4896]: I0126 16:16:36.234531 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f135c2c3-6301-42b2-a4f6-134b93bd65be-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-f9vsx\" (UID: \"f135c2c3-6301-42b2-a4f6-134b93bd65be\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f9vsx" Jan 26 16:16:36 crc kubenswrapper[4896]: I0126 16:16:36.251479 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtrq6\" (UniqueName: \"kubernetes.io/projected/f135c2c3-6301-42b2-a4f6-134b93bd65be-kube-api-access-jtrq6\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-f9vsx\" (UID: \"f135c2c3-6301-42b2-a4f6-134b93bd65be\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f9vsx" Jan 26 16:16:36 crc kubenswrapper[4896]: I0126 16:16:36.354952 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f9vsx" Jan 26 16:16:36 crc kubenswrapper[4896]: I0126 16:16:36.884880 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-f9vsx"] Jan 26 16:16:36 crc kubenswrapper[4896]: I0126 16:16:36.957041 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f9vsx" event={"ID":"f135c2c3-6301-42b2-a4f6-134b93bd65be","Type":"ContainerStarted","Data":"5df0677ded49d2c983534f96c7ab3455194c71f01179b2da25bb6c47f4555bca"} Jan 26 16:16:37 crc kubenswrapper[4896]: I0126 16:16:37.967898 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f9vsx" event={"ID":"f135c2c3-6301-42b2-a4f6-134b93bd65be","Type":"ContainerStarted","Data":"c671813b8e4eb4e89abc56b306b60375cf5e44266bd0a9ad241f63cbbfacdf2a"} Jan 26 16:16:37 crc kubenswrapper[4896]: I0126 16:16:37.991021 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f9vsx" podStartSLOduration=1.378781268 podStartE2EDuration="1.990998184s" podCreationTimestamp="2026-01-26 16:16:36 +0000 UTC" firstStartedPulling="2026-01-26 16:16:36.887288278 +0000 UTC m=+2554.669168671" lastFinishedPulling="2026-01-26 16:16:37.499505194 +0000 UTC m=+2555.281385587" observedRunningTime="2026-01-26 16:16:37.990842921 +0000 UTC m=+2555.772723324" watchObservedRunningTime="2026-01-26 16:16:37.990998184 +0000 UTC m=+2555.772878577" Jan 26 16:16:39 crc kubenswrapper[4896]: I0126 16:16:39.760017 4896 scope.go:117] "RemoveContainer" containerID="0d1274f7e5735274b6d5b903bed2daa59306320464a2682c9af9bb84c5aace86" Jan 26 16:16:39 crc kubenswrapper[4896]: E0126 16:16:39.760699 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:16:41 crc kubenswrapper[4896]: I0126 16:16:41.027602 4896 scope.go:117] "RemoveContainer" containerID="df0242c0571a52e1118aa4947dc008443830b84c4f173c9e8a1f80f639212b1d" Jan 26 16:16:46 crc kubenswrapper[4896]: I0126 16:16:46.063959 4896 generic.go:334] "Generic (PLEG): container finished" podID="f135c2c3-6301-42b2-a4f6-134b93bd65be" containerID="c671813b8e4eb4e89abc56b306b60375cf5e44266bd0a9ad241f63cbbfacdf2a" exitCode=0 Jan 26 16:16:46 crc kubenswrapper[4896]: I0126 16:16:46.064234 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f9vsx" event={"ID":"f135c2c3-6301-42b2-a4f6-134b93bd65be","Type":"ContainerDied","Data":"c671813b8e4eb4e89abc56b306b60375cf5e44266bd0a9ad241f63cbbfacdf2a"} Jan 26 16:16:47 crc kubenswrapper[4896]: I0126 16:16:47.655397 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f9vsx" Jan 26 16:16:47 crc kubenswrapper[4896]: I0126 16:16:47.757522 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f135c2c3-6301-42b2-a4f6-134b93bd65be-ssh-key-openstack-edpm-ipam\") pod \"f135c2c3-6301-42b2-a4f6-134b93bd65be\" (UID: \"f135c2c3-6301-42b2-a4f6-134b93bd65be\") " Jan 26 16:16:47 crc kubenswrapper[4896]: I0126 16:16:47.757798 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f135c2c3-6301-42b2-a4f6-134b93bd65be-inventory\") pod \"f135c2c3-6301-42b2-a4f6-134b93bd65be\" (UID: \"f135c2c3-6301-42b2-a4f6-134b93bd65be\") " Jan 26 16:16:47 crc kubenswrapper[4896]: I0126 16:16:47.757907 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtrq6\" (UniqueName: \"kubernetes.io/projected/f135c2c3-6301-42b2-a4f6-134b93bd65be-kube-api-access-jtrq6\") pod \"f135c2c3-6301-42b2-a4f6-134b93bd65be\" (UID: \"f135c2c3-6301-42b2-a4f6-134b93bd65be\") " Jan 26 16:16:47 crc kubenswrapper[4896]: I0126 16:16:47.763841 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f135c2c3-6301-42b2-a4f6-134b93bd65be-kube-api-access-jtrq6" (OuterVolumeSpecName: "kube-api-access-jtrq6") pod "f135c2c3-6301-42b2-a4f6-134b93bd65be" (UID: "f135c2c3-6301-42b2-a4f6-134b93bd65be"). InnerVolumeSpecName "kube-api-access-jtrq6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:16:47 crc kubenswrapper[4896]: I0126 16:16:47.790724 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f135c2c3-6301-42b2-a4f6-134b93bd65be-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f135c2c3-6301-42b2-a4f6-134b93bd65be" (UID: "f135c2c3-6301-42b2-a4f6-134b93bd65be"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:16:47 crc kubenswrapper[4896]: I0126 16:16:47.803045 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f135c2c3-6301-42b2-a4f6-134b93bd65be-inventory" (OuterVolumeSpecName: "inventory") pod "f135c2c3-6301-42b2-a4f6-134b93bd65be" (UID: "f135c2c3-6301-42b2-a4f6-134b93bd65be"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:16:47 crc kubenswrapper[4896]: I0126 16:16:47.864256 4896 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f135c2c3-6301-42b2-a4f6-134b93bd65be-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:16:47 crc kubenswrapper[4896]: I0126 16:16:47.864563 4896 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f135c2c3-6301-42b2-a4f6-134b93bd65be-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 16:16:47 crc kubenswrapper[4896]: I0126 16:16:47.864595 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtrq6\" (UniqueName: \"kubernetes.io/projected/f135c2c3-6301-42b2-a4f6-134b93bd65be-kube-api-access-jtrq6\") on node \"crc\" DevicePath \"\"" Jan 26 16:16:48 crc kubenswrapper[4896]: I0126 16:16:48.090781 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f9vsx" 
event={"ID":"f135c2c3-6301-42b2-a4f6-134b93bd65be","Type":"ContainerDied","Data":"5df0677ded49d2c983534f96c7ab3455194c71f01179b2da25bb6c47f4555bca"} Jan 26 16:16:48 crc kubenswrapper[4896]: I0126 16:16:48.090839 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5df0677ded49d2c983534f96c7ab3455194c71f01179b2da25bb6c47f4555bca" Jan 26 16:16:48 crc kubenswrapper[4896]: I0126 16:16:48.090893 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f9vsx" Jan 26 16:16:48 crc kubenswrapper[4896]: E0126 16:16:48.105034 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf135c2c3_6301_42b2_a4f6_134b93bd65be.slice\": RecentStats: unable to find data in memory cache]" Jan 26 16:16:48 crc kubenswrapper[4896]: I0126 16:16:48.226929 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ktls4"] Jan 26 16:16:48 crc kubenswrapper[4896]: E0126 16:16:48.227615 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f135c2c3-6301-42b2-a4f6-134b93bd65be" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 26 16:16:48 crc kubenswrapper[4896]: I0126 16:16:48.227638 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="f135c2c3-6301-42b2-a4f6-134b93bd65be" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 26 16:16:48 crc kubenswrapper[4896]: I0126 16:16:48.227985 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="f135c2c3-6301-42b2-a4f6-134b93bd65be" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 26 16:16:48 crc kubenswrapper[4896]: I0126 16:16:48.229148 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ktls4" Jan 26 16:16:48 crc kubenswrapper[4896]: I0126 16:16:48.233028 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 16:16:48 crc kubenswrapper[4896]: I0126 16:16:48.233122 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 16:16:48 crc kubenswrapper[4896]: I0126 16:16:48.233122 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-48n6x" Jan 26 16:16:48 crc kubenswrapper[4896]: I0126 16:16:48.233992 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:16:48 crc kubenswrapper[4896]: I0126 16:16:48.243820 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ktls4"] Jan 26 16:16:48 crc kubenswrapper[4896]: I0126 16:16:48.379985 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92b2a894-7665-4c40-b5b2-94e4387b95c5-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ktls4\" (UID: \"92b2a894-7665-4c40-b5b2-94e4387b95c5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ktls4" Jan 26 16:16:48 crc kubenswrapper[4896]: I0126 16:16:48.380124 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92b2a894-7665-4c40-b5b2-94e4387b95c5-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ktls4\" (UID: \"92b2a894-7665-4c40-b5b2-94e4387b95c5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ktls4" Jan 26 16:16:48 crc kubenswrapper[4896]: I0126 16:16:48.380270 4896 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngrh7\" (UniqueName: \"kubernetes.io/projected/92b2a894-7665-4c40-b5b2-94e4387b95c5-kube-api-access-ngrh7\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ktls4\" (UID: \"92b2a894-7665-4c40-b5b2-94e4387b95c5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ktls4" Jan 26 16:16:48 crc kubenswrapper[4896]: I0126 16:16:48.482923 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92b2a894-7665-4c40-b5b2-94e4387b95c5-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ktls4\" (UID: \"92b2a894-7665-4c40-b5b2-94e4387b95c5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ktls4" Jan 26 16:16:48 crc kubenswrapper[4896]: I0126 16:16:48.482979 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92b2a894-7665-4c40-b5b2-94e4387b95c5-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ktls4\" (UID: \"92b2a894-7665-4c40-b5b2-94e4387b95c5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ktls4" Jan 26 16:16:48 crc kubenswrapper[4896]: I0126 16:16:48.483036 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngrh7\" (UniqueName: \"kubernetes.io/projected/92b2a894-7665-4c40-b5b2-94e4387b95c5-kube-api-access-ngrh7\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ktls4\" (UID: \"92b2a894-7665-4c40-b5b2-94e4387b95c5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ktls4" Jan 26 16:16:48 crc kubenswrapper[4896]: I0126 16:16:48.493352 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92b2a894-7665-4c40-b5b2-94e4387b95c5-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-ktls4\" (UID: \"92b2a894-7665-4c40-b5b2-94e4387b95c5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ktls4" Jan 26 16:16:48 crc kubenswrapper[4896]: I0126 16:16:48.494333 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92b2a894-7665-4c40-b5b2-94e4387b95c5-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ktls4\" (UID: \"92b2a894-7665-4c40-b5b2-94e4387b95c5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ktls4" Jan 26 16:16:48 crc kubenswrapper[4896]: I0126 16:16:48.504067 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngrh7\" (UniqueName: \"kubernetes.io/projected/92b2a894-7665-4c40-b5b2-94e4387b95c5-kube-api-access-ngrh7\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ktls4\" (UID: \"92b2a894-7665-4c40-b5b2-94e4387b95c5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ktls4" Jan 26 16:16:48 crc kubenswrapper[4896]: I0126 16:16:48.549739 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ktls4" Jan 26 16:16:49 crc kubenswrapper[4896]: I0126 16:16:49.249033 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ktls4"] Jan 26 16:16:50 crc kubenswrapper[4896]: I0126 16:16:50.117708 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ktls4" event={"ID":"92b2a894-7665-4c40-b5b2-94e4387b95c5","Type":"ContainerStarted","Data":"13ea9c0e3c3315ba2745dd78472b234716975591f5807127fc2b1b653d469497"} Jan 26 16:16:51 crc kubenswrapper[4896]: I0126 16:16:51.130370 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ktls4" event={"ID":"92b2a894-7665-4c40-b5b2-94e4387b95c5","Type":"ContainerStarted","Data":"eb20ebcfd71760a2988e1025967c123d97e8579971ac17ea5a52574ce4cb47ef"} Jan 26 16:16:51 crc kubenswrapper[4896]: I0126 16:16:51.151999 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ktls4" podStartSLOduration=2.589970311 podStartE2EDuration="3.151976593s" podCreationTimestamp="2026-01-26 16:16:48 +0000 UTC" firstStartedPulling="2026-01-26 16:16:49.257558118 +0000 UTC m=+2567.039438501" lastFinishedPulling="2026-01-26 16:16:49.81956439 +0000 UTC m=+2567.601444783" observedRunningTime="2026-01-26 16:16:51.145452936 +0000 UTC m=+2568.927333339" watchObservedRunningTime="2026-01-26 16:16:51.151976593 +0000 UTC m=+2568.933856996" Jan 26 16:16:51 crc kubenswrapper[4896]: I0126 16:16:51.762025 4896 scope.go:117] "RemoveContainer" containerID="0d1274f7e5735274b6d5b903bed2daa59306320464a2682c9af9bb84c5aace86" Jan 26 16:16:51 crc kubenswrapper[4896]: E0126 16:16:51.762433 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:17:01 crc kubenswrapper[4896]: I0126 16:17:01.252707 4896 generic.go:334] "Generic (PLEG): container finished" podID="92b2a894-7665-4c40-b5b2-94e4387b95c5" containerID="eb20ebcfd71760a2988e1025967c123d97e8579971ac17ea5a52574ce4cb47ef" exitCode=0 Jan 26 16:17:01 crc kubenswrapper[4896]: I0126 16:17:01.252814 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ktls4" event={"ID":"92b2a894-7665-4c40-b5b2-94e4387b95c5","Type":"ContainerDied","Data":"eb20ebcfd71760a2988e1025967c123d97e8579971ac17ea5a52574ce4cb47ef"} Jan 26 16:17:02 crc kubenswrapper[4896]: I0126 16:17:02.888959 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ktls4" Jan 26 16:17:02 crc kubenswrapper[4896]: I0126 16:17:02.997124 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92b2a894-7665-4c40-b5b2-94e4387b95c5-inventory\") pod \"92b2a894-7665-4c40-b5b2-94e4387b95c5\" (UID: \"92b2a894-7665-4c40-b5b2-94e4387b95c5\") " Jan 26 16:17:02 crc kubenswrapper[4896]: I0126 16:17:02.997245 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92b2a894-7665-4c40-b5b2-94e4387b95c5-ssh-key-openstack-edpm-ipam\") pod \"92b2a894-7665-4c40-b5b2-94e4387b95c5\" (UID: \"92b2a894-7665-4c40-b5b2-94e4387b95c5\") " Jan 26 16:17:02 crc kubenswrapper[4896]: I0126 16:17:02.997363 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngrh7\" (UniqueName: 
\"kubernetes.io/projected/92b2a894-7665-4c40-b5b2-94e4387b95c5-kube-api-access-ngrh7\") pod \"92b2a894-7665-4c40-b5b2-94e4387b95c5\" (UID: \"92b2a894-7665-4c40-b5b2-94e4387b95c5\") " Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.003139 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92b2a894-7665-4c40-b5b2-94e4387b95c5-kube-api-access-ngrh7" (OuterVolumeSpecName: "kube-api-access-ngrh7") pod "92b2a894-7665-4c40-b5b2-94e4387b95c5" (UID: "92b2a894-7665-4c40-b5b2-94e4387b95c5"). InnerVolumeSpecName "kube-api-access-ngrh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.030706 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92b2a894-7665-4c40-b5b2-94e4387b95c5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "92b2a894-7665-4c40-b5b2-94e4387b95c5" (UID: "92b2a894-7665-4c40-b5b2-94e4387b95c5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.032866 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92b2a894-7665-4c40-b5b2-94e4387b95c5-inventory" (OuterVolumeSpecName: "inventory") pod "92b2a894-7665-4c40-b5b2-94e4387b95c5" (UID: "92b2a894-7665-4c40-b5b2-94e4387b95c5"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.100272 4896 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92b2a894-7665-4c40-b5b2-94e4387b95c5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.100317 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngrh7\" (UniqueName: \"kubernetes.io/projected/92b2a894-7665-4c40-b5b2-94e4387b95c5-kube-api-access-ngrh7\") on node \"crc\" DevicePath \"\"" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.100327 4896 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92b2a894-7665-4c40-b5b2-94e4387b95c5-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.307509 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ktls4" event={"ID":"92b2a894-7665-4c40-b5b2-94e4387b95c5","Type":"ContainerDied","Data":"13ea9c0e3c3315ba2745dd78472b234716975591f5807127fc2b1b653d469497"} Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.307563 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13ea9c0e3c3315ba2745dd78472b234716975591f5807127fc2b1b653d469497" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.307652 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ktls4" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.402518 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx"] Jan 26 16:17:03 crc kubenswrapper[4896]: E0126 16:17:03.403403 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92b2a894-7665-4c40-b5b2-94e4387b95c5" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.403474 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="92b2a894-7665-4c40-b5b2-94e4387b95c5" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.403824 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="92b2a894-7665-4c40-b5b2-94e4387b95c5" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.404989 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.409011 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.409340 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.409574 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.409780 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.409808 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-48n6x" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.409949 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.410038 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.410071 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.412512 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.418443 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx"] Jan 26 16:17:03 crc 
kubenswrapper[4896]: I0126 16:17:03.510229 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.510317 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.510348 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.510389 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 
16:17:03.511717 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.511815 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.511854 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.511910 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.511954 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.512028 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.512161 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.512299 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr5dn\" (UniqueName: \"kubernetes.io/projected/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-kube-api-access-sr5dn\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: 
\"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.512336 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.512428 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.512490 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.512677 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.614989 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.615063 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.615127 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.615190 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.615265 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.615349 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.615413 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sr5dn\" (UniqueName: \"kubernetes.io/projected/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-kube-api-access-sr5dn\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.615792 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc 
kubenswrapper[4896]: I0126 16:17:03.615907 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.615959 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.616026 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.616507 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.616634 4896 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.616661 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.616705 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.616750 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.619947 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.621162 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.621509 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.621789 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.622730 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-telemetry-combined-ca-bundle\") 
pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.622739 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.623313 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.623508 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.624538 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.624521 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.626899 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.627236 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.628121 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.631015 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.631187 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.633107 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr5dn\" (UniqueName: \"kubernetes.io/projected/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-kube-api-access-sr5dn\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59tbx\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.759558 4896 scope.go:117] "RemoveContainer" containerID="0d1274f7e5735274b6d5b903bed2daa59306320464a2682c9af9bb84c5aace86" Jan 26 16:17:03 crc kubenswrapper[4896]: E0126 16:17:03.759997 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:17:03 crc kubenswrapper[4896]: I0126 16:17:03.814114 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:04 crc kubenswrapper[4896]: W0126 16:17:04.408196 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd3d33aa_67c7_4ba6_93ec_5ba14b9b593a.slice/crio-3448e8d6c1254c4184e86f99a7a3e8f3e1351b1c601d882ea8cac9de9c48cf52 WatchSource:0}: Error finding container 3448e8d6c1254c4184e86f99a7a3e8f3e1351b1c601d882ea8cac9de9c48cf52: Status 404 returned error can't find the container with id 3448e8d6c1254c4184e86f99a7a3e8f3e1351b1c601d882ea8cac9de9c48cf52 Jan 26 16:17:04 crc kubenswrapper[4896]: I0126 16:17:04.409609 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx"] Jan 26 16:17:05 crc kubenswrapper[4896]: I0126 16:17:05.357228 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" event={"ID":"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a","Type":"ContainerStarted","Data":"3448e8d6c1254c4184e86f99a7a3e8f3e1351b1c601d882ea8cac9de9c48cf52"} Jan 26 16:17:06 crc kubenswrapper[4896]: I0126 16:17:06.368455 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" event={"ID":"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a","Type":"ContainerStarted","Data":"4f6a9004705272040cdc5617faef1277357f862ec174c5762925160265a38b71"} Jan 26 16:17:11 crc kubenswrapper[4896]: I0126 16:17:11.039180 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" podStartSLOduration=6.953046368 podStartE2EDuration="8.039155083s" podCreationTimestamp="2026-01-26 16:17:03 +0000 UTC" firstStartedPulling="2026-01-26 16:17:04.414237012 +0000 UTC m=+2582.196117405" lastFinishedPulling="2026-01-26 16:17:05.500345727 +0000 UTC m=+2583.282226120" observedRunningTime="2026-01-26 16:17:06.399606169 +0000 UTC m=+2584.181486562" watchObservedRunningTime="2026-01-26 16:17:11.039155083 +0000 UTC m=+2588.821035476" Jan 26 16:17:11 crc kubenswrapper[4896]: I0126 16:17:11.046256 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-n4vfl"] Jan 26 16:17:11 crc kubenswrapper[4896]: I0126 16:17:11.057344 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-n4vfl"] Jan 26 16:17:12 crc kubenswrapper[4896]: I0126 16:17:12.772470 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f8bfeeb-3335-42ef-8d6b-42a35ec463df" path="/var/lib/kubelet/pods/9f8bfeeb-3335-42ef-8d6b-42a35ec463df/volumes" Jan 26 16:17:17 crc kubenswrapper[4896]: I0126 16:17:17.759796 4896 scope.go:117] "RemoveContainer" containerID="0d1274f7e5735274b6d5b903bed2daa59306320464a2682c9af9bb84c5aace86" Jan 26 16:17:17 crc kubenswrapper[4896]: E0126 16:17:17.760812 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:17:31 crc kubenswrapper[4896]: I0126 16:17:31.760059 4896 scope.go:117] "RemoveContainer" containerID="0d1274f7e5735274b6d5b903bed2daa59306320464a2682c9af9bb84c5aace86" Jan 26 16:17:31 crc kubenswrapper[4896]: E0126 16:17:31.761017 4896 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:17:41 crc kubenswrapper[4896]: I0126 16:17:41.103208 4896 scope.go:117] "RemoveContainer" containerID="e9da0176df8af3ef3c279e5d8979759ac705154689126369592e469aa3474092" Jan 26 16:17:46 crc kubenswrapper[4896]: I0126 16:17:46.760006 4896 scope.go:117] "RemoveContainer" containerID="0d1274f7e5735274b6d5b903bed2daa59306320464a2682c9af9bb84c5aace86" Jan 26 16:17:46 crc kubenswrapper[4896]: E0126 16:17:46.760911 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:17:55 crc kubenswrapper[4896]: I0126 16:17:55.921470 4896 generic.go:334] "Generic (PLEG): container finished" podID="bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a" containerID="4f6a9004705272040cdc5617faef1277357f862ec174c5762925160265a38b71" exitCode=0 Jan 26 16:17:55 crc kubenswrapper[4896]: I0126 16:17:55.922020 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" event={"ID":"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a","Type":"ContainerDied","Data":"4f6a9004705272040cdc5617faef1277357f862ec174c5762925160265a38b71"} Jan 26 16:17:57 crc kubenswrapper[4896]: I0126 16:17:57.911688 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:57 crc kubenswrapper[4896]: I0126 16:17:57.959878 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-bootstrap-combined-ca-bundle\") pod \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " Jan 26 16:17:57 crc kubenswrapper[4896]: I0126 16:17:57.959957 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " Jan 26 16:17:57 crc kubenswrapper[4896]: I0126 16:17:57.960000 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sr5dn\" (UniqueName: \"kubernetes.io/projected/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-kube-api-access-sr5dn\") pod \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " Jan 26 16:17:57 crc kubenswrapper[4896]: I0126 16:17:57.960049 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-telemetry-power-monitoring-combined-ca-bundle\") pod \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " Jan 26 16:17:57 crc kubenswrapper[4896]: I0126 16:17:57.960091 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-neutron-metadata-combined-ca-bundle\") pod \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " Jan 26 16:17:57 crc kubenswrapper[4896]: I0126 16:17:57.960136 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " Jan 26 16:17:57 crc kubenswrapper[4896]: I0126 16:17:57.960166 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " Jan 26 16:17:57 crc kubenswrapper[4896]: I0126 16:17:57.960202 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-telemetry-combined-ca-bundle\") pod \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " Jan 26 16:17:57 crc kubenswrapper[4896]: I0126 16:17:57.960275 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-ovn-combined-ca-bundle\") pod \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " Jan 26 16:17:57 crc kubenswrapper[4896]: I0126 16:17:57.960303 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-nova-combined-ca-bundle\") pod \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " Jan 26 16:17:57 crc kubenswrapper[4896]: I0126 16:17:57.960335 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-libvirt-combined-ca-bundle\") pod \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " Jan 26 16:17:57 crc kubenswrapper[4896]: I0126 16:17:57.960365 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-ssh-key-openstack-edpm-ipam\") pod \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " Jan 26 16:17:57 crc kubenswrapper[4896]: I0126 16:17:57.960416 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-inventory\") pod \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " Jan 26 16:17:57 crc kubenswrapper[4896]: I0126 16:17:57.960441 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-openstack-edpm-ipam-ovn-default-certs-0\") pod \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " Jan 26 16:17:57 crc kubenswrapper[4896]: I0126 16:17:57.960466 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-repo-setup-combined-ca-bundle\") pod 
\"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " Jan 26 16:17:57 crc kubenswrapper[4896]: I0126 16:17:57.960492 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " Jan 26 16:17:57 crc kubenswrapper[4896]: I0126 16:17:57.964175 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" event={"ID":"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a","Type":"ContainerDied","Data":"3448e8d6c1254c4184e86f99a7a3e8f3e1351b1c601d882ea8cac9de9c48cf52"} Jan 26 16:17:57 crc kubenswrapper[4896]: I0126 16:17:57.964242 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3448e8d6c1254c4184e86f99a7a3e8f3e1351b1c601d882ea8cac9de9c48cf52" Jan 26 16:17:57 crc kubenswrapper[4896]: I0126 16:17:57.964334 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59tbx" Jan 26 16:17:57 crc kubenswrapper[4896]: I0126 16:17:57.972311 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a" (UID: "bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:17:57 crc kubenswrapper[4896]: I0126 16:17:57.972847 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a" (UID: "bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:17:57 crc kubenswrapper[4896]: I0126 16:17:57.974321 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a" (UID: "bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:17:57 crc kubenswrapper[4896]: I0126 16:17:57.979852 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-kube-api-access-sr5dn" (OuterVolumeSpecName: "kube-api-access-sr5dn") pod "bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a" (UID: "bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a"). InnerVolumeSpecName "kube-api-access-sr5dn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:17:57 crc kubenswrapper[4896]: I0126 16:17:57.990085 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a" (UID: "bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:17:57 crc kubenswrapper[4896]: I0126 16:17:57.990605 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a" (UID: "bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:17:57 crc kubenswrapper[4896]: I0126 16:17:57.990697 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a" (UID: "bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:17:57 crc kubenswrapper[4896]: I0126 16:17:57.997745 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a" (UID: "bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.002846 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a" (UID: "bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.004741 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a" (UID: "bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.004994 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a" (UID: "bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.005212 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a" (UID: "bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a"). 
InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.007462 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a" (UID: "bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.009164 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a" (UID: "bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:17:58 crc kubenswrapper[4896]: E0126 16:17:58.046060 4896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-ssh-key-openstack-edpm-ipam podName:bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a nodeName:}" failed. No retries permitted until 2026-01-26 16:17:58.546020882 +0000 UTC m=+2636.327901275 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "ssh-key-openstack-edpm-ipam" (UniqueName: "kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-ssh-key-openstack-edpm-ipam") pod "bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a" (UID: "bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a") : error deleting /var/lib/kubelet/pods/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a/volume-subpaths: remove /var/lib/kubelet/pods/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a/volume-subpaths: no such file or directory Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.051806 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-inventory" (OuterVolumeSpecName: "inventory") pod "bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a" (UID: "bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.069948 4896 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.069991 4896 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.070007 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sr5dn\" (UniqueName: \"kubernetes.io/projected/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-kube-api-access-sr5dn\") on node \"crc\" DevicePath \"\"" Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.070022 4896 reconciler_common.go:293] "Volume detached for volume 
\"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.070037 4896 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.070051 4896 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.070065 4896 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.070078 4896 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.070090 4896 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.070100 4896 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.070111 4896 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.070123 4896 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.070144 4896 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.070161 4896 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.070174 4896 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.101387 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mjp2"] Jan 26 16:17:58 crc kubenswrapper[4896]: E0126 16:17:58.102124 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a" 
containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.102149 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.102475 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.103474 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mjp2" Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.107430 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.124231 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mjp2"] Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.173192 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f0fea8f-566a-45a2-99fd-89c389143121-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8mjp2\" (UID: \"7f0fea8f-566a-45a2-99fd-89c389143121\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mjp2" Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.173676 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk8bh\" (UniqueName: \"kubernetes.io/projected/7f0fea8f-566a-45a2-99fd-89c389143121-kube-api-access-fk8bh\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8mjp2\" (UID: \"7f0fea8f-566a-45a2-99fd-89c389143121\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mjp2" Jan 26 16:17:58 crc 
kubenswrapper[4896]: I0126 16:17:58.173708 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f0fea8f-566a-45a2-99fd-89c389143121-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8mjp2\" (UID: \"7f0fea8f-566a-45a2-99fd-89c389143121\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mjp2" Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.174207 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7f0fea8f-566a-45a2-99fd-89c389143121-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8mjp2\" (UID: \"7f0fea8f-566a-45a2-99fd-89c389143121\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mjp2" Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.174266 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f0fea8f-566a-45a2-99fd-89c389143121-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8mjp2\" (UID: \"7f0fea8f-566a-45a2-99fd-89c389143121\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mjp2" Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.276282 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7f0fea8f-566a-45a2-99fd-89c389143121-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8mjp2\" (UID: \"7f0fea8f-566a-45a2-99fd-89c389143121\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mjp2" Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.276615 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/7f0fea8f-566a-45a2-99fd-89c389143121-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8mjp2\" (UID: \"7f0fea8f-566a-45a2-99fd-89c389143121\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mjp2" Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.276831 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f0fea8f-566a-45a2-99fd-89c389143121-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8mjp2\" (UID: \"7f0fea8f-566a-45a2-99fd-89c389143121\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mjp2" Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.277010 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk8bh\" (UniqueName: \"kubernetes.io/projected/7f0fea8f-566a-45a2-99fd-89c389143121-kube-api-access-fk8bh\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8mjp2\" (UID: \"7f0fea8f-566a-45a2-99fd-89c389143121\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mjp2" Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.277199 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f0fea8f-566a-45a2-99fd-89c389143121-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8mjp2\" (UID: \"7f0fea8f-566a-45a2-99fd-89c389143121\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mjp2" Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.277354 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7f0fea8f-566a-45a2-99fd-89c389143121-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8mjp2\" (UID: \"7f0fea8f-566a-45a2-99fd-89c389143121\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mjp2" Jan 26 16:17:58 crc 
kubenswrapper[4896]: I0126 16:17:58.280892 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f0fea8f-566a-45a2-99fd-89c389143121-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8mjp2\" (UID: \"7f0fea8f-566a-45a2-99fd-89c389143121\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mjp2" Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.280915 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f0fea8f-566a-45a2-99fd-89c389143121-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8mjp2\" (UID: \"7f0fea8f-566a-45a2-99fd-89c389143121\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mjp2" Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.282048 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f0fea8f-566a-45a2-99fd-89c389143121-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8mjp2\" (UID: \"7f0fea8f-566a-45a2-99fd-89c389143121\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mjp2" Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.296787 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk8bh\" (UniqueName: \"kubernetes.io/projected/7f0fea8f-566a-45a2-99fd-89c389143121-kube-api-access-fk8bh\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8mjp2\" (UID: \"7f0fea8f-566a-45a2-99fd-89c389143121\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mjp2" Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.494927 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mjp2" Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.584519 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-ssh-key-openstack-edpm-ipam\") pod \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\" (UID: \"bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a\") " Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.597195 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a" (UID: "bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:17:58 crc kubenswrapper[4896]: I0126 16:17:58.687423 4896 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:17:59 crc kubenswrapper[4896]: I0126 16:17:59.109031 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mjp2"] Jan 26 16:17:59 crc kubenswrapper[4896]: I0126 16:17:59.760026 4896 scope.go:117] "RemoveContainer" containerID="0d1274f7e5735274b6d5b903bed2daa59306320464a2682c9af9bb84c5aace86" Jan 26 16:17:59 crc kubenswrapper[4896]: E0126 16:17:59.760638 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:17:59 crc kubenswrapper[4896]: I0126 16:17:59.998065 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mjp2" event={"ID":"7f0fea8f-566a-45a2-99fd-89c389143121","Type":"ContainerStarted","Data":"7354e8cc34fa07ab3ef026d0bfb00386f187c4e8121385a465774f3de92f23c7"} Jan 26 16:18:01 crc kubenswrapper[4896]: I0126 16:18:01.009983 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mjp2" event={"ID":"7f0fea8f-566a-45a2-99fd-89c389143121","Type":"ContainerStarted","Data":"42a76cda8ebdc7e4607e5882b1918e73f711c3f23dfbc19c695df13946a1ba64"} Jan 26 16:18:01 crc kubenswrapper[4896]: I0126 16:18:01.031836 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mjp2" podStartSLOduration=2.290346832 podStartE2EDuration="3.031807454s" podCreationTimestamp="2026-01-26 16:17:58 +0000 UTC" firstStartedPulling="2026-01-26 16:17:59.119369584 +0000 UTC m=+2636.901249977" lastFinishedPulling="2026-01-26 16:17:59.860830206 +0000 UTC m=+2637.642710599" observedRunningTime="2026-01-26 16:18:01.027399949 +0000 UTC m=+2638.809280362" watchObservedRunningTime="2026-01-26 16:18:01.031807454 +0000 UTC m=+2638.813687847" Jan 26 16:18:12 crc kubenswrapper[4896]: I0126 16:18:12.769131 4896 scope.go:117] "RemoveContainer" containerID="0d1274f7e5735274b6d5b903bed2daa59306320464a2682c9af9bb84c5aace86" Jan 26 16:18:12 crc kubenswrapper[4896]: E0126 16:18:12.770111 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:18:24 crc kubenswrapper[4896]: I0126 16:18:24.759363 4896 scope.go:117] "RemoveContainer" containerID="0d1274f7e5735274b6d5b903bed2daa59306320464a2682c9af9bb84c5aace86" Jan 26 16:18:24 crc kubenswrapper[4896]: E0126 16:18:24.760173 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:18:39 crc kubenswrapper[4896]: I0126 16:18:39.759916 4896 scope.go:117] "RemoveContainer" containerID="0d1274f7e5735274b6d5b903bed2daa59306320464a2682c9af9bb84c5aace86" Jan 26 16:18:39 crc kubenswrapper[4896]: E0126 16:18:39.760994 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:18:54 crc kubenswrapper[4896]: I0126 16:18:54.759812 4896 scope.go:117] "RemoveContainer" containerID="0d1274f7e5735274b6d5b903bed2daa59306320464a2682c9af9bb84c5aace86" Jan 26 16:18:54 crc kubenswrapper[4896]: E0126 16:18:54.761700 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:19:06 crc kubenswrapper[4896]: I0126 16:19:06.760088 4896 scope.go:117] "RemoveContainer" containerID="0d1274f7e5735274b6d5b903bed2daa59306320464a2682c9af9bb84c5aace86" Jan 26 16:19:06 crc kubenswrapper[4896]: E0126 16:19:06.761243 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:19:08 crc kubenswrapper[4896]: I0126 16:19:08.115367 4896 generic.go:334] "Generic (PLEG): container finished" podID="7f0fea8f-566a-45a2-99fd-89c389143121" containerID="42a76cda8ebdc7e4607e5882b1918e73f711c3f23dfbc19c695df13946a1ba64" exitCode=0 Jan 26 16:19:08 crc kubenswrapper[4896]: I0126 16:19:08.115544 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mjp2" event={"ID":"7f0fea8f-566a-45a2-99fd-89c389143121","Type":"ContainerDied","Data":"42a76cda8ebdc7e4607e5882b1918e73f711c3f23dfbc19c695df13946a1ba64"} Jan 26 16:19:09 crc kubenswrapper[4896]: I0126 16:19:09.653978 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mjp2" Jan 26 16:19:09 crc kubenswrapper[4896]: I0126 16:19:09.750659 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f0fea8f-566a-45a2-99fd-89c389143121-inventory\") pod \"7f0fea8f-566a-45a2-99fd-89c389143121\" (UID: \"7f0fea8f-566a-45a2-99fd-89c389143121\") " Jan 26 16:19:09 crc kubenswrapper[4896]: I0126 16:19:09.751227 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7f0fea8f-566a-45a2-99fd-89c389143121-ovncontroller-config-0\") pod \"7f0fea8f-566a-45a2-99fd-89c389143121\" (UID: \"7f0fea8f-566a-45a2-99fd-89c389143121\") " Jan 26 16:19:09 crc kubenswrapper[4896]: I0126 16:19:09.751374 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fk8bh\" (UniqueName: \"kubernetes.io/projected/7f0fea8f-566a-45a2-99fd-89c389143121-kube-api-access-fk8bh\") pod \"7f0fea8f-566a-45a2-99fd-89c389143121\" (UID: \"7f0fea8f-566a-45a2-99fd-89c389143121\") " Jan 26 16:19:09 crc kubenswrapper[4896]: I0126 16:19:09.751421 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f0fea8f-566a-45a2-99fd-89c389143121-ssh-key-openstack-edpm-ipam\") pod \"7f0fea8f-566a-45a2-99fd-89c389143121\" (UID: \"7f0fea8f-566a-45a2-99fd-89c389143121\") " Jan 26 16:19:09 crc kubenswrapper[4896]: I0126 16:19:09.751569 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f0fea8f-566a-45a2-99fd-89c389143121-ovn-combined-ca-bundle\") pod \"7f0fea8f-566a-45a2-99fd-89c389143121\" (UID: \"7f0fea8f-566a-45a2-99fd-89c389143121\") " Jan 26 16:19:09 crc kubenswrapper[4896]: I0126 16:19:09.758364 4896 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f0fea8f-566a-45a2-99fd-89c389143121-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "7f0fea8f-566a-45a2-99fd-89c389143121" (UID: "7f0fea8f-566a-45a2-99fd-89c389143121"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:19:09 crc kubenswrapper[4896]: I0126 16:19:09.758858 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f0fea8f-566a-45a2-99fd-89c389143121-kube-api-access-fk8bh" (OuterVolumeSpecName: "kube-api-access-fk8bh") pod "7f0fea8f-566a-45a2-99fd-89c389143121" (UID: "7f0fea8f-566a-45a2-99fd-89c389143121"). InnerVolumeSpecName "kube-api-access-fk8bh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:19:09 crc kubenswrapper[4896]: I0126 16:19:09.781700 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f0fea8f-566a-45a2-99fd-89c389143121-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "7f0fea8f-566a-45a2-99fd-89c389143121" (UID: "7f0fea8f-566a-45a2-99fd-89c389143121"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:19:09 crc kubenswrapper[4896]: I0126 16:19:09.787287 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f0fea8f-566a-45a2-99fd-89c389143121-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7f0fea8f-566a-45a2-99fd-89c389143121" (UID: "7f0fea8f-566a-45a2-99fd-89c389143121"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:19:09 crc kubenswrapper[4896]: I0126 16:19:09.787675 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f0fea8f-566a-45a2-99fd-89c389143121-inventory" (OuterVolumeSpecName: "inventory") pod "7f0fea8f-566a-45a2-99fd-89c389143121" (UID: "7f0fea8f-566a-45a2-99fd-89c389143121"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:19:09 crc kubenswrapper[4896]: I0126 16:19:09.855735 4896 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7f0fea8f-566a-45a2-99fd-89c389143121-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:19:09 crc kubenswrapper[4896]: I0126 16:19:09.855778 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fk8bh\" (UniqueName: \"kubernetes.io/projected/7f0fea8f-566a-45a2-99fd-89c389143121-kube-api-access-fk8bh\") on node \"crc\" DevicePath \"\"" Jan 26 16:19:09 crc kubenswrapper[4896]: I0126 16:19:09.855791 4896 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f0fea8f-566a-45a2-99fd-89c389143121-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:19:09 crc kubenswrapper[4896]: I0126 16:19:09.855807 4896 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f0fea8f-566a-45a2-99fd-89c389143121-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:19:09 crc kubenswrapper[4896]: I0126 16:19:09.855821 4896 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f0fea8f-566a-45a2-99fd-89c389143121-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 16:19:10 crc kubenswrapper[4896]: I0126 16:19:10.140732 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mjp2" event={"ID":"7f0fea8f-566a-45a2-99fd-89c389143121","Type":"ContainerDied","Data":"7354e8cc34fa07ab3ef026d0bfb00386f187c4e8121385a465774f3de92f23c7"} Jan 26 16:19:10 crc kubenswrapper[4896]: I0126 16:19:10.140775 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7354e8cc34fa07ab3ef026d0bfb00386f187c4e8121385a465774f3de92f23c7" Jan 26 16:19:10 crc kubenswrapper[4896]: I0126 16:19:10.141153 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8mjp2" Jan 26 16:19:10 crc kubenswrapper[4896]: I0126 16:19:10.230572 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm"] Jan 26 16:19:10 crc kubenswrapper[4896]: E0126 16:19:10.231288 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f0fea8f-566a-45a2-99fd-89c389143121" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 26 16:19:10 crc kubenswrapper[4896]: I0126 16:19:10.231315 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f0fea8f-566a-45a2-99fd-89c389143121" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 26 16:19:10 crc kubenswrapper[4896]: I0126 16:19:10.231711 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f0fea8f-566a-45a2-99fd-89c389143121" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 26 16:19:10 crc kubenswrapper[4896]: I0126 16:19:10.232879 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm" Jan 26 16:19:10 crc kubenswrapper[4896]: I0126 16:19:10.235366 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 26 16:19:10 crc kubenswrapper[4896]: I0126 16:19:10.235541 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 26 16:19:10 crc kubenswrapper[4896]: I0126 16:19:10.236477 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 16:19:10 crc kubenswrapper[4896]: I0126 16:19:10.236691 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 16:19:10 crc kubenswrapper[4896]: I0126 16:19:10.238150 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-48n6x" Jan 26 16:19:10 crc kubenswrapper[4896]: I0126 16:19:10.238328 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:19:10 crc kubenswrapper[4896]: I0126 16:19:10.245269 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm"] Jan 26 16:19:10 crc kubenswrapper[4896]: I0126 16:19:10.266674 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/619ecbfe-4f75-416b-bbb1-01b8470e5115-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm\" (UID: \"619ecbfe-4f75-416b-bbb1-01b8470e5115\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm" Jan 26 16:19:10 crc kubenswrapper[4896]: I0126 16:19:10.266954 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/619ecbfe-4f75-416b-bbb1-01b8470e5115-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm\" (UID: \"619ecbfe-4f75-416b-bbb1-01b8470e5115\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm" Jan 26 16:19:10 crc kubenswrapper[4896]: I0126 16:19:10.267087 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/619ecbfe-4f75-416b-bbb1-01b8470e5115-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm\" (UID: \"619ecbfe-4f75-416b-bbb1-01b8470e5115\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm" Jan 26 16:19:10 crc kubenswrapper[4896]: I0126 16:19:10.267356 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/619ecbfe-4f75-416b-bbb1-01b8470e5115-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm\" (UID: \"619ecbfe-4f75-416b-bbb1-01b8470e5115\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm" Jan 26 16:19:10 crc kubenswrapper[4896]: I0126 16:19:10.267454 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdf44\" (UniqueName: \"kubernetes.io/projected/619ecbfe-4f75-416b-bbb1-01b8470e5115-kube-api-access-jdf44\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm\" (UID: \"619ecbfe-4f75-416b-bbb1-01b8470e5115\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm" Jan 26 16:19:10 crc kubenswrapper[4896]: I0126 16:19:10.267627 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/619ecbfe-4f75-416b-bbb1-01b8470e5115-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm\" (UID: \"619ecbfe-4f75-416b-bbb1-01b8470e5115\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm" Jan 26 16:19:10 crc kubenswrapper[4896]: I0126 16:19:10.372110 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/619ecbfe-4f75-416b-bbb1-01b8470e5115-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm\" (UID: \"619ecbfe-4f75-416b-bbb1-01b8470e5115\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm" Jan 26 16:19:10 crc kubenswrapper[4896]: I0126 16:19:10.372176 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdf44\" (UniqueName: \"kubernetes.io/projected/619ecbfe-4f75-416b-bbb1-01b8470e5115-kube-api-access-jdf44\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm\" (UID: \"619ecbfe-4f75-416b-bbb1-01b8470e5115\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm" Jan 26 16:19:10 crc kubenswrapper[4896]: I0126 16:19:10.372261 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/619ecbfe-4f75-416b-bbb1-01b8470e5115-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm\" (UID: \"619ecbfe-4f75-416b-bbb1-01b8470e5115\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm" Jan 26 16:19:10 crc kubenswrapper[4896]: I0126 16:19:10.372437 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/619ecbfe-4f75-416b-bbb1-01b8470e5115-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm\" (UID: \"619ecbfe-4f75-416b-bbb1-01b8470e5115\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm" Jan 26 16:19:10 crc kubenswrapper[4896]: I0126 16:19:10.372476 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/619ecbfe-4f75-416b-bbb1-01b8470e5115-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm\" (UID: \"619ecbfe-4f75-416b-bbb1-01b8470e5115\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm" Jan 26 16:19:10 crc kubenswrapper[4896]: I0126 16:19:10.372521 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/619ecbfe-4f75-416b-bbb1-01b8470e5115-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm\" (UID: \"619ecbfe-4f75-416b-bbb1-01b8470e5115\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm" Jan 26 16:19:10 crc kubenswrapper[4896]: I0126 16:19:10.376899 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/619ecbfe-4f75-416b-bbb1-01b8470e5115-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm\" (UID: \"619ecbfe-4f75-416b-bbb1-01b8470e5115\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm" Jan 26 16:19:10 crc kubenswrapper[4896]: I0126 16:19:10.377991 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/619ecbfe-4f75-416b-bbb1-01b8470e5115-ssh-key-openstack-edpm-ipam\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm\" (UID: \"619ecbfe-4f75-416b-bbb1-01b8470e5115\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm" Jan 26 16:19:10 crc kubenswrapper[4896]: I0126 16:19:10.377960 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/619ecbfe-4f75-416b-bbb1-01b8470e5115-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm\" (UID: \"619ecbfe-4f75-416b-bbb1-01b8470e5115\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm" Jan 26 16:19:10 crc kubenswrapper[4896]: I0126 16:19:10.378060 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/619ecbfe-4f75-416b-bbb1-01b8470e5115-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm\" (UID: \"619ecbfe-4f75-416b-bbb1-01b8470e5115\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm" Jan 26 16:19:10 crc kubenswrapper[4896]: I0126 16:19:10.379610 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/619ecbfe-4f75-416b-bbb1-01b8470e5115-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm\" (UID: \"619ecbfe-4f75-416b-bbb1-01b8470e5115\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm" Jan 26 16:19:10 crc kubenswrapper[4896]: I0126 16:19:10.403660 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdf44\" (UniqueName: \"kubernetes.io/projected/619ecbfe-4f75-416b-bbb1-01b8470e5115-kube-api-access-jdf44\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm\" (UID: \"619ecbfe-4f75-416b-bbb1-01b8470e5115\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm" Jan 26 16:19:10 crc kubenswrapper[4896]: I0126 16:19:10.552285 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm" Jan 26 16:19:11 crc kubenswrapper[4896]: I0126 16:19:11.140618 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm"] Jan 26 16:19:11 crc kubenswrapper[4896]: I0126 16:19:11.148709 4896 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 16:19:11 crc kubenswrapper[4896]: I0126 16:19:11.158517 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm" event={"ID":"619ecbfe-4f75-416b-bbb1-01b8470e5115","Type":"ContainerStarted","Data":"06db69a85be526f65d92e1889de67416d621ea389bd0dc4975f4b2a0c4967e6d"} Jan 26 16:19:12 crc kubenswrapper[4896]: I0126 16:19:12.170805 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm" event={"ID":"619ecbfe-4f75-416b-bbb1-01b8470e5115","Type":"ContainerStarted","Data":"a3d71216efeb1c108a1a70375cd159d45dbbf231ae1c492f51036d16e9df59c4"} Jan 26 16:19:12 crc kubenswrapper[4896]: I0126 16:19:12.195289 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm" podStartSLOduration=1.706794696 podStartE2EDuration="2.195270773s" podCreationTimestamp="2026-01-26 16:19:10 +0000 UTC" firstStartedPulling="2026-01-26 16:19:11.148345783 +0000 UTC m=+2708.930226176" lastFinishedPulling="2026-01-26 16:19:11.63682186 +0000 UTC m=+2709.418702253" observedRunningTime="2026-01-26 16:19:12.193651025 +0000 UTC m=+2709.975531418" watchObservedRunningTime="2026-01-26 16:19:12.195270773 +0000 UTC m=+2709.977151166" Jan 26 
16:19:21 crc kubenswrapper[4896]: I0126 16:19:21.759643 4896 scope.go:117] "RemoveContainer" containerID="0d1274f7e5735274b6d5b903bed2daa59306320464a2682c9af9bb84c5aace86" Jan 26 16:19:22 crc kubenswrapper[4896]: I0126 16:19:22.278454 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerStarted","Data":"38232dea192e6d1bbe2633ff918eb1f5d8a8a536073d7952e424ab2ad966b2cc"} Jan 26 16:20:04 crc kubenswrapper[4896]: I0126 16:20:04.041191 4896 generic.go:334] "Generic (PLEG): container finished" podID="619ecbfe-4f75-416b-bbb1-01b8470e5115" containerID="a3d71216efeb1c108a1a70375cd159d45dbbf231ae1c492f51036d16e9df59c4" exitCode=0 Jan 26 16:20:04 crc kubenswrapper[4896]: I0126 16:20:04.041257 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm" event={"ID":"619ecbfe-4f75-416b-bbb1-01b8470e5115","Type":"ContainerDied","Data":"a3d71216efeb1c108a1a70375cd159d45dbbf231ae1c492f51036d16e9df59c4"} Jan 26 16:20:05 crc kubenswrapper[4896]: I0126 16:20:05.654791 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm" Jan 26 16:20:05 crc kubenswrapper[4896]: I0126 16:20:05.809505 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/619ecbfe-4f75-416b-bbb1-01b8470e5115-ssh-key-openstack-edpm-ipam\") pod \"619ecbfe-4f75-416b-bbb1-01b8470e5115\" (UID: \"619ecbfe-4f75-416b-bbb1-01b8470e5115\") " Jan 26 16:20:05 crc kubenswrapper[4896]: I0126 16:20:05.810018 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/619ecbfe-4f75-416b-bbb1-01b8470e5115-neutron-metadata-combined-ca-bundle\") pod \"619ecbfe-4f75-416b-bbb1-01b8470e5115\" (UID: \"619ecbfe-4f75-416b-bbb1-01b8470e5115\") " Jan 26 16:20:05 crc kubenswrapper[4896]: I0126 16:20:05.810103 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/619ecbfe-4f75-416b-bbb1-01b8470e5115-inventory\") pod \"619ecbfe-4f75-416b-bbb1-01b8470e5115\" (UID: \"619ecbfe-4f75-416b-bbb1-01b8470e5115\") " Jan 26 16:20:05 crc kubenswrapper[4896]: I0126 16:20:05.810133 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/619ecbfe-4f75-416b-bbb1-01b8470e5115-neutron-ovn-metadata-agent-neutron-config-0\") pod \"619ecbfe-4f75-416b-bbb1-01b8470e5115\" (UID: \"619ecbfe-4f75-416b-bbb1-01b8470e5115\") " Jan 26 16:20:05 crc kubenswrapper[4896]: I0126 16:20:05.810212 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/619ecbfe-4f75-416b-bbb1-01b8470e5115-nova-metadata-neutron-config-0\") pod \"619ecbfe-4f75-416b-bbb1-01b8470e5115\" (UID: 
\"619ecbfe-4f75-416b-bbb1-01b8470e5115\") " Jan 26 16:20:05 crc kubenswrapper[4896]: I0126 16:20:05.810402 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdf44\" (UniqueName: \"kubernetes.io/projected/619ecbfe-4f75-416b-bbb1-01b8470e5115-kube-api-access-jdf44\") pod \"619ecbfe-4f75-416b-bbb1-01b8470e5115\" (UID: \"619ecbfe-4f75-416b-bbb1-01b8470e5115\") " Jan 26 16:20:05 crc kubenswrapper[4896]: I0126 16:20:05.817884 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/619ecbfe-4f75-416b-bbb1-01b8470e5115-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "619ecbfe-4f75-416b-bbb1-01b8470e5115" (UID: "619ecbfe-4f75-416b-bbb1-01b8470e5115"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:20:05 crc kubenswrapper[4896]: I0126 16:20:05.833958 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/619ecbfe-4f75-416b-bbb1-01b8470e5115-kube-api-access-jdf44" (OuterVolumeSpecName: "kube-api-access-jdf44") pod "619ecbfe-4f75-416b-bbb1-01b8470e5115" (UID: "619ecbfe-4f75-416b-bbb1-01b8470e5115"). InnerVolumeSpecName "kube-api-access-jdf44". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:20:05 crc kubenswrapper[4896]: I0126 16:20:05.861545 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/619ecbfe-4f75-416b-bbb1-01b8470e5115-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "619ecbfe-4f75-416b-bbb1-01b8470e5115" (UID: "619ecbfe-4f75-416b-bbb1-01b8470e5115"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:20:05 crc kubenswrapper[4896]: I0126 16:20:05.863744 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/619ecbfe-4f75-416b-bbb1-01b8470e5115-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "619ecbfe-4f75-416b-bbb1-01b8470e5115" (UID: "619ecbfe-4f75-416b-bbb1-01b8470e5115"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:20:05 crc kubenswrapper[4896]: I0126 16:20:05.868859 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/619ecbfe-4f75-416b-bbb1-01b8470e5115-inventory" (OuterVolumeSpecName: "inventory") pod "619ecbfe-4f75-416b-bbb1-01b8470e5115" (UID: "619ecbfe-4f75-416b-bbb1-01b8470e5115"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:20:05 crc kubenswrapper[4896]: I0126 16:20:05.873081 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/619ecbfe-4f75-416b-bbb1-01b8470e5115-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "619ecbfe-4f75-416b-bbb1-01b8470e5115" (UID: "619ecbfe-4f75-416b-bbb1-01b8470e5115"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:20:05 crc kubenswrapper[4896]: I0126 16:20:05.912855 4896 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/619ecbfe-4f75-416b-bbb1-01b8470e5115-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 16:20:05 crc kubenswrapper[4896]: I0126 16:20:05.912887 4896 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/619ecbfe-4f75-416b-bbb1-01b8470e5115-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:20:05 crc kubenswrapper[4896]: I0126 16:20:05.912901 4896 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/619ecbfe-4f75-416b-bbb1-01b8470e5115-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:20:05 crc kubenswrapper[4896]: I0126 16:20:05.912911 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdf44\" (UniqueName: \"kubernetes.io/projected/619ecbfe-4f75-416b-bbb1-01b8470e5115-kube-api-access-jdf44\") on node \"crc\" DevicePath \"\"" Jan 26 16:20:05 crc kubenswrapper[4896]: I0126 16:20:05.912942 4896 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/619ecbfe-4f75-416b-bbb1-01b8470e5115-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:20:05 crc kubenswrapper[4896]: I0126 16:20:05.912954 4896 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/619ecbfe-4f75-416b-bbb1-01b8470e5115-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:20:06 crc kubenswrapper[4896]: I0126 16:20:06.069821 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm" event={"ID":"619ecbfe-4f75-416b-bbb1-01b8470e5115","Type":"ContainerDied","Data":"06db69a85be526f65d92e1889de67416d621ea389bd0dc4975f4b2a0c4967e6d"} Jan 26 16:20:06 crc kubenswrapper[4896]: I0126 16:20:06.069867 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06db69a85be526f65d92e1889de67416d621ea389bd0dc4975f4b2a0c4967e6d" Jan 26 16:20:06 crc kubenswrapper[4896]: I0126 16:20:06.069948 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm" Jan 26 16:20:06 crc kubenswrapper[4896]: I0126 16:20:06.258918 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5q8st"] Jan 26 16:20:06 crc kubenswrapper[4896]: E0126 16:20:06.259677 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="619ecbfe-4f75-416b-bbb1-01b8470e5115" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 26 16:20:06 crc kubenswrapper[4896]: I0126 16:20:06.259707 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="619ecbfe-4f75-416b-bbb1-01b8470e5115" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 26 16:20:06 crc kubenswrapper[4896]: I0126 16:20:06.260034 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="619ecbfe-4f75-416b-bbb1-01b8470e5115" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 26 16:20:06 crc kubenswrapper[4896]: I0126 16:20:06.261284 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5q8st" Jan 26 16:20:06 crc kubenswrapper[4896]: I0126 16:20:06.265930 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 16:20:06 crc kubenswrapper[4896]: I0126 16:20:06.265930 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:20:06 crc kubenswrapper[4896]: I0126 16:20:06.265987 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 26 16:20:06 crc kubenswrapper[4896]: I0126 16:20:06.273621 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 16:20:06 crc kubenswrapper[4896]: I0126 16:20:06.276284 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5q8st"] Jan 26 16:20:06 crc kubenswrapper[4896]: I0126 16:20:06.278010 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-48n6x" Jan 26 16:20:06 crc kubenswrapper[4896]: I0126 16:20:06.339106 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/019cb762-55d7-4c9d-a425-fde89665ac76-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5q8st\" (UID: \"019cb762-55d7-4c9d-a425-fde89665ac76\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5q8st" Jan 26 16:20:06 crc kubenswrapper[4896]: I0126 16:20:06.339237 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2jjz\" (UniqueName: \"kubernetes.io/projected/019cb762-55d7-4c9d-a425-fde89665ac76-kube-api-access-p2jjz\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5q8st\" (UID: 
\"019cb762-55d7-4c9d-a425-fde89665ac76\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5q8st" Jan 26 16:20:06 crc kubenswrapper[4896]: I0126 16:20:06.339275 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/019cb762-55d7-4c9d-a425-fde89665ac76-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5q8st\" (UID: \"019cb762-55d7-4c9d-a425-fde89665ac76\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5q8st" Jan 26 16:20:06 crc kubenswrapper[4896]: I0126 16:20:06.339304 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/019cb762-55d7-4c9d-a425-fde89665ac76-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5q8st\" (UID: \"019cb762-55d7-4c9d-a425-fde89665ac76\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5q8st" Jan 26 16:20:06 crc kubenswrapper[4896]: I0126 16:20:06.339402 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/019cb762-55d7-4c9d-a425-fde89665ac76-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5q8st\" (UID: \"019cb762-55d7-4c9d-a425-fde89665ac76\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5q8st" Jan 26 16:20:06 crc kubenswrapper[4896]: I0126 16:20:06.441636 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2jjz\" (UniqueName: \"kubernetes.io/projected/019cb762-55d7-4c9d-a425-fde89665ac76-kube-api-access-p2jjz\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5q8st\" (UID: \"019cb762-55d7-4c9d-a425-fde89665ac76\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5q8st" Jan 26 16:20:06 crc kubenswrapper[4896]: I0126 16:20:06.441695 
4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/019cb762-55d7-4c9d-a425-fde89665ac76-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5q8st\" (UID: \"019cb762-55d7-4c9d-a425-fde89665ac76\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5q8st" Jan 26 16:20:06 crc kubenswrapper[4896]: I0126 16:20:06.441727 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/019cb762-55d7-4c9d-a425-fde89665ac76-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5q8st\" (UID: \"019cb762-55d7-4c9d-a425-fde89665ac76\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5q8st" Jan 26 16:20:06 crc kubenswrapper[4896]: I0126 16:20:06.441825 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/019cb762-55d7-4c9d-a425-fde89665ac76-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5q8st\" (UID: \"019cb762-55d7-4c9d-a425-fde89665ac76\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5q8st" Jan 26 16:20:06 crc kubenswrapper[4896]: I0126 16:20:06.441866 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/019cb762-55d7-4c9d-a425-fde89665ac76-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5q8st\" (UID: \"019cb762-55d7-4c9d-a425-fde89665ac76\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5q8st" Jan 26 16:20:06 crc kubenswrapper[4896]: I0126 16:20:06.447560 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/019cb762-55d7-4c9d-a425-fde89665ac76-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5q8st\" (UID: 
\"019cb762-55d7-4c9d-a425-fde89665ac76\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5q8st" Jan 26 16:20:06 crc kubenswrapper[4896]: I0126 16:20:06.448988 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/019cb762-55d7-4c9d-a425-fde89665ac76-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5q8st\" (UID: \"019cb762-55d7-4c9d-a425-fde89665ac76\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5q8st" Jan 26 16:20:06 crc kubenswrapper[4896]: I0126 16:20:06.449371 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/019cb762-55d7-4c9d-a425-fde89665ac76-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5q8st\" (UID: \"019cb762-55d7-4c9d-a425-fde89665ac76\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5q8st" Jan 26 16:20:06 crc kubenswrapper[4896]: I0126 16:20:06.454354 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/019cb762-55d7-4c9d-a425-fde89665ac76-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5q8st\" (UID: \"019cb762-55d7-4c9d-a425-fde89665ac76\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5q8st" Jan 26 16:20:06 crc kubenswrapper[4896]: I0126 16:20:06.470329 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2jjz\" (UniqueName: \"kubernetes.io/projected/019cb762-55d7-4c9d-a425-fde89665ac76-kube-api-access-p2jjz\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5q8st\" (UID: \"019cb762-55d7-4c9d-a425-fde89665ac76\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5q8st" Jan 26 16:20:06 crc kubenswrapper[4896]: I0126 16:20:06.581456 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5q8st" Jan 26 16:20:07 crc kubenswrapper[4896]: I0126 16:20:07.161322 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5q8st"] Jan 26 16:20:08 crc kubenswrapper[4896]: I0126 16:20:08.097382 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5q8st" event={"ID":"019cb762-55d7-4c9d-a425-fde89665ac76","Type":"ContainerStarted","Data":"59f8862bf8f05cd7293c67d57880bdfa48dca56ac63181e91fcedd6e3a453aeb"} Jan 26 16:20:08 crc kubenswrapper[4896]: I0126 16:20:08.097702 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5q8st" event={"ID":"019cb762-55d7-4c9d-a425-fde89665ac76","Type":"ContainerStarted","Data":"4e4780e63128f5d1756eb1aa1a53dd88037f3d2576d39d9a932e417475ed1a00"} Jan 26 16:20:08 crc kubenswrapper[4896]: I0126 16:20:08.118707 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5q8st" podStartSLOduration=1.70412682 podStartE2EDuration="2.118689283s" podCreationTimestamp="2026-01-26 16:20:06 +0000 UTC" firstStartedPulling="2026-01-26 16:20:07.169565842 +0000 UTC m=+2764.951446235" lastFinishedPulling="2026-01-26 16:20:07.584128305 +0000 UTC m=+2765.366008698" observedRunningTime="2026-01-26 16:20:08.113223231 +0000 UTC m=+2765.895103624" watchObservedRunningTime="2026-01-26 16:20:08.118689283 +0000 UTC m=+2765.900569676" Jan 26 16:20:17 crc kubenswrapper[4896]: I0126 16:20:17.901813 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6k2cx"] Jan 26 16:20:17 crc kubenswrapper[4896]: I0126 16:20:17.906859 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6k2cx" Jan 26 16:20:17 crc kubenswrapper[4896]: I0126 16:20:17.925255 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6k2cx"] Jan 26 16:20:18 crc kubenswrapper[4896]: I0126 16:20:18.061667 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ad9dc3f-e35b-4c27-bf47-d0980d29945b-utilities\") pod \"redhat-operators-6k2cx\" (UID: \"0ad9dc3f-e35b-4c27-bf47-d0980d29945b\") " pod="openshift-marketplace/redhat-operators-6k2cx" Jan 26 16:20:18 crc kubenswrapper[4896]: I0126 16:20:18.061747 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ad9dc3f-e35b-4c27-bf47-d0980d29945b-catalog-content\") pod \"redhat-operators-6k2cx\" (UID: \"0ad9dc3f-e35b-4c27-bf47-d0980d29945b\") " pod="openshift-marketplace/redhat-operators-6k2cx" Jan 26 16:20:18 crc kubenswrapper[4896]: I0126 16:20:18.061843 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k598\" (UniqueName: \"kubernetes.io/projected/0ad9dc3f-e35b-4c27-bf47-d0980d29945b-kube-api-access-8k598\") pod \"redhat-operators-6k2cx\" (UID: \"0ad9dc3f-e35b-4c27-bf47-d0980d29945b\") " pod="openshift-marketplace/redhat-operators-6k2cx" Jan 26 16:20:18 crc kubenswrapper[4896]: I0126 16:20:18.165218 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ad9dc3f-e35b-4c27-bf47-d0980d29945b-utilities\") pod \"redhat-operators-6k2cx\" (UID: \"0ad9dc3f-e35b-4c27-bf47-d0980d29945b\") " pod="openshift-marketplace/redhat-operators-6k2cx" Jan 26 16:20:18 crc kubenswrapper[4896]: I0126 16:20:18.165327 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ad9dc3f-e35b-4c27-bf47-d0980d29945b-catalog-content\") pod \"redhat-operators-6k2cx\" (UID: \"0ad9dc3f-e35b-4c27-bf47-d0980d29945b\") " pod="openshift-marketplace/redhat-operators-6k2cx" Jan 26 16:20:18 crc kubenswrapper[4896]: I0126 16:20:18.165356 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k598\" (UniqueName: \"kubernetes.io/projected/0ad9dc3f-e35b-4c27-bf47-d0980d29945b-kube-api-access-8k598\") pod \"redhat-operators-6k2cx\" (UID: \"0ad9dc3f-e35b-4c27-bf47-d0980d29945b\") " pod="openshift-marketplace/redhat-operators-6k2cx" Jan 26 16:20:18 crc kubenswrapper[4896]: I0126 16:20:18.165938 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ad9dc3f-e35b-4c27-bf47-d0980d29945b-utilities\") pod \"redhat-operators-6k2cx\" (UID: \"0ad9dc3f-e35b-4c27-bf47-d0980d29945b\") " pod="openshift-marketplace/redhat-operators-6k2cx" Jan 26 16:20:18 crc kubenswrapper[4896]: I0126 16:20:18.165938 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ad9dc3f-e35b-4c27-bf47-d0980d29945b-catalog-content\") pod \"redhat-operators-6k2cx\" (UID: \"0ad9dc3f-e35b-4c27-bf47-d0980d29945b\") " pod="openshift-marketplace/redhat-operators-6k2cx" Jan 26 16:20:18 crc kubenswrapper[4896]: I0126 16:20:18.197485 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k598\" (UniqueName: \"kubernetes.io/projected/0ad9dc3f-e35b-4c27-bf47-d0980d29945b-kube-api-access-8k598\") pod \"redhat-operators-6k2cx\" (UID: \"0ad9dc3f-e35b-4c27-bf47-d0980d29945b\") " pod="openshift-marketplace/redhat-operators-6k2cx" Jan 26 16:20:18 crc kubenswrapper[4896]: I0126 16:20:18.240522 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6k2cx" Jan 26 16:20:18 crc kubenswrapper[4896]: I0126 16:20:18.786000 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6k2cx"] Jan 26 16:20:19 crc kubenswrapper[4896]: I0126 16:20:19.284975 4896 generic.go:334] "Generic (PLEG): container finished" podID="0ad9dc3f-e35b-4c27-bf47-d0980d29945b" containerID="e9073f1e521df081885d1fbd0528c1577247baa0a3a49cd939875c4f0dbc2110" exitCode=0 Jan 26 16:20:19 crc kubenswrapper[4896]: I0126 16:20:19.285027 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6k2cx" event={"ID":"0ad9dc3f-e35b-4c27-bf47-d0980d29945b","Type":"ContainerDied","Data":"e9073f1e521df081885d1fbd0528c1577247baa0a3a49cd939875c4f0dbc2110"} Jan 26 16:20:19 crc kubenswrapper[4896]: I0126 16:20:19.285354 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6k2cx" event={"ID":"0ad9dc3f-e35b-4c27-bf47-d0980d29945b","Type":"ContainerStarted","Data":"1a7c4b8c21f5bc3120373af5c56b65f3650549cf9b31fe1c76903ef800b7505e"} Jan 26 16:20:21 crc kubenswrapper[4896]: I0126 16:20:21.314393 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6k2cx" event={"ID":"0ad9dc3f-e35b-4c27-bf47-d0980d29945b","Type":"ContainerStarted","Data":"1dfd5cd629f3c29d415f409b105cf27d066a25bd62894378f07b52ba5a8c6bda"} Jan 26 16:20:24 crc kubenswrapper[4896]: I0126 16:20:24.359460 4896 generic.go:334] "Generic (PLEG): container finished" podID="0ad9dc3f-e35b-4c27-bf47-d0980d29945b" containerID="1dfd5cd629f3c29d415f409b105cf27d066a25bd62894378f07b52ba5a8c6bda" exitCode=0 Jan 26 16:20:24 crc kubenswrapper[4896]: I0126 16:20:24.359549 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6k2cx" 
event={"ID":"0ad9dc3f-e35b-4c27-bf47-d0980d29945b","Type":"ContainerDied","Data":"1dfd5cd629f3c29d415f409b105cf27d066a25bd62894378f07b52ba5a8c6bda"} Jan 26 16:20:25 crc kubenswrapper[4896]: I0126 16:20:25.378230 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6k2cx" event={"ID":"0ad9dc3f-e35b-4c27-bf47-d0980d29945b","Type":"ContainerStarted","Data":"438c14342548ce62ac0d76ea3d86497409b21062c0b6b30c6f9808e99d68ccfc"} Jan 26 16:20:25 crc kubenswrapper[4896]: I0126 16:20:25.404695 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6k2cx" podStartSLOduration=2.598609818 podStartE2EDuration="8.404670665s" podCreationTimestamp="2026-01-26 16:20:17 +0000 UTC" firstStartedPulling="2026-01-26 16:20:19.287442272 +0000 UTC m=+2777.069322665" lastFinishedPulling="2026-01-26 16:20:25.093503109 +0000 UTC m=+2782.875383512" observedRunningTime="2026-01-26 16:20:25.398905885 +0000 UTC m=+2783.180786288" watchObservedRunningTime="2026-01-26 16:20:25.404670665 +0000 UTC m=+2783.186551068" Jan 26 16:20:28 crc kubenswrapper[4896]: I0126 16:20:28.241038 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6k2cx" Jan 26 16:20:28 crc kubenswrapper[4896]: I0126 16:20:28.241337 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6k2cx" Jan 26 16:20:29 crc kubenswrapper[4896]: I0126 16:20:29.308710 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6k2cx" podUID="0ad9dc3f-e35b-4c27-bf47-d0980d29945b" containerName="registry-server" probeResult="failure" output=< Jan 26 16:20:29 crc kubenswrapper[4896]: timeout: failed to connect service ":50051" within 1s Jan 26 16:20:29 crc kubenswrapper[4896]: > Jan 26 16:20:38 crc kubenswrapper[4896]: I0126 16:20:38.290740 4896 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6k2cx" Jan 26 16:20:38 crc kubenswrapper[4896]: I0126 16:20:38.356253 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6k2cx" Jan 26 16:20:38 crc kubenswrapper[4896]: I0126 16:20:38.540977 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6k2cx"] Jan 26 16:20:39 crc kubenswrapper[4896]: I0126 16:20:39.549075 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6k2cx" podUID="0ad9dc3f-e35b-4c27-bf47-d0980d29945b" containerName="registry-server" containerID="cri-o://438c14342548ce62ac0d76ea3d86497409b21062c0b6b30c6f9808e99d68ccfc" gracePeriod=2 Jan 26 16:20:40 crc kubenswrapper[4896]: I0126 16:20:40.144685 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6k2cx" Jan 26 16:20:40 crc kubenswrapper[4896]: I0126 16:20:40.250110 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ad9dc3f-e35b-4c27-bf47-d0980d29945b-catalog-content\") pod \"0ad9dc3f-e35b-4c27-bf47-d0980d29945b\" (UID: \"0ad9dc3f-e35b-4c27-bf47-d0980d29945b\") " Jan 26 16:20:40 crc kubenswrapper[4896]: I0126 16:20:40.250410 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8k598\" (UniqueName: \"kubernetes.io/projected/0ad9dc3f-e35b-4c27-bf47-d0980d29945b-kube-api-access-8k598\") pod \"0ad9dc3f-e35b-4c27-bf47-d0980d29945b\" (UID: \"0ad9dc3f-e35b-4c27-bf47-d0980d29945b\") " Jan 26 16:20:40 crc kubenswrapper[4896]: I0126 16:20:40.250733 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ad9dc3f-e35b-4c27-bf47-d0980d29945b-utilities\") pod 
\"0ad9dc3f-e35b-4c27-bf47-d0980d29945b\" (UID: \"0ad9dc3f-e35b-4c27-bf47-d0980d29945b\") " Jan 26 16:20:40 crc kubenswrapper[4896]: I0126 16:20:40.251183 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ad9dc3f-e35b-4c27-bf47-d0980d29945b-utilities" (OuterVolumeSpecName: "utilities") pod "0ad9dc3f-e35b-4c27-bf47-d0980d29945b" (UID: "0ad9dc3f-e35b-4c27-bf47-d0980d29945b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:20:40 crc kubenswrapper[4896]: I0126 16:20:40.251535 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ad9dc3f-e35b-4c27-bf47-d0980d29945b-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:20:40 crc kubenswrapper[4896]: I0126 16:20:40.256380 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ad9dc3f-e35b-4c27-bf47-d0980d29945b-kube-api-access-8k598" (OuterVolumeSpecName: "kube-api-access-8k598") pod "0ad9dc3f-e35b-4c27-bf47-d0980d29945b" (UID: "0ad9dc3f-e35b-4c27-bf47-d0980d29945b"). InnerVolumeSpecName "kube-api-access-8k598". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:20:40 crc kubenswrapper[4896]: I0126 16:20:40.361149 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8k598\" (UniqueName: \"kubernetes.io/projected/0ad9dc3f-e35b-4c27-bf47-d0980d29945b-kube-api-access-8k598\") on node \"crc\" DevicePath \"\"" Jan 26 16:20:40 crc kubenswrapper[4896]: I0126 16:20:40.409948 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ad9dc3f-e35b-4c27-bf47-d0980d29945b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0ad9dc3f-e35b-4c27-bf47-d0980d29945b" (UID: "0ad9dc3f-e35b-4c27-bf47-d0980d29945b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:20:40 crc kubenswrapper[4896]: I0126 16:20:40.464120 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ad9dc3f-e35b-4c27-bf47-d0980d29945b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:20:40 crc kubenswrapper[4896]: I0126 16:20:40.564569 4896 generic.go:334] "Generic (PLEG): container finished" podID="0ad9dc3f-e35b-4c27-bf47-d0980d29945b" containerID="438c14342548ce62ac0d76ea3d86497409b21062c0b6b30c6f9808e99d68ccfc" exitCode=0 Jan 26 16:20:40 crc kubenswrapper[4896]: I0126 16:20:40.564685 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6k2cx" event={"ID":"0ad9dc3f-e35b-4c27-bf47-d0980d29945b","Type":"ContainerDied","Data":"438c14342548ce62ac0d76ea3d86497409b21062c0b6b30c6f9808e99d68ccfc"} Jan 26 16:20:40 crc kubenswrapper[4896]: I0126 16:20:40.564709 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6k2cx" Jan 26 16:20:40 crc kubenswrapper[4896]: I0126 16:20:40.564753 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6k2cx" event={"ID":"0ad9dc3f-e35b-4c27-bf47-d0980d29945b","Type":"ContainerDied","Data":"1a7c4b8c21f5bc3120373af5c56b65f3650549cf9b31fe1c76903ef800b7505e"} Jan 26 16:20:40 crc kubenswrapper[4896]: I0126 16:20:40.564777 4896 scope.go:117] "RemoveContainer" containerID="438c14342548ce62ac0d76ea3d86497409b21062c0b6b30c6f9808e99d68ccfc" Jan 26 16:20:40 crc kubenswrapper[4896]: I0126 16:20:40.610340 4896 scope.go:117] "RemoveContainer" containerID="1dfd5cd629f3c29d415f409b105cf27d066a25bd62894378f07b52ba5a8c6bda" Jan 26 16:20:40 crc kubenswrapper[4896]: I0126 16:20:40.611001 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6k2cx"] Jan 26 16:20:40 crc kubenswrapper[4896]: I0126 16:20:40.621927 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6k2cx"] Jan 26 16:20:40 crc kubenswrapper[4896]: I0126 16:20:40.638632 4896 scope.go:117] "RemoveContainer" containerID="e9073f1e521df081885d1fbd0528c1577247baa0a3a49cd939875c4f0dbc2110" Jan 26 16:20:40 crc kubenswrapper[4896]: I0126 16:20:40.690886 4896 scope.go:117] "RemoveContainer" containerID="438c14342548ce62ac0d76ea3d86497409b21062c0b6b30c6f9808e99d68ccfc" Jan 26 16:20:40 crc kubenswrapper[4896]: E0126 16:20:40.691235 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"438c14342548ce62ac0d76ea3d86497409b21062c0b6b30c6f9808e99d68ccfc\": container with ID starting with 438c14342548ce62ac0d76ea3d86497409b21062c0b6b30c6f9808e99d68ccfc not found: ID does not exist" containerID="438c14342548ce62ac0d76ea3d86497409b21062c0b6b30c6f9808e99d68ccfc" Jan 26 16:20:40 crc kubenswrapper[4896]: I0126 16:20:40.691269 4896 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"438c14342548ce62ac0d76ea3d86497409b21062c0b6b30c6f9808e99d68ccfc"} err="failed to get container status \"438c14342548ce62ac0d76ea3d86497409b21062c0b6b30c6f9808e99d68ccfc\": rpc error: code = NotFound desc = could not find container \"438c14342548ce62ac0d76ea3d86497409b21062c0b6b30c6f9808e99d68ccfc\": container with ID starting with 438c14342548ce62ac0d76ea3d86497409b21062c0b6b30c6f9808e99d68ccfc not found: ID does not exist" Jan 26 16:20:40 crc kubenswrapper[4896]: I0126 16:20:40.691291 4896 scope.go:117] "RemoveContainer" containerID="1dfd5cd629f3c29d415f409b105cf27d066a25bd62894378f07b52ba5a8c6bda" Jan 26 16:20:40 crc kubenswrapper[4896]: E0126 16:20:40.692469 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dfd5cd629f3c29d415f409b105cf27d066a25bd62894378f07b52ba5a8c6bda\": container with ID starting with 1dfd5cd629f3c29d415f409b105cf27d066a25bd62894378f07b52ba5a8c6bda not found: ID does not exist" containerID="1dfd5cd629f3c29d415f409b105cf27d066a25bd62894378f07b52ba5a8c6bda" Jan 26 16:20:40 crc kubenswrapper[4896]: I0126 16:20:40.692532 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dfd5cd629f3c29d415f409b105cf27d066a25bd62894378f07b52ba5a8c6bda"} err="failed to get container status \"1dfd5cd629f3c29d415f409b105cf27d066a25bd62894378f07b52ba5a8c6bda\": rpc error: code = NotFound desc = could not find container \"1dfd5cd629f3c29d415f409b105cf27d066a25bd62894378f07b52ba5a8c6bda\": container with ID starting with 1dfd5cd629f3c29d415f409b105cf27d066a25bd62894378f07b52ba5a8c6bda not found: ID does not exist" Jan 26 16:20:40 crc kubenswrapper[4896]: I0126 16:20:40.692572 4896 scope.go:117] "RemoveContainer" containerID="e9073f1e521df081885d1fbd0528c1577247baa0a3a49cd939875c4f0dbc2110" Jan 26 16:20:40 crc kubenswrapper[4896]: E0126 
16:20:40.695045 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9073f1e521df081885d1fbd0528c1577247baa0a3a49cd939875c4f0dbc2110\": container with ID starting with e9073f1e521df081885d1fbd0528c1577247baa0a3a49cd939875c4f0dbc2110 not found: ID does not exist" containerID="e9073f1e521df081885d1fbd0528c1577247baa0a3a49cd939875c4f0dbc2110" Jan 26 16:20:40 crc kubenswrapper[4896]: I0126 16:20:40.695087 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9073f1e521df081885d1fbd0528c1577247baa0a3a49cd939875c4f0dbc2110"} err="failed to get container status \"e9073f1e521df081885d1fbd0528c1577247baa0a3a49cd939875c4f0dbc2110\": rpc error: code = NotFound desc = could not find container \"e9073f1e521df081885d1fbd0528c1577247baa0a3a49cd939875c4f0dbc2110\": container with ID starting with e9073f1e521df081885d1fbd0528c1577247baa0a3a49cd939875c4f0dbc2110 not found: ID does not exist" Jan 26 16:20:40 crc kubenswrapper[4896]: I0126 16:20:40.774079 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ad9dc3f-e35b-4c27-bf47-d0980d29945b" path="/var/lib/kubelet/pods/0ad9dc3f-e35b-4c27-bf47-d0980d29945b/volumes" Jan 26 16:21:48 crc kubenswrapper[4896]: I0126 16:21:48.813711 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:21:48 crc kubenswrapper[4896]: I0126 16:21:48.814187 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 26 16:22:14 crc kubenswrapper[4896]: I0126 16:22:14.483045 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wmzz2"] Jan 26 16:22:14 crc kubenswrapper[4896]: E0126 16:22:14.484452 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ad9dc3f-e35b-4c27-bf47-d0980d29945b" containerName="extract-utilities" Jan 26 16:22:14 crc kubenswrapper[4896]: I0126 16:22:14.484475 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ad9dc3f-e35b-4c27-bf47-d0980d29945b" containerName="extract-utilities" Jan 26 16:22:14 crc kubenswrapper[4896]: E0126 16:22:14.484536 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ad9dc3f-e35b-4c27-bf47-d0980d29945b" containerName="extract-content" Jan 26 16:22:14 crc kubenswrapper[4896]: I0126 16:22:14.484547 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ad9dc3f-e35b-4c27-bf47-d0980d29945b" containerName="extract-content" Jan 26 16:22:14 crc kubenswrapper[4896]: E0126 16:22:14.484655 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ad9dc3f-e35b-4c27-bf47-d0980d29945b" containerName="registry-server" Jan 26 16:22:14 crc kubenswrapper[4896]: I0126 16:22:14.484666 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ad9dc3f-e35b-4c27-bf47-d0980d29945b" containerName="registry-server" Jan 26 16:22:14 crc kubenswrapper[4896]: I0126 16:22:14.484988 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ad9dc3f-e35b-4c27-bf47-d0980d29945b" containerName="registry-server" Jan 26 16:22:14 crc kubenswrapper[4896]: I0126 16:22:14.487440 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wmzz2" Jan 26 16:22:14 crc kubenswrapper[4896]: I0126 16:22:14.517761 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wmzz2"] Jan 26 16:22:14 crc kubenswrapper[4896]: I0126 16:22:14.627567 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17a95fa5-440c-496f-bbed-3017e7cc301c-utilities\") pod \"certified-operators-wmzz2\" (UID: \"17a95fa5-440c-496f-bbed-3017e7cc301c\") " pod="openshift-marketplace/certified-operators-wmzz2" Jan 26 16:22:14 crc kubenswrapper[4896]: I0126 16:22:14.627695 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17a95fa5-440c-496f-bbed-3017e7cc301c-catalog-content\") pod \"certified-operators-wmzz2\" (UID: \"17a95fa5-440c-496f-bbed-3017e7cc301c\") " pod="openshift-marketplace/certified-operators-wmzz2" Jan 26 16:22:14 crc kubenswrapper[4896]: I0126 16:22:14.627738 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k6nd\" (UniqueName: \"kubernetes.io/projected/17a95fa5-440c-496f-bbed-3017e7cc301c-kube-api-access-2k6nd\") pod \"certified-operators-wmzz2\" (UID: \"17a95fa5-440c-496f-bbed-3017e7cc301c\") " pod="openshift-marketplace/certified-operators-wmzz2" Jan 26 16:22:14 crc kubenswrapper[4896]: I0126 16:22:14.730935 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17a95fa5-440c-496f-bbed-3017e7cc301c-utilities\") pod \"certified-operators-wmzz2\" (UID: \"17a95fa5-440c-496f-bbed-3017e7cc301c\") " pod="openshift-marketplace/certified-operators-wmzz2" Jan 26 16:22:14 crc kubenswrapper[4896]: I0126 16:22:14.731379 4896 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17a95fa5-440c-496f-bbed-3017e7cc301c-catalog-content\") pod \"certified-operators-wmzz2\" (UID: \"17a95fa5-440c-496f-bbed-3017e7cc301c\") " pod="openshift-marketplace/certified-operators-wmzz2" Jan 26 16:22:14 crc kubenswrapper[4896]: I0126 16:22:14.731557 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k6nd\" (UniqueName: \"kubernetes.io/projected/17a95fa5-440c-496f-bbed-3017e7cc301c-kube-api-access-2k6nd\") pod \"certified-operators-wmzz2\" (UID: \"17a95fa5-440c-496f-bbed-3017e7cc301c\") " pod="openshift-marketplace/certified-operators-wmzz2" Jan 26 16:22:14 crc kubenswrapper[4896]: I0126 16:22:14.732455 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17a95fa5-440c-496f-bbed-3017e7cc301c-utilities\") pod \"certified-operators-wmzz2\" (UID: \"17a95fa5-440c-496f-bbed-3017e7cc301c\") " pod="openshift-marketplace/certified-operators-wmzz2" Jan 26 16:22:14 crc kubenswrapper[4896]: I0126 16:22:14.732937 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17a95fa5-440c-496f-bbed-3017e7cc301c-catalog-content\") pod \"certified-operators-wmzz2\" (UID: \"17a95fa5-440c-496f-bbed-3017e7cc301c\") " pod="openshift-marketplace/certified-operators-wmzz2" Jan 26 16:22:14 crc kubenswrapper[4896]: I0126 16:22:14.771532 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k6nd\" (UniqueName: \"kubernetes.io/projected/17a95fa5-440c-496f-bbed-3017e7cc301c-kube-api-access-2k6nd\") pod \"certified-operators-wmzz2\" (UID: \"17a95fa5-440c-496f-bbed-3017e7cc301c\") " pod="openshift-marketplace/certified-operators-wmzz2" Jan 26 16:22:14 crc kubenswrapper[4896]: I0126 16:22:14.816180 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wmzz2" Jan 26 16:22:15 crc kubenswrapper[4896]: I0126 16:22:15.461749 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wmzz2"] Jan 26 16:22:16 crc kubenswrapper[4896]: I0126 16:22:16.130607 4896 generic.go:334] "Generic (PLEG): container finished" podID="17a95fa5-440c-496f-bbed-3017e7cc301c" containerID="254f80f5310304df909d5f05767ce1bafe53d4fd396942400220aae453b7df5b" exitCode=0 Jan 26 16:22:16 crc kubenswrapper[4896]: I0126 16:22:16.131607 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wmzz2" event={"ID":"17a95fa5-440c-496f-bbed-3017e7cc301c","Type":"ContainerDied","Data":"254f80f5310304df909d5f05767ce1bafe53d4fd396942400220aae453b7df5b"} Jan 26 16:22:16 crc kubenswrapper[4896]: I0126 16:22:16.131657 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wmzz2" event={"ID":"17a95fa5-440c-496f-bbed-3017e7cc301c","Type":"ContainerStarted","Data":"8e2ad0ad4f950e277321bc0d82cc43fa4a1dc8d29886c2b8c00fae6fd474ce8e"} Jan 26 16:22:17 crc kubenswrapper[4896]: I0126 16:22:17.144370 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wmzz2" event={"ID":"17a95fa5-440c-496f-bbed-3017e7cc301c","Type":"ContainerStarted","Data":"ae10ad0cf5e30ea36ca2ec60dccc400ed56286f4d80998078e98c63d02d01696"} Jan 26 16:22:18 crc kubenswrapper[4896]: I0126 16:22:18.157950 4896 generic.go:334] "Generic (PLEG): container finished" podID="17a95fa5-440c-496f-bbed-3017e7cc301c" containerID="ae10ad0cf5e30ea36ca2ec60dccc400ed56286f4d80998078e98c63d02d01696" exitCode=0 Jan 26 16:22:18 crc kubenswrapper[4896]: I0126 16:22:18.158047 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wmzz2" 
event={"ID":"17a95fa5-440c-496f-bbed-3017e7cc301c","Type":"ContainerDied","Data":"ae10ad0cf5e30ea36ca2ec60dccc400ed56286f4d80998078e98c63d02d01696"} Jan 26 16:22:18 crc kubenswrapper[4896]: I0126 16:22:18.814268 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:22:18 crc kubenswrapper[4896]: I0126 16:22:18.814710 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:22:19 crc kubenswrapper[4896]: I0126 16:22:19.170222 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wmzz2" event={"ID":"17a95fa5-440c-496f-bbed-3017e7cc301c","Type":"ContainerStarted","Data":"68086d56a36bbe91a0301c56a3df603a324ae0954ba5f85353eec11efd2520d9"} Jan 26 16:22:19 crc kubenswrapper[4896]: I0126 16:22:19.206008 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wmzz2" podStartSLOduration=2.7374630460000002 podStartE2EDuration="5.205970633s" podCreationTimestamp="2026-01-26 16:22:14 +0000 UTC" firstStartedPulling="2026-01-26 16:22:16.132553484 +0000 UTC m=+2893.914433877" lastFinishedPulling="2026-01-26 16:22:18.601061071 +0000 UTC m=+2896.382941464" observedRunningTime="2026-01-26 16:22:19.193376007 +0000 UTC m=+2896.975256410" watchObservedRunningTime="2026-01-26 16:22:19.205970633 +0000 UTC m=+2896.987851016" Jan 26 16:22:24 crc kubenswrapper[4896]: I0126 16:22:24.817686 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-wmzz2" Jan 26 16:22:24 crc kubenswrapper[4896]: I0126 16:22:24.818288 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wmzz2" Jan 26 16:22:24 crc kubenswrapper[4896]: I0126 16:22:24.868984 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wmzz2" Jan 26 16:22:25 crc kubenswrapper[4896]: I0126 16:22:25.316710 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wmzz2" Jan 26 16:22:25 crc kubenswrapper[4896]: I0126 16:22:25.396673 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wmzz2"] Jan 26 16:22:27 crc kubenswrapper[4896]: I0126 16:22:27.264178 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wmzz2" podUID="17a95fa5-440c-496f-bbed-3017e7cc301c" containerName="registry-server" containerID="cri-o://68086d56a36bbe91a0301c56a3df603a324ae0954ba5f85353eec11efd2520d9" gracePeriod=2 Jan 26 16:22:28 crc kubenswrapper[4896]: I0126 16:22:28.279098 4896 generic.go:334] "Generic (PLEG): container finished" podID="17a95fa5-440c-496f-bbed-3017e7cc301c" containerID="68086d56a36bbe91a0301c56a3df603a324ae0954ba5f85353eec11efd2520d9" exitCode=0 Jan 26 16:22:28 crc kubenswrapper[4896]: I0126 16:22:28.279494 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wmzz2" event={"ID":"17a95fa5-440c-496f-bbed-3017e7cc301c","Type":"ContainerDied","Data":"68086d56a36bbe91a0301c56a3df603a324ae0954ba5f85353eec11efd2520d9"} Jan 26 16:22:28 crc kubenswrapper[4896]: I0126 16:22:28.279520 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wmzz2" 
event={"ID":"17a95fa5-440c-496f-bbed-3017e7cc301c","Type":"ContainerDied","Data":"8e2ad0ad4f950e277321bc0d82cc43fa4a1dc8d29886c2b8c00fae6fd474ce8e"} Jan 26 16:22:28 crc kubenswrapper[4896]: I0126 16:22:28.279567 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e2ad0ad4f950e277321bc0d82cc43fa4a1dc8d29886c2b8c00fae6fd474ce8e" Jan 26 16:22:28 crc kubenswrapper[4896]: I0126 16:22:28.343523 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wmzz2" Jan 26 16:22:28 crc kubenswrapper[4896]: I0126 16:22:28.511355 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17a95fa5-440c-496f-bbed-3017e7cc301c-catalog-content\") pod \"17a95fa5-440c-496f-bbed-3017e7cc301c\" (UID: \"17a95fa5-440c-496f-bbed-3017e7cc301c\") " Jan 26 16:22:28 crc kubenswrapper[4896]: I0126 16:22:28.511424 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2k6nd\" (UniqueName: \"kubernetes.io/projected/17a95fa5-440c-496f-bbed-3017e7cc301c-kube-api-access-2k6nd\") pod \"17a95fa5-440c-496f-bbed-3017e7cc301c\" (UID: \"17a95fa5-440c-496f-bbed-3017e7cc301c\") " Jan 26 16:22:28 crc kubenswrapper[4896]: I0126 16:22:28.511461 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17a95fa5-440c-496f-bbed-3017e7cc301c-utilities\") pod \"17a95fa5-440c-496f-bbed-3017e7cc301c\" (UID: \"17a95fa5-440c-496f-bbed-3017e7cc301c\") " Jan 26 16:22:28 crc kubenswrapper[4896]: I0126 16:22:28.513134 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17a95fa5-440c-496f-bbed-3017e7cc301c-utilities" (OuterVolumeSpecName: "utilities") pod "17a95fa5-440c-496f-bbed-3017e7cc301c" (UID: "17a95fa5-440c-496f-bbed-3017e7cc301c"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 16:22:28 crc kubenswrapper[4896]: I0126 16:22:28.519276 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17a95fa5-440c-496f-bbed-3017e7cc301c-kube-api-access-2k6nd" (OuterVolumeSpecName: "kube-api-access-2k6nd") pod "17a95fa5-440c-496f-bbed-3017e7cc301c" (UID: "17a95fa5-440c-496f-bbed-3017e7cc301c"). InnerVolumeSpecName "kube-api-access-2k6nd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:22:28 crc kubenswrapper[4896]: I0126 16:22:28.562459 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17a95fa5-440c-496f-bbed-3017e7cc301c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "17a95fa5-440c-496f-bbed-3017e7cc301c" (UID: "17a95fa5-440c-496f-bbed-3017e7cc301c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 16:22:28 crc kubenswrapper[4896]: I0126 16:22:28.614650 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17a95fa5-440c-496f-bbed-3017e7cc301c-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 16:22:28 crc kubenswrapper[4896]: I0126 16:22:28.614686 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2k6nd\" (UniqueName: \"kubernetes.io/projected/17a95fa5-440c-496f-bbed-3017e7cc301c-kube-api-access-2k6nd\") on node \"crc\" DevicePath \"\""
Jan 26 16:22:28 crc kubenswrapper[4896]: I0126 16:22:28.614700 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17a95fa5-440c-496f-bbed-3017e7cc301c-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 16:22:29 crc kubenswrapper[4896]: I0126 16:22:29.290013 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wmzz2"
Jan 26 16:22:29 crc kubenswrapper[4896]: I0126 16:22:29.323205 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wmzz2"]
Jan 26 16:22:29 crc kubenswrapper[4896]: I0126 16:22:29.340272 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wmzz2"]
Jan 26 16:22:30 crc kubenswrapper[4896]: I0126 16:22:30.773085 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17a95fa5-440c-496f-bbed-3017e7cc301c" path="/var/lib/kubelet/pods/17a95fa5-440c-496f-bbed-3017e7cc301c/volumes"
Jan 26 16:22:44 crc kubenswrapper[4896]: I0126 16:22:44.292638 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q7ss4"]
Jan 26 16:22:44 crc kubenswrapper[4896]: E0126 16:22:44.293769 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17a95fa5-440c-496f-bbed-3017e7cc301c" containerName="extract-content"
Jan 26 16:22:44 crc kubenswrapper[4896]: I0126 16:22:44.293784 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="17a95fa5-440c-496f-bbed-3017e7cc301c" containerName="extract-content"
Jan 26 16:22:44 crc kubenswrapper[4896]: E0126 16:22:44.293810 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17a95fa5-440c-496f-bbed-3017e7cc301c" containerName="registry-server"
Jan 26 16:22:44 crc kubenswrapper[4896]: I0126 16:22:44.293816 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="17a95fa5-440c-496f-bbed-3017e7cc301c" containerName="registry-server"
Jan 26 16:22:44 crc kubenswrapper[4896]: E0126 16:22:44.293845 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17a95fa5-440c-496f-bbed-3017e7cc301c" containerName="extract-utilities"
Jan 26 16:22:44 crc kubenswrapper[4896]: I0126 16:22:44.293851 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="17a95fa5-440c-496f-bbed-3017e7cc301c" containerName="extract-utilities"
Jan 26 16:22:44 crc kubenswrapper[4896]: I0126 16:22:44.294091 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="17a95fa5-440c-496f-bbed-3017e7cc301c" containerName="registry-server"
Jan 26 16:22:44 crc kubenswrapper[4896]: I0126 16:22:44.296740 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q7ss4"
Jan 26 16:22:44 crc kubenswrapper[4896]: I0126 16:22:44.313816 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7ss4"]
Jan 26 16:22:44 crc kubenswrapper[4896]: I0126 16:22:44.449558 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e0ccb90-d531-4b0e-8025-22c58c42fb50-catalog-content\") pod \"redhat-marketplace-q7ss4\" (UID: \"1e0ccb90-d531-4b0e-8025-22c58c42fb50\") " pod="openshift-marketplace/redhat-marketplace-q7ss4"
Jan 26 16:22:44 crc kubenswrapper[4896]: I0126 16:22:44.449719 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e0ccb90-d531-4b0e-8025-22c58c42fb50-utilities\") pod \"redhat-marketplace-q7ss4\" (UID: \"1e0ccb90-d531-4b0e-8025-22c58c42fb50\") " pod="openshift-marketplace/redhat-marketplace-q7ss4"
Jan 26 16:22:44 crc kubenswrapper[4896]: I0126 16:22:44.450040 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sspl\" (UniqueName: \"kubernetes.io/projected/1e0ccb90-d531-4b0e-8025-22c58c42fb50-kube-api-access-5sspl\") pod \"redhat-marketplace-q7ss4\" (UID: \"1e0ccb90-d531-4b0e-8025-22c58c42fb50\") " pod="openshift-marketplace/redhat-marketplace-q7ss4"
Jan 26 16:22:44 crc kubenswrapper[4896]: I0126 16:22:44.552731 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e0ccb90-d531-4b0e-8025-22c58c42fb50-utilities\") pod \"redhat-marketplace-q7ss4\" (UID: \"1e0ccb90-d531-4b0e-8025-22c58c42fb50\") " pod="openshift-marketplace/redhat-marketplace-q7ss4"
Jan 26 16:22:44 crc kubenswrapper[4896]: I0126 16:22:44.553032 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sspl\" (UniqueName: \"kubernetes.io/projected/1e0ccb90-d531-4b0e-8025-22c58c42fb50-kube-api-access-5sspl\") pod \"redhat-marketplace-q7ss4\" (UID: \"1e0ccb90-d531-4b0e-8025-22c58c42fb50\") " pod="openshift-marketplace/redhat-marketplace-q7ss4"
Jan 26 16:22:44 crc kubenswrapper[4896]: I0126 16:22:44.553131 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e0ccb90-d531-4b0e-8025-22c58c42fb50-catalog-content\") pod \"redhat-marketplace-q7ss4\" (UID: \"1e0ccb90-d531-4b0e-8025-22c58c42fb50\") " pod="openshift-marketplace/redhat-marketplace-q7ss4"
Jan 26 16:22:44 crc kubenswrapper[4896]: I0126 16:22:44.553382 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e0ccb90-d531-4b0e-8025-22c58c42fb50-utilities\") pod \"redhat-marketplace-q7ss4\" (UID: \"1e0ccb90-d531-4b0e-8025-22c58c42fb50\") " pod="openshift-marketplace/redhat-marketplace-q7ss4"
Jan 26 16:22:44 crc kubenswrapper[4896]: I0126 16:22:44.553663 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e0ccb90-d531-4b0e-8025-22c58c42fb50-catalog-content\") pod \"redhat-marketplace-q7ss4\" (UID: \"1e0ccb90-d531-4b0e-8025-22c58c42fb50\") " pod="openshift-marketplace/redhat-marketplace-q7ss4"
Jan 26 16:22:44 crc kubenswrapper[4896]: I0126 16:22:44.626113 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sspl\" (UniqueName: \"kubernetes.io/projected/1e0ccb90-d531-4b0e-8025-22c58c42fb50-kube-api-access-5sspl\") pod \"redhat-marketplace-q7ss4\" (UID: \"1e0ccb90-d531-4b0e-8025-22c58c42fb50\") " pod="openshift-marketplace/redhat-marketplace-q7ss4"
Jan 26 16:22:44 crc kubenswrapper[4896]: I0126 16:22:44.924877 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q7ss4"
Jan 26 16:22:45 crc kubenswrapper[4896]: I0126 16:22:45.500442 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7ss4"]
Jan 26 16:22:46 crc kubenswrapper[4896]: I0126 16:22:46.521421 4896 generic.go:334] "Generic (PLEG): container finished" podID="1e0ccb90-d531-4b0e-8025-22c58c42fb50" containerID="f1187712f1a806dd35bc306dc1dc7a240530f4b35f59df805c2d9e72c0136cd0" exitCode=0
Jan 26 16:22:46 crc kubenswrapper[4896]: I0126 16:22:46.521617 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7ss4" event={"ID":"1e0ccb90-d531-4b0e-8025-22c58c42fb50","Type":"ContainerDied","Data":"f1187712f1a806dd35bc306dc1dc7a240530f4b35f59df805c2d9e72c0136cd0"}
Jan 26 16:22:46 crc kubenswrapper[4896]: I0126 16:22:46.521731 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7ss4" event={"ID":"1e0ccb90-d531-4b0e-8025-22c58c42fb50","Type":"ContainerStarted","Data":"f1bc3b531cfb3380300d64b0c8a02c6d175c9d7b1f34987d243cf95a2c30c3a1"}
Jan 26 16:22:47 crc kubenswrapper[4896]: I0126 16:22:47.533735 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7ss4" event={"ID":"1e0ccb90-d531-4b0e-8025-22c58c42fb50","Type":"ContainerStarted","Data":"2498a9d3590d3da4efc26c2b2fb00603bce904be85346e9d14efe71c56571c0d"}
Jan 26 16:22:48 crc kubenswrapper[4896]: I0126 16:22:48.546649 4896 generic.go:334] "Generic (PLEG): container finished" podID="1e0ccb90-d531-4b0e-8025-22c58c42fb50" containerID="2498a9d3590d3da4efc26c2b2fb00603bce904be85346e9d14efe71c56571c0d" exitCode=0
Jan 26 16:22:48 crc kubenswrapper[4896]: I0126 16:22:48.546730 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7ss4" event={"ID":"1e0ccb90-d531-4b0e-8025-22c58c42fb50","Type":"ContainerDied","Data":"2498a9d3590d3da4efc26c2b2fb00603bce904be85346e9d14efe71c56571c0d"}
Jan 26 16:22:48 crc kubenswrapper[4896]: I0126 16:22:48.813534 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 16:22:48 crc kubenswrapper[4896]: I0126 16:22:48.813621 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 16:22:48 crc kubenswrapper[4896]: I0126 16:22:48.813674 4896 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw"
Jan 26 16:22:48 crc kubenswrapper[4896]: I0126 16:22:48.814842 4896 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"38232dea192e6d1bbe2633ff918eb1f5d8a8a536073d7952e424ab2ad966b2cc"} pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 16:22:48 crc kubenswrapper[4896]: I0126 16:22:48.814919 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" containerID="cri-o://38232dea192e6d1bbe2633ff918eb1f5d8a8a536073d7952e424ab2ad966b2cc" gracePeriod=600
Jan 26 16:22:49 crc kubenswrapper[4896]: I0126 16:22:49.562726 4896 generic.go:334] "Generic (PLEG): container finished" podID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerID="38232dea192e6d1bbe2633ff918eb1f5d8a8a536073d7952e424ab2ad966b2cc" exitCode=0
Jan 26 16:22:49 crc kubenswrapper[4896]: I0126 16:22:49.563146 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerDied","Data":"38232dea192e6d1bbe2633ff918eb1f5d8a8a536073d7952e424ab2ad966b2cc"}
Jan 26 16:22:49 crc kubenswrapper[4896]: I0126 16:22:49.563178 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerStarted","Data":"cd46ff27f060438e6d2dd96d69cea4f34484f20018f15aef2d0456fb62faa2e1"}
Jan 26 16:22:49 crc kubenswrapper[4896]: I0126 16:22:49.563197 4896 scope.go:117] "RemoveContainer" containerID="0d1274f7e5735274b6d5b903bed2daa59306320464a2682c9af9bb84c5aace86"
Jan 26 16:22:49 crc kubenswrapper[4896]: I0126 16:22:49.570038 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7ss4" event={"ID":"1e0ccb90-d531-4b0e-8025-22c58c42fb50","Type":"ContainerStarted","Data":"b3524cc62e88e2387a94530b4d019fae2f4d5ae304b1073aa4a68ecc3acd300a"}
Jan 26 16:22:54 crc kubenswrapper[4896]: I0126 16:22:54.925537 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q7ss4"
Jan 26 16:22:54 crc kubenswrapper[4896]: I0126 16:22:54.927732 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q7ss4"
Jan 26 16:22:55 crc kubenswrapper[4896]: I0126 16:22:55.046689 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q7ss4"
Jan 26 16:22:55 crc kubenswrapper[4896]: I0126 16:22:55.075443 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q7ss4" podStartSLOduration=8.601858436 podStartE2EDuration="11.075418235s" podCreationTimestamp="2026-01-26 16:22:44 +0000 UTC" firstStartedPulling="2026-01-26 16:22:46.523610479 +0000 UTC m=+2924.305490872" lastFinishedPulling="2026-01-26 16:22:48.997170278 +0000 UTC m=+2926.779050671" observedRunningTime="2026-01-26 16:22:49.611041757 +0000 UTC m=+2927.392922160" watchObservedRunningTime="2026-01-26 16:22:55.075418235 +0000 UTC m=+2932.857298628"
Jan 26 16:22:55 crc kubenswrapper[4896]: I0126 16:22:55.798571 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q7ss4"
Jan 26 16:22:55 crc kubenswrapper[4896]: I0126 16:22:55.881120 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7ss4"]
Jan 26 16:22:57 crc kubenswrapper[4896]: I0126 16:22:57.773670 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q7ss4" podUID="1e0ccb90-d531-4b0e-8025-22c58c42fb50" containerName="registry-server" containerID="cri-o://b3524cc62e88e2387a94530b4d019fae2f4d5ae304b1073aa4a68ecc3acd300a" gracePeriod=2
Jan 26 16:22:58 crc kubenswrapper[4896]: I0126 16:22:58.325041 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q7ss4"
Jan 26 16:22:58 crc kubenswrapper[4896]: I0126 16:22:58.459179 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e0ccb90-d531-4b0e-8025-22c58c42fb50-catalog-content\") pod \"1e0ccb90-d531-4b0e-8025-22c58c42fb50\" (UID: \"1e0ccb90-d531-4b0e-8025-22c58c42fb50\") "
Jan 26 16:22:58 crc kubenswrapper[4896]: I0126 16:22:58.459296 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e0ccb90-d531-4b0e-8025-22c58c42fb50-utilities\") pod \"1e0ccb90-d531-4b0e-8025-22c58c42fb50\" (UID: \"1e0ccb90-d531-4b0e-8025-22c58c42fb50\") "
Jan 26 16:22:58 crc kubenswrapper[4896]: I0126 16:22:58.460239 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e0ccb90-d531-4b0e-8025-22c58c42fb50-utilities" (OuterVolumeSpecName: "utilities") pod "1e0ccb90-d531-4b0e-8025-22c58c42fb50" (UID: "1e0ccb90-d531-4b0e-8025-22c58c42fb50"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 16:22:58 crc kubenswrapper[4896]: I0126 16:22:58.460369 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sspl\" (UniqueName: \"kubernetes.io/projected/1e0ccb90-d531-4b0e-8025-22c58c42fb50-kube-api-access-5sspl\") pod \"1e0ccb90-d531-4b0e-8025-22c58c42fb50\" (UID: \"1e0ccb90-d531-4b0e-8025-22c58c42fb50\") "
Jan 26 16:22:58 crc kubenswrapper[4896]: I0126 16:22:58.462171 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e0ccb90-d531-4b0e-8025-22c58c42fb50-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 16:22:58 crc kubenswrapper[4896]: I0126 16:22:58.466449 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e0ccb90-d531-4b0e-8025-22c58c42fb50-kube-api-access-5sspl" (OuterVolumeSpecName: "kube-api-access-5sspl") pod "1e0ccb90-d531-4b0e-8025-22c58c42fb50" (UID: "1e0ccb90-d531-4b0e-8025-22c58c42fb50"). InnerVolumeSpecName "kube-api-access-5sspl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:22:58 crc kubenswrapper[4896]: I0126 16:22:58.481087 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e0ccb90-d531-4b0e-8025-22c58c42fb50-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1e0ccb90-d531-4b0e-8025-22c58c42fb50" (UID: "1e0ccb90-d531-4b0e-8025-22c58c42fb50"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 16:22:58 crc kubenswrapper[4896]: I0126 16:22:58.564034 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e0ccb90-d531-4b0e-8025-22c58c42fb50-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 16:22:58 crc kubenswrapper[4896]: I0126 16:22:58.564067 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5sspl\" (UniqueName: \"kubernetes.io/projected/1e0ccb90-d531-4b0e-8025-22c58c42fb50-kube-api-access-5sspl\") on node \"crc\" DevicePath \"\""
Jan 26 16:22:58 crc kubenswrapper[4896]: I0126 16:22:58.883679 4896 generic.go:334] "Generic (PLEG): container finished" podID="1e0ccb90-d531-4b0e-8025-22c58c42fb50" containerID="b3524cc62e88e2387a94530b4d019fae2f4d5ae304b1073aa4a68ecc3acd300a" exitCode=0
Jan 26 16:22:58 crc kubenswrapper[4896]: I0126 16:22:58.884061 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7ss4" event={"ID":"1e0ccb90-d531-4b0e-8025-22c58c42fb50","Type":"ContainerDied","Data":"b3524cc62e88e2387a94530b4d019fae2f4d5ae304b1073aa4a68ecc3acd300a"}
Jan 26 16:22:58 crc kubenswrapper[4896]: I0126 16:22:58.884088 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7ss4" event={"ID":"1e0ccb90-d531-4b0e-8025-22c58c42fb50","Type":"ContainerDied","Data":"f1bc3b531cfb3380300d64b0c8a02c6d175c9d7b1f34987d243cf95a2c30c3a1"}
Jan 26 16:22:58 crc kubenswrapper[4896]: I0126 16:22:58.884104 4896 scope.go:117] "RemoveContainer" containerID="b3524cc62e88e2387a94530b4d019fae2f4d5ae304b1073aa4a68ecc3acd300a"
Jan 26 16:22:58 crc kubenswrapper[4896]: I0126 16:22:58.884244 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q7ss4"
Jan 26 16:22:58 crc kubenswrapper[4896]: I0126 16:22:58.920605 4896 scope.go:117] "RemoveContainer" containerID="2498a9d3590d3da4efc26c2b2fb00603bce904be85346e9d14efe71c56571c0d"
Jan 26 16:22:58 crc kubenswrapper[4896]: I0126 16:22:58.924062 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7ss4"]
Jan 26 16:22:58 crc kubenswrapper[4896]: I0126 16:22:58.945044 4896 scope.go:117] "RemoveContainer" containerID="f1187712f1a806dd35bc306dc1dc7a240530f4b35f59df805c2d9e72c0136cd0"
Jan 26 16:22:58 crc kubenswrapper[4896]: I0126 16:22:58.957063 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7ss4"]
Jan 26 16:22:59 crc kubenswrapper[4896]: I0126 16:22:59.004661 4896 scope.go:117] "RemoveContainer" containerID="b3524cc62e88e2387a94530b4d019fae2f4d5ae304b1073aa4a68ecc3acd300a"
Jan 26 16:22:59 crc kubenswrapper[4896]: E0126 16:22:59.005229 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3524cc62e88e2387a94530b4d019fae2f4d5ae304b1073aa4a68ecc3acd300a\": container with ID starting with b3524cc62e88e2387a94530b4d019fae2f4d5ae304b1073aa4a68ecc3acd300a not found: ID does not exist" containerID="b3524cc62e88e2387a94530b4d019fae2f4d5ae304b1073aa4a68ecc3acd300a"
Jan 26 16:22:59 crc kubenswrapper[4896]: I0126 16:22:59.005275 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3524cc62e88e2387a94530b4d019fae2f4d5ae304b1073aa4a68ecc3acd300a"} err="failed to get container status \"b3524cc62e88e2387a94530b4d019fae2f4d5ae304b1073aa4a68ecc3acd300a\": rpc error: code = NotFound desc = could not find container \"b3524cc62e88e2387a94530b4d019fae2f4d5ae304b1073aa4a68ecc3acd300a\": container with ID starting with b3524cc62e88e2387a94530b4d019fae2f4d5ae304b1073aa4a68ecc3acd300a not found: ID does not exist"
Jan 26 16:22:59 crc kubenswrapper[4896]: I0126 16:22:59.005304 4896 scope.go:117] "RemoveContainer" containerID="2498a9d3590d3da4efc26c2b2fb00603bce904be85346e9d14efe71c56571c0d"
Jan 26 16:22:59 crc kubenswrapper[4896]: E0126 16:22:59.005731 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2498a9d3590d3da4efc26c2b2fb00603bce904be85346e9d14efe71c56571c0d\": container with ID starting with 2498a9d3590d3da4efc26c2b2fb00603bce904be85346e9d14efe71c56571c0d not found: ID does not exist" containerID="2498a9d3590d3da4efc26c2b2fb00603bce904be85346e9d14efe71c56571c0d"
Jan 26 16:22:59 crc kubenswrapper[4896]: I0126 16:22:59.005807 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2498a9d3590d3da4efc26c2b2fb00603bce904be85346e9d14efe71c56571c0d"} err="failed to get container status \"2498a9d3590d3da4efc26c2b2fb00603bce904be85346e9d14efe71c56571c0d\": rpc error: code = NotFound desc = could not find container \"2498a9d3590d3da4efc26c2b2fb00603bce904be85346e9d14efe71c56571c0d\": container with ID starting with 2498a9d3590d3da4efc26c2b2fb00603bce904be85346e9d14efe71c56571c0d not found: ID does not exist"
Jan 26 16:22:59 crc kubenswrapper[4896]: I0126 16:22:59.005840 4896 scope.go:117] "RemoveContainer" containerID="f1187712f1a806dd35bc306dc1dc7a240530f4b35f59df805c2d9e72c0136cd0"
Jan 26 16:22:59 crc kubenswrapper[4896]: E0126 16:22:59.006252 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1187712f1a806dd35bc306dc1dc7a240530f4b35f59df805c2d9e72c0136cd0\": container with ID starting with f1187712f1a806dd35bc306dc1dc7a240530f4b35f59df805c2d9e72c0136cd0 not found: ID does not exist" containerID="f1187712f1a806dd35bc306dc1dc7a240530f4b35f59df805c2d9e72c0136cd0"
Jan 26 16:22:59 crc kubenswrapper[4896]: I0126 16:22:59.006282 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1187712f1a806dd35bc306dc1dc7a240530f4b35f59df805c2d9e72c0136cd0"} err="failed to get container status \"f1187712f1a806dd35bc306dc1dc7a240530f4b35f59df805c2d9e72c0136cd0\": rpc error: code = NotFound desc = could not find container \"f1187712f1a806dd35bc306dc1dc7a240530f4b35f59df805c2d9e72c0136cd0\": container with ID starting with f1187712f1a806dd35bc306dc1dc7a240530f4b35f59df805c2d9e72c0136cd0 not found: ID does not exist"
Jan 26 16:23:00 crc kubenswrapper[4896]: I0126 16:23:00.780097 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e0ccb90-d531-4b0e-8025-22c58c42fb50" path="/var/lib/kubelet/pods/1e0ccb90-d531-4b0e-8025-22c58c42fb50/volumes"
Jan 26 16:24:34 crc kubenswrapper[4896]: I0126 16:24:34.146682 4896 generic.go:334] "Generic (PLEG): container finished" podID="019cb762-55d7-4c9d-a425-fde89665ac76" containerID="59f8862bf8f05cd7293c67d57880bdfa48dca56ac63181e91fcedd6e3a453aeb" exitCode=0
Jan 26 16:24:34 crc kubenswrapper[4896]: I0126 16:24:34.146770 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5q8st" event={"ID":"019cb762-55d7-4c9d-a425-fde89665ac76","Type":"ContainerDied","Data":"59f8862bf8f05cd7293c67d57880bdfa48dca56ac63181e91fcedd6e3a453aeb"}
Jan 26 16:24:35 crc kubenswrapper[4896]: I0126 16:24:35.722706 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5q8st"
Jan 26 16:24:35 crc kubenswrapper[4896]: I0126 16:24:35.757783 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/019cb762-55d7-4c9d-a425-fde89665ac76-ssh-key-openstack-edpm-ipam\") pod \"019cb762-55d7-4c9d-a425-fde89665ac76\" (UID: \"019cb762-55d7-4c9d-a425-fde89665ac76\") "
Jan 26 16:24:35 crc kubenswrapper[4896]: I0126 16:24:35.757871 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/019cb762-55d7-4c9d-a425-fde89665ac76-libvirt-combined-ca-bundle\") pod \"019cb762-55d7-4c9d-a425-fde89665ac76\" (UID: \"019cb762-55d7-4c9d-a425-fde89665ac76\") "
Jan 26 16:24:35 crc kubenswrapper[4896]: I0126 16:24:35.758007 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/019cb762-55d7-4c9d-a425-fde89665ac76-libvirt-secret-0\") pod \"019cb762-55d7-4c9d-a425-fde89665ac76\" (UID: \"019cb762-55d7-4c9d-a425-fde89665ac76\") "
Jan 26 16:24:35 crc kubenswrapper[4896]: I0126 16:24:35.758209 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2jjz\" (UniqueName: \"kubernetes.io/projected/019cb762-55d7-4c9d-a425-fde89665ac76-kube-api-access-p2jjz\") pod \"019cb762-55d7-4c9d-a425-fde89665ac76\" (UID: \"019cb762-55d7-4c9d-a425-fde89665ac76\") "
Jan 26 16:24:35 crc kubenswrapper[4896]: I0126 16:24:35.758285 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/019cb762-55d7-4c9d-a425-fde89665ac76-inventory\") pod \"019cb762-55d7-4c9d-a425-fde89665ac76\" (UID: \"019cb762-55d7-4c9d-a425-fde89665ac76\") "
Jan 26 16:24:35 crc kubenswrapper[4896]: I0126 16:24:35.764553 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/019cb762-55d7-4c9d-a425-fde89665ac76-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "019cb762-55d7-4c9d-a425-fde89665ac76" (UID: "019cb762-55d7-4c9d-a425-fde89665ac76"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:24:35 crc kubenswrapper[4896]: I0126 16:24:35.764749 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/019cb762-55d7-4c9d-a425-fde89665ac76-kube-api-access-p2jjz" (OuterVolumeSpecName: "kube-api-access-p2jjz") pod "019cb762-55d7-4c9d-a425-fde89665ac76" (UID: "019cb762-55d7-4c9d-a425-fde89665ac76"). InnerVolumeSpecName "kube-api-access-p2jjz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:24:35 crc kubenswrapper[4896]: I0126 16:24:35.773905 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2jjz\" (UniqueName: \"kubernetes.io/projected/019cb762-55d7-4c9d-a425-fde89665ac76-kube-api-access-p2jjz\") on node \"crc\" DevicePath \"\""
Jan 26 16:24:35 crc kubenswrapper[4896]: I0126 16:24:35.773933 4896 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/019cb762-55d7-4c9d-a425-fde89665ac76-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 16:24:35 crc kubenswrapper[4896]: I0126 16:24:35.796948 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/019cb762-55d7-4c9d-a425-fde89665ac76-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "019cb762-55d7-4c9d-a425-fde89665ac76" (UID: "019cb762-55d7-4c9d-a425-fde89665ac76"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:24:35 crc kubenswrapper[4896]: I0126 16:24:35.798154 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/019cb762-55d7-4c9d-a425-fde89665ac76-inventory" (OuterVolumeSpecName: "inventory") pod "019cb762-55d7-4c9d-a425-fde89665ac76" (UID: "019cb762-55d7-4c9d-a425-fde89665ac76"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:24:35 crc kubenswrapper[4896]: I0126 16:24:35.798638 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/019cb762-55d7-4c9d-a425-fde89665ac76-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "019cb762-55d7-4c9d-a425-fde89665ac76" (UID: "019cb762-55d7-4c9d-a425-fde89665ac76"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:24:35 crc kubenswrapper[4896]: I0126 16:24:35.876216 4896 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/019cb762-55d7-4c9d-a425-fde89665ac76-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 26 16:24:35 crc kubenswrapper[4896]: I0126 16:24:35.876258 4896 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/019cb762-55d7-4c9d-a425-fde89665ac76-libvirt-secret-0\") on node \"crc\" DevicePath \"\""
Jan 26 16:24:35 crc kubenswrapper[4896]: I0126 16:24:35.876268 4896 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/019cb762-55d7-4c9d-a425-fde89665ac76-inventory\") on node \"crc\" DevicePath \"\""
Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.171965 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5q8st" event={"ID":"019cb762-55d7-4c9d-a425-fde89665ac76","Type":"ContainerDied","Data":"4e4780e63128f5d1756eb1aa1a53dd88037f3d2576d39d9a932e417475ed1a00"}
Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.172033 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5q8st"
Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.172037 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e4780e63128f5d1756eb1aa1a53dd88037f3d2576d39d9a932e417475ed1a00"
Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.281237 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-wmb6h"]
Jan 26 16:24:36 crc kubenswrapper[4896]: E0126 16:24:36.281860 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e0ccb90-d531-4b0e-8025-22c58c42fb50" containerName="registry-server"
Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.281884 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e0ccb90-d531-4b0e-8025-22c58c42fb50" containerName="registry-server"
Jan 26 16:24:36 crc kubenswrapper[4896]: E0126 16:24:36.281910 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="019cb762-55d7-4c9d-a425-fde89665ac76" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.281918 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="019cb762-55d7-4c9d-a425-fde89665ac76" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 26 16:24:36 crc kubenswrapper[4896]: E0126 16:24:36.281938 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e0ccb90-d531-4b0e-8025-22c58c42fb50" containerName="extract-utilities"
Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.281944 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e0ccb90-d531-4b0e-8025-22c58c42fb50" containerName="extract-utilities"
Jan 26 16:24:36 crc kubenswrapper[4896]: E0126 16:24:36.281987 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e0ccb90-d531-4b0e-8025-22c58c42fb50" containerName="extract-content"
Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.281995 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e0ccb90-d531-4b0e-8025-22c58c42fb50" containerName="extract-content"
Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.282240 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e0ccb90-d531-4b0e-8025-22c58c42fb50" containerName="registry-server"
Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.282266 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="019cb762-55d7-4c9d-a425-fde89665ac76" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.283238 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wmb6h"
Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.285718 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.285781 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.285891 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.286179 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.286970 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.287224 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-48n6x"
Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.291180 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config"
Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.303431 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-wmb6h"]
Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.387954 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c768c99c-1655-4c81-9eea-6676fc125f3d-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wmb6h\" (UID: \"c768c99c-1655-4c81-9eea-6676fc125f3d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wmb6h"
Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.388009 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c768c99c-1655-4c81-9eea-6676fc125f3d-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wmb6h\" (UID: \"c768c99c-1655-4c81-9eea-6676fc125f3d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wmb6h"
Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.388097 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c768c99c-1655-4c81-9eea-6676fc125f3d-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wmb6h\" (UID: \"c768c99c-1655-4c81-9eea-6676fc125f3d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wmb6h"
Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.388141 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c768c99c-1655-4c81-9eea-6676fc125f3d-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wmb6h\" (UID: \"c768c99c-1655-4c81-9eea-6676fc125f3d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wmb6h"
Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.388182 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c768c99c-1655-4c81-9eea-6676fc125f3d-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wmb6h\" (UID: \"c768c99c-1655-4c81-9eea-6676fc125f3d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wmb6h"
Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.388211 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c768c99c-1655-4c81-9eea-6676fc125f3d-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wmb6h\" (UID: \"c768c99c-1655-4c81-9eea-6676fc125f3d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wmb6h"
Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.388250 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c768c99c-1655-4c81-9eea-6676fc125f3d-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wmb6h\" (UID: \"c768c99c-1655-4c81-9eea-6676fc125f3d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wmb6h"
Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.388280 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c768c99c-1655-4c81-9eea-6676fc125f3d-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wmb6h\" (UID: \"c768c99c-1655-4c81-9eea-6676fc125f3d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wmb6h"
Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.388310 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p62m\" (UniqueName: \"kubernetes.io/projected/c768c99c-1655-4c81-9eea-6676fc125f3d-kube-api-access-9p62m\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wmb6h\" (UID: \"c768c99c-1655-4c81-9eea-6676fc125f3d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wmb6h"
Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.490722 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c768c99c-1655-4c81-9eea-6676fc125f3d-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wmb6h\" (UID: \"c768c99c-1655-4c81-9eea-6676fc125f3d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wmb6h"
Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.491100 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c768c99c-1655-4c81-9eea-6676fc125f3d-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wmb6h\" (UID: \"c768c99c-1655-4c81-9eea-6676fc125f3d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wmb6h"
Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.491224 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c768c99c-1655-4c81-9eea-6676fc125f3d-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wmb6h\" (UID: \"c768c99c-1655-4c81-9eea-6676fc125f3d\") "
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wmb6h" Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.491269 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c768c99c-1655-4c81-9eea-6676fc125f3d-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wmb6h\" (UID: \"c768c99c-1655-4c81-9eea-6676fc125f3d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wmb6h" Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.491315 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c768c99c-1655-4c81-9eea-6676fc125f3d-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wmb6h\" (UID: \"c768c99c-1655-4c81-9eea-6676fc125f3d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wmb6h" Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.491348 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c768c99c-1655-4c81-9eea-6676fc125f3d-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wmb6h\" (UID: \"c768c99c-1655-4c81-9eea-6676fc125f3d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wmb6h" Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.491386 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c768c99c-1655-4c81-9eea-6676fc125f3d-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wmb6h\" (UID: \"c768c99c-1655-4c81-9eea-6676fc125f3d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wmb6h" Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.491427 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: 
\"kubernetes.io/configmap/c768c99c-1655-4c81-9eea-6676fc125f3d-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wmb6h\" (UID: \"c768c99c-1655-4c81-9eea-6676fc125f3d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wmb6h" Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.491457 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9p62m\" (UniqueName: \"kubernetes.io/projected/c768c99c-1655-4c81-9eea-6676fc125f3d-kube-api-access-9p62m\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wmb6h\" (UID: \"c768c99c-1655-4c81-9eea-6676fc125f3d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wmb6h" Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.493453 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c768c99c-1655-4c81-9eea-6676fc125f3d-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wmb6h\" (UID: \"c768c99c-1655-4c81-9eea-6676fc125f3d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wmb6h" Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.498311 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c768c99c-1655-4c81-9eea-6676fc125f3d-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wmb6h\" (UID: \"c768c99c-1655-4c81-9eea-6676fc125f3d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wmb6h" Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.498532 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c768c99c-1655-4c81-9eea-6676fc125f3d-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wmb6h\" (UID: \"c768c99c-1655-4c81-9eea-6676fc125f3d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wmb6h" 
Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.499214 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c768c99c-1655-4c81-9eea-6676fc125f3d-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wmb6h\" (UID: \"c768c99c-1655-4c81-9eea-6676fc125f3d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wmb6h"
Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.500151 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c768c99c-1655-4c81-9eea-6676fc125f3d-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wmb6h\" (UID: \"c768c99c-1655-4c81-9eea-6676fc125f3d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wmb6h"
Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.503262 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c768c99c-1655-4c81-9eea-6676fc125f3d-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wmb6h\" (UID: \"c768c99c-1655-4c81-9eea-6676fc125f3d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wmb6h"
Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.503870 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c768c99c-1655-4c81-9eea-6676fc125f3d-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wmb6h\" (UID: \"c768c99c-1655-4c81-9eea-6676fc125f3d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wmb6h"
Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.503936 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c768c99c-1655-4c81-9eea-6676fc125f3d-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wmb6h\" (UID: \"c768c99c-1655-4c81-9eea-6676fc125f3d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wmb6h"
Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.513391 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p62m\" (UniqueName: \"kubernetes.io/projected/c768c99c-1655-4c81-9eea-6676fc125f3d-kube-api-access-9p62m\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wmb6h\" (UID: \"c768c99c-1655-4c81-9eea-6676fc125f3d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wmb6h"
Jan 26 16:24:36 crc kubenswrapper[4896]: I0126 16:24:36.604398 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wmb6h"
Jan 26 16:24:37 crc kubenswrapper[4896]: I0126 16:24:37.193142 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-wmb6h"]
Jan 26 16:24:37 crc kubenswrapper[4896]: I0126 16:24:37.205995 4896 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 16:24:38 crc kubenswrapper[4896]: I0126 16:24:38.199484 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wmb6h" event={"ID":"c768c99c-1655-4c81-9eea-6676fc125f3d","Type":"ContainerStarted","Data":"027ed3f85ae2b9553e5909276b36a6e40fe56e11abfb37a4914414b681b46817"}
Jan 26 16:24:44 crc kubenswrapper[4896]: I0126 16:24:44.301415 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wmb6h" event={"ID":"c768c99c-1655-4c81-9eea-6676fc125f3d","Type":"ContainerStarted","Data":"75db420319e34452d4c8d4a2bd56318ae462cd2c5a6389a713728fc8e846d66c"}
Jan 26 16:24:44 crc kubenswrapper[4896]: I0126 16:24:44.318888 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wmb6h" podStartSLOduration=3.472651331 podStartE2EDuration="8.318846419s" podCreationTimestamp="2026-01-26 16:24:36 +0000 UTC" firstStartedPulling="2026-01-26 16:24:37.205634455 +0000 UTC m=+3034.987514848" lastFinishedPulling="2026-01-26 16:24:42.051829533 +0000 UTC m=+3039.833709936" observedRunningTime="2026-01-26 16:24:44.317001385 +0000 UTC m=+3042.098881778" watchObservedRunningTime="2026-01-26 16:24:44.318846419 +0000 UTC m=+3042.100726822"
Jan 26 16:25:18 crc kubenswrapper[4896]: I0126 16:25:18.813521 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 16:25:18 crc kubenswrapper[4896]: I0126 16:25:18.814112 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 16:25:25 crc kubenswrapper[4896]: I0126 16:25:25.630461 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n96kr"]
Jan 26 16:25:25 crc kubenswrapper[4896]: I0126 16:25:25.634483 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n96kr"
Jan 26 16:25:25 crc kubenswrapper[4896]: I0126 16:25:25.677643 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n96kr"]
Jan 26 16:25:25 crc kubenswrapper[4896]: I0126 16:25:25.756978 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1576adcb-101e-4f90-bf6e-4570008fc12c-catalog-content\") pod \"community-operators-n96kr\" (UID: \"1576adcb-101e-4f90-bf6e-4570008fc12c\") " pod="openshift-marketplace/community-operators-n96kr"
Jan 26 16:25:25 crc kubenswrapper[4896]: I0126 16:25:25.757182 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1576adcb-101e-4f90-bf6e-4570008fc12c-utilities\") pod \"community-operators-n96kr\" (UID: \"1576adcb-101e-4f90-bf6e-4570008fc12c\") " pod="openshift-marketplace/community-operators-n96kr"
Jan 26 16:25:25 crc kubenswrapper[4896]: I0126 16:25:25.757262 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbmff\" (UniqueName: \"kubernetes.io/projected/1576adcb-101e-4f90-bf6e-4570008fc12c-kube-api-access-kbmff\") pod \"community-operators-n96kr\" (UID: \"1576adcb-101e-4f90-bf6e-4570008fc12c\") " pod="openshift-marketplace/community-operators-n96kr"
Jan 26 16:25:25 crc kubenswrapper[4896]: I0126 16:25:25.859430 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbmff\" (UniqueName: \"kubernetes.io/projected/1576adcb-101e-4f90-bf6e-4570008fc12c-kube-api-access-kbmff\") pod \"community-operators-n96kr\" (UID: \"1576adcb-101e-4f90-bf6e-4570008fc12c\") " pod="openshift-marketplace/community-operators-n96kr"
Jan 26 16:25:25 crc kubenswrapper[4896]: I0126 16:25:25.859641 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1576adcb-101e-4f90-bf6e-4570008fc12c-catalog-content\") pod \"community-operators-n96kr\" (UID: \"1576adcb-101e-4f90-bf6e-4570008fc12c\") " pod="openshift-marketplace/community-operators-n96kr"
Jan 26 16:25:25 crc kubenswrapper[4896]: I0126 16:25:25.860021 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1576adcb-101e-4f90-bf6e-4570008fc12c-utilities\") pod \"community-operators-n96kr\" (UID: \"1576adcb-101e-4f90-bf6e-4570008fc12c\") " pod="openshift-marketplace/community-operators-n96kr"
Jan 26 16:25:25 crc kubenswrapper[4896]: I0126 16:25:25.860331 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1576adcb-101e-4f90-bf6e-4570008fc12c-catalog-content\") pod \"community-operators-n96kr\" (UID: \"1576adcb-101e-4f90-bf6e-4570008fc12c\") " pod="openshift-marketplace/community-operators-n96kr"
Jan 26 16:25:25 crc kubenswrapper[4896]: I0126 16:25:25.860442 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1576adcb-101e-4f90-bf6e-4570008fc12c-utilities\") pod \"community-operators-n96kr\" (UID: \"1576adcb-101e-4f90-bf6e-4570008fc12c\") " pod="openshift-marketplace/community-operators-n96kr"
Jan 26 16:25:25 crc kubenswrapper[4896]: I0126 16:25:25.881382 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbmff\" (UniqueName: \"kubernetes.io/projected/1576adcb-101e-4f90-bf6e-4570008fc12c-kube-api-access-kbmff\") pod \"community-operators-n96kr\" (UID: \"1576adcb-101e-4f90-bf6e-4570008fc12c\") " pod="openshift-marketplace/community-operators-n96kr"
Jan 26 16:25:25 crc kubenswrapper[4896]: I0126 16:25:25.980175 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n96kr"
Jan 26 16:25:26 crc kubenswrapper[4896]: I0126 16:25:26.612141 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n96kr"]
Jan 26 16:25:26 crc kubenswrapper[4896]: W0126 16:25:26.631154 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1576adcb_101e_4f90_bf6e_4570008fc12c.slice/crio-e4cfec55ef06edbdcb42e7c8afbc47529e95cac085b1fd1a25a89c9ab74d3120 WatchSource:0}: Error finding container e4cfec55ef06edbdcb42e7c8afbc47529e95cac085b1fd1a25a89c9ab74d3120: Status 404 returned error can't find the container with id e4cfec55ef06edbdcb42e7c8afbc47529e95cac085b1fd1a25a89c9ab74d3120
Jan 26 16:25:27 crc kubenswrapper[4896]: I0126 16:25:27.134825 4896 generic.go:334] "Generic (PLEG): container finished" podID="1576adcb-101e-4f90-bf6e-4570008fc12c" containerID="222f9bfe0888926f0375d238f5e3e02540907eea5d9c34e969f480d28d82eee8" exitCode=0
Jan 26 16:25:27 crc kubenswrapper[4896]: I0126 16:25:27.134874 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n96kr" event={"ID":"1576adcb-101e-4f90-bf6e-4570008fc12c","Type":"ContainerDied","Data":"222f9bfe0888926f0375d238f5e3e02540907eea5d9c34e969f480d28d82eee8"}
Jan 26 16:25:27 crc kubenswrapper[4896]: I0126 16:25:27.134902 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n96kr" event={"ID":"1576adcb-101e-4f90-bf6e-4570008fc12c","Type":"ContainerStarted","Data":"e4cfec55ef06edbdcb42e7c8afbc47529e95cac085b1fd1a25a89c9ab74d3120"}
Jan 26 16:25:28 crc kubenswrapper[4896]: I0126 16:25:28.153387 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n96kr" event={"ID":"1576adcb-101e-4f90-bf6e-4570008fc12c","Type":"ContainerStarted","Data":"074a50dd7a4e25dfce69df3ae59a2c02296ad45a2124b816b883a1ec39e4407a"}
Jan 26 16:25:30 crc kubenswrapper[4896]: I0126 16:25:30.179363 4896 generic.go:334] "Generic (PLEG): container finished" podID="1576adcb-101e-4f90-bf6e-4570008fc12c" containerID="074a50dd7a4e25dfce69df3ae59a2c02296ad45a2124b816b883a1ec39e4407a" exitCode=0
Jan 26 16:25:30 crc kubenswrapper[4896]: I0126 16:25:30.179474 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n96kr" event={"ID":"1576adcb-101e-4f90-bf6e-4570008fc12c","Type":"ContainerDied","Data":"074a50dd7a4e25dfce69df3ae59a2c02296ad45a2124b816b883a1ec39e4407a"}
Jan 26 16:25:31 crc kubenswrapper[4896]: I0126 16:25:31.193054 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n96kr" event={"ID":"1576adcb-101e-4f90-bf6e-4570008fc12c","Type":"ContainerStarted","Data":"7967134b83f9d556db69f174d122625e641798677772afb64078f010848edc0b"}
Jan 26 16:25:31 crc kubenswrapper[4896]: I0126 16:25:31.223734 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n96kr" podStartSLOduration=2.503677582 podStartE2EDuration="6.223713432s" podCreationTimestamp="2026-01-26 16:25:25 +0000 UTC" firstStartedPulling="2026-01-26 16:25:27.136968457 +0000 UTC m=+3084.918848850" lastFinishedPulling="2026-01-26 16:25:30.857004297 +0000 UTC m=+3088.638884700" observedRunningTime="2026-01-26 16:25:31.220191126 +0000 UTC m=+3089.002071529" watchObservedRunningTime="2026-01-26 16:25:31.223713432 +0000 UTC m=+3089.005593825"
Jan 26 16:25:35 crc kubenswrapper[4896]: I0126 16:25:35.981096 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n96kr"
Jan 26 16:25:35 crc kubenswrapper[4896]: I0126 16:25:35.981829 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n96kr"
Jan 26 16:25:36 crc kubenswrapper[4896]: I0126 16:25:36.030376 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n96kr"
Jan 26 16:25:36 crc kubenswrapper[4896]: I0126 16:25:36.312586 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n96kr"
Jan 26 16:25:36 crc kubenswrapper[4896]: I0126 16:25:36.366176 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n96kr"]
Jan 26 16:25:38 crc kubenswrapper[4896]: I0126 16:25:38.276854 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n96kr" podUID="1576adcb-101e-4f90-bf6e-4570008fc12c" containerName="registry-server" containerID="cri-o://7967134b83f9d556db69f174d122625e641798677772afb64078f010848edc0b" gracePeriod=2
Jan 26 16:25:38 crc kubenswrapper[4896]: I0126 16:25:38.777847 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n96kr"
Jan 26 16:25:38 crc kubenswrapper[4896]: I0126 16:25:38.913404 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbmff\" (UniqueName: \"kubernetes.io/projected/1576adcb-101e-4f90-bf6e-4570008fc12c-kube-api-access-kbmff\") pod \"1576adcb-101e-4f90-bf6e-4570008fc12c\" (UID: \"1576adcb-101e-4f90-bf6e-4570008fc12c\") "
Jan 26 16:25:38 crc kubenswrapper[4896]: I0126 16:25:38.914027 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1576adcb-101e-4f90-bf6e-4570008fc12c-catalog-content\") pod \"1576adcb-101e-4f90-bf6e-4570008fc12c\" (UID: \"1576adcb-101e-4f90-bf6e-4570008fc12c\") "
Jan 26 16:25:38 crc kubenswrapper[4896]: I0126 16:25:38.914122 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1576adcb-101e-4f90-bf6e-4570008fc12c-utilities\") pod \"1576adcb-101e-4f90-bf6e-4570008fc12c\" (UID: \"1576adcb-101e-4f90-bf6e-4570008fc12c\") "
Jan 26 16:25:38 crc kubenswrapper[4896]: I0126 16:25:38.916254 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1576adcb-101e-4f90-bf6e-4570008fc12c-utilities" (OuterVolumeSpecName: "utilities") pod "1576adcb-101e-4f90-bf6e-4570008fc12c" (UID: "1576adcb-101e-4f90-bf6e-4570008fc12c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 16:25:38 crc kubenswrapper[4896]: I0126 16:25:38.928391 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1576adcb-101e-4f90-bf6e-4570008fc12c-kube-api-access-kbmff" (OuterVolumeSpecName: "kube-api-access-kbmff") pod "1576adcb-101e-4f90-bf6e-4570008fc12c" (UID: "1576adcb-101e-4f90-bf6e-4570008fc12c"). InnerVolumeSpecName "kube-api-access-kbmff". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:25:38 crc kubenswrapper[4896]: I0126 16:25:38.970428 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1576adcb-101e-4f90-bf6e-4570008fc12c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1576adcb-101e-4f90-bf6e-4570008fc12c" (UID: "1576adcb-101e-4f90-bf6e-4570008fc12c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 16:25:39 crc kubenswrapper[4896]: I0126 16:25:39.017213 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1576adcb-101e-4f90-bf6e-4570008fc12c-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 16:25:39 crc kubenswrapper[4896]: I0126 16:25:39.017247 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1576adcb-101e-4f90-bf6e-4570008fc12c-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 16:25:39 crc kubenswrapper[4896]: I0126 16:25:39.017257 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbmff\" (UniqueName: \"kubernetes.io/projected/1576adcb-101e-4f90-bf6e-4570008fc12c-kube-api-access-kbmff\") on node \"crc\" DevicePath \"\""
Jan 26 16:25:39 crc kubenswrapper[4896]: I0126 16:25:39.289336 4896 generic.go:334] "Generic (PLEG): container finished" podID="1576adcb-101e-4f90-bf6e-4570008fc12c" containerID="7967134b83f9d556db69f174d122625e641798677772afb64078f010848edc0b" exitCode=0
Jan 26 16:25:39 crc kubenswrapper[4896]: I0126 16:25:39.289383 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n96kr" event={"ID":"1576adcb-101e-4f90-bf6e-4570008fc12c","Type":"ContainerDied","Data":"7967134b83f9d556db69f174d122625e641798677772afb64078f010848edc0b"}
Jan 26 16:25:39 crc kubenswrapper[4896]: I0126 16:25:39.289450 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n96kr" event={"ID":"1576adcb-101e-4f90-bf6e-4570008fc12c","Type":"ContainerDied","Data":"e4cfec55ef06edbdcb42e7c8afbc47529e95cac085b1fd1a25a89c9ab74d3120"}
Jan 26 16:25:39 crc kubenswrapper[4896]: I0126 16:25:39.289472 4896 scope.go:117] "RemoveContainer" containerID="7967134b83f9d556db69f174d122625e641798677772afb64078f010848edc0b"
Jan 26 16:25:39 crc kubenswrapper[4896]: I0126 16:25:39.289479 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n96kr"
Jan 26 16:25:39 crc kubenswrapper[4896]: I0126 16:25:39.317223 4896 scope.go:117] "RemoveContainer" containerID="074a50dd7a4e25dfce69df3ae59a2c02296ad45a2124b816b883a1ec39e4407a"
Jan 26 16:25:39 crc kubenswrapper[4896]: I0126 16:25:39.345826 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n96kr"]
Jan 26 16:25:39 crc kubenswrapper[4896]: I0126 16:25:39.356407 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n96kr"]
Jan 26 16:25:39 crc kubenswrapper[4896]: I0126 16:25:39.357475 4896 scope.go:117] "RemoveContainer" containerID="222f9bfe0888926f0375d238f5e3e02540907eea5d9c34e969f480d28d82eee8"
Jan 26 16:25:39 crc kubenswrapper[4896]: I0126 16:25:39.395454 4896 scope.go:117] "RemoveContainer" containerID="7967134b83f9d556db69f174d122625e641798677772afb64078f010848edc0b"
Jan 26 16:25:39 crc kubenswrapper[4896]: E0126 16:25:39.395833 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7967134b83f9d556db69f174d122625e641798677772afb64078f010848edc0b\": container with ID starting with 7967134b83f9d556db69f174d122625e641798677772afb64078f010848edc0b not found: ID does not exist" containerID="7967134b83f9d556db69f174d122625e641798677772afb64078f010848edc0b"
Jan 26 16:25:39 crc kubenswrapper[4896]: I0126 16:25:39.395865 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7967134b83f9d556db69f174d122625e641798677772afb64078f010848edc0b"} err="failed to get container status \"7967134b83f9d556db69f174d122625e641798677772afb64078f010848edc0b\": rpc error: code = NotFound desc = could not find container \"7967134b83f9d556db69f174d122625e641798677772afb64078f010848edc0b\": container with ID starting with 7967134b83f9d556db69f174d122625e641798677772afb64078f010848edc0b not found: ID does not exist"
Jan 26 16:25:39 crc kubenswrapper[4896]: I0126 16:25:39.395903 4896 scope.go:117] "RemoveContainer" containerID="074a50dd7a4e25dfce69df3ae59a2c02296ad45a2124b816b883a1ec39e4407a"
Jan 26 16:25:39 crc kubenswrapper[4896]: E0126 16:25:39.396163 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"074a50dd7a4e25dfce69df3ae59a2c02296ad45a2124b816b883a1ec39e4407a\": container with ID starting with 074a50dd7a4e25dfce69df3ae59a2c02296ad45a2124b816b883a1ec39e4407a not found: ID does not exist" containerID="074a50dd7a4e25dfce69df3ae59a2c02296ad45a2124b816b883a1ec39e4407a"
Jan 26 16:25:39 crc kubenswrapper[4896]: I0126 16:25:39.396189 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"074a50dd7a4e25dfce69df3ae59a2c02296ad45a2124b816b883a1ec39e4407a"} err="failed to get container status \"074a50dd7a4e25dfce69df3ae59a2c02296ad45a2124b816b883a1ec39e4407a\": rpc error: code = NotFound desc = could not find container \"074a50dd7a4e25dfce69df3ae59a2c02296ad45a2124b816b883a1ec39e4407a\": container with ID starting with 074a50dd7a4e25dfce69df3ae59a2c02296ad45a2124b816b883a1ec39e4407a not found: ID does not exist"
Jan 26 16:25:39 crc kubenswrapper[4896]: I0126 16:25:39.396204 4896 scope.go:117] "RemoveContainer" containerID="222f9bfe0888926f0375d238f5e3e02540907eea5d9c34e969f480d28d82eee8"
Jan 26 16:25:39 crc kubenswrapper[4896]: E0126 16:25:39.396424 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"222f9bfe0888926f0375d238f5e3e02540907eea5d9c34e969f480d28d82eee8\": container with ID starting with 222f9bfe0888926f0375d238f5e3e02540907eea5d9c34e969f480d28d82eee8 not found: ID does not exist" containerID="222f9bfe0888926f0375d238f5e3e02540907eea5d9c34e969f480d28d82eee8"
Jan 26 16:25:39 crc kubenswrapper[4896]: I0126 16:25:39.396462 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"222f9bfe0888926f0375d238f5e3e02540907eea5d9c34e969f480d28d82eee8"} err="failed to get container status \"222f9bfe0888926f0375d238f5e3e02540907eea5d9c34e969f480d28d82eee8\": rpc error: code = NotFound desc = could not find container \"222f9bfe0888926f0375d238f5e3e02540907eea5d9c34e969f480d28d82eee8\": container with ID starting with 222f9bfe0888926f0375d238f5e3e02540907eea5d9c34e969f480d28d82eee8 not found: ID does not exist"
Jan 26 16:25:40 crc kubenswrapper[4896]: I0126 16:25:40.774831 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1576adcb-101e-4f90-bf6e-4570008fc12c" path="/var/lib/kubelet/pods/1576adcb-101e-4f90-bf6e-4570008fc12c/volumes"
Jan 26 16:25:48 crc kubenswrapper[4896]: I0126 16:25:48.814682 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 16:25:48 crc kubenswrapper[4896]: I0126 16:25:48.815271 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 16:26:18 crc kubenswrapper[4896]: I0126 16:26:18.818363 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 16:26:18 crc kubenswrapper[4896]: I0126 16:26:18.819025 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 16:26:18 crc kubenswrapper[4896]: I0126 16:26:18.819082 4896 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw"
Jan 26 16:26:18 crc kubenswrapper[4896]: I0126 16:26:18.820258 4896 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cd46ff27f060438e6d2dd96d69cea4f34484f20018f15aef2d0456fb62faa2e1"} pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 16:26:18 crc kubenswrapper[4896]: I0126 16:26:18.820335 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" containerID="cri-o://cd46ff27f060438e6d2dd96d69cea4f34484f20018f15aef2d0456fb62faa2e1" gracePeriod=600
Jan 26 16:26:18 crc kubenswrapper[4896]: E0126 16:26:18.957246 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 16:26:19 crc kubenswrapper[4896]: I0126 16:26:19.844854 4896 generic.go:334] "Generic (PLEG): container finished" podID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerID="cd46ff27f060438e6d2dd96d69cea4f34484f20018f15aef2d0456fb62faa2e1" exitCode=0
Jan 26 16:26:19 crc kubenswrapper[4896]: I0126 16:26:19.844919 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerDied","Data":"cd46ff27f060438e6d2dd96d69cea4f34484f20018f15aef2d0456fb62faa2e1"}
Jan 26 16:26:19 crc kubenswrapper[4896]: I0126 16:26:19.845906 4896 scope.go:117] "RemoveContainer" containerID="38232dea192e6d1bbe2633ff918eb1f5d8a8a536073d7952e424ab2ad966b2cc"
Jan 26 16:26:19 crc kubenswrapper[4896]: I0126 16:26:19.846838 4896 scope.go:117] "RemoveContainer" containerID="cd46ff27f060438e6d2dd96d69cea4f34484f20018f15aef2d0456fb62faa2e1"
Jan 26 16:26:19 crc kubenswrapper[4896]: E0126 16:26:19.847253 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 16:26:33 crc kubenswrapper[4896]: I0126 16:26:33.759451 4896 scope.go:117] "RemoveContainer" containerID="cd46ff27f060438e6d2dd96d69cea4f34484f20018f15aef2d0456fb62faa2e1"
Jan 26 16:26:33 crc kubenswrapper[4896]: 
E0126 16:26:33.760926 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:26:47 crc kubenswrapper[4896]: I0126 16:26:47.759934 4896 scope.go:117] "RemoveContainer" containerID="cd46ff27f060438e6d2dd96d69cea4f34484f20018f15aef2d0456fb62faa2e1" Jan 26 16:26:47 crc kubenswrapper[4896]: E0126 16:26:47.785113 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:26:59 crc kubenswrapper[4896]: I0126 16:26:59.759599 4896 scope.go:117] "RemoveContainer" containerID="cd46ff27f060438e6d2dd96d69cea4f34484f20018f15aef2d0456fb62faa2e1" Jan 26 16:26:59 crc kubenswrapper[4896]: E0126 16:26:59.760331 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:27:14 crc kubenswrapper[4896]: I0126 16:27:14.761723 4896 scope.go:117] "RemoveContainer" containerID="cd46ff27f060438e6d2dd96d69cea4f34484f20018f15aef2d0456fb62faa2e1" Jan 26 16:27:14 crc 
kubenswrapper[4896]: E0126 16:27:14.762918 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:27:24 crc kubenswrapper[4896]: I0126 16:27:24.695847 4896 generic.go:334] "Generic (PLEG): container finished" podID="c768c99c-1655-4c81-9eea-6676fc125f3d" containerID="75db420319e34452d4c8d4a2bd56318ae462cd2c5a6389a713728fc8e846d66c" exitCode=0 Jan 26 16:27:24 crc kubenswrapper[4896]: I0126 16:27:24.695958 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wmb6h" event={"ID":"c768c99c-1655-4c81-9eea-6676fc125f3d","Type":"ContainerDied","Data":"75db420319e34452d4c8d4a2bd56318ae462cd2c5a6389a713728fc8e846d66c"} Jan 26 16:27:26 crc kubenswrapper[4896]: I0126 16:27:26.198296 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wmb6h" Jan 26 16:27:26 crc kubenswrapper[4896]: I0126 16:27:26.214305 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c768c99c-1655-4c81-9eea-6676fc125f3d-ssh-key-openstack-edpm-ipam\") pod \"c768c99c-1655-4c81-9eea-6676fc125f3d\" (UID: \"c768c99c-1655-4c81-9eea-6676fc125f3d\") " Jan 26 16:27:26 crc kubenswrapper[4896]: I0126 16:27:26.214403 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c768c99c-1655-4c81-9eea-6676fc125f3d-nova-extra-config-0\") pod \"c768c99c-1655-4c81-9eea-6676fc125f3d\" (UID: \"c768c99c-1655-4c81-9eea-6676fc125f3d\") " Jan 26 16:27:26 crc kubenswrapper[4896]: I0126 16:27:26.214448 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c768c99c-1655-4c81-9eea-6676fc125f3d-nova-cell1-compute-config-0\") pod \"c768c99c-1655-4c81-9eea-6676fc125f3d\" (UID: \"c768c99c-1655-4c81-9eea-6676fc125f3d\") " Jan 26 16:27:26 crc kubenswrapper[4896]: I0126 16:27:26.214471 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c768c99c-1655-4c81-9eea-6676fc125f3d-nova-migration-ssh-key-0\") pod \"c768c99c-1655-4c81-9eea-6676fc125f3d\" (UID: \"c768c99c-1655-4c81-9eea-6676fc125f3d\") " Jan 26 16:27:26 crc kubenswrapper[4896]: I0126 16:27:26.214511 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9p62m\" (UniqueName: \"kubernetes.io/projected/c768c99c-1655-4c81-9eea-6676fc125f3d-kube-api-access-9p62m\") pod \"c768c99c-1655-4c81-9eea-6676fc125f3d\" (UID: \"c768c99c-1655-4c81-9eea-6676fc125f3d\") " Jan 26 16:27:26 crc kubenswrapper[4896]: I0126 
16:27:26.214620 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c768c99c-1655-4c81-9eea-6676fc125f3d-inventory\") pod \"c768c99c-1655-4c81-9eea-6676fc125f3d\" (UID: \"c768c99c-1655-4c81-9eea-6676fc125f3d\") " Jan 26 16:27:26 crc kubenswrapper[4896]: I0126 16:27:26.214664 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c768c99c-1655-4c81-9eea-6676fc125f3d-nova-migration-ssh-key-1\") pod \"c768c99c-1655-4c81-9eea-6676fc125f3d\" (UID: \"c768c99c-1655-4c81-9eea-6676fc125f3d\") " Jan 26 16:27:26 crc kubenswrapper[4896]: I0126 16:27:26.214703 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c768c99c-1655-4c81-9eea-6676fc125f3d-nova-combined-ca-bundle\") pod \"c768c99c-1655-4c81-9eea-6676fc125f3d\" (UID: \"c768c99c-1655-4c81-9eea-6676fc125f3d\") " Jan 26 16:27:26 crc kubenswrapper[4896]: I0126 16:27:26.214758 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c768c99c-1655-4c81-9eea-6676fc125f3d-nova-cell1-compute-config-1\") pod \"c768c99c-1655-4c81-9eea-6676fc125f3d\" (UID: \"c768c99c-1655-4c81-9eea-6676fc125f3d\") " Jan 26 16:27:26 crc kubenswrapper[4896]: I0126 16:27:26.223543 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c768c99c-1655-4c81-9eea-6676fc125f3d-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "c768c99c-1655-4c81-9eea-6676fc125f3d" (UID: "c768c99c-1655-4c81-9eea-6676fc125f3d"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:27:26 crc kubenswrapper[4896]: I0126 16:27:26.226272 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c768c99c-1655-4c81-9eea-6676fc125f3d-kube-api-access-9p62m" (OuterVolumeSpecName: "kube-api-access-9p62m") pod "c768c99c-1655-4c81-9eea-6676fc125f3d" (UID: "c768c99c-1655-4c81-9eea-6676fc125f3d"). InnerVolumeSpecName "kube-api-access-9p62m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:27:26 crc kubenswrapper[4896]: I0126 16:27:26.253727 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c768c99c-1655-4c81-9eea-6676fc125f3d-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "c768c99c-1655-4c81-9eea-6676fc125f3d" (UID: "c768c99c-1655-4c81-9eea-6676fc125f3d"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:27:26 crc kubenswrapper[4896]: I0126 16:27:26.262684 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c768c99c-1655-4c81-9eea-6676fc125f3d-inventory" (OuterVolumeSpecName: "inventory") pod "c768c99c-1655-4c81-9eea-6676fc125f3d" (UID: "c768c99c-1655-4c81-9eea-6676fc125f3d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:27:26 crc kubenswrapper[4896]: I0126 16:27:26.265016 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c768c99c-1655-4c81-9eea-6676fc125f3d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c768c99c-1655-4c81-9eea-6676fc125f3d" (UID: "c768c99c-1655-4c81-9eea-6676fc125f3d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:27:26 crc kubenswrapper[4896]: I0126 16:27:26.270147 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c768c99c-1655-4c81-9eea-6676fc125f3d-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "c768c99c-1655-4c81-9eea-6676fc125f3d" (UID: "c768c99c-1655-4c81-9eea-6676fc125f3d"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:27:26 crc kubenswrapper[4896]: I0126 16:27:26.273395 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c768c99c-1655-4c81-9eea-6676fc125f3d-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "c768c99c-1655-4c81-9eea-6676fc125f3d" (UID: "c768c99c-1655-4c81-9eea-6676fc125f3d"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:27:26 crc kubenswrapper[4896]: I0126 16:27:26.274331 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c768c99c-1655-4c81-9eea-6676fc125f3d-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "c768c99c-1655-4c81-9eea-6676fc125f3d" (UID: "c768c99c-1655-4c81-9eea-6676fc125f3d"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:27:26 crc kubenswrapper[4896]: I0126 16:27:26.277174 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c768c99c-1655-4c81-9eea-6676fc125f3d-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "c768c99c-1655-4c81-9eea-6676fc125f3d" (UID: "c768c99c-1655-4c81-9eea-6676fc125f3d"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:27:26 crc kubenswrapper[4896]: I0126 16:27:26.317707 4896 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c768c99c-1655-4c81-9eea-6676fc125f3d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:26 crc kubenswrapper[4896]: I0126 16:27:26.317739 4896 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c768c99c-1655-4c81-9eea-6676fc125f3d-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:26 crc kubenswrapper[4896]: I0126 16:27:26.317748 4896 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c768c99c-1655-4c81-9eea-6676fc125f3d-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:26 crc kubenswrapper[4896]: I0126 16:27:26.317759 4896 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c768c99c-1655-4c81-9eea-6676fc125f3d-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:26 crc kubenswrapper[4896]: I0126 16:27:26.317768 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9p62m\" (UniqueName: \"kubernetes.io/projected/c768c99c-1655-4c81-9eea-6676fc125f3d-kube-api-access-9p62m\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:26 crc kubenswrapper[4896]: I0126 16:27:26.317778 4896 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c768c99c-1655-4c81-9eea-6676fc125f3d-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:26 crc kubenswrapper[4896]: I0126 16:27:26.317787 4896 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c768c99c-1655-4c81-9eea-6676fc125f3d-nova-migration-ssh-key-1\") on node 
\"crc\" DevicePath \"\"" Jan 26 16:27:26 crc kubenswrapper[4896]: I0126 16:27:26.317796 4896 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c768c99c-1655-4c81-9eea-6676fc125f3d-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:26 crc kubenswrapper[4896]: I0126 16:27:26.317806 4896 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c768c99c-1655-4c81-9eea-6676fc125f3d-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:26 crc kubenswrapper[4896]: I0126 16:27:26.720134 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wmb6h" event={"ID":"c768c99c-1655-4c81-9eea-6676fc125f3d","Type":"ContainerDied","Data":"027ed3f85ae2b9553e5909276b36a6e40fe56e11abfb37a4914414b681b46817"} Jan 26 16:27:26 crc kubenswrapper[4896]: I0126 16:27:26.720186 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="027ed3f85ae2b9553e5909276b36a6e40fe56e11abfb37a4914414b681b46817" Jan 26 16:27:26 crc kubenswrapper[4896]: I0126 16:27:26.720206 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wmb6h" Jan 26 16:27:27 crc kubenswrapper[4896]: I0126 16:27:27.333941 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j54lh"] Jan 26 16:27:27 crc kubenswrapper[4896]: E0126 16:27:27.334928 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1576adcb-101e-4f90-bf6e-4570008fc12c" containerName="extract-content" Jan 26 16:27:27 crc kubenswrapper[4896]: I0126 16:27:27.334946 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="1576adcb-101e-4f90-bf6e-4570008fc12c" containerName="extract-content" Jan 26 16:27:27 crc kubenswrapper[4896]: E0126 16:27:27.334973 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c768c99c-1655-4c81-9eea-6676fc125f3d" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 26 16:27:27 crc kubenswrapper[4896]: I0126 16:27:27.334982 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="c768c99c-1655-4c81-9eea-6676fc125f3d" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 26 16:27:27 crc kubenswrapper[4896]: E0126 16:27:27.334997 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1576adcb-101e-4f90-bf6e-4570008fc12c" containerName="extract-utilities" Jan 26 16:27:27 crc kubenswrapper[4896]: I0126 16:27:27.335006 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="1576adcb-101e-4f90-bf6e-4570008fc12c" containerName="extract-utilities" Jan 26 16:27:27 crc kubenswrapper[4896]: E0126 16:27:27.335025 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1576adcb-101e-4f90-bf6e-4570008fc12c" containerName="registry-server" Jan 26 16:27:27 crc kubenswrapper[4896]: I0126 16:27:27.335032 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="1576adcb-101e-4f90-bf6e-4570008fc12c" containerName="registry-server" Jan 26 16:27:27 crc kubenswrapper[4896]: I0126 16:27:27.335341 4896 
memory_manager.go:354] "RemoveStaleState removing state" podUID="1576adcb-101e-4f90-bf6e-4570008fc12c" containerName="registry-server" Jan 26 16:27:27 crc kubenswrapper[4896]: I0126 16:27:27.335370 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="c768c99c-1655-4c81-9eea-6676fc125f3d" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 26 16:27:27 crc kubenswrapper[4896]: I0126 16:27:27.336470 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j54lh" Jan 26 16:27:27 crc kubenswrapper[4896]: I0126 16:27:27.340360 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:27:27 crc kubenswrapper[4896]: I0126 16:27:27.340878 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-48n6x" Jan 26 16:27:27 crc kubenswrapper[4896]: I0126 16:27:27.340927 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 16:27:27 crc kubenswrapper[4896]: I0126 16:27:27.340927 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 26 16:27:27 crc kubenswrapper[4896]: I0126 16:27:27.340937 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 16:27:27 crc kubenswrapper[4896]: I0126 16:27:27.373637 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j54lh"] Jan 26 16:27:27 crc kubenswrapper[4896]: I0126 16:27:27.509905 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/328a79d0-c276-4dcb-812b-b2436c4031dc-ceilometer-compute-config-data-0\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-j54lh\" (UID: \"328a79d0-c276-4dcb-812b-b2436c4031dc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j54lh" Jan 26 16:27:27 crc kubenswrapper[4896]: I0126 16:27:27.510118 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/328a79d0-c276-4dcb-812b-b2436c4031dc-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j54lh\" (UID: \"328a79d0-c276-4dcb-812b-b2436c4031dc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j54lh" Jan 26 16:27:27 crc kubenswrapper[4896]: I0126 16:27:27.510259 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328a79d0-c276-4dcb-812b-b2436c4031dc-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j54lh\" (UID: \"328a79d0-c276-4dcb-812b-b2436c4031dc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j54lh" Jan 26 16:27:27 crc kubenswrapper[4896]: I0126 16:27:27.510319 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/328a79d0-c276-4dcb-812b-b2436c4031dc-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j54lh\" (UID: \"328a79d0-c276-4dcb-812b-b2436c4031dc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j54lh" Jan 26 16:27:27 crc kubenswrapper[4896]: I0126 16:27:27.510387 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/328a79d0-c276-4dcb-812b-b2436c4031dc-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j54lh\" (UID: \"328a79d0-c276-4dcb-812b-b2436c4031dc\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j54lh" Jan 26 16:27:27 crc kubenswrapper[4896]: I0126 16:27:27.510491 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/328a79d0-c276-4dcb-812b-b2436c4031dc-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j54lh\" (UID: \"328a79d0-c276-4dcb-812b-b2436c4031dc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j54lh" Jan 26 16:27:27 crc kubenswrapper[4896]: I0126 16:27:27.510520 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwdsf\" (UniqueName: \"kubernetes.io/projected/328a79d0-c276-4dcb-812b-b2436c4031dc-kube-api-access-fwdsf\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j54lh\" (UID: \"328a79d0-c276-4dcb-812b-b2436c4031dc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j54lh" Jan 26 16:27:27 crc kubenswrapper[4896]: I0126 16:27:27.613428 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/328a79d0-c276-4dcb-812b-b2436c4031dc-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j54lh\" (UID: \"328a79d0-c276-4dcb-812b-b2436c4031dc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j54lh" Jan 26 16:27:27 crc kubenswrapper[4896]: I0126 16:27:27.613491 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwdsf\" (UniqueName: \"kubernetes.io/projected/328a79d0-c276-4dcb-812b-b2436c4031dc-kube-api-access-fwdsf\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j54lh\" (UID: \"328a79d0-c276-4dcb-812b-b2436c4031dc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j54lh" Jan 26 16:27:27 crc kubenswrapper[4896]: I0126 16:27:27.613611 4896 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/328a79d0-c276-4dcb-812b-b2436c4031dc-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j54lh\" (UID: \"328a79d0-c276-4dcb-812b-b2436c4031dc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j54lh" Jan 26 16:27:27 crc kubenswrapper[4896]: I0126 16:27:27.613653 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/328a79d0-c276-4dcb-812b-b2436c4031dc-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j54lh\" (UID: \"328a79d0-c276-4dcb-812b-b2436c4031dc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j54lh" Jan 26 16:27:27 crc kubenswrapper[4896]: I0126 16:27:27.613724 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328a79d0-c276-4dcb-812b-b2436c4031dc-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j54lh\" (UID: \"328a79d0-c276-4dcb-812b-b2436c4031dc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j54lh" Jan 26 16:27:27 crc kubenswrapper[4896]: I0126 16:27:27.613761 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/328a79d0-c276-4dcb-812b-b2436c4031dc-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j54lh\" (UID: \"328a79d0-c276-4dcb-812b-b2436c4031dc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j54lh" Jan 26 16:27:27 crc kubenswrapper[4896]: I0126 16:27:27.613814 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: 
\"kubernetes.io/secret/328a79d0-c276-4dcb-812b-b2436c4031dc-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j54lh\" (UID: \"328a79d0-c276-4dcb-812b-b2436c4031dc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j54lh" Jan 26 16:27:27 crc kubenswrapper[4896]: I0126 16:27:27.623417 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/328a79d0-c276-4dcb-812b-b2436c4031dc-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j54lh\" (UID: \"328a79d0-c276-4dcb-812b-b2436c4031dc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j54lh" Jan 26 16:27:27 crc kubenswrapper[4896]: I0126 16:27:27.623694 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/328a79d0-c276-4dcb-812b-b2436c4031dc-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j54lh\" (UID: \"328a79d0-c276-4dcb-812b-b2436c4031dc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j54lh" Jan 26 16:27:27 crc kubenswrapper[4896]: I0126 16:27:27.624007 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/328a79d0-c276-4dcb-812b-b2436c4031dc-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j54lh\" (UID: \"328a79d0-c276-4dcb-812b-b2436c4031dc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j54lh" Jan 26 16:27:27 crc kubenswrapper[4896]: I0126 16:27:27.624290 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/328a79d0-c276-4dcb-812b-b2436c4031dc-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j54lh\" (UID: \"328a79d0-c276-4dcb-812b-b2436c4031dc\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j54lh"
Jan 26 16:27:27 crc kubenswrapper[4896]: I0126 16:27:27.624605 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328a79d0-c276-4dcb-812b-b2436c4031dc-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j54lh\" (UID: \"328a79d0-c276-4dcb-812b-b2436c4031dc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j54lh"
Jan 26 16:27:27 crc kubenswrapper[4896]: I0126 16:27:27.637308 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/328a79d0-c276-4dcb-812b-b2436c4031dc-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j54lh\" (UID: \"328a79d0-c276-4dcb-812b-b2436c4031dc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j54lh"
Jan 26 16:27:27 crc kubenswrapper[4896]: I0126 16:27:27.641255 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwdsf\" (UniqueName: \"kubernetes.io/projected/328a79d0-c276-4dcb-812b-b2436c4031dc-kube-api-access-fwdsf\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j54lh\" (UID: \"328a79d0-c276-4dcb-812b-b2436c4031dc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j54lh"
Jan 26 16:27:27 crc kubenswrapper[4896]: I0126 16:27:27.670122 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j54lh"
Jan 26 16:27:27 crc kubenswrapper[4896]: I0126 16:27:27.759079 4896 scope.go:117] "RemoveContainer" containerID="cd46ff27f060438e6d2dd96d69cea4f34484f20018f15aef2d0456fb62faa2e1"
Jan 26 16:27:27 crc kubenswrapper[4896]: E0126 16:27:27.759879 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 16:27:28 crc kubenswrapper[4896]: I0126 16:27:28.449570 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j54lh"]
Jan 26 16:27:28 crc kubenswrapper[4896]: I0126 16:27:28.744489 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j54lh" event={"ID":"328a79d0-c276-4dcb-812b-b2436c4031dc","Type":"ContainerStarted","Data":"cb9d1e3c45196e8405e8b41b8ee057a2e4f9033afe8227236e371c36917c3449"}
Jan 26 16:27:29 crc kubenswrapper[4896]: I0126 16:27:29.758066 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j54lh" event={"ID":"328a79d0-c276-4dcb-812b-b2436c4031dc","Type":"ContainerStarted","Data":"56afa667721dd6b2c929007850863d8e47b45ed44642dc57bfadc9f3ca745c3f"}
Jan 26 16:27:29 crc kubenswrapper[4896]: I0126 16:27:29.779086 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j54lh" podStartSLOduration=2.11424591 podStartE2EDuration="2.779065243s" podCreationTimestamp="2026-01-26 16:27:27 +0000 UTC" firstStartedPulling="2026-01-26 16:27:28.443293868 +0000 UTC m=+3206.225174261" lastFinishedPulling="2026-01-26 16:27:29.108113201 +0000 UTC m=+3206.889993594" observedRunningTime="2026-01-26 16:27:29.774199745 +0000 UTC m=+3207.556080158" watchObservedRunningTime="2026-01-26 16:27:29.779065243 +0000 UTC m=+3207.560945636"
Jan 26 16:27:42 crc kubenswrapper[4896]: I0126 16:27:42.772865 4896 scope.go:117] "RemoveContainer" containerID="cd46ff27f060438e6d2dd96d69cea4f34484f20018f15aef2d0456fb62faa2e1"
Jan 26 16:27:42 crc kubenswrapper[4896]: E0126 16:27:42.773760 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 16:27:57 crc kubenswrapper[4896]: I0126 16:27:57.759201 4896 scope.go:117] "RemoveContainer" containerID="cd46ff27f060438e6d2dd96d69cea4f34484f20018f15aef2d0456fb62faa2e1"
Jan 26 16:27:57 crc kubenswrapper[4896]: E0126 16:27:57.760091 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 16:28:10 crc kubenswrapper[4896]: I0126 16:28:10.758910 4896 scope.go:117] "RemoveContainer" containerID="cd46ff27f060438e6d2dd96d69cea4f34484f20018f15aef2d0456fb62faa2e1"
Jan 26 16:28:10 crc kubenswrapper[4896]: E0126 16:28:10.759782 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 16:28:24 crc kubenswrapper[4896]: I0126 16:28:24.760636 4896 scope.go:117] "RemoveContainer" containerID="cd46ff27f060438e6d2dd96d69cea4f34484f20018f15aef2d0456fb62faa2e1"
Jan 26 16:28:24 crc kubenswrapper[4896]: E0126 16:28:24.761383 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 16:28:39 crc kubenswrapper[4896]: I0126 16:28:39.760848 4896 scope.go:117] "RemoveContainer" containerID="cd46ff27f060438e6d2dd96d69cea4f34484f20018f15aef2d0456fb62faa2e1"
Jan 26 16:28:39 crc kubenswrapper[4896]: E0126 16:28:39.762151 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 16:28:41 crc kubenswrapper[4896]: I0126 16:28:41.532375 4896 scope.go:117] "RemoveContainer" containerID="ae10ad0cf5e30ea36ca2ec60dccc400ed56286f4d80998078e98c63d02d01696"
Jan 26 16:28:41 crc kubenswrapper[4896]: I0126 16:28:41.576853 4896 scope.go:117] "RemoveContainer" containerID="68086d56a36bbe91a0301c56a3df603a324ae0954ba5f85353eec11efd2520d9"
Jan 26 16:28:41 crc kubenswrapper[4896]: I0126 16:28:41.789253 4896 scope.go:117] "RemoveContainer" containerID="254f80f5310304df909d5f05767ce1bafe53d4fd396942400220aae453b7df5b"
Jan 26 16:28:51 crc kubenswrapper[4896]: I0126 16:28:51.759934 4896 scope.go:117] "RemoveContainer" containerID="cd46ff27f060438e6d2dd96d69cea4f34484f20018f15aef2d0456fb62faa2e1"
Jan 26 16:28:51 crc kubenswrapper[4896]: E0126 16:28:51.760907 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 16:29:06 crc kubenswrapper[4896]: I0126 16:29:06.760074 4896 scope.go:117] "RemoveContainer" containerID="cd46ff27f060438e6d2dd96d69cea4f34484f20018f15aef2d0456fb62faa2e1"
Jan 26 16:29:06 crc kubenswrapper[4896]: E0126 16:29:06.761004 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 16:29:20 crc kubenswrapper[4896]: I0126 16:29:20.759390 4896 scope.go:117] "RemoveContainer" containerID="cd46ff27f060438e6d2dd96d69cea4f34484f20018f15aef2d0456fb62faa2e1"
Jan 26 16:29:20 crc kubenswrapper[4896]: E0126 16:29:20.760481 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 16:29:35 crc kubenswrapper[4896]: I0126 16:29:35.760223 4896 scope.go:117] "RemoveContainer" containerID="cd46ff27f060438e6d2dd96d69cea4f34484f20018f15aef2d0456fb62faa2e1"
Jan 26 16:29:35 crc kubenswrapper[4896]: E0126 16:29:35.761235 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 16:29:46 crc kubenswrapper[4896]: I0126 16:29:46.760204 4896 scope.go:117] "RemoveContainer" containerID="cd46ff27f060438e6d2dd96d69cea4f34484f20018f15aef2d0456fb62faa2e1"
Jan 26 16:29:46 crc kubenswrapper[4896]: E0126 16:29:46.761537 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 16:30:00 crc kubenswrapper[4896]: I0126 16:30:00.157267 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490750-bg7lh"]
Jan 26 16:30:00 crc kubenswrapper[4896]: I0126 16:30:00.161254 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-bg7lh"
Jan 26 16:30:00 crc kubenswrapper[4896]: I0126 16:30:00.164618 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 26 16:30:00 crc kubenswrapper[4896]: I0126 16:30:00.164665 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 26 16:30:00 crc kubenswrapper[4896]: I0126 16:30:00.181232 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490750-bg7lh"]
Jan 26 16:30:00 crc kubenswrapper[4896]: I0126 16:30:00.240029 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8b634c1-bd6a-40fa-96d6-8a92a521b18e-config-volume\") pod \"collect-profiles-29490750-bg7lh\" (UID: \"c8b634c1-bd6a-40fa-96d6-8a92a521b18e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-bg7lh"
Jan 26 16:30:00 crc kubenswrapper[4896]: I0126 16:30:00.240347 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f9gp\" (UniqueName: \"kubernetes.io/projected/c8b634c1-bd6a-40fa-96d6-8a92a521b18e-kube-api-access-5f9gp\") pod \"collect-profiles-29490750-bg7lh\" (UID: \"c8b634c1-bd6a-40fa-96d6-8a92a521b18e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-bg7lh"
Jan 26 16:30:00 crc kubenswrapper[4896]: I0126 16:30:00.240683 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c8b634c1-bd6a-40fa-96d6-8a92a521b18e-secret-volume\") pod \"collect-profiles-29490750-bg7lh\" (UID: \"c8b634c1-bd6a-40fa-96d6-8a92a521b18e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-bg7lh"
Jan 26 16:30:00 crc kubenswrapper[4896]: I0126 16:30:00.343780 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c8b634c1-bd6a-40fa-96d6-8a92a521b18e-secret-volume\") pod \"collect-profiles-29490750-bg7lh\" (UID: \"c8b634c1-bd6a-40fa-96d6-8a92a521b18e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-bg7lh"
Jan 26 16:30:00 crc kubenswrapper[4896]: I0126 16:30:00.344028 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8b634c1-bd6a-40fa-96d6-8a92a521b18e-config-volume\") pod \"collect-profiles-29490750-bg7lh\" (UID: \"c8b634c1-bd6a-40fa-96d6-8a92a521b18e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-bg7lh"
Jan 26 16:30:00 crc kubenswrapper[4896]: I0126 16:30:00.344133 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f9gp\" (UniqueName: \"kubernetes.io/projected/c8b634c1-bd6a-40fa-96d6-8a92a521b18e-kube-api-access-5f9gp\") pod \"collect-profiles-29490750-bg7lh\" (UID: \"c8b634c1-bd6a-40fa-96d6-8a92a521b18e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-bg7lh"
Jan 26 16:30:00 crc kubenswrapper[4896]: I0126 16:30:00.345733 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8b634c1-bd6a-40fa-96d6-8a92a521b18e-config-volume\") pod \"collect-profiles-29490750-bg7lh\" (UID: \"c8b634c1-bd6a-40fa-96d6-8a92a521b18e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-bg7lh"
Jan 26 16:30:00 crc kubenswrapper[4896]: I0126 16:30:00.350778 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c8b634c1-bd6a-40fa-96d6-8a92a521b18e-secret-volume\") pod \"collect-profiles-29490750-bg7lh\" (UID: \"c8b634c1-bd6a-40fa-96d6-8a92a521b18e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-bg7lh"
Jan 26 16:30:00 crc kubenswrapper[4896]: I0126 16:30:00.368146 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f9gp\" (UniqueName: \"kubernetes.io/projected/c8b634c1-bd6a-40fa-96d6-8a92a521b18e-kube-api-access-5f9gp\") pod \"collect-profiles-29490750-bg7lh\" (UID: \"c8b634c1-bd6a-40fa-96d6-8a92a521b18e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-bg7lh"
Jan 26 16:30:00 crc kubenswrapper[4896]: I0126 16:30:00.487379 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-bg7lh"
Jan 26 16:30:00 crc kubenswrapper[4896]: I0126 16:30:00.772229 4896 scope.go:117] "RemoveContainer" containerID="cd46ff27f060438e6d2dd96d69cea4f34484f20018f15aef2d0456fb62faa2e1"
Jan 26 16:30:00 crc kubenswrapper[4896]: E0126 16:30:00.772703 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 16:30:01 crc kubenswrapper[4896]: I0126 16:30:01.140639 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490750-bg7lh"]
Jan 26 16:30:01 crc kubenswrapper[4896]: I0126 16:30:01.968527 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-bg7lh" event={"ID":"c8b634c1-bd6a-40fa-96d6-8a92a521b18e","Type":"ContainerStarted","Data":"ce38006309df248e42ef3840084ca5e951ed698b8acad43ed1e86dce5139c515"}
Jan 26 16:30:01 crc kubenswrapper[4896]: I0126 16:30:01.968848 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-bg7lh" event={"ID":"c8b634c1-bd6a-40fa-96d6-8a92a521b18e","Type":"ContainerStarted","Data":"45e6996c2536bc12ac55e508aadf5e8770eb9d72c76d7473e645d44f6fd5aabe"}
Jan 26 16:30:01 crc kubenswrapper[4896]: I0126 16:30:01.997917 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-bg7lh" podStartSLOduration=1.9977850529999999 podStartE2EDuration="1.997785053s" podCreationTimestamp="2026-01-26 16:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:30:01.983125813 +0000 UTC m=+3359.765006206" watchObservedRunningTime="2026-01-26 16:30:01.997785053 +0000 UTC m=+3359.779665446"
Jan 26 16:30:02 crc kubenswrapper[4896]: I0126 16:30:02.981662 4896 generic.go:334] "Generic (PLEG): container finished" podID="c8b634c1-bd6a-40fa-96d6-8a92a521b18e" containerID="ce38006309df248e42ef3840084ca5e951ed698b8acad43ed1e86dce5139c515" exitCode=0
Jan 26 16:30:02 crc kubenswrapper[4896]: I0126 16:30:02.981935 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-bg7lh" event={"ID":"c8b634c1-bd6a-40fa-96d6-8a92a521b18e","Type":"ContainerDied","Data":"ce38006309df248e42ef3840084ca5e951ed698b8acad43ed1e86dce5139c515"}
Jan 26 16:30:04 crc kubenswrapper[4896]: I0126 16:30:04.542038 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-bg7lh"
Jan 26 16:30:04 crc kubenswrapper[4896]: I0126 16:30:04.680043 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8b634c1-bd6a-40fa-96d6-8a92a521b18e-config-volume\") pod \"c8b634c1-bd6a-40fa-96d6-8a92a521b18e\" (UID: \"c8b634c1-bd6a-40fa-96d6-8a92a521b18e\") "
Jan 26 16:30:04 crc kubenswrapper[4896]: I0126 16:30:04.680177 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5f9gp\" (UniqueName: \"kubernetes.io/projected/c8b634c1-bd6a-40fa-96d6-8a92a521b18e-kube-api-access-5f9gp\") pod \"c8b634c1-bd6a-40fa-96d6-8a92a521b18e\" (UID: \"c8b634c1-bd6a-40fa-96d6-8a92a521b18e\") "
Jan 26 16:30:04 crc kubenswrapper[4896]: I0126 16:30:04.680273 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c8b634c1-bd6a-40fa-96d6-8a92a521b18e-secret-volume\") pod \"c8b634c1-bd6a-40fa-96d6-8a92a521b18e\" (UID: \"c8b634c1-bd6a-40fa-96d6-8a92a521b18e\") "
Jan 26 16:30:04 crc kubenswrapper[4896]: I0126 16:30:04.681857 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8b634c1-bd6a-40fa-96d6-8a92a521b18e-config-volume" (OuterVolumeSpecName: "config-volume") pod "c8b634c1-bd6a-40fa-96d6-8a92a521b18e" (UID: "c8b634c1-bd6a-40fa-96d6-8a92a521b18e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 16:30:04 crc kubenswrapper[4896]: I0126 16:30:04.689311 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8b634c1-bd6a-40fa-96d6-8a92a521b18e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c8b634c1-bd6a-40fa-96d6-8a92a521b18e" (UID: "c8b634c1-bd6a-40fa-96d6-8a92a521b18e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:30:04 crc kubenswrapper[4896]: I0126 16:30:04.689402 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8b634c1-bd6a-40fa-96d6-8a92a521b18e-kube-api-access-5f9gp" (OuterVolumeSpecName: "kube-api-access-5f9gp") pod "c8b634c1-bd6a-40fa-96d6-8a92a521b18e" (UID: "c8b634c1-bd6a-40fa-96d6-8a92a521b18e"). InnerVolumeSpecName "kube-api-access-5f9gp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:30:04 crc kubenswrapper[4896]: I0126 16:30:04.782864 4896 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8b634c1-bd6a-40fa-96d6-8a92a521b18e-config-volume\") on node \"crc\" DevicePath \"\""
Jan 26 16:30:04 crc kubenswrapper[4896]: I0126 16:30:04.782897 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5f9gp\" (UniqueName: \"kubernetes.io/projected/c8b634c1-bd6a-40fa-96d6-8a92a521b18e-kube-api-access-5f9gp\") on node \"crc\" DevicePath \"\""
Jan 26 16:30:04 crc kubenswrapper[4896]: I0126 16:30:04.782910 4896 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c8b634c1-bd6a-40fa-96d6-8a92a521b18e-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 26 16:30:05 crc kubenswrapper[4896]: I0126 16:30:05.005405 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-bg7lh" event={"ID":"c8b634c1-bd6a-40fa-96d6-8a92a521b18e","Type":"ContainerDied","Data":"45e6996c2536bc12ac55e508aadf5e8770eb9d72c76d7473e645d44f6fd5aabe"}
Jan 26 16:30:05 crc kubenswrapper[4896]: I0126 16:30:05.005821 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45e6996c2536bc12ac55e508aadf5e8770eb9d72c76d7473e645d44f6fd5aabe"
Jan 26 16:30:05 crc kubenswrapper[4896]: I0126 16:30:05.005504 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-bg7lh"
Jan 26 16:30:05 crc kubenswrapper[4896]: I0126 16:30:05.064253 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490705-gjjgd"]
Jan 26 16:30:05 crc kubenswrapper[4896]: I0126 16:30:05.074691 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490705-gjjgd"]
Jan 26 16:30:06 crc kubenswrapper[4896]: I0126 16:30:06.774502 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fffef558-48ba-43a0-81e5-a8c5801b3e8e" path="/var/lib/kubelet/pods/fffef558-48ba-43a0-81e5-a8c5801b3e8e/volumes"
Jan 26 16:30:15 crc kubenswrapper[4896]: I0126 16:30:15.119200 4896 generic.go:334] "Generic (PLEG): container finished" podID="328a79d0-c276-4dcb-812b-b2436c4031dc" containerID="56afa667721dd6b2c929007850863d8e47b45ed44642dc57bfadc9f3ca745c3f" exitCode=0
Jan 26 16:30:15 crc kubenswrapper[4896]: I0126 16:30:15.119304 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j54lh" event={"ID":"328a79d0-c276-4dcb-812b-b2436c4031dc","Type":"ContainerDied","Data":"56afa667721dd6b2c929007850863d8e47b45ed44642dc57bfadc9f3ca745c3f"}
Jan 26 16:30:15 crc kubenswrapper[4896]: I0126 16:30:15.760542 4896 scope.go:117] "RemoveContainer" containerID="cd46ff27f060438e6d2dd96d69cea4f34484f20018f15aef2d0456fb62faa2e1"
Jan 26 16:30:15 crc kubenswrapper[4896]: E0126 16:30:15.761400 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 16:30:16 crc kubenswrapper[4896]: I0126 16:30:16.775344 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j54lh"
Jan 26 16:30:16 crc kubenswrapper[4896]: I0126 16:30:16.919343 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/328a79d0-c276-4dcb-812b-b2436c4031dc-ceilometer-compute-config-data-2\") pod \"328a79d0-c276-4dcb-812b-b2436c4031dc\" (UID: \"328a79d0-c276-4dcb-812b-b2436c4031dc\") "
Jan 26 16:30:16 crc kubenswrapper[4896]: I0126 16:30:16.919434 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328a79d0-c276-4dcb-812b-b2436c4031dc-telemetry-combined-ca-bundle\") pod \"328a79d0-c276-4dcb-812b-b2436c4031dc\" (UID: \"328a79d0-c276-4dcb-812b-b2436c4031dc\") "
Jan 26 16:30:16 crc kubenswrapper[4896]: I0126 16:30:16.919678 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwdsf\" (UniqueName: \"kubernetes.io/projected/328a79d0-c276-4dcb-812b-b2436c4031dc-kube-api-access-fwdsf\") pod \"328a79d0-c276-4dcb-812b-b2436c4031dc\" (UID: \"328a79d0-c276-4dcb-812b-b2436c4031dc\") "
Jan 26 16:30:16 crc kubenswrapper[4896]: I0126 16:30:16.919786 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/328a79d0-c276-4dcb-812b-b2436c4031dc-ssh-key-openstack-edpm-ipam\") pod \"328a79d0-c276-4dcb-812b-b2436c4031dc\" (UID: \"328a79d0-c276-4dcb-812b-b2436c4031dc\") "
Jan 26 16:30:16 crc kubenswrapper[4896]: I0126 16:30:16.919840 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/328a79d0-c276-4dcb-812b-b2436c4031dc-ceilometer-compute-config-data-0\") pod \"328a79d0-c276-4dcb-812b-b2436c4031dc\" (UID: \"328a79d0-c276-4dcb-812b-b2436c4031dc\") "
Jan 26 16:30:16 crc kubenswrapper[4896]: I0126 16:30:16.919889 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/328a79d0-c276-4dcb-812b-b2436c4031dc-ceilometer-compute-config-data-1\") pod \"328a79d0-c276-4dcb-812b-b2436c4031dc\" (UID: \"328a79d0-c276-4dcb-812b-b2436c4031dc\") "
Jan 26 16:30:16 crc kubenswrapper[4896]: I0126 16:30:16.919919 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/328a79d0-c276-4dcb-812b-b2436c4031dc-inventory\") pod \"328a79d0-c276-4dcb-812b-b2436c4031dc\" (UID: \"328a79d0-c276-4dcb-812b-b2436c4031dc\") "
Jan 26 16:30:16 crc kubenswrapper[4896]: I0126 16:30:16.928223 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/328a79d0-c276-4dcb-812b-b2436c4031dc-kube-api-access-fwdsf" (OuterVolumeSpecName: "kube-api-access-fwdsf") pod "328a79d0-c276-4dcb-812b-b2436c4031dc" (UID: "328a79d0-c276-4dcb-812b-b2436c4031dc"). InnerVolumeSpecName "kube-api-access-fwdsf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:30:16 crc kubenswrapper[4896]: I0126 16:30:16.936014 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/328a79d0-c276-4dcb-812b-b2436c4031dc-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "328a79d0-c276-4dcb-812b-b2436c4031dc" (UID: "328a79d0-c276-4dcb-812b-b2436c4031dc"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:30:16 crc kubenswrapper[4896]: I0126 16:30:16.959311 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/328a79d0-c276-4dcb-812b-b2436c4031dc-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "328a79d0-c276-4dcb-812b-b2436c4031dc" (UID: "328a79d0-c276-4dcb-812b-b2436c4031dc"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:30:16 crc kubenswrapper[4896]: I0126 16:30:16.960302 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/328a79d0-c276-4dcb-812b-b2436c4031dc-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "328a79d0-c276-4dcb-812b-b2436c4031dc" (UID: "328a79d0-c276-4dcb-812b-b2436c4031dc"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:30:16 crc kubenswrapper[4896]: I0126 16:30:16.968811 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/328a79d0-c276-4dcb-812b-b2436c4031dc-inventory" (OuterVolumeSpecName: "inventory") pod "328a79d0-c276-4dcb-812b-b2436c4031dc" (UID: "328a79d0-c276-4dcb-812b-b2436c4031dc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:30:16 crc kubenswrapper[4896]: I0126 16:30:16.975116 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/328a79d0-c276-4dcb-812b-b2436c4031dc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "328a79d0-c276-4dcb-812b-b2436c4031dc" (UID: "328a79d0-c276-4dcb-812b-b2436c4031dc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:30:16 crc kubenswrapper[4896]: I0126 16:30:16.976723 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/328a79d0-c276-4dcb-812b-b2436c4031dc-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "328a79d0-c276-4dcb-812b-b2436c4031dc" (UID: "328a79d0-c276-4dcb-812b-b2436c4031dc"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 16:30:17 crc kubenswrapper[4896]: I0126 16:30:17.032583 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwdsf\" (UniqueName: \"kubernetes.io/projected/328a79d0-c276-4dcb-812b-b2436c4031dc-kube-api-access-fwdsf\") on node \"crc\" DevicePath \"\""
Jan 26 16:30:17 crc kubenswrapper[4896]: I0126 16:30:17.032710 4896 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/328a79d0-c276-4dcb-812b-b2436c4031dc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 26 16:30:17 crc kubenswrapper[4896]: I0126 16:30:17.032724 4896 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/328a79d0-c276-4dcb-812b-b2436c4031dc-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\""
Jan 26 16:30:17 crc kubenswrapper[4896]: I0126 16:30:17.032739 4896 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/328a79d0-c276-4dcb-812b-b2436c4031dc-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\""
Jan 26 16:30:17 crc kubenswrapper[4896]: I0126 16:30:17.032755 4896 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/328a79d0-c276-4dcb-812b-b2436c4031dc-inventory\") on node \"crc\" DevicePath \"\""
Jan 26 16:30:17 crc kubenswrapper[4896]: I0126 16:30:17.032768 4896 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/328a79d0-c276-4dcb-812b-b2436c4031dc-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\""
Jan 26 16:30:17 crc kubenswrapper[4896]: I0126 16:30:17.032783 4896 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328a79d0-c276-4dcb-812b-b2436c4031dc-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 16:30:17 crc kubenswrapper[4896]: I0126 16:30:17.152550 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j54lh" event={"ID":"328a79d0-c276-4dcb-812b-b2436c4031dc","Type":"ContainerDied","Data":"cb9d1e3c45196e8405e8b41b8ee057a2e4f9033afe8227236e371c36917c3449"}
Jan 26 16:30:17 crc kubenswrapper[4896]: I0126 16:30:17.152626 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb9d1e3c45196e8405e8b41b8ee057a2e4f9033afe8227236e371c36917c3449"
Jan 26 16:30:17 crc kubenswrapper[4896]: I0126 16:30:17.152677 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j54lh"
Jan 26 16:30:17 crc kubenswrapper[4896]: I0126 16:30:17.315258 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf"]
Jan 26 16:30:17 crc kubenswrapper[4896]: E0126 16:30:17.316374 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8b634c1-bd6a-40fa-96d6-8a92a521b18e" containerName="collect-profiles"
Jan 26 16:30:17 crc kubenswrapper[4896]: I0126 16:30:17.316406 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8b634c1-bd6a-40fa-96d6-8a92a521b18e" containerName="collect-profiles"
Jan 26 16:30:17 crc kubenswrapper[4896]: E0126 16:30:17.316444 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="328a79d0-c276-4dcb-812b-b2436c4031dc" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Jan 26 16:30:17 crc kubenswrapper[4896]: I0126 16:30:17.316504 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="328a79d0-c276-4dcb-812b-b2436c4031dc" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Jan 26 16:30:17 crc kubenswrapper[4896]: I0126 16:30:17.317169 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="328a79d0-c276-4dcb-812b-b2436c4031dc" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Jan 26 16:30:17 crc kubenswrapper[4896]: I0126 16:30:17.317221 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8b634c1-bd6a-40fa-96d6-8a92a521b18e" containerName="collect-profiles"
Jan 26 16:30:17 crc kubenswrapper[4896]: I0126 16:30:17.318934 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf"
Jan 26 16:30:17 crc kubenswrapper[4896]: I0126 16:30:17.322169 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 26 16:30:17 crc kubenswrapper[4896]: I0126 16:30:17.322269 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 26 16:30:17 crc kubenswrapper[4896]: I0126 16:30:17.323140 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 26 16:30:17 crc kubenswrapper[4896]: I0126 16:30:17.323187 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-48n6x"
Jan 26 16:30:17 crc kubenswrapper[4896]: I0126 16:30:17.326042 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data"
Jan 26 16:30:17 crc kubenswrapper[4896]: I0126 16:30:17.342227 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf"]
Jan 26 16:30:17 crc kubenswrapper[4896]: I0126 16:30:17.446937 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a205991d-d9c9-4d4f-b237-9198ac546ae1-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf\" (UID: \"a205991d-d9c9-4d4f-b237-9198ac546ae1\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf"
Jan 26 16:30:17 crc kubenswrapper[4896]: I0126 16:30:17.447270 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a205991d-d9c9-4d4f-b237-9198ac546ae1-inventory\") pod
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf\" (UID: \"a205991d-d9c9-4d4f-b237-9198ac546ae1\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf" Jan 26 16:30:17 crc kubenswrapper[4896]: I0126 16:30:17.447562 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/a205991d-d9c9-4d4f-b237-9198ac546ae1-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf\" (UID: \"a205991d-d9c9-4d4f-b237-9198ac546ae1\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf" Jan 26 16:30:17 crc kubenswrapper[4896]: I0126 16:30:17.447886 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4jz4\" (UniqueName: \"kubernetes.io/projected/a205991d-d9c9-4d4f-b237-9198ac546ae1-kube-api-access-v4jz4\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf\" (UID: \"a205991d-d9c9-4d4f-b237-9198ac546ae1\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf" Jan 26 16:30:17 crc kubenswrapper[4896]: I0126 16:30:17.448004 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a205991d-d9c9-4d4f-b237-9198ac546ae1-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf\" (UID: \"a205991d-d9c9-4d4f-b237-9198ac546ae1\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf" Jan 26 16:30:17 crc kubenswrapper[4896]: I0126 16:30:17.448208 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/a205991d-d9c9-4d4f-b237-9198ac546ae1-ceilometer-ipmi-config-data-0\") 
pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf\" (UID: \"a205991d-d9c9-4d4f-b237-9198ac546ae1\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf" Jan 26 16:30:17 crc kubenswrapper[4896]: I0126 16:30:17.448256 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/a205991d-d9c9-4d4f-b237-9198ac546ae1-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf\" (UID: \"a205991d-d9c9-4d4f-b237-9198ac546ae1\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf" Jan 26 16:30:17 crc kubenswrapper[4896]: I0126 16:30:17.550927 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a205991d-d9c9-4d4f-b237-9198ac546ae1-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf\" (UID: \"a205991d-d9c9-4d4f-b237-9198ac546ae1\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf" Jan 26 16:30:17 crc kubenswrapper[4896]: I0126 16:30:17.551084 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a205991d-d9c9-4d4f-b237-9198ac546ae1-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf\" (UID: \"a205991d-d9c9-4d4f-b237-9198ac546ae1\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf" Jan 26 16:30:17 crc kubenswrapper[4896]: I0126 16:30:17.551160 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/a205991d-d9c9-4d4f-b237-9198ac546ae1-ceilometer-ipmi-config-data-1\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf\" (UID: \"a205991d-d9c9-4d4f-b237-9198ac546ae1\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf" Jan 26 16:30:17 crc kubenswrapper[4896]: I0126 16:30:17.551242 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4jz4\" (UniqueName: \"kubernetes.io/projected/a205991d-d9c9-4d4f-b237-9198ac546ae1-kube-api-access-v4jz4\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf\" (UID: \"a205991d-d9c9-4d4f-b237-9198ac546ae1\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf" Jan 26 16:30:17 crc kubenswrapper[4896]: I0126 16:30:17.551375 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a205991d-d9c9-4d4f-b237-9198ac546ae1-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf\" (UID: \"a205991d-d9c9-4d4f-b237-9198ac546ae1\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf" Jan 26 16:30:17 crc kubenswrapper[4896]: I0126 16:30:17.553047 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/a205991d-d9c9-4d4f-b237-9198ac546ae1-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf\" (UID: \"a205991d-d9c9-4d4f-b237-9198ac546ae1\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf" Jan 26 16:30:17 crc kubenswrapper[4896]: I0126 16:30:17.553129 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/a205991d-d9c9-4d4f-b237-9198ac546ae1-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf\" 
(UID: \"a205991d-d9c9-4d4f-b237-9198ac546ae1\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf" Jan 26 16:30:17 crc kubenswrapper[4896]: I0126 16:30:17.557520 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/a205991d-d9c9-4d4f-b237-9198ac546ae1-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf\" (UID: \"a205991d-d9c9-4d4f-b237-9198ac546ae1\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf" Jan 26 16:30:17 crc kubenswrapper[4896]: I0126 16:30:17.559581 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a205991d-d9c9-4d4f-b237-9198ac546ae1-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf\" (UID: \"a205991d-d9c9-4d4f-b237-9198ac546ae1\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf" Jan 26 16:30:17 crc kubenswrapper[4896]: I0126 16:30:17.561956 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a205991d-d9c9-4d4f-b237-9198ac546ae1-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf\" (UID: \"a205991d-d9c9-4d4f-b237-9198ac546ae1\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf" Jan 26 16:30:17 crc kubenswrapper[4896]: I0126 16:30:17.565366 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a205991d-d9c9-4d4f-b237-9198ac546ae1-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf\" (UID: \"a205991d-d9c9-4d4f-b237-9198ac546ae1\") " 
pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf" Jan 26 16:30:17 crc kubenswrapper[4896]: I0126 16:30:17.565667 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/a205991d-d9c9-4d4f-b237-9198ac546ae1-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf\" (UID: \"a205991d-d9c9-4d4f-b237-9198ac546ae1\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf" Jan 26 16:30:17 crc kubenswrapper[4896]: I0126 16:30:17.570902 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/a205991d-d9c9-4d4f-b237-9198ac546ae1-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf\" (UID: \"a205991d-d9c9-4d4f-b237-9198ac546ae1\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf" Jan 26 16:30:17 crc kubenswrapper[4896]: I0126 16:30:17.595850 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4jz4\" (UniqueName: \"kubernetes.io/projected/a205991d-d9c9-4d4f-b237-9198ac546ae1-kube-api-access-v4jz4\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf\" (UID: \"a205991d-d9c9-4d4f-b237-9198ac546ae1\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf" Jan 26 16:30:17 crc kubenswrapper[4896]: I0126 16:30:17.648241 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf" Jan 26 16:30:18 crc kubenswrapper[4896]: I0126 16:30:18.360683 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf"] Jan 26 16:30:18 crc kubenswrapper[4896]: I0126 16:30:18.363354 4896 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 16:30:19 crc kubenswrapper[4896]: I0126 16:30:19.175048 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf" event={"ID":"a205991d-d9c9-4d4f-b237-9198ac546ae1","Type":"ContainerStarted","Data":"d8d5f7b1f8314a16a3eaa6d3a438274946ecc8c58c8245dbb044ddbdc4c58b41"} Jan 26 16:30:20 crc kubenswrapper[4896]: I0126 16:30:20.191058 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf" event={"ID":"a205991d-d9c9-4d4f-b237-9198ac546ae1","Type":"ContainerStarted","Data":"4599c6a2de591f847f5404017567eedace3ecd2f9cf07ff15dc3e370a90b68a7"} Jan 26 16:30:20 crc kubenswrapper[4896]: I0126 16:30:20.226956 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf" podStartSLOduration=2.5386016529999997 podStartE2EDuration="3.226934841s" podCreationTimestamp="2026-01-26 16:30:17 +0000 UTC" firstStartedPulling="2026-01-26 16:30:18.362934074 +0000 UTC m=+3376.144814487" lastFinishedPulling="2026-01-26 16:30:19.051267282 +0000 UTC m=+3376.833147675" observedRunningTime="2026-01-26 16:30:20.212260442 +0000 UTC m=+3377.994140845" watchObservedRunningTime="2026-01-26 16:30:20.226934841 +0000 UTC m=+3378.008815234" Jan 26 16:30:27 crc kubenswrapper[4896]: I0126 16:30:27.760961 4896 scope.go:117] "RemoveContainer" 
containerID="cd46ff27f060438e6d2dd96d69cea4f34484f20018f15aef2d0456fb62faa2e1" Jan 26 16:30:27 crc kubenswrapper[4896]: E0126 16:30:27.762016 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:30:40 crc kubenswrapper[4896]: I0126 16:30:40.805181 4896 scope.go:117] "RemoveContainer" containerID="cd46ff27f060438e6d2dd96d69cea4f34484f20018f15aef2d0456fb62faa2e1" Jan 26 16:30:40 crc kubenswrapper[4896]: E0126 16:30:40.807329 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:30:41 crc kubenswrapper[4896]: I0126 16:30:41.875510 4896 scope.go:117] "RemoveContainer" containerID="3ec4d1c59ef9fe1d61badad9d57be3f5924ed0bee5bf98e8fe2854e3c64aa652" Jan 26 16:30:55 crc kubenswrapper[4896]: I0126 16:30:55.760600 4896 scope.go:117] "RemoveContainer" containerID="cd46ff27f060438e6d2dd96d69cea4f34484f20018f15aef2d0456fb62faa2e1" Jan 26 16:30:55 crc kubenswrapper[4896]: E0126 16:30:55.761597 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:31:08 crc kubenswrapper[4896]: I0126 16:31:08.759971 4896 scope.go:117] "RemoveContainer" containerID="cd46ff27f060438e6d2dd96d69cea4f34484f20018f15aef2d0456fb62faa2e1" Jan 26 16:31:08 crc kubenswrapper[4896]: E0126 16:31:08.760843 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:31:16 crc kubenswrapper[4896]: I0126 16:31:16.263514 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q9gxg"] Jan 26 16:31:16 crc kubenswrapper[4896]: I0126 16:31:16.267130 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q9gxg" Jan 26 16:31:16 crc kubenswrapper[4896]: I0126 16:31:16.280779 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q9gxg"] Jan 26 16:31:16 crc kubenswrapper[4896]: I0126 16:31:16.552812 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhsjf\" (UniqueName: \"kubernetes.io/projected/be6954c5-8372-4208-a29b-aa98b06452fc-kube-api-access-mhsjf\") pod \"redhat-operators-q9gxg\" (UID: \"be6954c5-8372-4208-a29b-aa98b06452fc\") " pod="openshift-marketplace/redhat-operators-q9gxg" Jan 26 16:31:16 crc kubenswrapper[4896]: I0126 16:31:16.552960 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be6954c5-8372-4208-a29b-aa98b06452fc-catalog-content\") pod \"redhat-operators-q9gxg\" (UID: \"be6954c5-8372-4208-a29b-aa98b06452fc\") " pod="openshift-marketplace/redhat-operators-q9gxg" Jan 26 16:31:16 crc kubenswrapper[4896]: I0126 16:31:16.552996 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be6954c5-8372-4208-a29b-aa98b06452fc-utilities\") pod \"redhat-operators-q9gxg\" (UID: \"be6954c5-8372-4208-a29b-aa98b06452fc\") " pod="openshift-marketplace/redhat-operators-q9gxg" Jan 26 16:31:16 crc kubenswrapper[4896]: I0126 16:31:16.655404 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhsjf\" (UniqueName: \"kubernetes.io/projected/be6954c5-8372-4208-a29b-aa98b06452fc-kube-api-access-mhsjf\") pod \"redhat-operators-q9gxg\" (UID: \"be6954c5-8372-4208-a29b-aa98b06452fc\") " pod="openshift-marketplace/redhat-operators-q9gxg" Jan 26 16:31:16 crc kubenswrapper[4896]: I0126 16:31:16.655535 4896 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be6954c5-8372-4208-a29b-aa98b06452fc-catalog-content\") pod \"redhat-operators-q9gxg\" (UID: \"be6954c5-8372-4208-a29b-aa98b06452fc\") " pod="openshift-marketplace/redhat-operators-q9gxg" Jan 26 16:31:16 crc kubenswrapper[4896]: I0126 16:31:16.655588 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be6954c5-8372-4208-a29b-aa98b06452fc-utilities\") pod \"redhat-operators-q9gxg\" (UID: \"be6954c5-8372-4208-a29b-aa98b06452fc\") " pod="openshift-marketplace/redhat-operators-q9gxg" Jan 26 16:31:16 crc kubenswrapper[4896]: I0126 16:31:16.656203 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be6954c5-8372-4208-a29b-aa98b06452fc-utilities\") pod \"redhat-operators-q9gxg\" (UID: \"be6954c5-8372-4208-a29b-aa98b06452fc\") " pod="openshift-marketplace/redhat-operators-q9gxg" Jan 26 16:31:16 crc kubenswrapper[4896]: I0126 16:31:16.656204 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be6954c5-8372-4208-a29b-aa98b06452fc-catalog-content\") pod \"redhat-operators-q9gxg\" (UID: \"be6954c5-8372-4208-a29b-aa98b06452fc\") " pod="openshift-marketplace/redhat-operators-q9gxg" Jan 26 16:31:16 crc kubenswrapper[4896]: I0126 16:31:16.676563 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhsjf\" (UniqueName: \"kubernetes.io/projected/be6954c5-8372-4208-a29b-aa98b06452fc-kube-api-access-mhsjf\") pod \"redhat-operators-q9gxg\" (UID: \"be6954c5-8372-4208-a29b-aa98b06452fc\") " pod="openshift-marketplace/redhat-operators-q9gxg" Jan 26 16:31:16 crc kubenswrapper[4896]: I0126 16:31:16.894918 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q9gxg" Jan 26 16:31:17 crc kubenswrapper[4896]: I0126 16:31:17.439992 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q9gxg"] Jan 26 16:31:18 crc kubenswrapper[4896]: I0126 16:31:18.212509 4896 generic.go:334] "Generic (PLEG): container finished" podID="be6954c5-8372-4208-a29b-aa98b06452fc" containerID="35d1cbd4def72123d68962bacd21689598daf13f60e75d58027a1b059449fc20" exitCode=0 Jan 26 16:31:18 crc kubenswrapper[4896]: I0126 16:31:18.212668 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q9gxg" event={"ID":"be6954c5-8372-4208-a29b-aa98b06452fc","Type":"ContainerDied","Data":"35d1cbd4def72123d68962bacd21689598daf13f60e75d58027a1b059449fc20"} Jan 26 16:31:18 crc kubenswrapper[4896]: I0126 16:31:18.212801 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q9gxg" event={"ID":"be6954c5-8372-4208-a29b-aa98b06452fc","Type":"ContainerStarted","Data":"afc59c07da179714cea39fda77d1131f20443db68be676f85879d11e615dc1e7"} Jan 26 16:31:19 crc kubenswrapper[4896]: I0126 16:31:19.760913 4896 scope.go:117] "RemoveContainer" containerID="cd46ff27f060438e6d2dd96d69cea4f34484f20018f15aef2d0456fb62faa2e1" Jan 26 16:31:20 crc kubenswrapper[4896]: I0126 16:31:20.384309 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerStarted","Data":"38fa9a1a124d1764d8ccc52dbe3892feb90788eed86f3ade09809959e801a817"} Jan 26 16:31:20 crc kubenswrapper[4896]: I0126 16:31:20.393853 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q9gxg" event={"ID":"be6954c5-8372-4208-a29b-aa98b06452fc","Type":"ContainerStarted","Data":"e035d874b7b5b1b9f8800da23b2d69c1bd1e9bf2521c498f90f697976e10ac92"} Jan 26 16:31:25 crc 
kubenswrapper[4896]: I0126 16:31:25.506224 4896 generic.go:334] "Generic (PLEG): container finished" podID="be6954c5-8372-4208-a29b-aa98b06452fc" containerID="e035d874b7b5b1b9f8800da23b2d69c1bd1e9bf2521c498f90f697976e10ac92" exitCode=0 Jan 26 16:31:25 crc kubenswrapper[4896]: I0126 16:31:25.506917 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q9gxg" event={"ID":"be6954c5-8372-4208-a29b-aa98b06452fc","Type":"ContainerDied","Data":"e035d874b7b5b1b9f8800da23b2d69c1bd1e9bf2521c498f90f697976e10ac92"} Jan 26 16:31:27 crc kubenswrapper[4896]: I0126 16:31:27.569722 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q9gxg" event={"ID":"be6954c5-8372-4208-a29b-aa98b06452fc","Type":"ContainerStarted","Data":"a3f921e907e4b5718d65313fc8715126b62452fcc79c0dcc316cb48486d1e227"} Jan 26 16:31:27 crc kubenswrapper[4896]: I0126 16:31:27.599897 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q9gxg" podStartSLOduration=3.740498658 podStartE2EDuration="11.599670217s" podCreationTimestamp="2026-01-26 16:31:16 +0000 UTC" firstStartedPulling="2026-01-26 16:31:18.214637326 +0000 UTC m=+3435.996517719" lastFinishedPulling="2026-01-26 16:31:26.073808885 +0000 UTC m=+3443.855689278" observedRunningTime="2026-01-26 16:31:27.595872924 +0000 UTC m=+3445.377753317" watchObservedRunningTime="2026-01-26 16:31:27.599670217 +0000 UTC m=+3445.381550610" Jan 26 16:31:36 crc kubenswrapper[4896]: I0126 16:31:36.895509 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q9gxg" Jan 26 16:31:36 crc kubenswrapper[4896]: I0126 16:31:36.896210 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q9gxg" Jan 26 16:31:37 crc kubenswrapper[4896]: I0126 16:31:37.956258 4896 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-q9gxg" podUID="be6954c5-8372-4208-a29b-aa98b06452fc" containerName="registry-server" probeResult="failure" output=< Jan 26 16:31:37 crc kubenswrapper[4896]: timeout: failed to connect service ":50051" within 1s Jan 26 16:31:37 crc kubenswrapper[4896]: > Jan 26 16:31:46 crc kubenswrapper[4896]: I0126 16:31:46.967901 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q9gxg" Jan 26 16:31:47 crc kubenswrapper[4896]: I0126 16:31:47.067935 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q9gxg" Jan 26 16:31:47 crc kubenswrapper[4896]: I0126 16:31:47.464327 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q9gxg"] Jan 26 16:31:48 crc kubenswrapper[4896]: I0126 16:31:48.975147 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-q9gxg" podUID="be6954c5-8372-4208-a29b-aa98b06452fc" containerName="registry-server" containerID="cri-o://a3f921e907e4b5718d65313fc8715126b62452fcc79c0dcc316cb48486d1e227" gracePeriod=2 Jan 26 16:31:50 crc kubenswrapper[4896]: I0126 16:31:50.002555 4896 generic.go:334] "Generic (PLEG): container finished" podID="be6954c5-8372-4208-a29b-aa98b06452fc" containerID="a3f921e907e4b5718d65313fc8715126b62452fcc79c0dcc316cb48486d1e227" exitCode=0 Jan 26 16:31:50 crc kubenswrapper[4896]: I0126 16:31:50.002719 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q9gxg" event={"ID":"be6954c5-8372-4208-a29b-aa98b06452fc","Type":"ContainerDied","Data":"a3f921e907e4b5718d65313fc8715126b62452fcc79c0dcc316cb48486d1e227"} Jan 26 16:31:50 crc kubenswrapper[4896]: I0126 16:31:50.129907 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q9gxg" Jan 26 16:31:50 crc kubenswrapper[4896]: I0126 16:31:50.236110 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhsjf\" (UniqueName: \"kubernetes.io/projected/be6954c5-8372-4208-a29b-aa98b06452fc-kube-api-access-mhsjf\") pod \"be6954c5-8372-4208-a29b-aa98b06452fc\" (UID: \"be6954c5-8372-4208-a29b-aa98b06452fc\") " Jan 26 16:31:50 crc kubenswrapper[4896]: I0126 16:31:50.236455 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be6954c5-8372-4208-a29b-aa98b06452fc-utilities\") pod \"be6954c5-8372-4208-a29b-aa98b06452fc\" (UID: \"be6954c5-8372-4208-a29b-aa98b06452fc\") " Jan 26 16:31:50 crc kubenswrapper[4896]: I0126 16:31:50.236619 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be6954c5-8372-4208-a29b-aa98b06452fc-catalog-content\") pod \"be6954c5-8372-4208-a29b-aa98b06452fc\" (UID: \"be6954c5-8372-4208-a29b-aa98b06452fc\") " Jan 26 16:31:50 crc kubenswrapper[4896]: I0126 16:31:50.237267 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be6954c5-8372-4208-a29b-aa98b06452fc-utilities" (OuterVolumeSpecName: "utilities") pod "be6954c5-8372-4208-a29b-aa98b06452fc" (UID: "be6954c5-8372-4208-a29b-aa98b06452fc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:31:50 crc kubenswrapper[4896]: I0126 16:31:50.246700 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be6954c5-8372-4208-a29b-aa98b06452fc-kube-api-access-mhsjf" (OuterVolumeSpecName: "kube-api-access-mhsjf") pod "be6954c5-8372-4208-a29b-aa98b06452fc" (UID: "be6954c5-8372-4208-a29b-aa98b06452fc"). InnerVolumeSpecName "kube-api-access-mhsjf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:31:50 crc kubenswrapper[4896]: I0126 16:31:50.254212 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be6954c5-8372-4208-a29b-aa98b06452fc-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:31:50 crc kubenswrapper[4896]: I0126 16:31:50.254279 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhsjf\" (UniqueName: \"kubernetes.io/projected/be6954c5-8372-4208-a29b-aa98b06452fc-kube-api-access-mhsjf\") on node \"crc\" DevicePath \"\"" Jan 26 16:31:50 crc kubenswrapper[4896]: I0126 16:31:50.395336 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be6954c5-8372-4208-a29b-aa98b06452fc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "be6954c5-8372-4208-a29b-aa98b06452fc" (UID: "be6954c5-8372-4208-a29b-aa98b06452fc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:31:50 crc kubenswrapper[4896]: I0126 16:31:50.458805 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be6954c5-8372-4208-a29b-aa98b06452fc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:31:51 crc kubenswrapper[4896]: I0126 16:31:51.018861 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q9gxg" event={"ID":"be6954c5-8372-4208-a29b-aa98b06452fc","Type":"ContainerDied","Data":"afc59c07da179714cea39fda77d1131f20443db68be676f85879d11e615dc1e7"} Jan 26 16:31:51 crc kubenswrapper[4896]: I0126 16:31:51.019230 4896 scope.go:117] "RemoveContainer" containerID="a3f921e907e4b5718d65313fc8715126b62452fcc79c0dcc316cb48486d1e227" Jan 26 16:31:51 crc kubenswrapper[4896]: I0126 16:31:51.018926 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q9gxg" Jan 26 16:31:51 crc kubenswrapper[4896]: I0126 16:31:51.056264 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q9gxg"] Jan 26 16:31:51 crc kubenswrapper[4896]: I0126 16:31:51.061614 4896 scope.go:117] "RemoveContainer" containerID="e035d874b7b5b1b9f8800da23b2d69c1bd1e9bf2521c498f90f697976e10ac92" Jan 26 16:31:51 crc kubenswrapper[4896]: I0126 16:31:51.066985 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q9gxg"] Jan 26 16:31:51 crc kubenswrapper[4896]: I0126 16:31:51.091420 4896 scope.go:117] "RemoveContainer" containerID="35d1cbd4def72123d68962bacd21689598daf13f60e75d58027a1b059449fc20" Jan 26 16:31:52 crc kubenswrapper[4896]: I0126 16:31:52.776152 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be6954c5-8372-4208-a29b-aa98b06452fc" path="/var/lib/kubelet/pods/be6954c5-8372-4208-a29b-aa98b06452fc/volumes" Jan 26 16:32:46 crc kubenswrapper[4896]: I0126 16:32:46.764957 4896 generic.go:334] "Generic (PLEG): container finished" podID="a205991d-d9c9-4d4f-b237-9198ac546ae1" containerID="4599c6a2de591f847f5404017567eedace3ecd2f9cf07ff15dc3e370a90b68a7" exitCode=0 Jan 26 16:32:46 crc kubenswrapper[4896]: I0126 16:32:46.777842 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf" event={"ID":"a205991d-d9c9-4d4f-b237-9198ac546ae1","Type":"ContainerDied","Data":"4599c6a2de591f847f5404017567eedace3ecd2f9cf07ff15dc3e370a90b68a7"} Jan 26 16:32:48 crc kubenswrapper[4896]: I0126 16:32:48.233404 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf" Jan 26 16:32:48 crc kubenswrapper[4896]: I0126 16:32:48.337972 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4jz4\" (UniqueName: \"kubernetes.io/projected/a205991d-d9c9-4d4f-b237-9198ac546ae1-kube-api-access-v4jz4\") pod \"a205991d-d9c9-4d4f-b237-9198ac546ae1\" (UID: \"a205991d-d9c9-4d4f-b237-9198ac546ae1\") " Jan 26 16:32:48 crc kubenswrapper[4896]: I0126 16:32:48.338490 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/a205991d-d9c9-4d4f-b237-9198ac546ae1-ceilometer-ipmi-config-data-1\") pod \"a205991d-d9c9-4d4f-b237-9198ac546ae1\" (UID: \"a205991d-d9c9-4d4f-b237-9198ac546ae1\") " Jan 26 16:32:48 crc kubenswrapper[4896]: I0126 16:32:48.338761 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/a205991d-d9c9-4d4f-b237-9198ac546ae1-ceilometer-ipmi-config-data-2\") pod \"a205991d-d9c9-4d4f-b237-9198ac546ae1\" (UID: \"a205991d-d9c9-4d4f-b237-9198ac546ae1\") " Jan 26 16:32:48 crc kubenswrapper[4896]: I0126 16:32:48.338875 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/a205991d-d9c9-4d4f-b237-9198ac546ae1-ceilometer-ipmi-config-data-0\") pod \"a205991d-d9c9-4d4f-b237-9198ac546ae1\" (UID: \"a205991d-d9c9-4d4f-b237-9198ac546ae1\") " Jan 26 16:32:48 crc kubenswrapper[4896]: I0126 16:32:48.338991 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a205991d-d9c9-4d4f-b237-9198ac546ae1-telemetry-power-monitoring-combined-ca-bundle\") pod \"a205991d-d9c9-4d4f-b237-9198ac546ae1\" (UID: 
\"a205991d-d9c9-4d4f-b237-9198ac546ae1\") " Jan 26 16:32:48 crc kubenswrapper[4896]: I0126 16:32:48.339116 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a205991d-d9c9-4d4f-b237-9198ac546ae1-inventory\") pod \"a205991d-d9c9-4d4f-b237-9198ac546ae1\" (UID: \"a205991d-d9c9-4d4f-b237-9198ac546ae1\") " Jan 26 16:32:48 crc kubenswrapper[4896]: I0126 16:32:48.339242 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a205991d-d9c9-4d4f-b237-9198ac546ae1-ssh-key-openstack-edpm-ipam\") pod \"a205991d-d9c9-4d4f-b237-9198ac546ae1\" (UID: \"a205991d-d9c9-4d4f-b237-9198ac546ae1\") " Jan 26 16:32:48 crc kubenswrapper[4896]: I0126 16:32:48.344163 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a205991d-d9c9-4d4f-b237-9198ac546ae1-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "a205991d-d9c9-4d4f-b237-9198ac546ae1" (UID: "a205991d-d9c9-4d4f-b237-9198ac546ae1"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:32:48 crc kubenswrapper[4896]: I0126 16:32:48.345538 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a205991d-d9c9-4d4f-b237-9198ac546ae1-kube-api-access-v4jz4" (OuterVolumeSpecName: "kube-api-access-v4jz4") pod "a205991d-d9c9-4d4f-b237-9198ac546ae1" (UID: "a205991d-d9c9-4d4f-b237-9198ac546ae1"). InnerVolumeSpecName "kube-api-access-v4jz4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:32:48 crc kubenswrapper[4896]: I0126 16:32:48.375356 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a205991d-d9c9-4d4f-b237-9198ac546ae1-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "a205991d-d9c9-4d4f-b237-9198ac546ae1" (UID: "a205991d-d9c9-4d4f-b237-9198ac546ae1"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:32:48 crc kubenswrapper[4896]: I0126 16:32:48.378740 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a205991d-d9c9-4d4f-b237-9198ac546ae1-inventory" (OuterVolumeSpecName: "inventory") pod "a205991d-d9c9-4d4f-b237-9198ac546ae1" (UID: "a205991d-d9c9-4d4f-b237-9198ac546ae1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:32:48 crc kubenswrapper[4896]: I0126 16:32:48.379108 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a205991d-d9c9-4d4f-b237-9198ac546ae1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a205991d-d9c9-4d4f-b237-9198ac546ae1" (UID: "a205991d-d9c9-4d4f-b237-9198ac546ae1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:32:48 crc kubenswrapper[4896]: I0126 16:32:48.380754 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a205991d-d9c9-4d4f-b237-9198ac546ae1-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "a205991d-d9c9-4d4f-b237-9198ac546ae1" (UID: "a205991d-d9c9-4d4f-b237-9198ac546ae1"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:32:48 crc kubenswrapper[4896]: I0126 16:32:48.382088 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a205991d-d9c9-4d4f-b237-9198ac546ae1-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "a205991d-d9c9-4d4f-b237-9198ac546ae1" (UID: "a205991d-d9c9-4d4f-b237-9198ac546ae1"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:32:48 crc kubenswrapper[4896]: I0126 16:32:48.442431 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4jz4\" (UniqueName: \"kubernetes.io/projected/a205991d-d9c9-4d4f-b237-9198ac546ae1-kube-api-access-v4jz4\") on node \"crc\" DevicePath \"\"" Jan 26 16:32:48 crc kubenswrapper[4896]: I0126 16:32:48.442479 4896 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/a205991d-d9c9-4d4f-b237-9198ac546ae1-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 26 16:32:48 crc kubenswrapper[4896]: I0126 16:32:48.442493 4896 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/a205991d-d9c9-4d4f-b237-9198ac546ae1-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 26 16:32:48 crc kubenswrapper[4896]: I0126 16:32:48.442508 4896 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/a205991d-d9c9-4d4f-b237-9198ac546ae1-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:32:48 crc kubenswrapper[4896]: I0126 16:32:48.442524 4896 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a205991d-d9c9-4d4f-b237-9198ac546ae1-telemetry-power-monitoring-combined-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Jan 26 16:32:48 crc kubenswrapper[4896]: I0126 16:32:48.442539 4896 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a205991d-d9c9-4d4f-b237-9198ac546ae1-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 16:32:48 crc kubenswrapper[4896]: I0126 16:32:48.442551 4896 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a205991d-d9c9-4d4f-b237-9198ac546ae1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:32:48 crc kubenswrapper[4896]: I0126 16:32:48.784909 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf" event={"ID":"a205991d-d9c9-4d4f-b237-9198ac546ae1","Type":"ContainerDied","Data":"d8d5f7b1f8314a16a3eaa6d3a438274946ecc8c58c8245dbb044ddbdc4c58b41"} Jan 26 16:32:48 crc kubenswrapper[4896]: I0126 16:32:48.784996 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8d5f7b1f8314a16a3eaa6d3a438274946ecc8c58c8245dbb044ddbdc4c58b41" Jan 26 16:32:48 crc kubenswrapper[4896]: I0126 16:32:48.785066 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf" Jan 26 16:32:48 crc kubenswrapper[4896]: I0126 16:32:48.920851 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-mfqsf"] Jan 26 16:32:48 crc kubenswrapper[4896]: E0126 16:32:48.921695 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be6954c5-8372-4208-a29b-aa98b06452fc" containerName="extract-utilities" Jan 26 16:32:48 crc kubenswrapper[4896]: I0126 16:32:48.921717 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="be6954c5-8372-4208-a29b-aa98b06452fc" containerName="extract-utilities" Jan 26 16:32:48 crc kubenswrapper[4896]: E0126 16:32:48.921758 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a205991d-d9c9-4d4f-b237-9198ac546ae1" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Jan 26 16:32:48 crc kubenswrapper[4896]: I0126 16:32:48.921766 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="a205991d-d9c9-4d4f-b237-9198ac546ae1" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Jan 26 16:32:48 crc kubenswrapper[4896]: E0126 16:32:48.921776 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be6954c5-8372-4208-a29b-aa98b06452fc" containerName="extract-content" Jan 26 16:32:48 crc kubenswrapper[4896]: I0126 16:32:48.921782 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="be6954c5-8372-4208-a29b-aa98b06452fc" containerName="extract-content" Jan 26 16:32:48 crc kubenswrapper[4896]: E0126 16:32:48.921820 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be6954c5-8372-4208-a29b-aa98b06452fc" containerName="registry-server" Jan 26 16:32:48 crc kubenswrapper[4896]: I0126 16:32:48.921828 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="be6954c5-8372-4208-a29b-aa98b06452fc" containerName="registry-server" Jan 26 16:32:48 crc 
kubenswrapper[4896]: I0126 16:32:48.922109 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="be6954c5-8372-4208-a29b-aa98b06452fc" containerName="registry-server" Jan 26 16:32:48 crc kubenswrapper[4896]: I0126 16:32:48.922139 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="a205991d-d9c9-4d4f-b237-9198ac546ae1" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Jan 26 16:32:48 crc kubenswrapper[4896]: I0126 16:32:48.923392 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mfqsf" Jan 26 16:32:48 crc kubenswrapper[4896]: I0126 16:32:48.926424 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-48n6x" Jan 26 16:32:48 crc kubenswrapper[4896]: I0126 16:32:48.926736 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 16:32:48 crc kubenswrapper[4896]: I0126 16:32:48.927188 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 16:32:48 crc kubenswrapper[4896]: I0126 16:32:48.927229 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Jan 26 16:32:48 crc kubenswrapper[4896]: I0126 16:32:48.927299 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 16:32:48 crc kubenswrapper[4896]: I0126 16:32:48.949892 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-mfqsf"] Jan 26 16:32:49 crc kubenswrapper[4896]: I0126 16:32:49.060249 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/290731fa-7eff-41d9-bba9-b733370ac45b-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mfqsf\" (UID: \"290731fa-7eff-41d9-bba9-b733370ac45b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mfqsf" Jan 26 16:32:49 crc kubenswrapper[4896]: I0126 16:32:49.060315 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/290731fa-7eff-41d9-bba9-b733370ac45b-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mfqsf\" (UID: \"290731fa-7eff-41d9-bba9-b733370ac45b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mfqsf" Jan 26 16:32:49 crc kubenswrapper[4896]: I0126 16:32:49.060447 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/290731fa-7eff-41d9-bba9-b733370ac45b-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mfqsf\" (UID: \"290731fa-7eff-41d9-bba9-b733370ac45b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mfqsf" Jan 26 16:32:49 crc kubenswrapper[4896]: I0126 16:32:49.060513 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tbmz\" (UniqueName: \"kubernetes.io/projected/290731fa-7eff-41d9-bba9-b733370ac45b-kube-api-access-9tbmz\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mfqsf\" (UID: \"290731fa-7eff-41d9-bba9-b733370ac45b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mfqsf" Jan 26 16:32:49 crc kubenswrapper[4896]: I0126 16:32:49.060567 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/290731fa-7eff-41d9-bba9-b733370ac45b-logging-compute-config-data-1\") pod 
\"logging-edpm-deployment-openstack-edpm-ipam-mfqsf\" (UID: \"290731fa-7eff-41d9-bba9-b733370ac45b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mfqsf" Jan 26 16:32:49 crc kubenswrapper[4896]: I0126 16:32:49.163644 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/290731fa-7eff-41d9-bba9-b733370ac45b-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mfqsf\" (UID: \"290731fa-7eff-41d9-bba9-b733370ac45b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mfqsf" Jan 26 16:32:49 crc kubenswrapper[4896]: I0126 16:32:49.163753 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tbmz\" (UniqueName: \"kubernetes.io/projected/290731fa-7eff-41d9-bba9-b733370ac45b-kube-api-access-9tbmz\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mfqsf\" (UID: \"290731fa-7eff-41d9-bba9-b733370ac45b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mfqsf" Jan 26 16:32:49 crc kubenswrapper[4896]: I0126 16:32:49.163821 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/290731fa-7eff-41d9-bba9-b733370ac45b-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mfqsf\" (UID: \"290731fa-7eff-41d9-bba9-b733370ac45b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mfqsf" Jan 26 16:32:49 crc kubenswrapper[4896]: I0126 16:32:49.164010 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/290731fa-7eff-41d9-bba9-b733370ac45b-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mfqsf\" (UID: \"290731fa-7eff-41d9-bba9-b733370ac45b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mfqsf" Jan 26 
16:32:49 crc kubenswrapper[4896]: I0126 16:32:49.164063 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/290731fa-7eff-41d9-bba9-b733370ac45b-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mfqsf\" (UID: \"290731fa-7eff-41d9-bba9-b733370ac45b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mfqsf" Jan 26 16:32:49 crc kubenswrapper[4896]: I0126 16:32:49.169431 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/290731fa-7eff-41d9-bba9-b733370ac45b-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mfqsf\" (UID: \"290731fa-7eff-41d9-bba9-b733370ac45b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mfqsf" Jan 26 16:32:49 crc kubenswrapper[4896]: I0126 16:32:49.169500 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/290731fa-7eff-41d9-bba9-b733370ac45b-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mfqsf\" (UID: \"290731fa-7eff-41d9-bba9-b733370ac45b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mfqsf" Jan 26 16:32:49 crc kubenswrapper[4896]: I0126 16:32:49.173197 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/290731fa-7eff-41d9-bba9-b733370ac45b-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mfqsf\" (UID: \"290731fa-7eff-41d9-bba9-b733370ac45b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mfqsf" Jan 26 16:32:49 crc kubenswrapper[4896]: I0126 16:32:49.174277 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/290731fa-7eff-41d9-bba9-b733370ac45b-inventory\") pod 
\"logging-edpm-deployment-openstack-edpm-ipam-mfqsf\" (UID: \"290731fa-7eff-41d9-bba9-b733370ac45b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mfqsf" Jan 26 16:32:49 crc kubenswrapper[4896]: I0126 16:32:49.183950 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tbmz\" (UniqueName: \"kubernetes.io/projected/290731fa-7eff-41d9-bba9-b733370ac45b-kube-api-access-9tbmz\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mfqsf\" (UID: \"290731fa-7eff-41d9-bba9-b733370ac45b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mfqsf" Jan 26 16:32:49 crc kubenswrapper[4896]: I0126 16:32:49.240169 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mfqsf" Jan 26 16:32:49 crc kubenswrapper[4896]: I0126 16:32:49.812161 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-mfqsf"] Jan 26 16:32:49 crc kubenswrapper[4896]: W0126 16:32:49.820624 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod290731fa_7eff_41d9_bba9_b733370ac45b.slice/crio-ca94b02b05334311390e74af03a5484f9230e590ae35863abc758cf61a5ff719 WatchSource:0}: Error finding container ca94b02b05334311390e74af03a5484f9230e590ae35863abc758cf61a5ff719: Status 404 returned error can't find the container with id ca94b02b05334311390e74af03a5484f9230e590ae35863abc758cf61a5ff719 Jan 26 16:32:50 crc kubenswrapper[4896]: I0126 16:32:50.810434 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mfqsf" event={"ID":"290731fa-7eff-41d9-bba9-b733370ac45b","Type":"ContainerStarted","Data":"835f8bed05b2bcda1d2322dc79b970829aa26a51060707fb62be7ef45a8a0909"} Jan 26 16:32:50 crc kubenswrapper[4896]: I0126 16:32:50.810895 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mfqsf" event={"ID":"290731fa-7eff-41d9-bba9-b733370ac45b","Type":"ContainerStarted","Data":"ca94b02b05334311390e74af03a5484f9230e590ae35863abc758cf61a5ff719"} Jan 26 16:32:50 crc kubenswrapper[4896]: I0126 16:32:50.836197 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mfqsf" podStartSLOduration=2.224586018 podStartE2EDuration="2.836171068s" podCreationTimestamp="2026-01-26 16:32:48 +0000 UTC" firstStartedPulling="2026-01-26 16:32:49.823620204 +0000 UTC m=+3527.605500597" lastFinishedPulling="2026-01-26 16:32:50.435205254 +0000 UTC m=+3528.217085647" observedRunningTime="2026-01-26 16:32:50.8261074 +0000 UTC m=+3528.607987793" watchObservedRunningTime="2026-01-26 16:32:50.836171068 +0000 UTC m=+3528.618051461" Jan 26 16:33:08 crc kubenswrapper[4896]: I0126 16:33:08.011776 4896 generic.go:334] "Generic (PLEG): container finished" podID="290731fa-7eff-41d9-bba9-b733370ac45b" containerID="835f8bed05b2bcda1d2322dc79b970829aa26a51060707fb62be7ef45a8a0909" exitCode=0 Jan 26 16:33:08 crc kubenswrapper[4896]: I0126 16:33:08.011899 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mfqsf" event={"ID":"290731fa-7eff-41d9-bba9-b733370ac45b","Type":"ContainerDied","Data":"835f8bed05b2bcda1d2322dc79b970829aa26a51060707fb62be7ef45a8a0909"} Jan 26 16:33:09 crc kubenswrapper[4896]: I0126 16:33:09.483433 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mfqsf" Jan 26 16:33:09 crc kubenswrapper[4896]: I0126 16:33:09.645878 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/290731fa-7eff-41d9-bba9-b733370ac45b-logging-compute-config-data-0\") pod \"290731fa-7eff-41d9-bba9-b733370ac45b\" (UID: \"290731fa-7eff-41d9-bba9-b733370ac45b\") " Jan 26 16:33:09 crc kubenswrapper[4896]: I0126 16:33:09.646046 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/290731fa-7eff-41d9-bba9-b733370ac45b-logging-compute-config-data-1\") pod \"290731fa-7eff-41d9-bba9-b733370ac45b\" (UID: \"290731fa-7eff-41d9-bba9-b733370ac45b\") " Jan 26 16:33:09 crc kubenswrapper[4896]: I0126 16:33:09.646161 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/290731fa-7eff-41d9-bba9-b733370ac45b-ssh-key-openstack-edpm-ipam\") pod \"290731fa-7eff-41d9-bba9-b733370ac45b\" (UID: \"290731fa-7eff-41d9-bba9-b733370ac45b\") " Jan 26 16:33:09 crc kubenswrapper[4896]: I0126 16:33:09.646610 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/290731fa-7eff-41d9-bba9-b733370ac45b-inventory\") pod \"290731fa-7eff-41d9-bba9-b733370ac45b\" (UID: \"290731fa-7eff-41d9-bba9-b733370ac45b\") " Jan 26 16:33:09 crc kubenswrapper[4896]: I0126 16:33:09.646799 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tbmz\" (UniqueName: \"kubernetes.io/projected/290731fa-7eff-41d9-bba9-b733370ac45b-kube-api-access-9tbmz\") pod \"290731fa-7eff-41d9-bba9-b733370ac45b\" (UID: \"290731fa-7eff-41d9-bba9-b733370ac45b\") " Jan 26 16:33:09 crc kubenswrapper[4896]: I0126 
16:33:09.653011 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/290731fa-7eff-41d9-bba9-b733370ac45b-kube-api-access-9tbmz" (OuterVolumeSpecName: "kube-api-access-9tbmz") pod "290731fa-7eff-41d9-bba9-b733370ac45b" (UID: "290731fa-7eff-41d9-bba9-b733370ac45b"). InnerVolumeSpecName "kube-api-access-9tbmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:33:09 crc kubenswrapper[4896]: I0126 16:33:09.687642 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/290731fa-7eff-41d9-bba9-b733370ac45b-inventory" (OuterVolumeSpecName: "inventory") pod "290731fa-7eff-41d9-bba9-b733370ac45b" (UID: "290731fa-7eff-41d9-bba9-b733370ac45b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:33:09 crc kubenswrapper[4896]: I0126 16:33:09.698984 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/290731fa-7eff-41d9-bba9-b733370ac45b-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "290731fa-7eff-41d9-bba9-b733370ac45b" (UID: "290731fa-7eff-41d9-bba9-b733370ac45b"). InnerVolumeSpecName "logging-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:33:09 crc kubenswrapper[4896]: I0126 16:33:09.712011 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/290731fa-7eff-41d9-bba9-b733370ac45b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "290731fa-7eff-41d9-bba9-b733370ac45b" (UID: "290731fa-7eff-41d9-bba9-b733370ac45b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:33:09 crc kubenswrapper[4896]: I0126 16:33:09.717564 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/290731fa-7eff-41d9-bba9-b733370ac45b-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "290731fa-7eff-41d9-bba9-b733370ac45b" (UID: "290731fa-7eff-41d9-bba9-b733370ac45b"). InnerVolumeSpecName "logging-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:33:09 crc kubenswrapper[4896]: I0126 16:33:09.751782 4896 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/290731fa-7eff-41d9-bba9-b733370ac45b-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 26 16:33:09 crc kubenswrapper[4896]: I0126 16:33:09.751843 4896 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/290731fa-7eff-41d9-bba9-b733370ac45b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 16:33:09 crc kubenswrapper[4896]: I0126 16:33:09.751866 4896 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/290731fa-7eff-41d9-bba9-b733370ac45b-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 16:33:09 crc kubenswrapper[4896]: I0126 16:33:09.751885 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tbmz\" (UniqueName: \"kubernetes.io/projected/290731fa-7eff-41d9-bba9-b733370ac45b-kube-api-access-9tbmz\") on node \"crc\" DevicePath \"\"" Jan 26 16:33:09 crc kubenswrapper[4896]: I0126 16:33:09.751904 4896 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/290731fa-7eff-41d9-bba9-b733370ac45b-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 26 16:33:10 crc kubenswrapper[4896]: I0126 
16:33:10.040263 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mfqsf" event={"ID":"290731fa-7eff-41d9-bba9-b733370ac45b","Type":"ContainerDied","Data":"ca94b02b05334311390e74af03a5484f9230e590ae35863abc758cf61a5ff719"} Jan 26 16:33:10 crc kubenswrapper[4896]: I0126 16:33:10.040471 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca94b02b05334311390e74af03a5484f9230e590ae35863abc758cf61a5ff719" Jan 26 16:33:10 crc kubenswrapper[4896]: I0126 16:33:10.040328 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mfqsf" Jan 26 16:33:30 crc kubenswrapper[4896]: I0126 16:33:30.858645 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bgbw7"] Jan 26 16:33:30 crc kubenswrapper[4896]: E0126 16:33:30.859943 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="290731fa-7eff-41d9-bba9-b733370ac45b" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 26 16:33:30 crc kubenswrapper[4896]: I0126 16:33:30.859963 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="290731fa-7eff-41d9-bba9-b733370ac45b" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 26 16:33:30 crc kubenswrapper[4896]: I0126 16:33:30.860360 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="290731fa-7eff-41d9-bba9-b733370ac45b" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 26 16:33:30 crc kubenswrapper[4896]: I0126 16:33:30.863011 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bgbw7" Jan 26 16:33:30 crc kubenswrapper[4896]: I0126 16:33:30.887097 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bgbw7"] Jan 26 16:33:30 crc kubenswrapper[4896]: I0126 16:33:30.978397 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3720867a-cd4e-47ed-8cc2-7de1808151ec-catalog-content\") pod \"certified-operators-bgbw7\" (UID: \"3720867a-cd4e-47ed-8cc2-7de1808151ec\") " pod="openshift-marketplace/certified-operators-bgbw7" Jan 26 16:33:30 crc kubenswrapper[4896]: I0126 16:33:30.978873 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3720867a-cd4e-47ed-8cc2-7de1808151ec-utilities\") pod \"certified-operators-bgbw7\" (UID: \"3720867a-cd4e-47ed-8cc2-7de1808151ec\") " pod="openshift-marketplace/certified-operators-bgbw7" Jan 26 16:33:30 crc kubenswrapper[4896]: I0126 16:33:30.981015 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88p4s\" (UniqueName: \"kubernetes.io/projected/3720867a-cd4e-47ed-8cc2-7de1808151ec-kube-api-access-88p4s\") pod \"certified-operators-bgbw7\" (UID: \"3720867a-cd4e-47ed-8cc2-7de1808151ec\") " pod="openshift-marketplace/certified-operators-bgbw7" Jan 26 16:33:31 crc kubenswrapper[4896]: I0126 16:33:31.083677 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88p4s\" (UniqueName: \"kubernetes.io/projected/3720867a-cd4e-47ed-8cc2-7de1808151ec-kube-api-access-88p4s\") pod \"certified-operators-bgbw7\" (UID: \"3720867a-cd4e-47ed-8cc2-7de1808151ec\") " pod="openshift-marketplace/certified-operators-bgbw7" Jan 26 16:33:31 crc kubenswrapper[4896]: I0126 16:33:31.083795 4896 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3720867a-cd4e-47ed-8cc2-7de1808151ec-catalog-content\") pod \"certified-operators-bgbw7\" (UID: \"3720867a-cd4e-47ed-8cc2-7de1808151ec\") " pod="openshift-marketplace/certified-operators-bgbw7" Jan 26 16:33:31 crc kubenswrapper[4896]: I0126 16:33:31.083872 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3720867a-cd4e-47ed-8cc2-7de1808151ec-utilities\") pod \"certified-operators-bgbw7\" (UID: \"3720867a-cd4e-47ed-8cc2-7de1808151ec\") " pod="openshift-marketplace/certified-operators-bgbw7" Jan 26 16:33:31 crc kubenswrapper[4896]: I0126 16:33:31.084355 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3720867a-cd4e-47ed-8cc2-7de1808151ec-utilities\") pod \"certified-operators-bgbw7\" (UID: \"3720867a-cd4e-47ed-8cc2-7de1808151ec\") " pod="openshift-marketplace/certified-operators-bgbw7" Jan 26 16:33:31 crc kubenswrapper[4896]: I0126 16:33:31.084407 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3720867a-cd4e-47ed-8cc2-7de1808151ec-catalog-content\") pod \"certified-operators-bgbw7\" (UID: \"3720867a-cd4e-47ed-8cc2-7de1808151ec\") " pod="openshift-marketplace/certified-operators-bgbw7" Jan 26 16:33:31 crc kubenswrapper[4896]: I0126 16:33:31.103983 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88p4s\" (UniqueName: \"kubernetes.io/projected/3720867a-cd4e-47ed-8cc2-7de1808151ec-kube-api-access-88p4s\") pod \"certified-operators-bgbw7\" (UID: \"3720867a-cd4e-47ed-8cc2-7de1808151ec\") " pod="openshift-marketplace/certified-operators-bgbw7" Jan 26 16:33:31 crc kubenswrapper[4896]: I0126 16:33:31.203106 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bgbw7" Jan 26 16:33:31 crc kubenswrapper[4896]: I0126 16:33:31.896798 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bgbw7"] Jan 26 16:33:32 crc kubenswrapper[4896]: I0126 16:33:32.334359 4896 generic.go:334] "Generic (PLEG): container finished" podID="3720867a-cd4e-47ed-8cc2-7de1808151ec" containerID="df28c5a9020418fdf5c43dd2173b122e0c0702bf5e6d551bf6d5056c36ff200f" exitCode=0 Jan 26 16:33:32 crc kubenswrapper[4896]: I0126 16:33:32.334492 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bgbw7" event={"ID":"3720867a-cd4e-47ed-8cc2-7de1808151ec","Type":"ContainerDied","Data":"df28c5a9020418fdf5c43dd2173b122e0c0702bf5e6d551bf6d5056c36ff200f"} Jan 26 16:33:32 crc kubenswrapper[4896]: I0126 16:33:32.334823 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bgbw7" event={"ID":"3720867a-cd4e-47ed-8cc2-7de1808151ec","Type":"ContainerStarted","Data":"a2fe1928316a0a819e5f3d3f86bb53c467aa6a2792e12fd8bbc97507e11607bf"} Jan 26 16:33:33 crc kubenswrapper[4896]: I0126 16:33:33.348873 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bgbw7" event={"ID":"3720867a-cd4e-47ed-8cc2-7de1808151ec","Type":"ContainerStarted","Data":"d5f6e3c3e7a34b8a5885b68cab25e0825b3a1678a1928a7a176dcb21f5e8354c"} Jan 26 16:33:34 crc kubenswrapper[4896]: I0126 16:33:34.367806 4896 generic.go:334] "Generic (PLEG): container finished" podID="3720867a-cd4e-47ed-8cc2-7de1808151ec" containerID="d5f6e3c3e7a34b8a5885b68cab25e0825b3a1678a1928a7a176dcb21f5e8354c" exitCode=0 Jan 26 16:33:34 crc kubenswrapper[4896]: I0126 16:33:34.367908 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bgbw7" 
event={"ID":"3720867a-cd4e-47ed-8cc2-7de1808151ec","Type":"ContainerDied","Data":"d5f6e3c3e7a34b8a5885b68cab25e0825b3a1678a1928a7a176dcb21f5e8354c"} Jan 26 16:33:36 crc kubenswrapper[4896]: I0126 16:33:36.393997 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bgbw7" event={"ID":"3720867a-cd4e-47ed-8cc2-7de1808151ec","Type":"ContainerStarted","Data":"15c1acb388c9937c87a5bd61d34b4a3fe74cc6e61dd9d56b38e57f98e2b538db"} Jan 26 16:33:36 crc kubenswrapper[4896]: I0126 16:33:36.418008 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bgbw7" podStartSLOduration=3.697337797 podStartE2EDuration="6.417987392s" podCreationTimestamp="2026-01-26 16:33:30 +0000 UTC" firstStartedPulling="2026-01-26 16:33:32.337480846 +0000 UTC m=+3570.119361239" lastFinishedPulling="2026-01-26 16:33:35.058130441 +0000 UTC m=+3572.840010834" observedRunningTime="2026-01-26 16:33:36.413915242 +0000 UTC m=+3574.195795645" watchObservedRunningTime="2026-01-26 16:33:36.417987392 +0000 UTC m=+3574.199867795" Jan 26 16:33:41 crc kubenswrapper[4896]: I0126 16:33:41.204173 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bgbw7" Jan 26 16:33:41 crc kubenswrapper[4896]: I0126 16:33:41.204935 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bgbw7" Jan 26 16:33:41 crc kubenswrapper[4896]: I0126 16:33:41.259991 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bgbw7" Jan 26 16:33:41 crc kubenswrapper[4896]: I0126 16:33:41.527700 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bgbw7" Jan 26 16:33:41 crc kubenswrapper[4896]: I0126 16:33:41.580636 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-bgbw7"] Jan 26 16:33:43 crc kubenswrapper[4896]: I0126 16:33:43.494658 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bgbw7" podUID="3720867a-cd4e-47ed-8cc2-7de1808151ec" containerName="registry-server" containerID="cri-o://15c1acb388c9937c87a5bd61d34b4a3fe74cc6e61dd9d56b38e57f98e2b538db" gracePeriod=2 Jan 26 16:33:44 crc kubenswrapper[4896]: I0126 16:33:44.517548 4896 generic.go:334] "Generic (PLEG): container finished" podID="3720867a-cd4e-47ed-8cc2-7de1808151ec" containerID="15c1acb388c9937c87a5bd61d34b4a3fe74cc6e61dd9d56b38e57f98e2b538db" exitCode=0 Jan 26 16:33:44 crc kubenswrapper[4896]: I0126 16:33:44.517617 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bgbw7" event={"ID":"3720867a-cd4e-47ed-8cc2-7de1808151ec","Type":"ContainerDied","Data":"15c1acb388c9937c87a5bd61d34b4a3fe74cc6e61dd9d56b38e57f98e2b538db"} Jan 26 16:33:44 crc kubenswrapper[4896]: I0126 16:33:44.644710 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bgbw7" Jan 26 16:33:44 crc kubenswrapper[4896]: I0126 16:33:44.688466 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88p4s\" (UniqueName: \"kubernetes.io/projected/3720867a-cd4e-47ed-8cc2-7de1808151ec-kube-api-access-88p4s\") pod \"3720867a-cd4e-47ed-8cc2-7de1808151ec\" (UID: \"3720867a-cd4e-47ed-8cc2-7de1808151ec\") " Jan 26 16:33:44 crc kubenswrapper[4896]: I0126 16:33:44.688639 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3720867a-cd4e-47ed-8cc2-7de1808151ec-utilities\") pod \"3720867a-cd4e-47ed-8cc2-7de1808151ec\" (UID: \"3720867a-cd4e-47ed-8cc2-7de1808151ec\") " Jan 26 16:33:44 crc kubenswrapper[4896]: I0126 16:33:44.688686 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3720867a-cd4e-47ed-8cc2-7de1808151ec-catalog-content\") pod \"3720867a-cd4e-47ed-8cc2-7de1808151ec\" (UID: \"3720867a-cd4e-47ed-8cc2-7de1808151ec\") " Jan 26 16:33:44 crc kubenswrapper[4896]: I0126 16:33:44.690811 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3720867a-cd4e-47ed-8cc2-7de1808151ec-utilities" (OuterVolumeSpecName: "utilities") pod "3720867a-cd4e-47ed-8cc2-7de1808151ec" (UID: "3720867a-cd4e-47ed-8cc2-7de1808151ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:33:44 crc kubenswrapper[4896]: I0126 16:33:44.698378 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3720867a-cd4e-47ed-8cc2-7de1808151ec-kube-api-access-88p4s" (OuterVolumeSpecName: "kube-api-access-88p4s") pod "3720867a-cd4e-47ed-8cc2-7de1808151ec" (UID: "3720867a-cd4e-47ed-8cc2-7de1808151ec"). InnerVolumeSpecName "kube-api-access-88p4s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:33:44 crc kubenswrapper[4896]: I0126 16:33:44.745166 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3720867a-cd4e-47ed-8cc2-7de1808151ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3720867a-cd4e-47ed-8cc2-7de1808151ec" (UID: "3720867a-cd4e-47ed-8cc2-7de1808151ec"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:33:44 crc kubenswrapper[4896]: I0126 16:33:44.791725 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88p4s\" (UniqueName: \"kubernetes.io/projected/3720867a-cd4e-47ed-8cc2-7de1808151ec-kube-api-access-88p4s\") on node \"crc\" DevicePath \"\"" Jan 26 16:33:44 crc kubenswrapper[4896]: I0126 16:33:44.791759 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3720867a-cd4e-47ed-8cc2-7de1808151ec-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:33:44 crc kubenswrapper[4896]: I0126 16:33:44.791768 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3720867a-cd4e-47ed-8cc2-7de1808151ec-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:33:45 crc kubenswrapper[4896]: I0126 16:33:45.531823 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bgbw7" event={"ID":"3720867a-cd4e-47ed-8cc2-7de1808151ec","Type":"ContainerDied","Data":"a2fe1928316a0a819e5f3d3f86bb53c467aa6a2792e12fd8bbc97507e11607bf"} Jan 26 16:33:45 crc kubenswrapper[4896]: I0126 16:33:45.532319 4896 scope.go:117] "RemoveContainer" containerID="15c1acb388c9937c87a5bd61d34b4a3fe74cc6e61dd9d56b38e57f98e2b538db" Jan 26 16:33:45 crc kubenswrapper[4896]: I0126 16:33:45.532514 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bgbw7" Jan 26 16:33:45 crc kubenswrapper[4896]: I0126 16:33:45.570149 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bgbw7"] Jan 26 16:33:45 crc kubenswrapper[4896]: I0126 16:33:45.571372 4896 scope.go:117] "RemoveContainer" containerID="d5f6e3c3e7a34b8a5885b68cab25e0825b3a1678a1928a7a176dcb21f5e8354c" Jan 26 16:33:45 crc kubenswrapper[4896]: I0126 16:33:45.582291 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bgbw7"] Jan 26 16:33:45 crc kubenswrapper[4896]: I0126 16:33:45.602844 4896 scope.go:117] "RemoveContainer" containerID="df28c5a9020418fdf5c43dd2173b122e0c0702bf5e6d551bf6d5056c36ff200f" Jan 26 16:33:46 crc kubenswrapper[4896]: I0126 16:33:46.772414 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3720867a-cd4e-47ed-8cc2-7de1808151ec" path="/var/lib/kubelet/pods/3720867a-cd4e-47ed-8cc2-7de1808151ec/volumes" Jan 26 16:33:48 crc kubenswrapper[4896]: I0126 16:33:48.814223 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:33:48 crc kubenswrapper[4896]: I0126 16:33:48.815738 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:34:18 crc kubenswrapper[4896]: I0126 16:34:18.821340 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:34:18 crc kubenswrapper[4896]: I0126 16:34:18.822101 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:34:48 crc kubenswrapper[4896]: I0126 16:34:48.814068 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:34:48 crc kubenswrapper[4896]: I0126 16:34:48.814843 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:34:48 crc kubenswrapper[4896]: I0126 16:34:48.814916 4896 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" Jan 26 16:34:48 crc kubenswrapper[4896]: I0126 16:34:48.816059 4896 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"38fa9a1a124d1764d8ccc52dbe3892feb90788eed86f3ade09809959e801a817"} pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 16:34:48 crc kubenswrapper[4896]: I0126 16:34:48.816146 4896 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" containerID="cri-o://38fa9a1a124d1764d8ccc52dbe3892feb90788eed86f3ade09809959e801a817" gracePeriod=600 Jan 26 16:34:49 crc kubenswrapper[4896]: I0126 16:34:49.778677 4896 generic.go:334] "Generic (PLEG): container finished" podID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerID="38fa9a1a124d1764d8ccc52dbe3892feb90788eed86f3ade09809959e801a817" exitCode=0 Jan 26 16:34:49 crc kubenswrapper[4896]: I0126 16:34:49.778753 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerDied","Data":"38fa9a1a124d1764d8ccc52dbe3892feb90788eed86f3ade09809959e801a817"} Jan 26 16:34:49 crc kubenswrapper[4896]: I0126 16:34:49.779732 4896 scope.go:117] "RemoveContainer" containerID="cd46ff27f060438e6d2dd96d69cea4f34484f20018f15aef2d0456fb62faa2e1" Jan 26 16:34:51 crc kubenswrapper[4896]: I0126 16:34:51.824632 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerStarted","Data":"9a1f439c0d18224c2e78758fdca79bf25e95b14f01dfcb6f993dbf4750b7ea36"} Jan 26 16:36:28 crc kubenswrapper[4896]: I0126 16:36:28.854100 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dlxjv"] Jan 26 16:36:28 crc kubenswrapper[4896]: E0126 16:36:28.855418 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3720867a-cd4e-47ed-8cc2-7de1808151ec" containerName="extract-content" Jan 26 16:36:28 crc kubenswrapper[4896]: I0126 16:36:28.855437 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="3720867a-cd4e-47ed-8cc2-7de1808151ec" containerName="extract-content" 
Jan 26 16:36:28 crc kubenswrapper[4896]: E0126 16:36:28.855468 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3720867a-cd4e-47ed-8cc2-7de1808151ec" containerName="extract-utilities" Jan 26 16:36:28 crc kubenswrapper[4896]: I0126 16:36:28.855480 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="3720867a-cd4e-47ed-8cc2-7de1808151ec" containerName="extract-utilities" Jan 26 16:36:28 crc kubenswrapper[4896]: E0126 16:36:28.855516 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3720867a-cd4e-47ed-8cc2-7de1808151ec" containerName="registry-server" Jan 26 16:36:28 crc kubenswrapper[4896]: I0126 16:36:28.855524 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="3720867a-cd4e-47ed-8cc2-7de1808151ec" containerName="registry-server" Jan 26 16:36:28 crc kubenswrapper[4896]: I0126 16:36:28.855873 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="3720867a-cd4e-47ed-8cc2-7de1808151ec" containerName="registry-server" Jan 26 16:36:28 crc kubenswrapper[4896]: I0126 16:36:28.858413 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dlxjv" Jan 26 16:36:28 crc kubenswrapper[4896]: I0126 16:36:28.871505 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dlxjv"] Jan 26 16:36:28 crc kubenswrapper[4896]: I0126 16:36:28.936342 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db36d41a-0d0b-410d-9e36-69c96cd29d63-utilities\") pod \"community-operators-dlxjv\" (UID: \"db36d41a-0d0b-410d-9e36-69c96cd29d63\") " pod="openshift-marketplace/community-operators-dlxjv" Jan 26 16:36:28 crc kubenswrapper[4896]: I0126 16:36:28.936724 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9d6x\" (UniqueName: \"kubernetes.io/projected/db36d41a-0d0b-410d-9e36-69c96cd29d63-kube-api-access-h9d6x\") pod \"community-operators-dlxjv\" (UID: \"db36d41a-0d0b-410d-9e36-69c96cd29d63\") " pod="openshift-marketplace/community-operators-dlxjv" Jan 26 16:36:28 crc kubenswrapper[4896]: I0126 16:36:28.936963 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db36d41a-0d0b-410d-9e36-69c96cd29d63-catalog-content\") pod \"community-operators-dlxjv\" (UID: \"db36d41a-0d0b-410d-9e36-69c96cd29d63\") " pod="openshift-marketplace/community-operators-dlxjv" Jan 26 16:36:29 crc kubenswrapper[4896]: I0126 16:36:29.040159 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db36d41a-0d0b-410d-9e36-69c96cd29d63-utilities\") pod \"community-operators-dlxjv\" (UID: \"db36d41a-0d0b-410d-9e36-69c96cd29d63\") " pod="openshift-marketplace/community-operators-dlxjv" Jan 26 16:36:29 crc kubenswrapper[4896]: I0126 16:36:29.040280 4896 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-h9d6x\" (UniqueName: \"kubernetes.io/projected/db36d41a-0d0b-410d-9e36-69c96cd29d63-kube-api-access-h9d6x\") pod \"community-operators-dlxjv\" (UID: \"db36d41a-0d0b-410d-9e36-69c96cd29d63\") " pod="openshift-marketplace/community-operators-dlxjv" Jan 26 16:36:29 crc kubenswrapper[4896]: I0126 16:36:29.040433 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db36d41a-0d0b-410d-9e36-69c96cd29d63-catalog-content\") pod \"community-operators-dlxjv\" (UID: \"db36d41a-0d0b-410d-9e36-69c96cd29d63\") " pod="openshift-marketplace/community-operators-dlxjv" Jan 26 16:36:29 crc kubenswrapper[4896]: I0126 16:36:29.040988 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db36d41a-0d0b-410d-9e36-69c96cd29d63-utilities\") pod \"community-operators-dlxjv\" (UID: \"db36d41a-0d0b-410d-9e36-69c96cd29d63\") " pod="openshift-marketplace/community-operators-dlxjv" Jan 26 16:36:29 crc kubenswrapper[4896]: I0126 16:36:29.041085 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db36d41a-0d0b-410d-9e36-69c96cd29d63-catalog-content\") pod \"community-operators-dlxjv\" (UID: \"db36d41a-0d0b-410d-9e36-69c96cd29d63\") " pod="openshift-marketplace/community-operators-dlxjv" Jan 26 16:36:29 crc kubenswrapper[4896]: I0126 16:36:29.072527 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9d6x\" (UniqueName: \"kubernetes.io/projected/db36d41a-0d0b-410d-9e36-69c96cd29d63-kube-api-access-h9d6x\") pod \"community-operators-dlxjv\" (UID: \"db36d41a-0d0b-410d-9e36-69c96cd29d63\") " pod="openshift-marketplace/community-operators-dlxjv" Jan 26 16:36:29 crc kubenswrapper[4896]: I0126 16:36:29.197730 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dlxjv" Jan 26 16:36:29 crc kubenswrapper[4896]: I0126 16:36:29.900431 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dlxjv"] Jan 26 16:36:29 crc kubenswrapper[4896]: I0126 16:36:29.974830 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlxjv" event={"ID":"db36d41a-0d0b-410d-9e36-69c96cd29d63","Type":"ContainerStarted","Data":"e1bc4d4c1b5c63d8a744a9bb8bd9cfb66b9faec619c3deed40da745127cbda6c"} Jan 26 16:36:30 crc kubenswrapper[4896]: I0126 16:36:30.986751 4896 generic.go:334] "Generic (PLEG): container finished" podID="db36d41a-0d0b-410d-9e36-69c96cd29d63" containerID="e063f75d0e8cff66228cee27590eb7c8e0a86b2fcac60ee2561ff96536a1f493" exitCode=0 Jan 26 16:36:30 crc kubenswrapper[4896]: I0126 16:36:30.986815 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlxjv" event={"ID":"db36d41a-0d0b-410d-9e36-69c96cd29d63","Type":"ContainerDied","Data":"e063f75d0e8cff66228cee27590eb7c8e0a86b2fcac60ee2561ff96536a1f493"} Jan 26 16:36:30 crc kubenswrapper[4896]: I0126 16:36:30.990237 4896 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 16:36:33 crc kubenswrapper[4896]: I0126 16:36:33.014904 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlxjv" event={"ID":"db36d41a-0d0b-410d-9e36-69c96cd29d63","Type":"ContainerStarted","Data":"cb926068c87e7d4190502683b24322c859634f96978f9eafa36e5980562419f4"} Jan 26 16:36:35 crc kubenswrapper[4896]: I0126 16:36:35.039548 4896 generic.go:334] "Generic (PLEG): container finished" podID="db36d41a-0d0b-410d-9e36-69c96cd29d63" containerID="cb926068c87e7d4190502683b24322c859634f96978f9eafa36e5980562419f4" exitCode=0 Jan 26 16:36:35 crc kubenswrapper[4896]: I0126 16:36:35.039670 4896 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-dlxjv" event={"ID":"db36d41a-0d0b-410d-9e36-69c96cd29d63","Type":"ContainerDied","Data":"cb926068c87e7d4190502683b24322c859634f96978f9eafa36e5980562419f4"} Jan 26 16:36:36 crc kubenswrapper[4896]: I0126 16:36:36.054905 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlxjv" event={"ID":"db36d41a-0d0b-410d-9e36-69c96cd29d63","Type":"ContainerStarted","Data":"6a2b56f3cf6503dd910eb32fedfbefdc4d8e504d5745e8ef699da9dc77f59145"} Jan 26 16:36:36 crc kubenswrapper[4896]: I0126 16:36:36.077095 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dlxjv" podStartSLOduration=3.524741018 podStartE2EDuration="8.077058564s" podCreationTimestamp="2026-01-26 16:36:28 +0000 UTC" firstStartedPulling="2026-01-26 16:36:30.989907722 +0000 UTC m=+3748.771788115" lastFinishedPulling="2026-01-26 16:36:35.542225268 +0000 UTC m=+3753.324105661" observedRunningTime="2026-01-26 16:36:36.074954932 +0000 UTC m=+3753.856835325" watchObservedRunningTime="2026-01-26 16:36:36.077058564 +0000 UTC m=+3753.858938967" Jan 26 16:36:39 crc kubenswrapper[4896]: I0126 16:36:39.198402 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dlxjv" Jan 26 16:36:39 crc kubenswrapper[4896]: I0126 16:36:39.199008 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dlxjv" Jan 26 16:36:40 crc kubenswrapper[4896]: I0126 16:36:40.246095 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-dlxjv" podUID="db36d41a-0d0b-410d-9e36-69c96cd29d63" containerName="registry-server" probeResult="failure" output=< Jan 26 16:36:40 crc kubenswrapper[4896]: timeout: failed to connect service ":50051" within 1s Jan 26 16:36:40 crc kubenswrapper[4896]: > Jan 26 16:36:49 crc 
kubenswrapper[4896]: I0126 16:36:49.258377 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dlxjv" Jan 26 16:36:49 crc kubenswrapper[4896]: I0126 16:36:49.358355 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dlxjv" Jan 26 16:36:49 crc kubenswrapper[4896]: I0126 16:36:49.513145 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dlxjv"] Jan 26 16:36:51 crc kubenswrapper[4896]: I0126 16:36:51.230062 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dlxjv" podUID="db36d41a-0d0b-410d-9e36-69c96cd29d63" containerName="registry-server" containerID="cri-o://6a2b56f3cf6503dd910eb32fedfbefdc4d8e504d5745e8ef699da9dc77f59145" gracePeriod=2 Jan 26 16:36:52 crc kubenswrapper[4896]: I0126 16:36:52.250170 4896 generic.go:334] "Generic (PLEG): container finished" podID="db36d41a-0d0b-410d-9e36-69c96cd29d63" containerID="6a2b56f3cf6503dd910eb32fedfbefdc4d8e504d5745e8ef699da9dc77f59145" exitCode=0 Jan 26 16:36:52 crc kubenswrapper[4896]: I0126 16:36:52.250364 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlxjv" event={"ID":"db36d41a-0d0b-410d-9e36-69c96cd29d63","Type":"ContainerDied","Data":"6a2b56f3cf6503dd910eb32fedfbefdc4d8e504d5745e8ef699da9dc77f59145"} Jan 26 16:36:52 crc kubenswrapper[4896]: I0126 16:36:52.576933 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dlxjv" Jan 26 16:36:52 crc kubenswrapper[4896]: I0126 16:36:52.723550 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db36d41a-0d0b-410d-9e36-69c96cd29d63-utilities\") pod \"db36d41a-0d0b-410d-9e36-69c96cd29d63\" (UID: \"db36d41a-0d0b-410d-9e36-69c96cd29d63\") " Jan 26 16:36:52 crc kubenswrapper[4896]: I0126 16:36:52.724120 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9d6x\" (UniqueName: \"kubernetes.io/projected/db36d41a-0d0b-410d-9e36-69c96cd29d63-kube-api-access-h9d6x\") pod \"db36d41a-0d0b-410d-9e36-69c96cd29d63\" (UID: \"db36d41a-0d0b-410d-9e36-69c96cd29d63\") " Jan 26 16:36:52 crc kubenswrapper[4896]: I0126 16:36:52.724458 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db36d41a-0d0b-410d-9e36-69c96cd29d63-catalog-content\") pod \"db36d41a-0d0b-410d-9e36-69c96cd29d63\" (UID: \"db36d41a-0d0b-410d-9e36-69c96cd29d63\") " Jan 26 16:36:52 crc kubenswrapper[4896]: I0126 16:36:52.724552 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db36d41a-0d0b-410d-9e36-69c96cd29d63-utilities" (OuterVolumeSpecName: "utilities") pod "db36d41a-0d0b-410d-9e36-69c96cd29d63" (UID: "db36d41a-0d0b-410d-9e36-69c96cd29d63"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:36:52 crc kubenswrapper[4896]: I0126 16:36:52.725388 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db36d41a-0d0b-410d-9e36-69c96cd29d63-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:36:52 crc kubenswrapper[4896]: I0126 16:36:52.732984 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db36d41a-0d0b-410d-9e36-69c96cd29d63-kube-api-access-h9d6x" (OuterVolumeSpecName: "kube-api-access-h9d6x") pod "db36d41a-0d0b-410d-9e36-69c96cd29d63" (UID: "db36d41a-0d0b-410d-9e36-69c96cd29d63"). InnerVolumeSpecName "kube-api-access-h9d6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:36:52 crc kubenswrapper[4896]: I0126 16:36:52.802441 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db36d41a-0d0b-410d-9e36-69c96cd29d63-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db36d41a-0d0b-410d-9e36-69c96cd29d63" (UID: "db36d41a-0d0b-410d-9e36-69c96cd29d63"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:36:52 crc kubenswrapper[4896]: I0126 16:36:52.828372 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9d6x\" (UniqueName: \"kubernetes.io/projected/db36d41a-0d0b-410d-9e36-69c96cd29d63-kube-api-access-h9d6x\") on node \"crc\" DevicePath \"\"" Jan 26 16:36:52 crc kubenswrapper[4896]: I0126 16:36:52.828423 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db36d41a-0d0b-410d-9e36-69c96cd29d63-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:36:53 crc kubenswrapper[4896]: I0126 16:36:53.280291 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlxjv" event={"ID":"db36d41a-0d0b-410d-9e36-69c96cd29d63","Type":"ContainerDied","Data":"e1bc4d4c1b5c63d8a744a9bb8bd9cfb66b9faec619c3deed40da745127cbda6c"} Jan 26 16:36:53 crc kubenswrapper[4896]: I0126 16:36:53.280359 4896 scope.go:117] "RemoveContainer" containerID="6a2b56f3cf6503dd910eb32fedfbefdc4d8e504d5745e8ef699da9dc77f59145" Jan 26 16:36:53 crc kubenswrapper[4896]: I0126 16:36:53.280374 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dlxjv" Jan 26 16:36:53 crc kubenswrapper[4896]: I0126 16:36:53.317565 4896 scope.go:117] "RemoveContainer" containerID="cb926068c87e7d4190502683b24322c859634f96978f9eafa36e5980562419f4" Jan 26 16:36:53 crc kubenswrapper[4896]: I0126 16:36:53.332450 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dlxjv"] Jan 26 16:36:53 crc kubenswrapper[4896]: I0126 16:36:53.344203 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dlxjv"] Jan 26 16:36:53 crc kubenswrapper[4896]: I0126 16:36:53.367531 4896 scope.go:117] "RemoveContainer" containerID="e063f75d0e8cff66228cee27590eb7c8e0a86b2fcac60ee2561ff96536a1f493" Jan 26 16:36:54 crc kubenswrapper[4896]: I0126 16:36:54.781986 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db36d41a-0d0b-410d-9e36-69c96cd29d63" path="/var/lib/kubelet/pods/db36d41a-0d0b-410d-9e36-69c96cd29d63/volumes" Jan 26 16:37:18 crc kubenswrapper[4896]: I0126 16:37:18.814109 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:37:18 crc kubenswrapper[4896]: I0126 16:37:18.814757 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:37:48 crc kubenswrapper[4896]: I0126 16:37:48.813441 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:37:48 crc kubenswrapper[4896]: I0126 16:37:48.814114 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:38:18 crc kubenswrapper[4896]: I0126 16:38:18.814146 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:38:18 crc kubenswrapper[4896]: I0126 16:38:18.814808 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:38:18 crc kubenswrapper[4896]: I0126 16:38:18.814880 4896 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" Jan 26 16:38:18 crc kubenswrapper[4896]: I0126 16:38:18.815942 4896 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9a1f439c0d18224c2e78758fdca79bf25e95b14f01dfcb6f993dbf4750b7ea36"} pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 16:38:18 crc kubenswrapper[4896]: I0126 16:38:18.816018 4896 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" containerID="cri-o://9a1f439c0d18224c2e78758fdca79bf25e95b14f01dfcb6f993dbf4750b7ea36" gracePeriod=600 Jan 26 16:38:19 crc kubenswrapper[4896]: E0126 16:38:19.099164 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:38:19 crc kubenswrapper[4896]: I0126 16:38:19.542361 4896 generic.go:334] "Generic (PLEG): container finished" podID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerID="9a1f439c0d18224c2e78758fdca79bf25e95b14f01dfcb6f993dbf4750b7ea36" exitCode=0 Jan 26 16:38:19 crc kubenswrapper[4896]: I0126 16:38:19.542417 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerDied","Data":"9a1f439c0d18224c2e78758fdca79bf25e95b14f01dfcb6f993dbf4750b7ea36"} Jan 26 16:38:19 crc kubenswrapper[4896]: I0126 16:38:19.542483 4896 scope.go:117] "RemoveContainer" containerID="38fa9a1a124d1764d8ccc52dbe3892feb90788eed86f3ade09809959e801a817" Jan 26 16:38:19 crc kubenswrapper[4896]: I0126 16:38:19.543469 4896 scope.go:117] "RemoveContainer" containerID="9a1f439c0d18224c2e78758fdca79bf25e95b14f01dfcb6f993dbf4750b7ea36" Jan 26 16:38:19 crc kubenswrapper[4896]: E0126 16:38:19.543957 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:38:32 crc kubenswrapper[4896]: I0126 16:38:32.759300 4896 scope.go:117] "RemoveContainer" containerID="9a1f439c0d18224c2e78758fdca79bf25e95b14f01dfcb6f993dbf4750b7ea36" Jan 26 16:38:32 crc kubenswrapper[4896]: E0126 16:38:32.760026 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:38:47 crc kubenswrapper[4896]: I0126 16:38:47.759613 4896 scope.go:117] "RemoveContainer" containerID="9a1f439c0d18224c2e78758fdca79bf25e95b14f01dfcb6f993dbf4750b7ea36" Jan 26 16:38:47 crc kubenswrapper[4896]: E0126 16:38:47.760659 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:38:59 crc kubenswrapper[4896]: I0126 16:38:59.759617 4896 scope.go:117] "RemoveContainer" containerID="9a1f439c0d18224c2e78758fdca79bf25e95b14f01dfcb6f993dbf4750b7ea36" Jan 26 16:38:59 crc kubenswrapper[4896]: E0126 16:38:59.760344 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:39:13 crc kubenswrapper[4896]: I0126 16:39:13.761136 4896 scope.go:117] "RemoveContainer" containerID="9a1f439c0d18224c2e78758fdca79bf25e95b14f01dfcb6f993dbf4750b7ea36" Jan 26 16:39:13 crc kubenswrapper[4896]: E0126 16:39:13.761969 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:39:26 crc kubenswrapper[4896]: I0126 16:39:26.759517 4896 scope.go:117] "RemoveContainer" containerID="9a1f439c0d18224c2e78758fdca79bf25e95b14f01dfcb6f993dbf4750b7ea36" Jan 26 16:39:26 crc kubenswrapper[4896]: E0126 16:39:26.760375 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:39:37 crc kubenswrapper[4896]: I0126 16:39:37.761037 4896 scope.go:117] "RemoveContainer" containerID="9a1f439c0d18224c2e78758fdca79bf25e95b14f01dfcb6f993dbf4750b7ea36" Jan 26 16:39:37 crc kubenswrapper[4896]: E0126 16:39:37.761847 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:39:50 crc kubenswrapper[4896]: I0126 16:39:50.760588 4896 scope.go:117] "RemoveContainer" containerID="9a1f439c0d18224c2e78758fdca79bf25e95b14f01dfcb6f993dbf4750b7ea36" Jan 26 16:39:50 crc kubenswrapper[4896]: E0126 16:39:50.761307 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:40:02 crc kubenswrapper[4896]: I0126 16:40:02.758928 4896 scope.go:117] "RemoveContainer" containerID="9a1f439c0d18224c2e78758fdca79bf25e95b14f01dfcb6f993dbf4750b7ea36" Jan 26 16:40:02 crc kubenswrapper[4896]: E0126 16:40:02.759737 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:40:15 crc kubenswrapper[4896]: I0126 16:40:15.760213 4896 scope.go:117] "RemoveContainer" containerID="9a1f439c0d18224c2e78758fdca79bf25e95b14f01dfcb6f993dbf4750b7ea36" Jan 26 16:40:15 crc kubenswrapper[4896]: E0126 16:40:15.761032 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:40:30 crc kubenswrapper[4896]: I0126 16:40:30.760763 4896 scope.go:117] "RemoveContainer" containerID="9a1f439c0d18224c2e78758fdca79bf25e95b14f01dfcb6f993dbf4750b7ea36" Jan 26 16:40:30 crc kubenswrapper[4896]: E0126 16:40:30.761723 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:40:42 crc kubenswrapper[4896]: I0126 16:40:42.900894 4896 scope.go:117] "RemoveContainer" containerID="9a1f439c0d18224c2e78758fdca79bf25e95b14f01dfcb6f993dbf4750b7ea36" Jan 26 16:40:42 crc kubenswrapper[4896]: E0126 16:40:42.901795 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:40:55 crc kubenswrapper[4896]: I0126 16:40:55.759395 4896 scope.go:117] "RemoveContainer" containerID="9a1f439c0d18224c2e78758fdca79bf25e95b14f01dfcb6f993dbf4750b7ea36" Jan 26 16:40:55 crc kubenswrapper[4896]: E0126 16:40:55.760259 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:41:10 crc kubenswrapper[4896]: I0126 16:41:10.759976 4896 scope.go:117] "RemoveContainer" containerID="9a1f439c0d18224c2e78758fdca79bf25e95b14f01dfcb6f993dbf4750b7ea36" Jan 26 16:41:10 crc kubenswrapper[4896]: E0126 16:41:10.760906 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:41:17 crc kubenswrapper[4896]: I0126 16:41:17.275979 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tn49l"] Jan 26 16:41:17 crc kubenswrapper[4896]: E0126 16:41:17.277217 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db36d41a-0d0b-410d-9e36-69c96cd29d63" containerName="extract-content" Jan 26 16:41:17 crc kubenswrapper[4896]: I0126 16:41:17.277236 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="db36d41a-0d0b-410d-9e36-69c96cd29d63" containerName="extract-content" Jan 26 16:41:17 crc kubenswrapper[4896]: E0126 16:41:17.277294 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db36d41a-0d0b-410d-9e36-69c96cd29d63" containerName="extract-utilities" Jan 26 16:41:17 crc kubenswrapper[4896]: I0126 16:41:17.277303 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="db36d41a-0d0b-410d-9e36-69c96cd29d63" containerName="extract-utilities" Jan 
26 16:41:17 crc kubenswrapper[4896]: E0126 16:41:17.277319 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db36d41a-0d0b-410d-9e36-69c96cd29d63" containerName="registry-server" Jan 26 16:41:17 crc kubenswrapper[4896]: I0126 16:41:17.279374 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="db36d41a-0d0b-410d-9e36-69c96cd29d63" containerName="registry-server" Jan 26 16:41:17 crc kubenswrapper[4896]: I0126 16:41:17.279942 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="db36d41a-0d0b-410d-9e36-69c96cd29d63" containerName="registry-server" Jan 26 16:41:17 crc kubenswrapper[4896]: I0126 16:41:17.282527 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tn49l" Jan 26 16:41:17 crc kubenswrapper[4896]: I0126 16:41:17.295199 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tn49l"] Jan 26 16:41:17 crc kubenswrapper[4896]: I0126 16:41:17.443767 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f573ddf-9cc6-45e7-93c1-323ba3d5cc07-utilities\") pod \"redhat-operators-tn49l\" (UID: \"0f573ddf-9cc6-45e7-93c1-323ba3d5cc07\") " pod="openshift-marketplace/redhat-operators-tn49l" Jan 26 16:41:17 crc kubenswrapper[4896]: I0126 16:41:17.443921 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhbfc\" (UniqueName: \"kubernetes.io/projected/0f573ddf-9cc6-45e7-93c1-323ba3d5cc07-kube-api-access-mhbfc\") pod \"redhat-operators-tn49l\" (UID: \"0f573ddf-9cc6-45e7-93c1-323ba3d5cc07\") " pod="openshift-marketplace/redhat-operators-tn49l" Jan 26 16:41:17 crc kubenswrapper[4896]: I0126 16:41:17.443976 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0f573ddf-9cc6-45e7-93c1-323ba3d5cc07-catalog-content\") pod \"redhat-operators-tn49l\" (UID: \"0f573ddf-9cc6-45e7-93c1-323ba3d5cc07\") " pod="openshift-marketplace/redhat-operators-tn49l" Jan 26 16:41:17 crc kubenswrapper[4896]: I0126 16:41:17.546439 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhbfc\" (UniqueName: \"kubernetes.io/projected/0f573ddf-9cc6-45e7-93c1-323ba3d5cc07-kube-api-access-mhbfc\") pod \"redhat-operators-tn49l\" (UID: \"0f573ddf-9cc6-45e7-93c1-323ba3d5cc07\") " pod="openshift-marketplace/redhat-operators-tn49l" Jan 26 16:41:17 crc kubenswrapper[4896]: I0126 16:41:17.546557 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f573ddf-9cc6-45e7-93c1-323ba3d5cc07-catalog-content\") pod \"redhat-operators-tn49l\" (UID: \"0f573ddf-9cc6-45e7-93c1-323ba3d5cc07\") " pod="openshift-marketplace/redhat-operators-tn49l" Jan 26 16:41:17 crc kubenswrapper[4896]: I0126 16:41:17.546738 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f573ddf-9cc6-45e7-93c1-323ba3d5cc07-utilities\") pod \"redhat-operators-tn49l\" (UID: \"0f573ddf-9cc6-45e7-93c1-323ba3d5cc07\") " pod="openshift-marketplace/redhat-operators-tn49l" Jan 26 16:41:17 crc kubenswrapper[4896]: I0126 16:41:17.547344 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f573ddf-9cc6-45e7-93c1-323ba3d5cc07-catalog-content\") pod \"redhat-operators-tn49l\" (UID: \"0f573ddf-9cc6-45e7-93c1-323ba3d5cc07\") " pod="openshift-marketplace/redhat-operators-tn49l" Jan 26 16:41:17 crc kubenswrapper[4896]: I0126 16:41:17.547477 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/0f573ddf-9cc6-45e7-93c1-323ba3d5cc07-utilities\") pod \"redhat-operators-tn49l\" (UID: \"0f573ddf-9cc6-45e7-93c1-323ba3d5cc07\") " pod="openshift-marketplace/redhat-operators-tn49l" Jan 26 16:41:21 crc kubenswrapper[4896]: I0126 16:41:21.294938 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhbfc\" (UniqueName: \"kubernetes.io/projected/0f573ddf-9cc6-45e7-93c1-323ba3d5cc07-kube-api-access-mhbfc\") pod \"redhat-operators-tn49l\" (UID: \"0f573ddf-9cc6-45e7-93c1-323ba3d5cc07\") " pod="openshift-marketplace/redhat-operators-tn49l" Jan 26 16:41:21 crc kubenswrapper[4896]: I0126 16:41:21.511441 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tn49l" Jan 26 16:41:22 crc kubenswrapper[4896]: I0126 16:41:22.353031 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tn49l"] Jan 26 16:41:22 crc kubenswrapper[4896]: I0126 16:41:22.442425 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tn49l" event={"ID":"0f573ddf-9cc6-45e7-93c1-323ba3d5cc07","Type":"ContainerStarted","Data":"7e11417accbf555bb93a5488273e137d8036d7401409d640c9372f296d4b6dab"} Jan 26 16:41:23 crc kubenswrapper[4896]: I0126 16:41:23.453345 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tn49l" event={"ID":"0f573ddf-9cc6-45e7-93c1-323ba3d5cc07","Type":"ContainerStarted","Data":"cc68caf6b349fb5003f855e8b14dc905439aa0b934054de55ae6948a35e24ecf"} Jan 26 16:41:23 crc kubenswrapper[4896]: I0126 16:41:23.760254 4896 scope.go:117] "RemoveContainer" containerID="9a1f439c0d18224c2e78758fdca79bf25e95b14f01dfcb6f993dbf4750b7ea36" Jan 26 16:41:23 crc kubenswrapper[4896]: E0126 16:41:23.760697 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:41:24 crc kubenswrapper[4896]: I0126 16:41:24.466769 4896 generic.go:334] "Generic (PLEG): container finished" podID="0f573ddf-9cc6-45e7-93c1-323ba3d5cc07" containerID="cc68caf6b349fb5003f855e8b14dc905439aa0b934054de55ae6948a35e24ecf" exitCode=0 Jan 26 16:41:24 crc kubenswrapper[4896]: I0126 16:41:24.466877 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tn49l" event={"ID":"0f573ddf-9cc6-45e7-93c1-323ba3d5cc07","Type":"ContainerDied","Data":"cc68caf6b349fb5003f855e8b14dc905439aa0b934054de55ae6948a35e24ecf"} Jan 26 16:41:26 crc kubenswrapper[4896]: I0126 16:41:26.500742 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tn49l" event={"ID":"0f573ddf-9cc6-45e7-93c1-323ba3d5cc07","Type":"ContainerStarted","Data":"c9d6a6b49489effdbcc0ef1a27a183a8679254b94bc823cd07fd4d5b675c7ad1"} Jan 26 16:41:31 crc kubenswrapper[4896]: I0126 16:41:31.759976 4896 generic.go:334] "Generic (PLEG): container finished" podID="0f573ddf-9cc6-45e7-93c1-323ba3d5cc07" containerID="c9d6a6b49489effdbcc0ef1a27a183a8679254b94bc823cd07fd4d5b675c7ad1" exitCode=0 Jan 26 16:41:31 crc kubenswrapper[4896]: I0126 16:41:31.760025 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tn49l" event={"ID":"0f573ddf-9cc6-45e7-93c1-323ba3d5cc07","Type":"ContainerDied","Data":"c9d6a6b49489effdbcc0ef1a27a183a8679254b94bc823cd07fd4d5b675c7ad1"} Jan 26 16:41:31 crc kubenswrapper[4896]: I0126 16:41:31.762690 4896 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 16:41:33 crc kubenswrapper[4896]: I0126 16:41:33.890536 4896 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tn49l" event={"ID":"0f573ddf-9cc6-45e7-93c1-323ba3d5cc07","Type":"ContainerStarted","Data":"919f1ab58de118b7844e6f378ef44971c071387f88c4061b484b2a3e9b0bf394"} Jan 26 16:41:33 crc kubenswrapper[4896]: I0126 16:41:33.938630 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tn49l" podStartSLOduration=8.840834051 podStartE2EDuration="16.938594901s" podCreationTimestamp="2026-01-26 16:41:17 +0000 UTC" firstStartedPulling="2026-01-26 16:41:24.469851017 +0000 UTC m=+4042.251731450" lastFinishedPulling="2026-01-26 16:41:32.567611907 +0000 UTC m=+4050.349492300" observedRunningTime="2026-01-26 16:41:33.927835498 +0000 UTC m=+4051.709715901" watchObservedRunningTime="2026-01-26 16:41:33.938594901 +0000 UTC m=+4051.720475294" Jan 26 16:41:37 crc kubenswrapper[4896]: I0126 16:41:37.350335 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-prp4g"] Jan 26 16:41:37 crc kubenswrapper[4896]: I0126 16:41:37.356204 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-prp4g" Jan 26 16:41:37 crc kubenswrapper[4896]: I0126 16:41:37.374622 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-prp4g"] Jan 26 16:41:37 crc kubenswrapper[4896]: I0126 16:41:37.533086 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkfqk\" (UniqueName: \"kubernetes.io/projected/414f1448-73e6-4235-bafa-c5c11a8c202c-kube-api-access-wkfqk\") pod \"redhat-marketplace-prp4g\" (UID: \"414f1448-73e6-4235-bafa-c5c11a8c202c\") " pod="openshift-marketplace/redhat-marketplace-prp4g" Jan 26 16:41:37 crc kubenswrapper[4896]: I0126 16:41:37.533299 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/414f1448-73e6-4235-bafa-c5c11a8c202c-catalog-content\") pod \"redhat-marketplace-prp4g\" (UID: \"414f1448-73e6-4235-bafa-c5c11a8c202c\") " pod="openshift-marketplace/redhat-marketplace-prp4g" Jan 26 16:41:37 crc kubenswrapper[4896]: I0126 16:41:37.533652 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/414f1448-73e6-4235-bafa-c5c11a8c202c-utilities\") pod \"redhat-marketplace-prp4g\" (UID: \"414f1448-73e6-4235-bafa-c5c11a8c202c\") " pod="openshift-marketplace/redhat-marketplace-prp4g" Jan 26 16:41:37 crc kubenswrapper[4896]: I0126 16:41:37.636352 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/414f1448-73e6-4235-bafa-c5c11a8c202c-catalog-content\") pod \"redhat-marketplace-prp4g\" (UID: \"414f1448-73e6-4235-bafa-c5c11a8c202c\") " pod="openshift-marketplace/redhat-marketplace-prp4g" Jan 26 16:41:37 crc kubenswrapper[4896]: I0126 16:41:37.636540 4896 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/414f1448-73e6-4235-bafa-c5c11a8c202c-utilities\") pod \"redhat-marketplace-prp4g\" (UID: \"414f1448-73e6-4235-bafa-c5c11a8c202c\") " pod="openshift-marketplace/redhat-marketplace-prp4g" Jan 26 16:41:37 crc kubenswrapper[4896]: I0126 16:41:37.636807 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkfqk\" (UniqueName: \"kubernetes.io/projected/414f1448-73e6-4235-bafa-c5c11a8c202c-kube-api-access-wkfqk\") pod \"redhat-marketplace-prp4g\" (UID: \"414f1448-73e6-4235-bafa-c5c11a8c202c\") " pod="openshift-marketplace/redhat-marketplace-prp4g" Jan 26 16:41:37 crc kubenswrapper[4896]: I0126 16:41:37.637819 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/414f1448-73e6-4235-bafa-c5c11a8c202c-catalog-content\") pod \"redhat-marketplace-prp4g\" (UID: \"414f1448-73e6-4235-bafa-c5c11a8c202c\") " pod="openshift-marketplace/redhat-marketplace-prp4g" Jan 26 16:41:37 crc kubenswrapper[4896]: I0126 16:41:37.638068 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/414f1448-73e6-4235-bafa-c5c11a8c202c-utilities\") pod \"redhat-marketplace-prp4g\" (UID: \"414f1448-73e6-4235-bafa-c5c11a8c202c\") " pod="openshift-marketplace/redhat-marketplace-prp4g" Jan 26 16:41:37 crc kubenswrapper[4896]: I0126 16:41:37.670058 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkfqk\" (UniqueName: \"kubernetes.io/projected/414f1448-73e6-4235-bafa-c5c11a8c202c-kube-api-access-wkfqk\") pod \"redhat-marketplace-prp4g\" (UID: \"414f1448-73e6-4235-bafa-c5c11a8c202c\") " pod="openshift-marketplace/redhat-marketplace-prp4g" Jan 26 16:41:37 crc kubenswrapper[4896]: I0126 16:41:37.692234 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-prp4g" Jan 26 16:41:38 crc kubenswrapper[4896]: I0126 16:41:38.401368 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-prp4g"] Jan 26 16:41:38 crc kubenswrapper[4896]: I0126 16:41:38.759867 4896 scope.go:117] "RemoveContainer" containerID="9a1f439c0d18224c2e78758fdca79bf25e95b14f01dfcb6f993dbf4750b7ea36" Jan 26 16:41:38 crc kubenswrapper[4896]: E0126 16:41:38.760372 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:41:39 crc kubenswrapper[4896]: I0126 16:41:39.108832 4896 generic.go:334] "Generic (PLEG): container finished" podID="414f1448-73e6-4235-bafa-c5c11a8c202c" containerID="79027344e6618fe8104d006c4882a01b1b2dae851074b68c9208e3d9e8344cee" exitCode=0 Jan 26 16:41:39 crc kubenswrapper[4896]: I0126 16:41:39.108875 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-prp4g" event={"ID":"414f1448-73e6-4235-bafa-c5c11a8c202c","Type":"ContainerDied","Data":"79027344e6618fe8104d006c4882a01b1b2dae851074b68c9208e3d9e8344cee"} Jan 26 16:41:39 crc kubenswrapper[4896]: I0126 16:41:39.109134 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-prp4g" event={"ID":"414f1448-73e6-4235-bafa-c5c11a8c202c","Type":"ContainerStarted","Data":"125610a0d19bca80289ea64c0e45835c31f8e9d30e7e9fa2161e5f8e9ec0ddee"} Jan 26 16:41:40 crc kubenswrapper[4896]: I0126 16:41:40.128424 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-prp4g" 
event={"ID":"414f1448-73e6-4235-bafa-c5c11a8c202c","Type":"ContainerStarted","Data":"45f9ca99317ec163b75999f7efa56e773ef0f979bc0d89e07a3755765aadc1a8"} Jan 26 16:41:40 crc kubenswrapper[4896]: E0126 16:41:40.470063 4896 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.154:57138->38.102.83.154:40761: read tcp 38.102.83.154:57138->38.102.83.154:40761: read: connection reset by peer Jan 26 16:41:41 crc kubenswrapper[4896]: I0126 16:41:41.142398 4896 generic.go:334] "Generic (PLEG): container finished" podID="414f1448-73e6-4235-bafa-c5c11a8c202c" containerID="45f9ca99317ec163b75999f7efa56e773ef0f979bc0d89e07a3755765aadc1a8" exitCode=0 Jan 26 16:41:41 crc kubenswrapper[4896]: I0126 16:41:41.142464 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-prp4g" event={"ID":"414f1448-73e6-4235-bafa-c5c11a8c202c","Type":"ContainerDied","Data":"45f9ca99317ec163b75999f7efa56e773ef0f979bc0d89e07a3755765aadc1a8"} Jan 26 16:41:41 crc kubenswrapper[4896]: I0126 16:41:41.512298 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tn49l" Jan 26 16:41:41 crc kubenswrapper[4896]: I0126 16:41:41.512664 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tn49l" Jan 26 16:41:42 crc kubenswrapper[4896]: I0126 16:41:42.155540 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-prp4g" event={"ID":"414f1448-73e6-4235-bafa-c5c11a8c202c","Type":"ContainerStarted","Data":"fe055c72a72aefc47b625033185d7440b7c95b39db40296db5705b0850be6a36"} Jan 26 16:41:42 crc kubenswrapper[4896]: I0126 16:41:42.176609 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-prp4g" podStartSLOduration=2.566861043 podStartE2EDuration="5.176566192s" podCreationTimestamp="2026-01-26 16:41:37 +0000 
UTC" firstStartedPulling="2026-01-26 16:41:39.111326848 +0000 UTC m=+4056.893207241" lastFinishedPulling="2026-01-26 16:41:41.721032007 +0000 UTC m=+4059.502912390" observedRunningTime="2026-01-26 16:41:42.173511388 +0000 UTC m=+4059.955391791" watchObservedRunningTime="2026-01-26 16:41:42.176566192 +0000 UTC m=+4059.958446595" Jan 26 16:41:42 crc kubenswrapper[4896]: I0126 16:41:42.583067 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tn49l" podUID="0f573ddf-9cc6-45e7-93c1-323ba3d5cc07" containerName="registry-server" probeResult="failure" output=< Jan 26 16:41:42 crc kubenswrapper[4896]: timeout: failed to connect service ":50051" within 1s Jan 26 16:41:42 crc kubenswrapper[4896]: > Jan 26 16:41:47 crc kubenswrapper[4896]: I0126 16:41:47.693309 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-prp4g" Jan 26 16:41:47 crc kubenswrapper[4896]: I0126 16:41:47.693695 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-prp4g" Jan 26 16:41:47 crc kubenswrapper[4896]: I0126 16:41:47.793547 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-prp4g" Jan 26 16:41:48 crc kubenswrapper[4896]: I0126 16:41:48.273532 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-prp4g" Jan 26 16:41:48 crc kubenswrapper[4896]: I0126 16:41:48.471157 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-prp4g"] Jan 26 16:41:50 crc kubenswrapper[4896]: I0126 16:41:50.246082 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-prp4g" podUID="414f1448-73e6-4235-bafa-c5c11a8c202c" containerName="registry-server" 
containerID="cri-o://fe055c72a72aefc47b625033185d7440b7c95b39db40296db5705b0850be6a36" gracePeriod=2 Jan 26 16:41:51 crc kubenswrapper[4896]: I0126 16:41:51.119238 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-prp4g" Jan 26 16:41:51 crc kubenswrapper[4896]: I0126 16:41:51.262203 4896 generic.go:334] "Generic (PLEG): container finished" podID="414f1448-73e6-4235-bafa-c5c11a8c202c" containerID="fe055c72a72aefc47b625033185d7440b7c95b39db40296db5705b0850be6a36" exitCode=0 Jan 26 16:41:51 crc kubenswrapper[4896]: I0126 16:41:51.262251 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-prp4g" event={"ID":"414f1448-73e6-4235-bafa-c5c11a8c202c","Type":"ContainerDied","Data":"fe055c72a72aefc47b625033185d7440b7c95b39db40296db5705b0850be6a36"} Jan 26 16:41:51 crc kubenswrapper[4896]: I0126 16:41:51.262284 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-prp4g" event={"ID":"414f1448-73e6-4235-bafa-c5c11a8c202c","Type":"ContainerDied","Data":"125610a0d19bca80289ea64c0e45835c31f8e9d30e7e9fa2161e5f8e9ec0ddee"} Jan 26 16:41:51 crc kubenswrapper[4896]: I0126 16:41:51.262304 4896 scope.go:117] "RemoveContainer" containerID="fe055c72a72aefc47b625033185d7440b7c95b39db40296db5705b0850be6a36" Jan 26 16:41:51 crc kubenswrapper[4896]: I0126 16:41:51.263522 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-prp4g" Jan 26 16:41:51 crc kubenswrapper[4896]: I0126 16:41:51.288812 4896 scope.go:117] "RemoveContainer" containerID="45f9ca99317ec163b75999f7efa56e773ef0f979bc0d89e07a3755765aadc1a8" Jan 26 16:41:51 crc kubenswrapper[4896]: I0126 16:41:51.318366 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/414f1448-73e6-4235-bafa-c5c11a8c202c-utilities\") pod \"414f1448-73e6-4235-bafa-c5c11a8c202c\" (UID: \"414f1448-73e6-4235-bafa-c5c11a8c202c\") " Jan 26 16:41:51 crc kubenswrapper[4896]: I0126 16:41:51.319125 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/414f1448-73e6-4235-bafa-c5c11a8c202c-catalog-content\") pod \"414f1448-73e6-4235-bafa-c5c11a8c202c\" (UID: \"414f1448-73e6-4235-bafa-c5c11a8c202c\") " Jan 26 16:41:51 crc kubenswrapper[4896]: I0126 16:41:51.319322 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkfqk\" (UniqueName: \"kubernetes.io/projected/414f1448-73e6-4235-bafa-c5c11a8c202c-kube-api-access-wkfqk\") pod \"414f1448-73e6-4235-bafa-c5c11a8c202c\" (UID: \"414f1448-73e6-4235-bafa-c5c11a8c202c\") " Jan 26 16:41:51 crc kubenswrapper[4896]: I0126 16:41:51.319384 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/414f1448-73e6-4235-bafa-c5c11a8c202c-utilities" (OuterVolumeSpecName: "utilities") pod "414f1448-73e6-4235-bafa-c5c11a8c202c" (UID: "414f1448-73e6-4235-bafa-c5c11a8c202c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:41:51 crc kubenswrapper[4896]: I0126 16:41:51.320727 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/414f1448-73e6-4235-bafa-c5c11a8c202c-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:41:51 crc kubenswrapper[4896]: I0126 16:41:51.328625 4896 scope.go:117] "RemoveContainer" containerID="79027344e6618fe8104d006c4882a01b1b2dae851074b68c9208e3d9e8344cee" Jan 26 16:41:51 crc kubenswrapper[4896]: I0126 16:41:51.328829 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/414f1448-73e6-4235-bafa-c5c11a8c202c-kube-api-access-wkfqk" (OuterVolumeSpecName: "kube-api-access-wkfqk") pod "414f1448-73e6-4235-bafa-c5c11a8c202c" (UID: "414f1448-73e6-4235-bafa-c5c11a8c202c"). InnerVolumeSpecName "kube-api-access-wkfqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:41:51 crc kubenswrapper[4896]: I0126 16:41:51.344094 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/414f1448-73e6-4235-bafa-c5c11a8c202c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "414f1448-73e6-4235-bafa-c5c11a8c202c" (UID: "414f1448-73e6-4235-bafa-c5c11a8c202c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:41:51 crc kubenswrapper[4896]: I0126 16:41:51.422817 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkfqk\" (UniqueName: \"kubernetes.io/projected/414f1448-73e6-4235-bafa-c5c11a8c202c-kube-api-access-wkfqk\") on node \"crc\" DevicePath \"\"" Jan 26 16:41:51 crc kubenswrapper[4896]: I0126 16:41:51.422880 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/414f1448-73e6-4235-bafa-c5c11a8c202c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:41:51 crc kubenswrapper[4896]: I0126 16:41:51.439671 4896 scope.go:117] "RemoveContainer" containerID="fe055c72a72aefc47b625033185d7440b7c95b39db40296db5705b0850be6a36" Jan 26 16:41:51 crc kubenswrapper[4896]: E0126 16:41:51.440433 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe055c72a72aefc47b625033185d7440b7c95b39db40296db5705b0850be6a36\": container with ID starting with fe055c72a72aefc47b625033185d7440b7c95b39db40296db5705b0850be6a36 not found: ID does not exist" containerID="fe055c72a72aefc47b625033185d7440b7c95b39db40296db5705b0850be6a36" Jan 26 16:41:51 crc kubenswrapper[4896]: I0126 16:41:51.440898 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe055c72a72aefc47b625033185d7440b7c95b39db40296db5705b0850be6a36"} err="failed to get container status \"fe055c72a72aefc47b625033185d7440b7c95b39db40296db5705b0850be6a36\": rpc error: code = NotFound desc = could not find container \"fe055c72a72aefc47b625033185d7440b7c95b39db40296db5705b0850be6a36\": container with ID starting with fe055c72a72aefc47b625033185d7440b7c95b39db40296db5705b0850be6a36 not found: ID does not exist" Jan 26 16:41:51 crc kubenswrapper[4896]: I0126 16:41:51.441097 4896 scope.go:117] "RemoveContainer" 
containerID="45f9ca99317ec163b75999f7efa56e773ef0f979bc0d89e07a3755765aadc1a8" Jan 26 16:41:51 crc kubenswrapper[4896]: E0126 16:41:51.442004 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45f9ca99317ec163b75999f7efa56e773ef0f979bc0d89e07a3755765aadc1a8\": container with ID starting with 45f9ca99317ec163b75999f7efa56e773ef0f979bc0d89e07a3755765aadc1a8 not found: ID does not exist" containerID="45f9ca99317ec163b75999f7efa56e773ef0f979bc0d89e07a3755765aadc1a8" Jan 26 16:41:51 crc kubenswrapper[4896]: I0126 16:41:51.442049 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45f9ca99317ec163b75999f7efa56e773ef0f979bc0d89e07a3755765aadc1a8"} err="failed to get container status \"45f9ca99317ec163b75999f7efa56e773ef0f979bc0d89e07a3755765aadc1a8\": rpc error: code = NotFound desc = could not find container \"45f9ca99317ec163b75999f7efa56e773ef0f979bc0d89e07a3755765aadc1a8\": container with ID starting with 45f9ca99317ec163b75999f7efa56e773ef0f979bc0d89e07a3755765aadc1a8 not found: ID does not exist" Jan 26 16:41:51 crc kubenswrapper[4896]: I0126 16:41:51.442080 4896 scope.go:117] "RemoveContainer" containerID="79027344e6618fe8104d006c4882a01b1b2dae851074b68c9208e3d9e8344cee" Jan 26 16:41:51 crc kubenswrapper[4896]: E0126 16:41:51.442408 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79027344e6618fe8104d006c4882a01b1b2dae851074b68c9208e3d9e8344cee\": container with ID starting with 79027344e6618fe8104d006c4882a01b1b2dae851074b68c9208e3d9e8344cee not found: ID does not exist" containerID="79027344e6618fe8104d006c4882a01b1b2dae851074b68c9208e3d9e8344cee" Jan 26 16:41:51 crc kubenswrapper[4896]: I0126 16:41:51.442481 4896 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"79027344e6618fe8104d006c4882a01b1b2dae851074b68c9208e3d9e8344cee"} err="failed to get container status \"79027344e6618fe8104d006c4882a01b1b2dae851074b68c9208e3d9e8344cee\": rpc error: code = NotFound desc = could not find container \"79027344e6618fe8104d006c4882a01b1b2dae851074b68c9208e3d9e8344cee\": container with ID starting with 79027344e6618fe8104d006c4882a01b1b2dae851074b68c9208e3d9e8344cee not found: ID does not exist" Jan 26 16:41:51 crc kubenswrapper[4896]: I0126 16:41:51.570380 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tn49l" Jan 26 16:41:51 crc kubenswrapper[4896]: I0126 16:41:51.637288 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-prp4g"] Jan 26 16:41:51 crc kubenswrapper[4896]: I0126 16:41:51.639423 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tn49l" Jan 26 16:41:51 crc kubenswrapper[4896]: I0126 16:41:51.650138 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-prp4g"] Jan 26 16:41:52 crc kubenswrapper[4896]: I0126 16:41:52.277933 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tn49l"] Jan 26 16:41:52 crc kubenswrapper[4896]: I0126 16:41:52.772436 4896 scope.go:117] "RemoveContainer" containerID="9a1f439c0d18224c2e78758fdca79bf25e95b14f01dfcb6f993dbf4750b7ea36" Jan 26 16:41:52 crc kubenswrapper[4896]: E0126 16:41:52.773123 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" 
podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:41:52 crc kubenswrapper[4896]: I0126 16:41:52.774908 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="414f1448-73e6-4235-bafa-c5c11a8c202c" path="/var/lib/kubelet/pods/414f1448-73e6-4235-bafa-c5c11a8c202c/volumes" Jan 26 16:41:53 crc kubenswrapper[4896]: I0126 16:41:53.292447 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tn49l" podUID="0f573ddf-9cc6-45e7-93c1-323ba3d5cc07" containerName="registry-server" containerID="cri-o://919f1ab58de118b7844e6f378ef44971c071387f88c4061b484b2a3e9b0bf394" gracePeriod=2 Jan 26 16:41:54 crc kubenswrapper[4896]: I0126 16:41:54.308990 4896 generic.go:334] "Generic (PLEG): container finished" podID="0f573ddf-9cc6-45e7-93c1-323ba3d5cc07" containerID="919f1ab58de118b7844e6f378ef44971c071387f88c4061b484b2a3e9b0bf394" exitCode=0 Jan 26 16:41:54 crc kubenswrapper[4896]: I0126 16:41:54.309076 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tn49l" event={"ID":"0f573ddf-9cc6-45e7-93c1-323ba3d5cc07","Type":"ContainerDied","Data":"919f1ab58de118b7844e6f378ef44971c071387f88c4061b484b2a3e9b0bf394"} Jan 26 16:41:54 crc kubenswrapper[4896]: I0126 16:41:54.752012 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tn49l" Jan 26 16:41:54 crc kubenswrapper[4896]: I0126 16:41:54.929547 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f573ddf-9cc6-45e7-93c1-323ba3d5cc07-catalog-content\") pod \"0f573ddf-9cc6-45e7-93c1-323ba3d5cc07\" (UID: \"0f573ddf-9cc6-45e7-93c1-323ba3d5cc07\") " Jan 26 16:41:54 crc kubenswrapper[4896]: I0126 16:41:54.929921 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f573ddf-9cc6-45e7-93c1-323ba3d5cc07-utilities\") pod \"0f573ddf-9cc6-45e7-93c1-323ba3d5cc07\" (UID: \"0f573ddf-9cc6-45e7-93c1-323ba3d5cc07\") " Jan 26 16:41:54 crc kubenswrapper[4896]: I0126 16:41:54.929969 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhbfc\" (UniqueName: \"kubernetes.io/projected/0f573ddf-9cc6-45e7-93c1-323ba3d5cc07-kube-api-access-mhbfc\") pod \"0f573ddf-9cc6-45e7-93c1-323ba3d5cc07\" (UID: \"0f573ddf-9cc6-45e7-93c1-323ba3d5cc07\") " Jan 26 16:41:54 crc kubenswrapper[4896]: I0126 16:41:54.930467 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f573ddf-9cc6-45e7-93c1-323ba3d5cc07-utilities" (OuterVolumeSpecName: "utilities") pod "0f573ddf-9cc6-45e7-93c1-323ba3d5cc07" (UID: "0f573ddf-9cc6-45e7-93c1-323ba3d5cc07"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:41:54 crc kubenswrapper[4896]: I0126 16:41:54.931447 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f573ddf-9cc6-45e7-93c1-323ba3d5cc07-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:41:54 crc kubenswrapper[4896]: I0126 16:41:54.949417 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f573ddf-9cc6-45e7-93c1-323ba3d5cc07-kube-api-access-mhbfc" (OuterVolumeSpecName: "kube-api-access-mhbfc") pod "0f573ddf-9cc6-45e7-93c1-323ba3d5cc07" (UID: "0f573ddf-9cc6-45e7-93c1-323ba3d5cc07"). InnerVolumeSpecName "kube-api-access-mhbfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:41:55 crc kubenswrapper[4896]: I0126 16:41:55.034168 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhbfc\" (UniqueName: \"kubernetes.io/projected/0f573ddf-9cc6-45e7-93c1-323ba3d5cc07-kube-api-access-mhbfc\") on node \"crc\" DevicePath \"\"" Jan 26 16:41:55 crc kubenswrapper[4896]: I0126 16:41:55.066431 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f573ddf-9cc6-45e7-93c1-323ba3d5cc07-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0f573ddf-9cc6-45e7-93c1-323ba3d5cc07" (UID: "0f573ddf-9cc6-45e7-93c1-323ba3d5cc07"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:41:55 crc kubenswrapper[4896]: I0126 16:41:55.136313 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f573ddf-9cc6-45e7-93c1-323ba3d5cc07-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:41:55 crc kubenswrapper[4896]: I0126 16:41:55.321329 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tn49l" event={"ID":"0f573ddf-9cc6-45e7-93c1-323ba3d5cc07","Type":"ContainerDied","Data":"7e11417accbf555bb93a5488273e137d8036d7401409d640c9372f296d4b6dab"} Jan 26 16:41:55 crc kubenswrapper[4896]: I0126 16:41:55.321385 4896 scope.go:117] "RemoveContainer" containerID="919f1ab58de118b7844e6f378ef44971c071387f88c4061b484b2a3e9b0bf394" Jan 26 16:41:55 crc kubenswrapper[4896]: I0126 16:41:55.321533 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tn49l" Jan 26 16:41:55 crc kubenswrapper[4896]: I0126 16:41:55.357635 4896 scope.go:117] "RemoveContainer" containerID="c9d6a6b49489effdbcc0ef1a27a183a8679254b94bc823cd07fd4d5b675c7ad1" Jan 26 16:41:55 crc kubenswrapper[4896]: I0126 16:41:55.360455 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tn49l"] Jan 26 16:41:55 crc kubenswrapper[4896]: I0126 16:41:55.372505 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tn49l"] Jan 26 16:41:55 crc kubenswrapper[4896]: I0126 16:41:55.395185 4896 scope.go:117] "RemoveContainer" containerID="cc68caf6b349fb5003f855e8b14dc905439aa0b934054de55ae6948a35e24ecf" Jan 26 16:41:56 crc kubenswrapper[4896]: I0126 16:41:56.772855 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f573ddf-9cc6-45e7-93c1-323ba3d5cc07" path="/var/lib/kubelet/pods/0f573ddf-9cc6-45e7-93c1-323ba3d5cc07/volumes" Jan 26 16:42:06 crc 
kubenswrapper[4896]: I0126 16:42:06.759686 4896 scope.go:117] "RemoveContainer" containerID="9a1f439c0d18224c2e78758fdca79bf25e95b14f01dfcb6f993dbf4750b7ea36" Jan 26 16:42:06 crc kubenswrapper[4896]: E0126 16:42:06.760740 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:42:20 crc kubenswrapper[4896]: I0126 16:42:20.759696 4896 scope.go:117] "RemoveContainer" containerID="9a1f439c0d18224c2e78758fdca79bf25e95b14f01dfcb6f993dbf4750b7ea36" Jan 26 16:42:20 crc kubenswrapper[4896]: E0126 16:42:20.760449 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:42:31 crc kubenswrapper[4896]: I0126 16:42:31.759435 4896 scope.go:117] "RemoveContainer" containerID="9a1f439c0d18224c2e78758fdca79bf25e95b14f01dfcb6f993dbf4750b7ea36" Jan 26 16:42:31 crc kubenswrapper[4896]: E0126 16:42:31.760344 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 
26 16:42:45 crc kubenswrapper[4896]: I0126 16:42:45.760971 4896 scope.go:117] "RemoveContainer" containerID="9a1f439c0d18224c2e78758fdca79bf25e95b14f01dfcb6f993dbf4750b7ea36" Jan 26 16:42:45 crc kubenswrapper[4896]: E0126 16:42:45.761864 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:42:57 crc kubenswrapper[4896]: I0126 16:42:57.759917 4896 scope.go:117] "RemoveContainer" containerID="9a1f439c0d18224c2e78758fdca79bf25e95b14f01dfcb6f993dbf4750b7ea36" Jan 26 16:42:57 crc kubenswrapper[4896]: E0126 16:42:57.760808 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:43:10 crc kubenswrapper[4896]: I0126 16:43:10.760069 4896 scope.go:117] "RemoveContainer" containerID="9a1f439c0d18224c2e78758fdca79bf25e95b14f01dfcb6f993dbf4750b7ea36" Jan 26 16:43:10 crc kubenswrapper[4896]: E0126 16:43:10.760946 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" 
podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:43:23 crc kubenswrapper[4896]: I0126 16:43:23.759815 4896 scope.go:117] "RemoveContainer" containerID="9a1f439c0d18224c2e78758fdca79bf25e95b14f01dfcb6f993dbf4750b7ea36" Jan 26 16:43:24 crc kubenswrapper[4896]: I0126 16:43:24.477262 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerStarted","Data":"40dc426aa0b987e6473edb4bdd90d8989ebb10df3bcf2f43fe2403dd99075c70"} Jan 26 16:44:21 crc kubenswrapper[4896]: I0126 16:44:21.065322 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l2jxn"] Jan 26 16:44:21 crc kubenswrapper[4896]: E0126 16:44:21.066653 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="414f1448-73e6-4235-bafa-c5c11a8c202c" containerName="registry-server" Jan 26 16:44:21 crc kubenswrapper[4896]: I0126 16:44:21.066671 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="414f1448-73e6-4235-bafa-c5c11a8c202c" containerName="registry-server" Jan 26 16:44:21 crc kubenswrapper[4896]: E0126 16:44:21.066695 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="414f1448-73e6-4235-bafa-c5c11a8c202c" containerName="extract-content" Jan 26 16:44:21 crc kubenswrapper[4896]: I0126 16:44:21.066704 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="414f1448-73e6-4235-bafa-c5c11a8c202c" containerName="extract-content" Jan 26 16:44:21 crc kubenswrapper[4896]: E0126 16:44:21.066724 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="414f1448-73e6-4235-bafa-c5c11a8c202c" containerName="extract-utilities" Jan 26 16:44:21 crc kubenswrapper[4896]: I0126 16:44:21.066734 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="414f1448-73e6-4235-bafa-c5c11a8c202c" containerName="extract-utilities" Jan 26 16:44:21 crc kubenswrapper[4896]: E0126 16:44:21.066757 4896 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f573ddf-9cc6-45e7-93c1-323ba3d5cc07" containerName="registry-server" Jan 26 16:44:21 crc kubenswrapper[4896]: I0126 16:44:21.066765 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f573ddf-9cc6-45e7-93c1-323ba3d5cc07" containerName="registry-server" Jan 26 16:44:21 crc kubenswrapper[4896]: E0126 16:44:21.066780 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f573ddf-9cc6-45e7-93c1-323ba3d5cc07" containerName="extract-utilities" Jan 26 16:44:21 crc kubenswrapper[4896]: I0126 16:44:21.066788 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f573ddf-9cc6-45e7-93c1-323ba3d5cc07" containerName="extract-utilities" Jan 26 16:44:21 crc kubenswrapper[4896]: E0126 16:44:21.066805 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f573ddf-9cc6-45e7-93c1-323ba3d5cc07" containerName="extract-content" Jan 26 16:44:21 crc kubenswrapper[4896]: I0126 16:44:21.066813 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f573ddf-9cc6-45e7-93c1-323ba3d5cc07" containerName="extract-content" Jan 26 16:44:21 crc kubenswrapper[4896]: I0126 16:44:21.067138 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="414f1448-73e6-4235-bafa-c5c11a8c202c" containerName="registry-server" Jan 26 16:44:21 crc kubenswrapper[4896]: I0126 16:44:21.067156 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f573ddf-9cc6-45e7-93c1-323ba3d5cc07" containerName="registry-server" Jan 26 16:44:21 crc kubenswrapper[4896]: I0126 16:44:21.069686 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l2jxn" Jan 26 16:44:21 crc kubenswrapper[4896]: I0126 16:44:21.093543 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l2jxn"] Jan 26 16:44:21 crc kubenswrapper[4896]: I0126 16:44:21.239191 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9p67\" (UniqueName: \"kubernetes.io/projected/086fdfb8-2544-426e-a7c8-1959ad9ac64c-kube-api-access-l9p67\") pod \"certified-operators-l2jxn\" (UID: \"086fdfb8-2544-426e-a7c8-1959ad9ac64c\") " pod="openshift-marketplace/certified-operators-l2jxn" Jan 26 16:44:21 crc kubenswrapper[4896]: I0126 16:44:21.239588 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/086fdfb8-2544-426e-a7c8-1959ad9ac64c-catalog-content\") pod \"certified-operators-l2jxn\" (UID: \"086fdfb8-2544-426e-a7c8-1959ad9ac64c\") " pod="openshift-marketplace/certified-operators-l2jxn" Jan 26 16:44:21 crc kubenswrapper[4896]: I0126 16:44:21.239630 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/086fdfb8-2544-426e-a7c8-1959ad9ac64c-utilities\") pod \"certified-operators-l2jxn\" (UID: \"086fdfb8-2544-426e-a7c8-1959ad9ac64c\") " pod="openshift-marketplace/certified-operators-l2jxn" Jan 26 16:44:21 crc kubenswrapper[4896]: I0126 16:44:21.341673 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9p67\" (UniqueName: \"kubernetes.io/projected/086fdfb8-2544-426e-a7c8-1959ad9ac64c-kube-api-access-l9p67\") pod \"certified-operators-l2jxn\" (UID: \"086fdfb8-2544-426e-a7c8-1959ad9ac64c\") " pod="openshift-marketplace/certified-operators-l2jxn" Jan 26 16:44:21 crc kubenswrapper[4896]: I0126 16:44:21.341769 4896 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/086fdfb8-2544-426e-a7c8-1959ad9ac64c-catalog-content\") pod \"certified-operators-l2jxn\" (UID: \"086fdfb8-2544-426e-a7c8-1959ad9ac64c\") " pod="openshift-marketplace/certified-operators-l2jxn" Jan 26 16:44:21 crc kubenswrapper[4896]: I0126 16:44:21.341798 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/086fdfb8-2544-426e-a7c8-1959ad9ac64c-utilities\") pod \"certified-operators-l2jxn\" (UID: \"086fdfb8-2544-426e-a7c8-1959ad9ac64c\") " pod="openshift-marketplace/certified-operators-l2jxn" Jan 26 16:44:21 crc kubenswrapper[4896]: I0126 16:44:21.342398 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/086fdfb8-2544-426e-a7c8-1959ad9ac64c-utilities\") pod \"certified-operators-l2jxn\" (UID: \"086fdfb8-2544-426e-a7c8-1959ad9ac64c\") " pod="openshift-marketplace/certified-operators-l2jxn" Jan 26 16:44:21 crc kubenswrapper[4896]: I0126 16:44:21.342522 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/086fdfb8-2544-426e-a7c8-1959ad9ac64c-catalog-content\") pod \"certified-operators-l2jxn\" (UID: \"086fdfb8-2544-426e-a7c8-1959ad9ac64c\") " pod="openshift-marketplace/certified-operators-l2jxn" Jan 26 16:44:21 crc kubenswrapper[4896]: I0126 16:44:21.366761 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9p67\" (UniqueName: \"kubernetes.io/projected/086fdfb8-2544-426e-a7c8-1959ad9ac64c-kube-api-access-l9p67\") pod \"certified-operators-l2jxn\" (UID: \"086fdfb8-2544-426e-a7c8-1959ad9ac64c\") " pod="openshift-marketplace/certified-operators-l2jxn" Jan 26 16:44:21 crc kubenswrapper[4896]: I0126 16:44:21.406085 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l2jxn" Jan 26 16:44:22 crc kubenswrapper[4896]: I0126 16:44:22.065787 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l2jxn"] Jan 26 16:44:22 crc kubenswrapper[4896]: W0126 16:44:22.070877 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod086fdfb8_2544_426e_a7c8_1959ad9ac64c.slice/crio-a20210c9ca0ddfed54db0bed92b2f614e20e1775cabc6dee76e201b853b7a95d WatchSource:0}: Error finding container a20210c9ca0ddfed54db0bed92b2f614e20e1775cabc6dee76e201b853b7a95d: Status 404 returned error can't find the container with id a20210c9ca0ddfed54db0bed92b2f614e20e1775cabc6dee76e201b853b7a95d Jan 26 16:44:22 crc kubenswrapper[4896]: I0126 16:44:22.467104 4896 generic.go:334] "Generic (PLEG): container finished" podID="086fdfb8-2544-426e-a7c8-1959ad9ac64c" containerID="be5defb1f46964d549089cdbcf19c4034444ccce436e94a718ef13029b663ccc" exitCode=0 Jan 26 16:44:22 crc kubenswrapper[4896]: I0126 16:44:22.467382 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l2jxn" event={"ID":"086fdfb8-2544-426e-a7c8-1959ad9ac64c","Type":"ContainerDied","Data":"be5defb1f46964d549089cdbcf19c4034444ccce436e94a718ef13029b663ccc"} Jan 26 16:44:22 crc kubenswrapper[4896]: I0126 16:44:22.467407 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l2jxn" event={"ID":"086fdfb8-2544-426e-a7c8-1959ad9ac64c","Type":"ContainerStarted","Data":"a20210c9ca0ddfed54db0bed92b2f614e20e1775cabc6dee76e201b853b7a95d"} Jan 26 16:44:23 crc kubenswrapper[4896]: I0126 16:44:23.481310 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l2jxn" 
event={"ID":"086fdfb8-2544-426e-a7c8-1959ad9ac64c","Type":"ContainerStarted","Data":"f0d7bcae97d5a7ec5fe797a160ff07d83166e8d1b62c0521195d3daf2044c9b8"} Jan 26 16:44:24 crc kubenswrapper[4896]: I0126 16:44:24.494327 4896 generic.go:334] "Generic (PLEG): container finished" podID="086fdfb8-2544-426e-a7c8-1959ad9ac64c" containerID="f0d7bcae97d5a7ec5fe797a160ff07d83166e8d1b62c0521195d3daf2044c9b8" exitCode=0 Jan 26 16:44:24 crc kubenswrapper[4896]: I0126 16:44:24.494427 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l2jxn" event={"ID":"086fdfb8-2544-426e-a7c8-1959ad9ac64c","Type":"ContainerDied","Data":"f0d7bcae97d5a7ec5fe797a160ff07d83166e8d1b62c0521195d3daf2044c9b8"} Jan 26 16:44:30 crc kubenswrapper[4896]: I0126 16:44:30.907783 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l2jxn" event={"ID":"086fdfb8-2544-426e-a7c8-1959ad9ac64c","Type":"ContainerStarted","Data":"1a4f4208f40ab84c011de63b4413ce7b86037b3594648e79f402d453787c851e"} Jan 26 16:44:30 crc kubenswrapper[4896]: I0126 16:44:30.933366 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l2jxn" podStartSLOduration=6.992844926 podStartE2EDuration="9.933332014s" podCreationTimestamp="2026-01-26 16:44:21 +0000 UTC" firstStartedPulling="2026-01-26 16:44:22.469561616 +0000 UTC m=+4220.251442009" lastFinishedPulling="2026-01-26 16:44:25.410048684 +0000 UTC m=+4223.191929097" observedRunningTime="2026-01-26 16:44:30.927760658 +0000 UTC m=+4228.709641081" watchObservedRunningTime="2026-01-26 16:44:30.933332014 +0000 UTC m=+4228.715212397" Jan 26 16:44:31 crc kubenswrapper[4896]: I0126 16:44:31.406866 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l2jxn" Jan 26 16:44:31 crc kubenswrapper[4896]: I0126 16:44:31.407941 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-l2jxn" Jan 26 16:44:32 crc kubenswrapper[4896]: I0126 16:44:32.942883 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-l2jxn" podUID="086fdfb8-2544-426e-a7c8-1959ad9ac64c" containerName="registry-server" probeResult="failure" output=< Jan 26 16:44:32 crc kubenswrapper[4896]: timeout: failed to connect service ":50051" within 1s Jan 26 16:44:32 crc kubenswrapper[4896]: > Jan 26 16:44:41 crc kubenswrapper[4896]: I0126 16:44:41.472167 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l2jxn" Jan 26 16:44:41 crc kubenswrapper[4896]: I0126 16:44:41.524765 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l2jxn" Jan 26 16:44:41 crc kubenswrapper[4896]: I0126 16:44:41.714331 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l2jxn"] Jan 26 16:44:43 crc kubenswrapper[4896]: I0126 16:44:43.060126 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-l2jxn" podUID="086fdfb8-2544-426e-a7c8-1959ad9ac64c" containerName="registry-server" containerID="cri-o://1a4f4208f40ab84c011de63b4413ce7b86037b3594648e79f402d453787c851e" gracePeriod=2 Jan 26 16:44:43 crc kubenswrapper[4896]: I0126 16:44:43.655999 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l2jxn" Jan 26 16:44:43 crc kubenswrapper[4896]: I0126 16:44:43.730182 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/086fdfb8-2544-426e-a7c8-1959ad9ac64c-catalog-content\") pod \"086fdfb8-2544-426e-a7c8-1959ad9ac64c\" (UID: \"086fdfb8-2544-426e-a7c8-1959ad9ac64c\") " Jan 26 16:44:43 crc kubenswrapper[4896]: I0126 16:44:43.730390 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9p67\" (UniqueName: \"kubernetes.io/projected/086fdfb8-2544-426e-a7c8-1959ad9ac64c-kube-api-access-l9p67\") pod \"086fdfb8-2544-426e-a7c8-1959ad9ac64c\" (UID: \"086fdfb8-2544-426e-a7c8-1959ad9ac64c\") " Jan 26 16:44:43 crc kubenswrapper[4896]: I0126 16:44:43.730546 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/086fdfb8-2544-426e-a7c8-1959ad9ac64c-utilities\") pod \"086fdfb8-2544-426e-a7c8-1959ad9ac64c\" (UID: \"086fdfb8-2544-426e-a7c8-1959ad9ac64c\") " Jan 26 16:44:43 crc kubenswrapper[4896]: I0126 16:44:43.731942 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/086fdfb8-2544-426e-a7c8-1959ad9ac64c-utilities" (OuterVolumeSpecName: "utilities") pod "086fdfb8-2544-426e-a7c8-1959ad9ac64c" (UID: "086fdfb8-2544-426e-a7c8-1959ad9ac64c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:44:43 crc kubenswrapper[4896]: I0126 16:44:43.739861 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/086fdfb8-2544-426e-a7c8-1959ad9ac64c-kube-api-access-l9p67" (OuterVolumeSpecName: "kube-api-access-l9p67") pod "086fdfb8-2544-426e-a7c8-1959ad9ac64c" (UID: "086fdfb8-2544-426e-a7c8-1959ad9ac64c"). InnerVolumeSpecName "kube-api-access-l9p67". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:44:43 crc kubenswrapper[4896]: I0126 16:44:43.806540 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/086fdfb8-2544-426e-a7c8-1959ad9ac64c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "086fdfb8-2544-426e-a7c8-1959ad9ac64c" (UID: "086fdfb8-2544-426e-a7c8-1959ad9ac64c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:44:43 crc kubenswrapper[4896]: I0126 16:44:43.834247 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/086fdfb8-2544-426e-a7c8-1959ad9ac64c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:44:43 crc kubenswrapper[4896]: I0126 16:44:43.834312 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9p67\" (UniqueName: \"kubernetes.io/projected/086fdfb8-2544-426e-a7c8-1959ad9ac64c-kube-api-access-l9p67\") on node \"crc\" DevicePath \"\"" Jan 26 16:44:43 crc kubenswrapper[4896]: I0126 16:44:43.834328 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/086fdfb8-2544-426e-a7c8-1959ad9ac64c-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:44:44 crc kubenswrapper[4896]: I0126 16:44:44.073062 4896 generic.go:334] "Generic (PLEG): container finished" podID="086fdfb8-2544-426e-a7c8-1959ad9ac64c" containerID="1a4f4208f40ab84c011de63b4413ce7b86037b3594648e79f402d453787c851e" exitCode=0 Jan 26 16:44:44 crc kubenswrapper[4896]: I0126 16:44:44.073105 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l2jxn" event={"ID":"086fdfb8-2544-426e-a7c8-1959ad9ac64c","Type":"ContainerDied","Data":"1a4f4208f40ab84c011de63b4413ce7b86037b3594648e79f402d453787c851e"} Jan 26 16:44:44 crc kubenswrapper[4896]: I0126 16:44:44.073203 4896 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l2jxn" Jan 26 16:44:44 crc kubenswrapper[4896]: I0126 16:44:44.073252 4896 scope.go:117] "RemoveContainer" containerID="1a4f4208f40ab84c011de63b4413ce7b86037b3594648e79f402d453787c851e" Jan 26 16:44:44 crc kubenswrapper[4896]: I0126 16:44:44.075683 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l2jxn" event={"ID":"086fdfb8-2544-426e-a7c8-1959ad9ac64c","Type":"ContainerDied","Data":"a20210c9ca0ddfed54db0bed92b2f614e20e1775cabc6dee76e201b853b7a95d"} Jan 26 16:44:44 crc kubenswrapper[4896]: I0126 16:44:44.107038 4896 scope.go:117] "RemoveContainer" containerID="f0d7bcae97d5a7ec5fe797a160ff07d83166e8d1b62c0521195d3daf2044c9b8" Jan 26 16:44:44 crc kubenswrapper[4896]: I0126 16:44:44.131856 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l2jxn"] Jan 26 16:44:44 crc kubenswrapper[4896]: I0126 16:44:44.145877 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-l2jxn"] Jan 26 16:44:44 crc kubenswrapper[4896]: I0126 16:44:44.688875 4896 scope.go:117] "RemoveContainer" containerID="be5defb1f46964d549089cdbcf19c4034444ccce436e94a718ef13029b663ccc" Jan 26 16:44:44 crc kubenswrapper[4896]: I0126 16:44:44.750506 4896 scope.go:117] "RemoveContainer" containerID="1a4f4208f40ab84c011de63b4413ce7b86037b3594648e79f402d453787c851e" Jan 26 16:44:44 crc kubenswrapper[4896]: E0126 16:44:44.751273 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a4f4208f40ab84c011de63b4413ce7b86037b3594648e79f402d453787c851e\": container with ID starting with 1a4f4208f40ab84c011de63b4413ce7b86037b3594648e79f402d453787c851e not found: ID does not exist" containerID="1a4f4208f40ab84c011de63b4413ce7b86037b3594648e79f402d453787c851e" Jan 26 16:44:44 crc kubenswrapper[4896]: I0126 16:44:44.751331 
4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a4f4208f40ab84c011de63b4413ce7b86037b3594648e79f402d453787c851e"} err="failed to get container status \"1a4f4208f40ab84c011de63b4413ce7b86037b3594648e79f402d453787c851e\": rpc error: code = NotFound desc = could not find container \"1a4f4208f40ab84c011de63b4413ce7b86037b3594648e79f402d453787c851e\": container with ID starting with 1a4f4208f40ab84c011de63b4413ce7b86037b3594648e79f402d453787c851e not found: ID does not exist" Jan 26 16:44:44 crc kubenswrapper[4896]: I0126 16:44:44.751360 4896 scope.go:117] "RemoveContainer" containerID="f0d7bcae97d5a7ec5fe797a160ff07d83166e8d1b62c0521195d3daf2044c9b8" Jan 26 16:44:44 crc kubenswrapper[4896]: E0126 16:44:44.752933 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0d7bcae97d5a7ec5fe797a160ff07d83166e8d1b62c0521195d3daf2044c9b8\": container with ID starting with f0d7bcae97d5a7ec5fe797a160ff07d83166e8d1b62c0521195d3daf2044c9b8 not found: ID does not exist" containerID="f0d7bcae97d5a7ec5fe797a160ff07d83166e8d1b62c0521195d3daf2044c9b8" Jan 26 16:44:44 crc kubenswrapper[4896]: I0126 16:44:44.752971 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0d7bcae97d5a7ec5fe797a160ff07d83166e8d1b62c0521195d3daf2044c9b8"} err="failed to get container status \"f0d7bcae97d5a7ec5fe797a160ff07d83166e8d1b62c0521195d3daf2044c9b8\": rpc error: code = NotFound desc = could not find container \"f0d7bcae97d5a7ec5fe797a160ff07d83166e8d1b62c0521195d3daf2044c9b8\": container with ID starting with f0d7bcae97d5a7ec5fe797a160ff07d83166e8d1b62c0521195d3daf2044c9b8 not found: ID does not exist" Jan 26 16:44:44 crc kubenswrapper[4896]: I0126 16:44:44.752986 4896 scope.go:117] "RemoveContainer" containerID="be5defb1f46964d549089cdbcf19c4034444ccce436e94a718ef13029b663ccc" Jan 26 16:44:44 crc kubenswrapper[4896]: E0126 
16:44:44.753631 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be5defb1f46964d549089cdbcf19c4034444ccce436e94a718ef13029b663ccc\": container with ID starting with be5defb1f46964d549089cdbcf19c4034444ccce436e94a718ef13029b663ccc not found: ID does not exist" containerID="be5defb1f46964d549089cdbcf19c4034444ccce436e94a718ef13029b663ccc" Jan 26 16:44:44 crc kubenswrapper[4896]: I0126 16:44:44.753788 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be5defb1f46964d549089cdbcf19c4034444ccce436e94a718ef13029b663ccc"} err="failed to get container status \"be5defb1f46964d549089cdbcf19c4034444ccce436e94a718ef13029b663ccc\": rpc error: code = NotFound desc = could not find container \"be5defb1f46964d549089cdbcf19c4034444ccce436e94a718ef13029b663ccc\": container with ID starting with be5defb1f46964d549089cdbcf19c4034444ccce436e94a718ef13029b663ccc not found: ID does not exist" Jan 26 16:44:44 crc kubenswrapper[4896]: I0126 16:44:44.773952 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="086fdfb8-2544-426e-a7c8-1959ad9ac64c" path="/var/lib/kubelet/pods/086fdfb8-2544-426e-a7c8-1959ad9ac64c/volumes" Jan 26 16:45:00 crc kubenswrapper[4896]: I0126 16:45:00.171209 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490765-bwv4v"] Jan 26 16:45:00 crc kubenswrapper[4896]: E0126 16:45:00.172358 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="086fdfb8-2544-426e-a7c8-1959ad9ac64c" containerName="extract-content" Jan 26 16:45:00 crc kubenswrapper[4896]: I0126 16:45:00.172374 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="086fdfb8-2544-426e-a7c8-1959ad9ac64c" containerName="extract-content" Jan 26 16:45:00 crc kubenswrapper[4896]: E0126 16:45:00.172409 4896 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="086fdfb8-2544-426e-a7c8-1959ad9ac64c" containerName="registry-server" Jan 26 16:45:00 crc kubenswrapper[4896]: I0126 16:45:00.172416 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="086fdfb8-2544-426e-a7c8-1959ad9ac64c" containerName="registry-server" Jan 26 16:45:00 crc kubenswrapper[4896]: E0126 16:45:00.172434 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="086fdfb8-2544-426e-a7c8-1959ad9ac64c" containerName="extract-utilities" Jan 26 16:45:00 crc kubenswrapper[4896]: I0126 16:45:00.172441 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="086fdfb8-2544-426e-a7c8-1959ad9ac64c" containerName="extract-utilities" Jan 26 16:45:00 crc kubenswrapper[4896]: I0126 16:45:00.172819 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="086fdfb8-2544-426e-a7c8-1959ad9ac64c" containerName="registry-server" Jan 26 16:45:00 crc kubenswrapper[4896]: I0126 16:45:00.173721 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-bwv4v" Jan 26 16:45:00 crc kubenswrapper[4896]: I0126 16:45:00.177810 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 16:45:00 crc kubenswrapper[4896]: I0126 16:45:00.177848 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 16:45:00 crc kubenswrapper[4896]: I0126 16:45:00.182637 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490765-bwv4v"] Jan 26 16:45:00 crc kubenswrapper[4896]: I0126 16:45:00.219524 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bec655b4-80c3-4368-8077-13b6c2a5294b-config-volume\") pod \"collect-profiles-29490765-bwv4v\" 
(UID: \"bec655b4-80c3-4368-8077-13b6c2a5294b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-bwv4v" Jan 26 16:45:00 crc kubenswrapper[4896]: I0126 16:45:00.219664 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bec655b4-80c3-4368-8077-13b6c2a5294b-secret-volume\") pod \"collect-profiles-29490765-bwv4v\" (UID: \"bec655b4-80c3-4368-8077-13b6c2a5294b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-bwv4v" Jan 26 16:45:00 crc kubenswrapper[4896]: I0126 16:45:00.219760 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb8t8\" (UniqueName: \"kubernetes.io/projected/bec655b4-80c3-4368-8077-13b6c2a5294b-kube-api-access-nb8t8\") pod \"collect-profiles-29490765-bwv4v\" (UID: \"bec655b4-80c3-4368-8077-13b6c2a5294b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-bwv4v" Jan 26 16:45:00 crc kubenswrapper[4896]: I0126 16:45:00.322184 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bec655b4-80c3-4368-8077-13b6c2a5294b-config-volume\") pod \"collect-profiles-29490765-bwv4v\" (UID: \"bec655b4-80c3-4368-8077-13b6c2a5294b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-bwv4v" Jan 26 16:45:00 crc kubenswrapper[4896]: I0126 16:45:00.322271 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bec655b4-80c3-4368-8077-13b6c2a5294b-secret-volume\") pod \"collect-profiles-29490765-bwv4v\" (UID: \"bec655b4-80c3-4368-8077-13b6c2a5294b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-bwv4v" Jan 26 16:45:00 crc kubenswrapper[4896]: I0126 16:45:00.322355 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-nb8t8\" (UniqueName: \"kubernetes.io/projected/bec655b4-80c3-4368-8077-13b6c2a5294b-kube-api-access-nb8t8\") pod \"collect-profiles-29490765-bwv4v\" (UID: \"bec655b4-80c3-4368-8077-13b6c2a5294b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-bwv4v" Jan 26 16:45:00 crc kubenswrapper[4896]: I0126 16:45:00.323717 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bec655b4-80c3-4368-8077-13b6c2a5294b-config-volume\") pod \"collect-profiles-29490765-bwv4v\" (UID: \"bec655b4-80c3-4368-8077-13b6c2a5294b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-bwv4v" Jan 26 16:45:00 crc kubenswrapper[4896]: I0126 16:45:00.333815 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bec655b4-80c3-4368-8077-13b6c2a5294b-secret-volume\") pod \"collect-profiles-29490765-bwv4v\" (UID: \"bec655b4-80c3-4368-8077-13b6c2a5294b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-bwv4v" Jan 26 16:45:00 crc kubenswrapper[4896]: I0126 16:45:00.339557 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb8t8\" (UniqueName: \"kubernetes.io/projected/bec655b4-80c3-4368-8077-13b6c2a5294b-kube-api-access-nb8t8\") pod \"collect-profiles-29490765-bwv4v\" (UID: \"bec655b4-80c3-4368-8077-13b6c2a5294b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-bwv4v" Jan 26 16:45:00 crc kubenswrapper[4896]: I0126 16:45:00.514609 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-bwv4v" Jan 26 16:45:01 crc kubenswrapper[4896]: I0126 16:45:01.000727 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490765-bwv4v"] Jan 26 16:45:01 crc kubenswrapper[4896]: I0126 16:45:01.293097 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-bwv4v" event={"ID":"bec655b4-80c3-4368-8077-13b6c2a5294b","Type":"ContainerStarted","Data":"33f8f4dba38734ff04ec28eaedb778f781cef94a58b5bb582a5697f61f718cdf"} Jan 26 16:45:01 crc kubenswrapper[4896]: I0126 16:45:01.293509 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-bwv4v" event={"ID":"bec655b4-80c3-4368-8077-13b6c2a5294b","Type":"ContainerStarted","Data":"f139eae4b9a1c9005340ee96045d9455736247be803ab573135075b63fb0b82e"} Jan 26 16:45:01 crc kubenswrapper[4896]: I0126 16:45:01.321796 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-bwv4v" podStartSLOduration=1.321772004 podStartE2EDuration="1.321772004s" podCreationTimestamp="2026-01-26 16:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:45:01.311231397 +0000 UTC m=+4259.093111800" watchObservedRunningTime="2026-01-26 16:45:01.321772004 +0000 UTC m=+4259.103652397" Jan 26 16:45:02 crc kubenswrapper[4896]: I0126 16:45:02.308044 4896 generic.go:334] "Generic (PLEG): container finished" podID="bec655b4-80c3-4368-8077-13b6c2a5294b" containerID="33f8f4dba38734ff04ec28eaedb778f781cef94a58b5bb582a5697f61f718cdf" exitCode=0 Jan 26 16:45:02 crc kubenswrapper[4896]: I0126 16:45:02.308130 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-bwv4v" event={"ID":"bec655b4-80c3-4368-8077-13b6c2a5294b","Type":"ContainerDied","Data":"33f8f4dba38734ff04ec28eaedb778f781cef94a58b5bb582a5697f61f718cdf"} Jan 26 16:45:03 crc kubenswrapper[4896]: I0126 16:45:03.811230 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-bwv4v" Jan 26 16:45:03 crc kubenswrapper[4896]: I0126 16:45:03.924644 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bec655b4-80c3-4368-8077-13b6c2a5294b-config-volume\") pod \"bec655b4-80c3-4368-8077-13b6c2a5294b\" (UID: \"bec655b4-80c3-4368-8077-13b6c2a5294b\") " Jan 26 16:45:03 crc kubenswrapper[4896]: I0126 16:45:03.925048 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nb8t8\" (UniqueName: \"kubernetes.io/projected/bec655b4-80c3-4368-8077-13b6c2a5294b-kube-api-access-nb8t8\") pod \"bec655b4-80c3-4368-8077-13b6c2a5294b\" (UID: \"bec655b4-80c3-4368-8077-13b6c2a5294b\") " Jan 26 16:45:03 crc kubenswrapper[4896]: I0126 16:45:03.925348 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bec655b4-80c3-4368-8077-13b6c2a5294b-secret-volume\") pod \"bec655b4-80c3-4368-8077-13b6c2a5294b\" (UID: \"bec655b4-80c3-4368-8077-13b6c2a5294b\") " Jan 26 16:45:03 crc kubenswrapper[4896]: I0126 16:45:03.925503 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bec655b4-80c3-4368-8077-13b6c2a5294b-config-volume" (OuterVolumeSpecName: "config-volume") pod "bec655b4-80c3-4368-8077-13b6c2a5294b" (UID: "bec655b4-80c3-4368-8077-13b6c2a5294b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:45:03 crc kubenswrapper[4896]: I0126 16:45:03.926346 4896 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bec655b4-80c3-4368-8077-13b6c2a5294b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 16:45:03 crc kubenswrapper[4896]: I0126 16:45:03.931348 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bec655b4-80c3-4368-8077-13b6c2a5294b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bec655b4-80c3-4368-8077-13b6c2a5294b" (UID: "bec655b4-80c3-4368-8077-13b6c2a5294b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:45:03 crc kubenswrapper[4896]: I0126 16:45:03.934916 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bec655b4-80c3-4368-8077-13b6c2a5294b-kube-api-access-nb8t8" (OuterVolumeSpecName: "kube-api-access-nb8t8") pod "bec655b4-80c3-4368-8077-13b6c2a5294b" (UID: "bec655b4-80c3-4368-8077-13b6c2a5294b"). InnerVolumeSpecName "kube-api-access-nb8t8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:45:04 crc kubenswrapper[4896]: I0126 16:45:04.029667 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nb8t8\" (UniqueName: \"kubernetes.io/projected/bec655b4-80c3-4368-8077-13b6c2a5294b-kube-api-access-nb8t8\") on node \"crc\" DevicePath \"\"" Jan 26 16:45:04 crc kubenswrapper[4896]: I0126 16:45:04.029705 4896 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bec655b4-80c3-4368-8077-13b6c2a5294b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 16:45:04 crc kubenswrapper[4896]: I0126 16:45:04.329149 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-bwv4v" event={"ID":"bec655b4-80c3-4368-8077-13b6c2a5294b","Type":"ContainerDied","Data":"f139eae4b9a1c9005340ee96045d9455736247be803ab573135075b63fb0b82e"} Jan 26 16:45:04 crc kubenswrapper[4896]: I0126 16:45:04.329189 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f139eae4b9a1c9005340ee96045d9455736247be803ab573135075b63fb0b82e" Jan 26 16:45:04 crc kubenswrapper[4896]: I0126 16:45:04.329214 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-bwv4v" Jan 26 16:45:04 crc kubenswrapper[4896]: I0126 16:45:04.406937 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490720-2dvdl"] Jan 26 16:45:04 crc kubenswrapper[4896]: I0126 16:45:04.419297 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490720-2dvdl"] Jan 26 16:45:04 crc kubenswrapper[4896]: I0126 16:45:04.773072 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb" path="/var/lib/kubelet/pods/8e7ec9d7-04ae-4fb1-8f4f-652c714ee4eb/volumes" Jan 26 16:45:42 crc kubenswrapper[4896]: I0126 16:45:42.626638 4896 scope.go:117] "RemoveContainer" containerID="85468301d7e2946a5d33f0a2bbfdcd2e62fcb6abb066da3309e8786305429542" Jan 26 16:45:48 crc kubenswrapper[4896]: I0126 16:45:48.813480 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:45:48 crc kubenswrapper[4896]: I0126 16:45:48.814123 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:46:14 crc kubenswrapper[4896]: I0126 16:46:14.780127 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="f20afffa-3480-40b7-a7b8-116bccafaffb" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Jan 26 16:46:14 crc kubenswrapper[4896]: I0126 
16:46:14.780249 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="f20afffa-3480-40b7-a7b8-116bccafaffb" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 26 16:47:21 crc kubenswrapper[4896]: I0126 16:47:21.291043 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="f20afffa-3480-40b7-a7b8-116bccafaffb" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 26 16:47:21 crc kubenswrapper[4896]: E0126 16:47:21.316981 4896 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.154:50404->38.102.83.154:40761: write tcp 38.102.83.154:50404->38.102.83.154:40761: write: broken pipe Jan 26 16:47:21 crc kubenswrapper[4896]: I0126 16:47:21.392862 4896 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lw2tr container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 16:47:21 crc kubenswrapper[4896]: I0126 16:47:21.393143 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw2tr" podUID="3a8b421e-f755-4bc6-89f7-03aa4a309a87" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 16:47:21 crc kubenswrapper[4896]: I0126 16:47:21.393737 4896 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lw2tr container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded 
while awaiting headers)" start-of-body= Jan 26 16:47:21 crc kubenswrapper[4896]: I0126 16:47:21.394433 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw2tr" podUID="3a8b421e-f755-4bc6-89f7-03aa4a309a87" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 16:47:21 crc kubenswrapper[4896]: E0126 16:47:21.816793 4896 kubelet.go:2359] "Skipping pod synchronization" err="container runtime is down" Jan 26 16:47:24 crc kubenswrapper[4896]: E0126 16:47:21.924203 4896 kubelet.go:2359] "Skipping pod synchronization" err="container runtime is down" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.122848 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lz2hg" podUID="fc8a478d-ccdf-4d2b-b27f-58fde92fd7d4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": dial tcp 10.217.0.108:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: E0126 16:47:22.137321 4896 kubelet.go:2359] "Skipping pod synchronization" err="container runtime is down" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.175733 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mjzqx" podUID="29afb8bf-1d53-45a3-b67c-a1ebc26aa4ab" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/healthz\": dial tcp 10.217.0.114:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.176096 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lz2hg" podUID="fc8a478d-ccdf-4d2b-b27f-58fde92fd7d4" containerName="manager" 
probeResult="failure" output="Get \"http://10.217.0.108:8081/healthz\": dial tcp 10.217.0.108:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.178528 4896 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.178568 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.180002 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mjzqx" podUID="29afb8bf-1d53-45a3-b67c-a1ebc26aa4ab" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/healthz\": dial tcp 10.217.0.114:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.222899 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-s2bwr" podUID="61be8fa4-3ad2-4745-88ab-850db16c5707" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/healthz\": read tcp 10.217.0.2:34506->10.217.0.113:8081: read: connection reset by peer" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.223262 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-s2bwr" podUID="61be8fa4-3ad2-4745-88ab-850db16c5707" 
containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": read tcp 10.217.0.2:34490->10.217.0.113:8081: read: connection reset by peer" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.235894 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-s2bwr" podUID="61be8fa4-3ad2-4745-88ab-850db16c5707" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": dial tcp 10.217.0.113:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.235983 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-s2bwr" podUID="61be8fa4-3ad2-4745-88ab-850db16c5707" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/healthz\": dial tcp 10.217.0.113:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.236406 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-lvm6z" podUID="3eac11e1-3f7e-467c-b7f7-038d29e23848" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": read tcp 10.217.0.2:56964->10.217.0.111:8081: read: connection reset by peer" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.248159 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-9vwsl" podUID="7a813859-31b7-4729-865e-46c6ff663209" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": dial tcp 10.217.0.116:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.248473 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mjzqx" 
podUID="29afb8bf-1d53-45a3-b67c-a1ebc26aa4ab" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": dial tcp 10.217.0.114:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.249100 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-lvm6z" podUID="3eac11e1-3f7e-467c-b7f7-038d29e23848" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/healthz\": dial tcp 10.217.0.111:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.249320 4896 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.249355 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.250198 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-lvm6z" podUID="3eac11e1-3f7e-467c-b7f7-038d29e23848" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": dial tcp 10.217.0.111:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.253686 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.253728 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.254450 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4sl4s" podUID="1bf7b7e2-7b44-4a9d-aa3d-31ed21b66dc3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/healthz\": dial tcp 10.217.0.118:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.256539 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4sl4s" podUID="1bf7b7e2-7b44-4a9d-aa3d-31ed21b66dc3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": dial tcp 10.217.0.118:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.257176 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-j92tx" podUID="f6fe08af-0b15-4be3-8473-6a983d21ebe3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/healthz\": dial tcp 10.217.0.103:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.257566 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mjzqx" podUID="29afb8bf-1d53-45a3-b67c-a1ebc26aa4ab" containerName="manager" probeResult="failure" 
output="Get \"http://10.217.0.114:8081/readyz\": dial tcp 10.217.0.114:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.257647 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6wv5s" podUID="16c521f5-6f5f-43e3-a670-9f6ab6312d9c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": dial tcp 10.217.0.102:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.257907 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-9vwsl" podUID="7a813859-31b7-4729-865e-46c6ff663209" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": dial tcp 10.217.0.116:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.258127 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4sl4s" podUID="1bf7b7e2-7b44-4a9d-aa3d-31ed21b66dc3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": dial tcp 10.217.0.118:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.258717 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-lvm6z" podUID="3eac11e1-3f7e-467c-b7f7-038d29e23848" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/healthz\": dial tcp 10.217.0.111:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.262794 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-p82px" podUID="f2c6d7a1-690c-4364-a2ea-25e955a38782" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.119:8081/healthz\": dial tcp 10.217.0.119:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.264402 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-p82px" podUID="f2c6d7a1-690c-4364-a2ea-25e955a38782" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": dial tcp 10.217.0.119:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.264428 4896 patch_prober.go:28] interesting pod/metrics-server-b94dd49c-f92bj container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 26 16:47:24 crc kubenswrapper[4896]: [+]log ok Jan 26 16:47:24 crc kubenswrapper[4896]: [+]poststarthook/max-in-flight-filter ok Jan 26 16:47:24 crc kubenswrapper[4896]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 26 16:47:24 crc kubenswrapper[4896]: [-]metric-collection-timely failed: reason withheld Jan 26 16:47:24 crc kubenswrapper[4896]: [+]metadata-informer-sync ok Jan 26 16:47:24 crc kubenswrapper[4896]: livez check failed Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.264482 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-b94dd49c-f92bj" podUID="1672fa36-cd09-47c9-bb88-ab33ef7e7e66" containerName="metrics-server" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.264751 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4sl4s" podUID="1bf7b7e2-7b44-4a9d-aa3d-31ed21b66dc3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/healthz\": dial tcp 10.217.0.118:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: 
I0126 16:47:22.270103 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-cwcgv" podUID="a6a0fae6-65fb-46f8-9b0a-2cbae0665e6d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/healthz\": read tcp 10.217.0.2:42856->10.217.0.109:8081: read: connection reset by peer" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.281884 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-564965969-4v8sm" podUID="b8f08a13-e22d-4147-91c2-07c51dbfb83d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/healthz\": read tcp 10.217.0.2:40750->10.217.0.120:8081: read: connection reset by peer" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.305721 4896 patch_prober.go:28] interesting pod/logging-loki-distributor-5f678c8dd6-wxx4s container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.305788 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-wxx4s" podUID="790beb3d-3eed-4fef-849d-84a13c17f4a7" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.305876 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-9vwsl" podUID="7a813859-31b7-4729-865e-46c6ff663209" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/healthz\": context deadline 
exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.306642 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-9vwsl" podUID="7a813859-31b7-4729-865e-46c6ff663209" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/healthz\": dial tcp 10.217.0.116:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.332349 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-jx95g" podUID="b0480b36-40e2-426c-a1a8-e02e79fe7a17" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.345205 4896 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-5fzf2 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.345281 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5fzf2" podUID="61b9dfca-9718-4ee7-bd12-efd6ab5ca9b5" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.30:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.345320 4896 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-gtg7d container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.64:8080/healthz\": context 
deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.345378 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-gtg7d" podUID="22808cdf-7c01-491f-b3f4-d641898edf7b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.64:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.345423 4896 patch_prober.go:28] interesting pod/loki-operator-controller-manager-6575bc9f47-rkmnv container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.47:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.345468 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-6575bc9f47-rkmnv" podUID="dce71be2-915b-4c8e-9a4e-ebe6c278ddcf" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.47:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.345535 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd" podUID="6434b0ee-4d33-4422-a662-3315b2f5499c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.354926 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-csnd6" podUID="f493b2ea-1515-42db-ac1c-ea1a7121e070" containerName="manager" 
probeResult="failure" output="Get \"http://10.217.0.121:8081/healthz\": dial tcp 10.217.0.121:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.355022 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-z7j4w" podUID="b3272d78-4dde-4997-9316-24a84c00f4c8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.368818 4896 patch_prober.go:28] interesting pod/thanos-querier-c5586d8c9-f4qc2 container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.75:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.368894 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-c5586d8c9-f4qc2" podUID="5ac5577e-a45b-4e15-aa54-d3bd9c8ca092" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.75:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.386276 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-z7j4w" podUID="b3272d78-4dde-4997-9316-24a84c00f4c8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/healthz\": dial tcp 10.217.0.104:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.387722 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-5fd4748d4d-2sttl" 
podUID="2496a24c-43ae-4ce4-8996-60c6e7282bfa" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.388201 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-5fd4748d4d-2sttl" podUID="2496a24c-43ae-4ce4-8996-60c6e7282bfa" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": dial tcp 10.217.0.117:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.402106 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-8f6df5568-rvvb8" podUID="b54a446e-c064-4867-91fa-55f96ea9d87e" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.99:8081/healthz\": dial tcp 10.217.0.99:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.402188 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-kvnzb" podUID="8e0a37ed-b8af-49ae-9c6c-ed7097f46f3b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.402493 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-kvnzb" podUID="8e0a37ed-b8af-49ae-9c6c-ed7097f46f3b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.402710 4896 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/nova-operator-controller-manager-7bdb645866-kvnzb" podUID="8e0a37ed-b8af-49ae-9c6c-ed7097f46f3b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": dial tcp 10.217.0.112:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.402746 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-kvnzb" podUID="8e0a37ed-b8af-49ae-9c6c-ed7097f46f3b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/healthz\": dial tcp 10.217.0.112:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.402789 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-rp5b4" podUID="c44d6ef8-c52f-4a31-8a33-1ee01d7e969a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.101:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.402819 4896 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lw2tr container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": context deadline exceeded" start-of-body= Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.402834 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw2tr" podUID="3a8b421e-f755-4bc6-89f7-03aa4a309a87" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": context deadline exceeded" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.402861 4896 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lw2tr container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe 
status=failure output="Get \"https://10.217.0.42:8443/healthz\": context deadline exceeded" start-of-body= Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.402872 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw2tr" podUID="3a8b421e-f755-4bc6-89f7-03aa4a309a87" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": context deadline exceeded" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.404303 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-cwcgv" podUID="a6a0fae6-65fb-46f8-9b0a-2cbae0665e6d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.404353 4896 patch_prober.go:28] interesting pod/metrics-server-b94dd49c-f92bj container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.77:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.404371 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-b94dd49c-f92bj" podUID="1672fa36-cd09-47c9-bb88-ab33ef7e7e66" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.77:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.422700 4896 patch_prober.go:28] interesting pod/downloads-7954f5f757-rbmml container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.422755 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rbmml" podUID="a005fba8-0843-41a6-90eb-67a2aa6d0580" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.423831 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-5fd4748d4d-2sttl" podUID="2496a24c-43ae-4ce4-8996-60c6e7282bfa" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.424780 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-k7ctr" podUID="6b19c675-ac2e-4855-8368-79f9812f6a86" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.424963 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-5fd4748d4d-2sttl" podUID="2496a24c-43ae-4ce4-8996-60c6e7282bfa" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/healthz\": dial tcp 10.217.0.117:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.426744 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-fz5qh" podUID="8ac5298a-429c-47d6-9436-34bd2bd1fdec" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.110:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.424429 4896 patch_prober.go:28] interesting pod/downloads-7954f5f757-rbmml container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.427950 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-rbmml" podUID="a005fba8-0843-41a6-90eb-67a2aa6d0580" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.434221 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-fz5qh" podUID="8ac5298a-429c-47d6-9436-34bd2bd1fdec" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.446137 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-p82px" podUID="f2c6d7a1-690c-4364-a2ea-25e955a38782" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": dial tcp 10.217.0.119:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.446225 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-p82px" podUID="f2c6d7a1-690c-4364-a2ea-25e955a38782" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/healthz\": 
dial tcp 10.217.0.119:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.480781 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-rp5b4" podUID="c44d6ef8-c52f-4a31-8a33-1ee01d7e969a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": dial tcp 10.217.0.101:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.501327 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-948px" podUID="1c532b54-34b3-4b51-bbd3-1e3bd39d5958" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": dial tcp 10.217.0.107:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.501480 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-948px" podUID="1c532b54-34b3-4b51-bbd3-1e3bd39d5958" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/healthz\": dial tcp 10.217.0.107:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: E0126 16:47:22.561404 4896 kubelet.go:2359] "Skipping pod synchronization" err="container runtime is down" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.576470 4896 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.038760472s: [/var/lib/containers/storage/overlay/7a3ee1880e6c8f5f2741a765b9bd358e9a9bc4801f609a9235811c22d5833e8d/diff /var/log/pods/openstack_swift-storage-0_56f3d7e7-114a-4790-ac11-1d5d191bdf40/object-auditor/0.log]; will not log again for this container unless duration exceeds 2s Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.655867 4896 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-csnd6" podUID="f493b2ea-1515-42db-ac1c-ea1a7121e070" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": dial tcp 10.217.0.121:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.671789 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-5c8c8d84d-97894" podUID="8a59a62f-3748-43b7-baa0-cd121242caea" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8080/readyz\": dial tcp 10.217.0.91:8080: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.691743 4896 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": EOF" start-of-body= Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.691799 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": EOF" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.757790 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-5c8c8d84d-97894" podUID="8a59a62f-3748-43b7-baa0-cd121242caea" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8080/readyz\": dial tcp 10.217.0.91:8080: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.985796 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-564965969-4v8sm" podUID="b8f08a13-e22d-4147-91c2-07c51dbfb83d" containerName="manager" 
probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": dial tcp 10.217.0.120:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:22.985936 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-564965969-4v8sm" podUID="b8f08a13-e22d-4147-91c2-07c51dbfb83d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/healthz\": dial tcp 10.217.0.120:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:23.057906 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-8f6df5568-rvvb8" podUID="b54a446e-c064-4867-91fa-55f96ea9d87e" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.99:8081/readyz\": dial tcp 10.217.0.99:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:23.283777 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-csnd6" podUID="f493b2ea-1515-42db-ac1c-ea1a7121e070" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": dial tcp 10.217.0.121:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: E0126 16:47:23.361563 4896 kubelet.go:2359] "Skipping pod synchronization" err="container runtime is down" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:23.572463 4896 patch_prober.go:28] interesting pod/logging-loki-gateway-785c7cc549-thnm6 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:23.572912 4896 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-logging/logging-loki-gateway-785c7cc549-thnm6" podUID="9ef5e225-61d8-4ca8-9bc1-43e583ad71be" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:23.578481 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd" podUID="6434b0ee-4d33-4422-a662-3315b2f5499c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/healthz\": dial tcp 10.217.0.115:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:23.616896 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-bhztt" podUID="2496d14f-9aa6-4ee7-9db9-5bca63fa5a54" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:23.669211 4896 trace.go:236] Trace[1716042156]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-cell1-server-0" (26-Jan-2026 16:47:21.901) (total time: 1687ms): Jan 26 16:47:24 crc kubenswrapper[4896]: Trace[1716042156]: [1.687353888s] [1.687353888s] END Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:23.698988 4896 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 2.162099736s: [/var/lib/containers/storage/overlay/1fe8dbe9c7d84553d566e79258ae974c7428ae1454a6ce5497515d5467292816/diff ]; will not log again for this container unless duration exceeds 2s Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:23.699625 4896 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 2.162739512s: [/var/lib/containers/storage/overlay/1daa3ac2f51e4a703cfa849784df61d1e2d56d96f29cec692c444b6d6fdd1e3c/diff 
/var/log/pods/openshift-monitoring_kube-state-metrics-777cb5bd5d-x9wnt_14531d98-96ef-4629-9f9f-4797c4480849/kube-rbac-proxy-main/0.log]; will not log again for this container unless duration exceeds 2s Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:23.713293 4896 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 2.176405404s: [/var/lib/containers/storage/overlay/492ae21964c2eaf08011e65f13db17390aa1631ab32f832b64aa6a1f8b0f126f/diff /var/log/pods/openshift-monitoring_kube-state-metrics-777cb5bd5d-x9wnt_14531d98-96ef-4629-9f9f-4797c4480849/kube-state-metrics/0.log]; will not log again for this container unless duration exceeds 2s Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:23.713787 4896 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 2.176898836s: [/var/lib/containers/storage/overlay/7124a290952dcd3ae1e62b2e395950fe6079b1016de3f8b1b36b53f8cff90c0f/diff /var/log/pods/openshift-authentication-operator_authentication-operator-69f744f599-h47xb_551d129e-dcc5-4e55-89d1-68607191e923/authentication-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:23.726267 4896 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 2.188818967s: [/var/lib/containers/storage/overlay/b0e63f17a8b0cf049e6a2414521a04c1038e5207276100e39eb6d3f64d625926/diff /var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log]; will not log again for this container unless duration exceeds 2s Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:23.774653 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7t46g" podUID="8c799412-6936-4161-8d4e-244bc94c0d69" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.100:8081/readyz\": dial tcp 10.217.0.100:8081: connect: 
connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:23.775192 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6wv5s" podUID="16c521f5-6f5f-43e3-a670-9f6ab6312d9c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/healthz\": dial tcp 10.217.0.102:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:23.790148 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-j92tx" podUID="f6fe08af-0b15-4be3-8473-6a983d21ebe3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": dial tcp 10.217.0.103:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:23.790282 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7t46g" podUID="8c799412-6936-4161-8d4e-244bc94c0d69" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.100:8081/healthz\": dial tcp 10.217.0.100:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:23.887211 4896 trace.go:236] Trace[1402846057]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-2" (26-Jan-2026 16:47:22.108) (total time: 1778ms): Jan 26 16:47:24 crc kubenswrapper[4896]: Trace[1402846057]: [1.778689326s] [1.778689326s] END Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:23.925618 4896 trace.go:236] Trace[789603361]: "Calculate volume metrics of prometheus-metric-storage-db for pod openstack/prometheus-metric-storage-0" (26-Jan-2026 16:47:21.372) (total time: 2553ms): Jan 26 16:47:24 crc kubenswrapper[4896]: Trace[789603361]: [2.553260494s] [2.553260494s] END Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:23.925879 4896 fsHandler.go:133] fs: disk usage and inodes count on 
following dirs took 2.388924897s: [/var/lib/containers/storage/overlay/458c9fa4ab32f8d233f02e13349ea5574c15a06226b986c9863560d516c17a63/diff /var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-kvnzb_8e0a37ed-b8af-49ae-9c6c-ed7097f46f3b/manager/0.log]; will not log again for this container unless duration exceeds 2s Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:23.929436 4896 scope.go:117] "RemoveContainer" containerID="c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:23.931975 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-49kq4" podUID="03cf04a4-606b-44b9-9aee-86e4b0a8a1eb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": dial tcp 10.217.0.106:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:23.964537 4896 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 2.427750134s: [/var/lib/containers/storage/overlay/21e6e5fe884306ce093f2ae92bd7bbfc942877c33d683e7c23ffbeb8957d9098/diff /var/log/pods/metallb-system_frr-k8s-klnvj_07b177bf-083a-4714-bf1d-c07315a750d7/kube-rbac-proxy/0.log]; will not log again for this container unless duration exceeds 2s Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:23.980795 4896 trace.go:236] Trace[1373891927]: "Calculate volume metrics of storage for pod openshift-logging/logging-loki-index-gateway-0" (26-Jan-2026 16:47:22.892) (total time: 1088ms): Jan 26 16:47:24 crc kubenswrapper[4896]: Trace[1373891927]: [1.088540896s] [1.088540896s] END Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:24.005207 4896 generic.go:334] "Generic (PLEG): container finished" podID="2496a24c-43ae-4ce4-8996-60c6e7282bfa" containerID="ddd4e68d3f61942cdb100a3606518ece797493317bc103b7afddaee236179707" exitCode=1 Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 
16:47:24.018011 4896 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 2.481116096s: [/var/lib/containers/storage/overlay/b75e17443f9dace73ec4f62b5e027bacabcb38d760c9e20311b7912830cc1f1e/diff ]; will not log again for this container unless duration exceeds 2s Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:24.018532 4896 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 2.481627197s: [/var/lib/containers/storage/overlay/d92d8603bd1e18e73c83527c26c4ead06fd16949bf1cccc4f1e1688fd8b0fa93/diff /var/log/pods/openshift-dns-operator_dns-operator-744455d44c-p8n8h_4cb0cc2a-b5c6-4599-bfea-59703789fb7b/dns-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:24.032420 4896 generic.go:334] "Generic (PLEG): container finished" podID="7a813859-31b7-4729-865e-46c6ff663209" containerID="a08abfca1e048585b788ee12f2560a027cd7be2ef8a39d9177ec40f80d860455" exitCode=1 Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:24.071803 4896 generic.go:334] "Generic (PLEG): container finished" podID="dce71be2-915b-4c8e-9a4e-ebe6c278ddcf" containerID="28e222335ae74dc624fde17d726dc3b340c81aacc42bb147ce45a19f33d9dd80" exitCode=1 Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:24.090868 4896 generic.go:334] "Generic (PLEG): container finished" podID="1bf7b7e2-7b44-4a9d-aa3d-31ed21b66dc3" containerID="d1e62bf9e0f7d3d5cafd06266bc1eb3d1517e7b7d392cca2193e670a6089731e" exitCode=1 Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:24.129395 4896 trace.go:236] Trace[1909350770]: "Calculate volume metrics of registry-storage for pod openshift-image-registry/image-registry-66df7c8f76-7pnqc" (26-Jan-2026 16:47:22.985) (total time: 1143ms): Jan 26 16:47:24 crc kubenswrapper[4896]: Trace[1909350770]: [1.143588188s] [1.143588188s] END Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:24.185070 4896 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:24.201781 4896 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 2.836588134s: [/var/lib/containers/storage/overlay/878fe08404737efba372d7f0dffffc8acb151a6268a458004418f524b7e77b2d/diff /var/log/pods/openstack_glance-default-external-api-0_4c1c45d1-a81c-4b0d-b5ba-cac9e8704701/glance-httpd/0.log]; will not log again for this container unless duration exceeds 2s Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:24.213350 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-49kq4" podUID="03cf04a4-606b-44b9-9aee-86e4b0a8a1eb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/healthz\": dial tcp 10.217.0.106:8081: connect: connection refused" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:24.226084 4896 trace.go:236] Trace[1782905550]: "Calculate volume metrics of storage for pod openshift-logging/logging-loki-ingester-0" (26-Jan-2026 16:47:21.409) (total time: 2816ms): Jan 26 16:47:24 crc kubenswrapper[4896]: Trace[1782905550]: [2.816939115s] [2.816939115s] END Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:24.283463 4896 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="4efaa12b7d99b6a99a3d980d7e883403794e680c3d1321005022ec1dfcdfd5bd" exitCode=1 Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:24.304304 4896 generic.go:334] "Generic (PLEG): container finished" podID="fc8a478d-ccdf-4d2b-b27f-58fde92fd7d4" containerID="4800a8dee2868500e5b05f89f5f5159650a8d469986769677ad432e142380527" exitCode=1 Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:24.326964 4896 trace.go:236] Trace[2115187977]: "iptables ChainExists" (26-Jan-2026 16:47:21.569) (total time: 2757ms): Jan 26 
16:47:24 crc kubenswrapper[4896]: Trace[2115187977]: [2.757399574s] [2.757399574s] END Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:24.330420 4896 generic.go:334] "Generic (PLEG): container finished" podID="8ac5298a-429c-47d6-9436-34bd2bd1fdec" containerID="aaf4524b192c2c7f97c950a83e6da0105eb584fb0dd4b09efd3367288b059a3e" exitCode=1 Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:24.336249 4896 trace.go:236] Trace[1703876733]: "Calculate volume metrics of kube-api-access-gtlbr for pod openshift-cluster-machine-approver/machine-approver-56656f9798-gx9b8" (26-Jan-2026 16:47:21.373) (total time: 2962ms): Jan 26 16:47:24 crc kubenswrapper[4896]: Trace[1703876733]: [2.962840232s] [2.962840232s] END Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:24.385748 4896 trace.go:236] Trace[556367250]: "Calculate volume metrics of glance for pod openstack/glance-default-internal-api-0" (26-Jan-2026 16:47:21.523) (total time: 2861ms): Jan 26 16:47:24 crc kubenswrapper[4896]: Trace[556367250]: [2.861800019s] [2.861800019s] END Jan 26 16:47:24 crc kubenswrapper[4896]: W0126 16:47:24.492407 4896 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddce71be2_915b_4c8e_9a4e_ebe6c278ddcf.slice/crio-28e222335ae74dc624fde17d726dc3b340c81aacc42bb147ce45a19f33d9dd80.scope/cpu.max": read /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddce71be2_915b_4c8e_9a4e_ebe6c278ddcf.slice/crio-28e222335ae74dc624fde17d726dc3b340c81aacc42bb147ce45a19f33d9dd80.scope/cpu.max: no such device Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:24.588476 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="f20afffa-3480-40b7-a7b8-116bccafaffb" containerName="ceilometer-central-agent" probeResult="failure" output=< Jan 26 16:47:24 crc kubenswrapper[4896]: Unkown error: Expecting value: line 1 column 1 (char 0) Jan 26 16:47:24 crc kubenswrapper[4896]: > Jan 
26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:24.628297 4896 generic.go:334] "Generic (PLEG): container finished" podID="03cf04a4-606b-44b9-9aee-86e4b0a8a1eb" containerID="5bc3841ed3803ee092aa9c08877c399ba91bb990b1b695d56d4fc8c54493fcff" exitCode=1 Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:24.652781 4896 generic.go:334] "Generic (PLEG): container finished" podID="3eac11e1-3f7e-467c-b7f7-038d29e23848" containerID="5f4b22237a47ecb49da645b69fd0e372c5e23bd0876406a125ae21521409e3a4" exitCode=1 Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:24.667948 4896 generic.go:334] "Generic (PLEG): container finished" podID="c44d6ef8-c52f-4a31-8a33-1ee01d7e969a" containerID="b42e14c220cbcf028eca47b182a6af9adee6161d71e69217862ca39600afa7b6" exitCode=1 Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:24.682565 4896 generic.go:334] "Generic (PLEG): container finished" podID="8c799412-6936-4161-8d4e-244bc94c0d69" containerID="bb3f921ecaa845f65a69ffef86f443f5582c49d483d548cf73282ee0d3ac44e5" exitCode=1 Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:24.694081 4896 generic.go:334] "Generic (PLEG): container finished" podID="b54a446e-c064-4867-91fa-55f96ea9d87e" containerID="bd50b1361ff6e2e62649d87537458d691b6600815324d1333bb39b04dbc59a9e" exitCode=1 Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:24.701815 4896 generic.go:334] "Generic (PLEG): container finished" podID="8a59a62f-3748-43b7-baa0-cd121242caea" containerID="7b9c7979e890a17d08bce07a2910cfcf4448abc35922a8a7f24cf57380323923" exitCode=1 Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:24.706860 4896 generic.go:334] "Generic (PLEG): container finished" podID="29afb8bf-1d53-45a3-b67c-a1ebc26aa4ab" containerID="02879ab7463d4c4e7de784ca55105138c7ac1121d4d67280e21e350269ad3124" exitCode=1 Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:24.734356 4896 generic.go:334] "Generic (PLEG): container finished" podID="f2c6d7a1-690c-4364-a2ea-25e955a38782" 
containerID="63909c4d3d73c7f2ede557fc27491a2dfc1c3dfacce922419ae1f06858aa8e9e" exitCode=1 Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:24.742798 4896 generic.go:334] "Generic (PLEG): container finished" podID="b3272d78-4dde-4997-9316-24a84c00f4c8" containerID="dacfd3b17398799ba9edec041b97ea0be8a9c5ab0ba472748ad9e611611d8d88" exitCode=1 Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:24.806687 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log" Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:24.810849 4896 generic.go:334] "Generic (PLEG): container finished" podID="ef543e1b-8068-4ea3-b32a-61027b32e95d" containerID="cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3" exitCode=1 Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:24.833040 4896 generic.go:334] "Generic (PLEG): container finished" podID="1c532b54-34b3-4b51-bbd3-1e3bd39d5958" containerID="a7b5be41a1bb25ffba3482c2e43a2fbc8cd5f93dea0e449d4f0ed628cc41ad82" exitCode=1 Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:24.841732 4896 generic.go:334] "Generic (PLEG): container finished" podID="16c521f5-6f5f-43e3-a670-9f6ab6312d9c" containerID="4d8f3c8f56321d87a9f879f68195efb8365037e4355738a9a76f5b1e76f20912" exitCode=1 Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:24.852096 4896 generic.go:334] "Generic (PLEG): container finished" podID="8e0a37ed-b8af-49ae-9c6c-ed7097f46f3b" containerID="e747ef259672c696c03d07757ac863f265d1858dea318930960f5d45872c81c7" exitCode=1 Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:24.859209 4896 generic.go:334] "Generic (PLEG): container finished" podID="b0480b36-40e2-426c-a1a8-e02e79fe7a17" containerID="de1e5ab00beed06142d5df17917b05443f7dc5c87ac411df23cb7ba060cb661f" exitCode=1 Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:24.868795 4896 generic.go:334] "Generic (PLEG): container finished" 
podID="a6a0fae6-65fb-46f8-9b0a-2cbae0665e6d" containerID="fa3bc10e1988ac75a7a59f040338a30ad791a6f071a826002a248307d997b577" exitCode=1 Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:24.879349 4896 generic.go:334] "Generic (PLEG): container finished" podID="f6fe08af-0b15-4be3-8473-6a983d21ebe3" containerID="1b1d342276e5aa96cf6fd1af2f7b1cf53b80b2ef361428f5e91f7389ab22cfc3" exitCode=1 Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:24.899100 4896 generic.go:334] "Generic (PLEG): container finished" podID="f493b2ea-1515-42db-ac1c-ea1a7121e070" containerID="111634478bf1516e33a6ae93ff14a4c5bf7f6cdffd17c3a3d2b9361aa72e738c" exitCode=1 Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:24.909769 4896 generic.go:334] "Generic (PLEG): container finished" podID="61be8fa4-3ad2-4745-88ab-850db16c5707" containerID="935e806b0d1615e24522078301b91d1977b737d88e10d6089143548c5b960276" exitCode=1 Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:24.920972 4896 generic.go:334] "Generic (PLEG): container finished" podID="bc769396-13b5-4066-b7fc-93a3f87a50ff" containerID="159f7fc7a1f1c3ad5e00f288de4260d048d4b849aa2351b9ca11cad0dae92873" exitCode=1 Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:24.924967 4896 generic.go:334] "Generic (PLEG): container finished" podID="b8f08a13-e22d-4147-91c2-07c51dbfb83d" containerID="9ea0acb6629981ce1dd85e1c5104067ac167bd81bcdd52e47d4ddff1c26b5601" exitCode=1 Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:24.933999 4896 generic.go:334] "Generic (PLEG): container finished" podID="6434b0ee-4d33-4422-a662-3315b2f5499c" containerID="19980e70101d0ab9dbc27d61b71f140d01421676f372b3341300a59254b680d0" exitCode=1 Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:24.965397 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5fd4748d4d-2sttl" 
event={"ID":"2496a24c-43ae-4ce4-8996-60c6e7282bfa","Type":"ContainerDied","Data":"ddd4e68d3f61942cdb100a3606518ece797493317bc103b7afddaee236179707"} Jan 26 16:47:24 crc kubenswrapper[4896]: I0126 16:47:24.999204 4896 scope.go:117] "RemoveContainer" containerID="28e222335ae74dc624fde17d726dc3b340c81aacc42bb147ce45a19f33d9dd80" Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.021437 4896 scope.go:117] "RemoveContainer" containerID="ddd4e68d3f61942cdb100a3606518ece797493317bc103b7afddaee236179707" Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.124709 4896 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.124750 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-9vwsl" event={"ID":"7a813859-31b7-4729-865e-46c6ff663209","Type":"ContainerDied","Data":"a08abfca1e048585b788ee12f2560a027cd7be2ef8a39d9177ec40f80d860455"} Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.124775 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-6575bc9f47-rkmnv" event={"ID":"dce71be2-915b-4c8e-9a4e-ebe6c278ddcf","Type":"ContainerDied","Data":"28e222335ae74dc624fde17d726dc3b340c81aacc42bb147ce45a19f33d9dd80"} Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.124792 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4sl4s" event={"ID":"1bf7b7e2-7b44-4a9d-aa3d-31ed21b66dc3","Type":"ContainerDied","Data":"d1e62bf9e0f7d3d5cafd06266bc1eb3d1517e7b7d392cca2193e670a6089731e"} Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.124803 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"4efaa12b7d99b6a99a3d980d7e883403794e680c3d1321005022ec1dfcdfd5bd"} Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.124831 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lz2hg" event={"ID":"fc8a478d-ccdf-4d2b-b27f-58fde92fd7d4","Type":"ContainerDied","Data":"4800a8dee2868500e5b05f89f5f5159650a8d469986769677ad432e142380527"} Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.124843 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-fz5qh" event={"ID":"8ac5298a-429c-47d6-9436-34bd2bd1fdec","Type":"ContainerDied","Data":"aaf4524b192c2c7f97c950a83e6da0105eb584fb0dd4b09efd3367288b059a3e"} Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.124855 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-49kq4" event={"ID":"03cf04a4-606b-44b9-9aee-86e4b0a8a1eb","Type":"ContainerDied","Data":"5bc3841ed3803ee092aa9c08877c399ba91bb990b1b695d56d4fc8c54493fcff"} Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.124866 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-lvm6z" event={"ID":"3eac11e1-3f7e-467c-b7f7-038d29e23848","Type":"ContainerDied","Data":"5f4b22237a47ecb49da645b69fd0e372c5e23bd0876406a125ae21521409e3a4"} Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.124879 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-rp5b4" event={"ID":"c44d6ef8-c52f-4a31-8a33-1ee01d7e969a","Type":"ContainerDied","Data":"b42e14c220cbcf028eca47b182a6af9adee6161d71e69217862ca39600afa7b6"} Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.124905 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7t46g" event={"ID":"8c799412-6936-4161-8d4e-244bc94c0d69","Type":"ContainerDied","Data":"bb3f921ecaa845f65a69ffef86f443f5582c49d483d548cf73282ee0d3ac44e5"} Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.124944 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-8f6df5568-rvvb8" event={"ID":"b54a446e-c064-4867-91fa-55f96ea9d87e","Type":"ContainerDied","Data":"bd50b1361ff6e2e62649d87537458d691b6600815324d1333bb39b04dbc59a9e"} Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.124964 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5c8c8d84d-97894" event={"ID":"8a59a62f-3748-43b7-baa0-cd121242caea","Type":"ContainerDied","Data":"7b9c7979e890a17d08bce07a2910cfcf4448abc35922a8a7f24cf57380323923"} Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.124979 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mjzqx" event={"ID":"29afb8bf-1d53-45a3-b67c-a1ebc26aa4ab","Type":"ContainerDied","Data":"02879ab7463d4c4e7de784ca55105138c7ac1121d4d67280e21e350269ad3124"} Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.124996 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-p82px" event={"ID":"f2c6d7a1-690c-4364-a2ea-25e955a38782","Type":"ContainerDied","Data":"63909c4d3d73c7f2ede557fc27491a2dfc1c3dfacce922419ae1f06858aa8e9e"} Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.125007 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-z7j4w" event={"ID":"b3272d78-4dde-4997-9316-24a84c00f4c8","Type":"ContainerDied","Data":"dacfd3b17398799ba9edec041b97ea0be8a9c5ab0ba472748ad9e611611d8d88"} Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 
16:47:25.125020 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerDied","Data":"cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3"} Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.125045 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-948px" event={"ID":"1c532b54-34b3-4b51-bbd3-1e3bd39d5958","Type":"ContainerDied","Data":"a7b5be41a1bb25ffba3482c2e43a2fbc8cd5f93dea0e449d4f0ed628cc41ad82"} Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.125056 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6wv5s" event={"ID":"16c521f5-6f5f-43e3-a670-9f6ab6312d9c","Type":"ContainerDied","Data":"4d8f3c8f56321d87a9f879f68195efb8365037e4355738a9a76f5b1e76f20912"} Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.125071 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-kvnzb" event={"ID":"8e0a37ed-b8af-49ae-9c6c-ed7097f46f3b","Type":"ContainerDied","Data":"e747ef259672c696c03d07757ac863f265d1858dea318930960f5d45872c81c7"} Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.125082 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-jx95g" event={"ID":"b0480b36-40e2-426c-a1a8-e02e79fe7a17","Type":"ContainerDied","Data":"de1e5ab00beed06142d5df17917b05443f7dc5c87ac411df23cb7ba060cb661f"} Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.125093 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-cwcgv" 
event={"ID":"a6a0fae6-65fb-46f8-9b0a-2cbae0665e6d","Type":"ContainerDied","Data":"fa3bc10e1988ac75a7a59f040338a30ad791a6f071a826002a248307d997b577"} Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.125112 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-j92tx" event={"ID":"f6fe08af-0b15-4be3-8473-6a983d21ebe3","Type":"ContainerDied","Data":"1b1d342276e5aa96cf6fd1af2f7b1cf53b80b2ef361428f5e91f7389ab22cfc3"} Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.125124 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-csnd6" event={"ID":"f493b2ea-1515-42db-ac1c-ea1a7121e070","Type":"ContainerDied","Data":"111634478bf1516e33a6ae93ff14a4c5bf7f6cdffd17c3a3d2b9361aa72e738c"} Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.125135 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-s2bwr" event={"ID":"61be8fa4-3ad2-4745-88ab-850db16c5707","Type":"ContainerDied","Data":"935e806b0d1615e24522078301b91d1977b737d88e10d6089143548c5b960276"} Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.125147 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7pgcx" event={"ID":"bc769396-13b5-4066-b7fc-93a3f87a50ff","Type":"ContainerDied","Data":"159f7fc7a1f1c3ad5e00f288de4260d048d4b849aa2351b9ca11cad0dae92873"} Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.125160 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-4v8sm" event={"ID":"b8f08a13-e22d-4147-91c2-07c51dbfb83d","Type":"ContainerDied","Data":"9ea0acb6629981ce1dd85e1c5104067ac167bd81bcdd52e47d4ddff1c26b5601"} Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.125171 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd" event={"ID":"6434b0ee-4d33-4422-a662-3315b2f5499c","Type":"ContainerDied","Data":"19980e70101d0ab9dbc27d61b71f140d01421676f372b3341300a59254b680d0"} Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.127653 4896 scope.go:117] "RemoveContainer" containerID="c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d" Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.145853 4896 scope.go:117] "RemoveContainer" containerID="bd50b1361ff6e2e62649d87537458d691b6600815324d1333bb39b04dbc59a9e" Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.152945 4896 scope.go:117] "RemoveContainer" containerID="7b9c7979e890a17d08bce07a2910cfcf4448abc35922a8a7f24cf57380323923" Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.155595 4896 scope.go:117] "RemoveContainer" containerID="d1e62bf9e0f7d3d5cafd06266bc1eb3d1517e7b7d392cca2193e670a6089731e" Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.158631 4896 scope.go:117] "RemoveContainer" containerID="fa3bc10e1988ac75a7a59f040338a30ad791a6f071a826002a248307d997b577" Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.159593 4896 scope.go:117] "RemoveContainer" containerID="159f7fc7a1f1c3ad5e00f288de4260d048d4b849aa2351b9ca11cad0dae92873" Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.167183 4896 scope.go:117] "RemoveContainer" containerID="aaf4524b192c2c7f97c950a83e6da0105eb584fb0dd4b09efd3367288b059a3e" Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.167406 4896 scope.go:117] "RemoveContainer" containerID="bb3f921ecaa845f65a69ffef86f443f5582c49d483d548cf73282ee0d3ac44e5" Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.168348 4896 scope.go:117] "RemoveContainer" containerID="dacfd3b17398799ba9edec041b97ea0be8a9c5ab0ba472748ad9e611611d8d88" Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.170359 4896 scope.go:117] "RemoveContainer" 
containerID="5bc3841ed3803ee092aa9c08877c399ba91bb990b1b695d56d4fc8c54493fcff" Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.172502 4896 scope.go:117] "RemoveContainer" containerID="de1e5ab00beed06142d5df17917b05443f7dc5c87ac411df23cb7ba060cb661f" Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.173699 4896 scope.go:117] "RemoveContainer" containerID="935e806b0d1615e24522078301b91d1977b737d88e10d6089143548c5b960276" Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.175408 4896 scope.go:117] "RemoveContainer" containerID="cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3" Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.176455 4896 scope.go:117] "RemoveContainer" containerID="4d8f3c8f56321d87a9f879f68195efb8365037e4355738a9a76f5b1e76f20912" Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.177332 4896 scope.go:117] "RemoveContainer" containerID="4800a8dee2868500e5b05f89f5f5159650a8d469986769677ad432e142380527" Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.178429 4896 scope.go:117] "RemoveContainer" containerID="5f4b22237a47ecb49da645b69fd0e372c5e23bd0876406a125ae21521409e3a4" Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.178746 4896 scope.go:117] "RemoveContainer" containerID="63909c4d3d73c7f2ede557fc27491a2dfc1c3dfacce922419ae1f06858aa8e9e" Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.179964 4896 scope.go:117] "RemoveContainer" containerID="b42e14c220cbcf028eca47b182a6af9adee6161d71e69217862ca39600afa7b6" Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.193497 4896 scope.go:117] "RemoveContainer" containerID="4efaa12b7d99b6a99a3d980d7e883403794e680c3d1321005022ec1dfcdfd5bd" Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.197218 4896 scope.go:117] "RemoveContainer" containerID="a7b5be41a1bb25ffba3482c2e43a2fbc8cd5f93dea0e449d4f0ed628cc41ad82" Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.200552 4896 scope.go:117] "RemoveContainer" 
containerID="02879ab7463d4c4e7de784ca55105138c7ac1121d4d67280e21e350269ad3124" Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.209444 4896 scope.go:117] "RemoveContainer" containerID="e747ef259672c696c03d07757ac863f265d1858dea318930960f5d45872c81c7" Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.209917 4896 scope.go:117] "RemoveContainer" containerID="9ea0acb6629981ce1dd85e1c5104067ac167bd81bcdd52e47d4ddff1c26b5601" Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.210047 4896 scope.go:117] "RemoveContainer" containerID="111634478bf1516e33a6ae93ff14a4c5bf7f6cdffd17c3a3d2b9361aa72e738c" Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.210150 4896 scope.go:117] "RemoveContainer" containerID="19980e70101d0ab9dbc27d61b71f140d01421676f372b3341300a59254b680d0" Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.229439 4896 scope.go:117] "RemoveContainer" containerID="1b1d342276e5aa96cf6fd1af2f7b1cf53b80b2ef361428f5e91f7389ab22cfc3" Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.262842 4896 scope.go:117] "RemoveContainer" containerID="a08abfca1e048585b788ee12f2560a027cd7be2ef8a39d9177ec40f80d860455" Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.527922 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-49kq4" Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.723185 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-6575bc9f47-rkmnv" Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.723263 4896 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operators-redhat/loki-operator-controller-manager-6575bc9f47-rkmnv" Jan 26 16:47:25 crc kubenswrapper[4896]: E0126 16:47:25.760962 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c532b54_34b3_4b51_bbd3_1e3bd39d5958.slice/crio-a7b5be41a1bb25ffba3482c2e43a2fbc8cd5f93dea0e449d4f0ed628cc41ad82.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2c6d7a1_690c_4364_a2ea_25e955a38782.slice/crio-conmon-63909c4d3d73c7f2ede557fc27491a2dfc1c3dfacce922419ae1f06858aa8e9e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0480b36_40e2_426c_a1a8_e02e79fe7a17.slice/crio-conmon-de1e5ab00beed06142d5df17917b05443f7dc5c87ac411df23cb7ba060cb661f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8f08a13_e22d_4147_91c2_07c51dbfb83d.slice/crio-conmon-9ea0acb6629981ce1dd85e1c5104067ac167bd81bcdd52e47d4ddff1c26b5601.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c532b54_34b3_4b51_bbd3_1e3bd39d5958.slice/crio-conmon-a7b5be41a1bb25ffba3482c2e43a2fbc8cd5f93dea0e449d4f0ed628cc41ad82.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16c521f5_6f5f_43e3_a670_9f6ab6312d9c.slice/crio-conmon-4d8f3c8f56321d87a9f879f68195efb8365037e4355738a9a76f5b1e76f20912.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3eac11e1_3f7e_467c_b7f7_038d29e23848.slice/crio-conmon-5f4b22237a47ecb49da645b69fd0e372c5e23bd0876406a125ae21521409e3a4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6434b0ee_4d33_4422_a662_3315b2f5499c.slice/crio-conmon-19980e70101d0ab9dbc27d61b71f140d01421676f372b3341300a59254b680d0.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddce71be2_915b_4c8e_9a4e_ebe6c278ddcf.slice/crio-conmon-28e222335ae74dc624fde17d726dc3b340c81aacc42bb147ce45a19f33d9dd80.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2496a24c_43ae_4ce4_8996_60c6e7282bfa.slice/crio-conmon-ddd4e68d3f61942cdb100a3606518ece797493317bc103b7afddaee236179707.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3272d78_4dde_4997_9316_24a84c00f4c8.slice/crio-conmon-dacfd3b17398799ba9edec041b97ea0be8a9c5ab0ba472748ad9e611611d8d88.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29afb8bf_1d53_45a3_b67c_a1ebc26aa4ab.slice/crio-conmon-02879ab7463d4c4e7de784ca55105138c7ac1121d4d67280e21e350269ad3124.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ac5298a_429c_47d6_9436_34bd2bd1fdec.slice/crio-conmon-aaf4524b192c2c7f97c950a83e6da0105eb584fb0dd4b09efd3367288b059a3e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6fe08af_0b15_4be3_8473_6a983d21ebe3.slice/crio-conmon-1b1d342276e5aa96cf6fd1af2f7b1cf53b80b2ef361428f5e91f7389ab22cfc3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddce71be2_915b_4c8e_9a4e_ebe6c278ddcf.slice/crio-28e222335ae74dc624fde17d726dc3b340c81aacc42bb147ce45a19f33d9dd80.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c799412_6936_4161_8d4e_244bc94c0d69.slice/crio-conmon-bb3f921ecaa845f65a69ffef86f443f5582c49d483d548cf73282ee0d3ac44e5.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc8a478d_ccdf_4d2b_b27f_58fde92fd7d4.slice/crio-conmon-4800a8dee2868500e5b05f89f5f5159650a8d469986769677ad432e142380527.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6a0fae6_65fb_46f8_9b0a_2cbae0665e6d.slice/crio-conmon-fa3bc10e1988ac75a7a59f040338a30ad791a6f071a826002a248307d997b577.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61be8fa4_3ad2_4745_88ab_850db16c5707.slice/crio-conmon-935e806b0d1615e24522078301b91d1977b737d88e10d6089143548c5b960276.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a59a62f_3748_43b7_baa0_cd121242caea.slice/crio-conmon-7b9c7979e890a17d08bce07a2910cfcf4448abc35922a8a7f24cf57380323923.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bf7b7e2_7b44_4a9d_aa3d_31ed21b66dc3.slice/crio-conmon-d1e62bf9e0f7d3d5cafd06266bc1eb3d1517e7b7d392cca2193e670a6089731e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a813859_31b7_4729_865e_46c6ff663209.slice/crio-conmon-a08abfca1e048585b788ee12f2560a027cd7be2ef8a39d9177ec40f80d860455.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb54a446e_c064_4867_91fa_55f96ea9d87e.slice/crio-conmon-bd50b1361ff6e2e62649d87537458d691b6600815324d1333bb39b04dbc59a9e.scope\": RecentStats: unable to find data in memory cache]" Jan 26 16:47:25 crc kubenswrapper[4896]: E0126 16:47:25.763784 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c532b54_34b3_4b51_bbd3_1e3bd39d5958.slice/crio-a7b5be41a1bb25ffba3482c2e43a2fbc8cd5f93dea0e449d4f0ed628cc41ad82.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc8a478d_ccdf_4d2b_b27f_58fde92fd7d4.slice/crio-conmon-4800a8dee2868500e5b05f89f5f5159650a8d469986769677ad432e142380527.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6434b0ee_4d33_4422_a662_3315b2f5499c.slice/crio-conmon-19980e70101d0ab9dbc27d61b71f140d01421676f372b3341300a59254b680d0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb54a446e_c064_4867_91fa_55f96ea9d87e.slice/crio-conmon-bd50b1361ff6e2e62649d87537458d691b6600815324d1333bb39b04dbc59a9e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e0a37ed_b8af_49ae_9c6c_ed7097f46f3b.slice/crio-conmon-e747ef259672c696c03d07757ac863f265d1858dea318930960f5d45872c81c7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ac5298a_429c_47d6_9436_34bd2bd1fdec.slice/crio-conmon-aaf4524b192c2c7f97c950a83e6da0105eb584fb0dd4b09efd3367288b059a3e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2c6d7a1_690c_4364_a2ea_25e955a38782.slice/crio-conmon-63909c4d3d73c7f2ede557fc27491a2dfc1c3dfacce922419ae1f06858aa8e9e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddce71be2_915b_4c8e_9a4e_ebe6c278ddcf.slice/crio-28e222335ae74dc624fde17d726dc3b340c81aacc42bb147ce45a19f33d9dd80.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6fe08af_0b15_4be3_8473_6a983d21ebe3.slice/crio-conmon-1b1d342276e5aa96cf6fd1af2f7b1cf53b80b2ef361428f5e91f7389ab22cfc3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0480b36_40e2_426c_a1a8_e02e79fe7a17.slice/crio-conmon-de1e5ab00beed06142d5df17917b05443f7dc5c87ac411df23cb7ba060cb661f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bf7b7e2_7b44_4a9d_aa3d_31ed21b66dc3.slice/crio-conmon-d1e62bf9e0f7d3d5cafd06266bc1eb3d1517e7b7d392cca2193e670a6089731e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddce71be2_915b_4c8e_9a4e_ebe6c278ddcf.slice/crio-conmon-28e222335ae74dc624fde17d726dc3b340c81aacc42bb147ce45a19f33d9dd80.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a813859_31b7_4729_865e_46c6ff663209.slice/crio-conmon-a08abfca1e048585b788ee12f2560a027cd7be2ef8a39d9177ec40f80d860455.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8f08a13_e22d_4147_91c2_07c51dbfb83d.slice/crio-conmon-9ea0acb6629981ce1dd85e1c5104067ac167bd81bcdd52e47d4ddff1c26b5601.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a59a62f_3748_43b7_baa0_cd121242caea.slice/crio-conmon-7b9c7979e890a17d08bce07a2910cfcf4448abc35922a8a7f24cf57380323923.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61be8fa4_3ad2_4745_88ab_850db16c5707.slice/crio-conmon-935e806b0d1615e24522078301b91d1977b737d88e10d6089143548c5b960276.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6a0fae6_65fb_46f8_9b0a_2cbae0665e6d.slice/crio-conmon-fa3bc10e1988ac75a7a59f040338a30ad791a6f071a826002a248307d997b577.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2496a24c_43ae_4ce4_8996_60c6e7282bfa.slice/crio-conmon-ddd4e68d3f61942cdb100a3606518ece797493317bc103b7afddaee236179707.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29afb8bf_1d53_45a3_b67c_a1ebc26aa4ab.slice/crio-conmon-02879ab7463d4c4e7de784ca55105138c7ac1121d4d67280e21e350269ad3124.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3eac11e1_3f7e_467c_b7f7_038d29e23848.slice/crio-conmon-5f4b22237a47ecb49da645b69fd0e372c5e23bd0876406a125ae21521409e3a4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc44d6ef8_c52f_4a31_8a33_1ee01d7e969a.slice/crio-conmon-b42e14c220cbcf028eca47b182a6af9adee6161d71e69217862ca39600afa7b6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e0a37ed_b8af_49ae_9c6c_ed7097f46f3b.slice/crio-e747ef259672c696c03d07757ac863f265d1858dea318930960f5d45872c81c7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-conmon-cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16c521f5_6f5f_43e3_a670_9f6ab6312d9c.slice/crio-conmon-4d8f3c8f56321d87a9f879f68195efb8365037e4355738a9a76f5b1e76f20912.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c532b54_34b3_4b51_bbd3_1e3bd39d5958.slice/crio-conmon-a7b5be41a1bb25ffba3482c2e43a2fbc8cd5f93dea0e449d4f0ed628cc41ad82.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c799412_6936_4161_8d4e_244bc94c0d69.slice/crio-conmon-bb3f921ecaa845f65a69ffef86f443f5582c49d483d548cf73282ee0d3ac44e5.scope\": RecentStats: unable to find data in memory cache]" Jan 26 16:47:25 crc kubenswrapper[4896]: E0126 16:47:25.765518 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-conmon-cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3eac11e1_3f7e_467c_b7f7_038d29e23848.slice/crio-conmon-5f4b22237a47ecb49da645b69fd0e372c5e23bd0876406a125ae21521409e3a4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6fe08af_0b15_4be3_8473_6a983d21ebe3.slice/crio-conmon-1b1d342276e5aa96cf6fd1af2f7b1cf53b80b2ef361428f5e91f7389ab22cfc3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ac5298a_429c_47d6_9436_34bd2bd1fdec.slice/crio-conmon-aaf4524b192c2c7f97c950a83e6da0105eb584fb0dd4b09efd3367288b059a3e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61be8fa4_3ad2_4745_88ab_850db16c5707.slice/crio-conmon-935e806b0d1615e24522078301b91d1977b737d88e10d6089143548c5b960276.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8f08a13_e22d_4147_91c2_07c51dbfb83d.slice/crio-conmon-9ea0acb6629981ce1dd85e1c5104067ac167bd81bcdd52e47d4ddff1c26b5601.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb54a446e_c064_4867_91fa_55f96ea9d87e.slice/crio-conmon-bd50b1361ff6e2e62649d87537458d691b6600815324d1333bb39b04dbc59a9e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16c521f5_6f5f_43e3_a670_9f6ab6312d9c.slice/crio-conmon-4d8f3c8f56321d87a9f879f68195efb8365037e4355738a9a76f5b1e76f20912.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c532b54_34b3_4b51_bbd3_1e3bd39d5958.slice/crio-a7b5be41a1bb25ffba3482c2e43a2fbc8cd5f93dea0e449d4f0ed628cc41ad82.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c532b54_34b3_4b51_bbd3_1e3bd39d5958.slice/crio-conmon-a7b5be41a1bb25ffba3482c2e43a2fbc8cd5f93dea0e449d4f0ed628cc41ad82.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0480b36_40e2_426c_a1a8_e02e79fe7a17.slice/crio-conmon-de1e5ab00beed06142d5df17917b05443f7dc5c87ac411df23cb7ba060cb661f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c799412_6936_4161_8d4e_244bc94c0d69.slice/crio-conmon-bb3f921ecaa845f65a69ffef86f443f5582c49d483d548cf73282ee0d3ac44e5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6434b0ee_4d33_4422_a662_3315b2f5499c.slice/crio-conmon-19980e70101d0ab9dbc27d61b71f140d01421676f372b3341300a59254b680d0.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2c6d7a1_690c_4364_a2ea_25e955a38782.slice/crio-conmon-63909c4d3d73c7f2ede557fc27491a2dfc1c3dfacce922419ae1f06858aa8e9e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3272d78_4dde_4997_9316_24a84c00f4c8.slice/crio-conmon-dacfd3b17398799ba9edec041b97ea0be8a9c5ab0ba472748ad9e611611d8d88.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03cf04a4_606b_44b9_9aee_86e4b0a8a1eb.slice/crio-conmon-5bc3841ed3803ee092aa9c08877c399ba91bb990b1b695d56d4fc8c54493fcff.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc8a478d_ccdf_4d2b_b27f_58fde92fd7d4.slice/crio-conmon-4800a8dee2868500e5b05f89f5f5159650a8d469986769677ad432e142380527.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc44d6ef8_c52f_4a31_8a33_1ee01d7e969a.slice/crio-conmon-b42e14c220cbcf028eca47b182a6af9adee6161d71e69217862ca39600afa7b6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddce71be2_915b_4c8e_9a4e_ebe6c278ddcf.slice/crio-conmon-28e222335ae74dc624fde17d726dc3b340c81aacc42bb147ce45a19f33d9dd80.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6a0fae6_65fb_46f8_9b0a_2cbae0665e6d.slice/crio-conmon-fa3bc10e1988ac75a7a59f040338a30ad791a6f071a826002a248307d997b577.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a59a62f_3748_43b7_baa0_cd121242caea.slice/crio-conmon-7b9c7979e890a17d08bce07a2910cfcf4448abc35922a8a7f24cf57380323923.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bf7b7e2_7b44_4a9d_aa3d_31ed21b66dc3.slice/crio-conmon-d1e62bf9e0f7d3d5cafd06266bc1eb3d1517e7b7d392cca2193e670a6089731e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a813859_31b7_4729_865e_46c6ff663209.slice/crio-conmon-a08abfca1e048585b788ee12f2560a027cd7be2ef8a39d9177ec40f80d860455.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29afb8bf_1d53_45a3_b67c_a1ebc26aa4ab.slice/crio-conmon-02879ab7463d4c4e7de784ca55105138c7ac1121d4d67280e21e350269ad3124.scope\": RecentStats: unable to find data in memory cache]" Jan 26 16:47:25 crc kubenswrapper[4896]: E0126 16:47:25.766642 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc44d6ef8_c52f_4a31_8a33_1ee01d7e969a.slice/crio-conmon-b42e14c220cbcf028eca47b182a6af9adee6161d71e69217862ca39600afa7b6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddce71be2_915b_4c8e_9a4e_ebe6c278ddcf.slice/crio-conmon-28e222335ae74dc624fde17d726dc3b340c81aacc42bb147ce45a19f33d9dd80.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3272d78_4dde_4997_9316_24a84c00f4c8.slice/crio-conmon-dacfd3b17398799ba9edec041b97ea0be8a9c5ab0ba472748ad9e611611d8d88.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e0a37ed_b8af_49ae_9c6c_ed7097f46f3b.slice/crio-e747ef259672c696c03d07757ac863f265d1858dea318930960f5d45872c81c7.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6434b0ee_4d33_4422_a662_3315b2f5499c.slice/crio-conmon-19980e70101d0ab9dbc27d61b71f140d01421676f372b3341300a59254b680d0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3eac11e1_3f7e_467c_b7f7_038d29e23848.slice/crio-conmon-5f4b22237a47ecb49da645b69fd0e372c5e23bd0876406a125ae21521409e3a4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29afb8bf_1d53_45a3_b67c_a1ebc26aa4ab.slice/crio-conmon-02879ab7463d4c4e7de784ca55105138c7ac1121d4d67280e21e350269ad3124.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddce71be2_915b_4c8e_9a4e_ebe6c278ddcf.slice/crio-28e222335ae74dc624fde17d726dc3b340c81aacc42bb147ce45a19f33d9dd80.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c532b54_34b3_4b51_bbd3_1e3bd39d5958.slice/crio-conmon-a7b5be41a1bb25ffba3482c2e43a2fbc8cd5f93dea0e449d4f0ed628cc41ad82.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb54a446e_c064_4867_91fa_55f96ea9d87e.slice/crio-conmon-bd50b1361ff6e2e62649d87537458d691b6600815324d1333bb39b04dbc59a9e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8f08a13_e22d_4147_91c2_07c51dbfb83d.slice/crio-conmon-9ea0acb6629981ce1dd85e1c5104067ac167bd81bcdd52e47d4ddff1c26b5601.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2c6d7a1_690c_4364_a2ea_25e955a38782.slice/crio-conmon-63909c4d3d73c7f2ede557fc27491a2dfc1c3dfacce922419ae1f06858aa8e9e.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-conmon-cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0480b36_40e2_426c_a1a8_e02e79fe7a17.slice/crio-conmon-de1e5ab00beed06142d5df17917b05443f7dc5c87ac411df23cb7ba060cb661f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc8a478d_ccdf_4d2b_b27f_58fde92fd7d4.slice/crio-conmon-4800a8dee2868500e5b05f89f5f5159650a8d469986769677ad432e142380527.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61be8fa4_3ad2_4745_88ab_850db16c5707.slice/crio-conmon-935e806b0d1615e24522078301b91d1977b737d88e10d6089143548c5b960276.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e0a37ed_b8af_49ae_9c6c_ed7097f46f3b.slice/crio-conmon-e747ef259672c696c03d07757ac863f265d1858dea318930960f5d45872c81c7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bf7b7e2_7b44_4a9d_aa3d_31ed21b66dc3.slice/crio-conmon-d1e62bf9e0f7d3d5cafd06266bc1eb3d1517e7b7d392cca2193e670a6089731e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ac5298a_429c_47d6_9436_34bd2bd1fdec.slice/crio-conmon-aaf4524b192c2c7f97c950a83e6da0105eb584fb0dd4b09efd3367288b059a3e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16c521f5_6f5f_43e3_a670_9f6ab6312d9c.slice/crio-conmon-4d8f3c8f56321d87a9f879f68195efb8365037e4355738a9a76f5b1e76f20912.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c532b54_34b3_4b51_bbd3_1e3bd39d5958.slice/crio-a7b5be41a1bb25ffba3482c2e43a2fbc8cd5f93dea0e449d4f0ed628cc41ad82.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a813859_31b7_4729_865e_46c6ff663209.slice/crio-conmon-a08abfca1e048585b788ee12f2560a027cd7be2ef8a39d9177ec40f80d860455.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6a0fae6_65fb_46f8_9b0a_2cbae0665e6d.slice/crio-conmon-fa3bc10e1988ac75a7a59f040338a30ad791a6f071a826002a248307d997b577.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03cf04a4_606b_44b9_9aee_86e4b0a8a1eb.slice/crio-conmon-5bc3841ed3803ee092aa9c08877c399ba91bb990b1b695d56d4fc8c54493fcff.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a59a62f_3748_43b7_baa0_cd121242caea.slice/crio-conmon-7b9c7979e890a17d08bce07a2910cfcf4448abc35922a8a7f24cf57380323923.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c799412_6936_4161_8d4e_244bc94c0d69.slice/crio-conmon-bb3f921ecaa845f65a69ffef86f443f5582c49d483d548cf73282ee0d3ac44e5.scope\": RecentStats: unable to find data in memory cache]" Jan 26 16:47:25 crc kubenswrapper[4896]: E0126 16:47:25.770473 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddce71be2_915b_4c8e_9a4e_ebe6c278ddcf.slice/crio-conmon-28e222335ae74dc624fde17d726dc3b340c81aacc42bb147ce45a19f33d9dd80.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc8a478d_ccdf_4d2b_b27f_58fde92fd7d4.slice/crio-conmon-4800a8dee2868500e5b05f89f5f5159650a8d469986769677ad432e142380527.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c799412_6936_4161_8d4e_244bc94c0d69.slice/crio-conmon-bb3f921ecaa845f65a69ffef86f443f5582c49d483d548cf73282ee0d3ac44e5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c532b54_34b3_4b51_bbd3_1e3bd39d5958.slice/crio-conmon-a7b5be41a1bb25ffba3482c2e43a2fbc8cd5f93dea0e449d4f0ed628cc41ad82.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6fe08af_0b15_4be3_8473_6a983d21ebe3.slice/crio-conmon-1b1d342276e5aa96cf6fd1af2f7b1cf53b80b2ef361428f5e91f7389ab22cfc3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16c521f5_6f5f_43e3_a670_9f6ab6312d9c.slice/crio-conmon-4d8f3c8f56321d87a9f879f68195efb8365037e4355738a9a76f5b1e76f20912.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3eac11e1_3f7e_467c_b7f7_038d29e23848.slice/crio-conmon-5f4b22237a47ecb49da645b69fd0e372c5e23bd0876406a125ae21521409e3a4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a59a62f_3748_43b7_baa0_cd121242caea.slice/crio-conmon-7b9c7979e890a17d08bce07a2910cfcf4448abc35922a8a7f24cf57380323923.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6434b0ee_4d33_4422_a662_3315b2f5499c.slice/crio-conmon-19980e70101d0ab9dbc27d61b71f140d01421676f372b3341300a59254b680d0.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-conmon-cf9f09821a723ec5659e627039b3c232f20099244c679a05ae30772e53a4ecd3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb54a446e_c064_4867_91fa_55f96ea9d87e.slice/crio-conmon-bd50b1361ff6e2e62649d87537458d691b6600815324d1333bb39b04dbc59a9e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e0a37ed_b8af_49ae_9c6c_ed7097f46f3b.slice/crio-conmon-e747ef259672c696c03d07757ac863f265d1858dea318930960f5d45872c81c7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0480b36_40e2_426c_a1a8_e02e79fe7a17.slice/crio-conmon-de1e5ab00beed06142d5df17917b05443f7dc5c87ac411df23cb7ba060cb661f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29afb8bf_1d53_45a3_b67c_a1ebc26aa4ab.slice/crio-conmon-02879ab7463d4c4e7de784ca55105138c7ac1121d4d67280e21e350269ad3124.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bf7b7e2_7b44_4a9d_aa3d_31ed21b66dc3.slice/crio-conmon-d1e62bf9e0f7d3d5cafd06266bc1eb3d1517e7b7d392cca2193e670a6089731e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6a0fae6_65fb_46f8_9b0a_2cbae0665e6d.slice/crio-conmon-fa3bc10e1988ac75a7a59f040338a30ad791a6f071a826002a248307d997b577.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddce71be2_915b_4c8e_9a4e_ebe6c278ddcf.slice/crio-28e222335ae74dc624fde17d726dc3b340c81aacc42bb147ce45a19f33d9dd80.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61be8fa4_3ad2_4745_88ab_850db16c5707.slice/crio-conmon-935e806b0d1615e24522078301b91d1977b737d88e10d6089143548c5b960276.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2c6d7a1_690c_4364_a2ea_25e955a38782.slice/crio-conmon-63909c4d3d73c7f2ede557fc27491a2dfc1c3dfacce922419ae1f06858aa8e9e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03cf04a4_606b_44b9_9aee_86e4b0a8a1eb.slice/crio-conmon-5bc3841ed3803ee092aa9c08877c399ba91bb990b1b695d56d4fc8c54493fcff.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2496a24c_43ae_4ce4_8996_60c6e7282bfa.slice/crio-conmon-ddd4e68d3f61942cdb100a3606518ece797493317bc103b7afddaee236179707.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c532b54_34b3_4b51_bbd3_1e3bd39d5958.slice/crio-a7b5be41a1bb25ffba3482c2e43a2fbc8cd5f93dea0e449d4f0ed628cc41ad82.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ac5298a_429c_47d6_9436_34bd2bd1fdec.slice/crio-conmon-aaf4524b192c2c7f97c950a83e6da0105eb584fb0dd4b09efd3367288b059a3e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3272d78_4dde_4997_9316_24a84c00f4c8.slice/crio-conmon-dacfd3b17398799ba9edec041b97ea0be8a9c5ab0ba472748ad9e611611d8d88.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a813859_31b7_4729_865e_46c6ff663209.slice/crio-conmon-a08abfca1e048585b788ee12f2560a027cd7be2ef8a39d9177ec40f80d860455.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8f08a13_e22d_4147_91c2_07c51dbfb83d.slice/crio-conmon-9ea0acb6629981ce1dd85e1c5104067ac167bd81bcdd52e47d4ddff1c26b5601.scope\": RecentStats: unable to find data in memory cache]" Jan 26 16:47:25 crc kubenswrapper[4896]: I0126 16:47:25.952528 4896 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 16:47:26 crc kubenswrapper[4896]: I0126 16:47:26.102118 4896 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"37931cdbd03d2e22cec7d40fc16a7e397f4e40881a1429b12e3ff38a9e6b816c"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Jan 26 16:47:26 crc kubenswrapper[4896]: I0126 16:47:26.102301 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f20afffa-3480-40b7-a7b8-116bccafaffb" containerName="ceilometer-central-agent" containerID="cri-o://37931cdbd03d2e22cec7d40fc16a7e397f4e40881a1429b12e3ff38a9e6b816c" gracePeriod=30 Jan 26 16:47:26 crc kubenswrapper[4896]: I0126 16:47:26.334760 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd" Jan 26 16:47:27 crc kubenswrapper[4896]: E0126 16:47:27.859767 4896 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_kube-controller-manager_kube-controller-manager-crc_openshift-kube-controller-manager_f614b9022728cf315e60c057852e563e_0 in pod sandbox c8a5a8bc5167787496ac77793720a0b48959181af7ed02811a1fc0eac8116051 from index: no such id: 'c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d'" containerID="c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d" 
Jan 26 16:47:27 crc kubenswrapper[4896]: I0126 16:47:27.860177 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d"} err="rpc error: code = Unknown desc = failed to delete container k8s_kube-controller-manager_kube-controller-manager-crc_openshift-kube-controller-manager_f614b9022728cf315e60c057852e563e_0 in pod sandbox c8a5a8bc5167787496ac77793720a0b48959181af7ed02811a1fc0eac8116051 from index: no such id: 'c0e5a1b182c162f44f0cc9d9eba8bb355847d82ff6bdee41094004449b4d797d'"
Jan 26 16:47:27 crc kubenswrapper[4896]: I0126 16:47:27.983422 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-8f6df5568-rvvb8"
Jan 26 16:47:28 crc kubenswrapper[4896]: I0126 16:47:28.160015 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Jan 26 16:47:28 crc kubenswrapper[4896]: I0126 16:47:28.404308 4896 patch_prober.go:28] interesting pod/metrics-server-b94dd49c-f92bj container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 26 16:47:28 crc kubenswrapper[4896]: [+]log ok
Jan 26 16:47:28 crc kubenswrapper[4896]: [+]poststarthook/max-in-flight-filter ok
Jan 26 16:47:28 crc kubenswrapper[4896]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 26 16:47:28 crc kubenswrapper[4896]: [-]metric-collection-timely failed: reason withheld
Jan 26 16:47:28 crc kubenswrapper[4896]: [+]metadata-informer-sync ok
Jan 26 16:47:28 crc kubenswrapper[4896]: livez check failed
Jan 26 16:47:28 crc kubenswrapper[4896]: I0126 16:47:28.404369 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-b94dd49c-f92bj" podUID="1672fa36-cd09-47c9-bb88-ab33ef7e7e66" containerName="metrics-server" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 16:47:29 crc kubenswrapper[4896]: I0126 16:47:29.217617 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-jx95g" event={"ID":"b0480b36-40e2-426c-a1a8-e02e79fe7a17","Type":"ContainerStarted","Data":"c17d6ff188dbbbfd3015beada29bbb67e8637e86c58382389734af7a8c590744"}
Jan 26 16:47:29 crc kubenswrapper[4896]: I0126 16:47:29.218367 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-jx95g"
Jan 26 16:47:29 crc kubenswrapper[4896]: I0126 16:47:29.234851 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5c8c8d84d-97894" event={"ID":"8a59a62f-3748-43b7-baa0-cd121242caea","Type":"ContainerStarted","Data":"99e1ce61e9bdb62d60543a02e4e95853a5399eb4ea8bf81f964fb4ffc7e75399"}
Jan 26 16:47:29 crc kubenswrapper[4896]: I0126 16:47:29.235349 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5c8c8d84d-97894"
Jan 26 16:47:29 crc kubenswrapper[4896]: I0126 16:47:29.241693 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-j92tx" event={"ID":"f6fe08af-0b15-4be3-8473-6a983d21ebe3","Type":"ContainerStarted","Data":"2d791f012f657d21b3ac241e755016aa5d87daf8266696b4f10ba9820c2836aa"}
Jan 26 16:47:29 crc kubenswrapper[4896]: I0126 16:47:29.243359 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-j92tx"
Jan 26 16:47:29 crc kubenswrapper[4896]: I0126 16:47:29.253185 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-lvm6z" event={"ID":"3eac11e1-3f7e-467c-b7f7-038d29e23848","Type":"ContainerStarted","Data":"84662c79d05e60c769746d8a733129c5439ab16962193097740ac104d45ca042"}
Jan 26 16:47:29 crc kubenswrapper[4896]: I0126 16:47:29.254631 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-lvm6z"
Jan 26 16:47:29 crc kubenswrapper[4896]: I0126 16:47:29.265599 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-49kq4" event={"ID":"03cf04a4-606b-44b9-9aee-86e4b0a8a1eb","Type":"ContainerStarted","Data":"ad4788aaf5f1d0dc61ac36aa890d3271a3d2ba7bfcbafec6c3025b0bed5f6453"}
Jan 26 16:47:29 crc kubenswrapper[4896]: I0126 16:47:29.267034 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-49kq4"
Jan 26 16:47:29 crc kubenswrapper[4896]: I0126 16:47:29.297926 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-cwcgv" event={"ID":"a6a0fae6-65fb-46f8-9b0a-2cbae0665e6d","Type":"ContainerStarted","Data":"8cd0c8f33969f4ec6e0b489d39877151f39596392a66ca9296e219d4d2682ecd"}
Jan 26 16:47:29 crc kubenswrapper[4896]: I0126 16:47:29.299361 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-cwcgv"
Jan 26 16:47:29 crc kubenswrapper[4896]: I0126 16:47:29.319818 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7pgcx" event={"ID":"bc769396-13b5-4066-b7fc-93a3f87a50ff","Type":"ContainerStarted","Data":"15abb39953d9277c39c34dc50ebc526490450c4509c7f97f91b772d9838755f1"}
Jan 26 16:47:29 crc kubenswrapper[4896]: I0126 16:47:29.337226 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-9vwsl" event={"ID":"7a813859-31b7-4729-865e-46c6ff663209","Type":"ContainerStarted","Data":"fcac3bfcdf6affe167a971f6693e0eb3a40eb81d7603f309beab741e8852294a"}
Jan 26 16:47:29 crc kubenswrapper[4896]: I0126 16:47:29.337755 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-9vwsl"
Jan 26 16:47:29 crc kubenswrapper[4896]: I0126 16:47:29.363688 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Jan 26 16:47:29 crc kubenswrapper[4896]: I0126 16:47:29.368216 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"35f770c357932876f65819a3dd365d5e0b0725ee60e3bb0deaa6c5977e996ed2"}
Jan 26 16:47:29 crc kubenswrapper[4896]: I0126 16:47:29.388038 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4sl4s" event={"ID":"1bf7b7e2-7b44-4a9d-aa3d-31ed21b66dc3","Type":"ContainerStarted","Data":"d7b896264de52e8c470776b7a49c0728e31dffa811dfee4362cd2a0de6f835c5"}
Jan 26 16:47:29 crc kubenswrapper[4896]: I0126 16:47:29.389970 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4sl4s"
Jan 26 16:47:29 crc kubenswrapper[4896]: I0126 16:47:29.402741 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-8f6df5568-rvvb8" event={"ID":"b54a446e-c064-4867-91fa-55f96ea9d87e","Type":"ContainerStarted","Data":"194485c344f21b4def267c56b9137db44c46ae1c229b81fab2651d26a0b71520"}
Jan 26 16:47:29 crc kubenswrapper[4896]: I0126 16:47:29.402859 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-8f6df5568-rvvb8"
Jan 26 16:47:29 crc kubenswrapper[4896]: I0126 16:47:29.419501 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-p82px" event={"ID":"f2c6d7a1-690c-4364-a2ea-25e955a38782","Type":"ContainerStarted","Data":"ba3b7e4df5367c31cf55371d8c1a316fe3c24e739675d39a3967996b05b8dbb2"}
Jan 26 16:47:29 crc kubenswrapper[4896]: I0126 16:47:29.420714 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-p82px"
Jan 26 16:47:29 crc kubenswrapper[4896]: I0126 16:47:29.427368 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7t46g"
Jan 26 16:47:29 crc kubenswrapper[4896]: I0126 16:47:29.431976 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5fd4748d4d-2sttl" event={"ID":"2496a24c-43ae-4ce4-8996-60c6e7282bfa","Type":"ContainerStarted","Data":"601e478960b45d8099a411b60c2ca40ea4e9170f0add903ec510a5d7877142c9"}
Jan 26 16:47:29 crc kubenswrapper[4896]: I0126 16:47:29.433772 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5fd4748d4d-2sttl"
Jan 26 16:47:29 crc kubenswrapper[4896]: I0126 16:47:29.442231 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd" event={"ID":"6434b0ee-4d33-4422-a662-3315b2f5499c","Type":"ContainerStarted","Data":"65471b3a10968ad16769312bf194e00b72ca7b71c301f38d87df3d6a29c74a80"}
Jan 26 16:47:29 crc kubenswrapper[4896]: I0126 16:47:29.442407 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd"
Jan 26 16:47:29 crc kubenswrapper[4896]: I0126 16:47:29.459340 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6wv5s"
Jan 26 16:47:29 crc kubenswrapper[4896]: I0126 16:47:29.461306 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-6575bc9f47-rkmnv" event={"ID":"dce71be2-915b-4c8e-9a4e-ebe6c278ddcf","Type":"ContainerStarted","Data":"3ab79372bb8609e42aa8a57547479a94981d623379d51ba9009e1ca0d2d32444"}
Jan 26 16:47:29 crc kubenswrapper[4896]: I0126 16:47:29.463087 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-6575bc9f47-rkmnv"
Jan 26 16:47:29 crc kubenswrapper[4896]: I0126 16:47:29.466780 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-rp5b4"
Jan 26 16:47:29 crc kubenswrapper[4896]: I0126 16:47:29.482612 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-kvnzb" event={"ID":"8e0a37ed-b8af-49ae-9c6c-ed7097f46f3b","Type":"ContainerStarted","Data":"862d3488c2eb1c025df27a9c09c638eb97d3b056655d52ae4221ff26c1f51b8b"}
Jan 26 16:47:29 crc kubenswrapper[4896]: I0126 16:47:29.486506 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-kvnzb"
Jan 26 16:47:29 crc kubenswrapper[4896]: I0126 16:47:29.513283 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-948px" event={"ID":"1c532b54-34b3-4b51-bbd3-1e3bd39d5958","Type":"ContainerStarted","Data":"b31dd7b0bfb429ccda94931d47b62062492861392f31fd3f06d97f16312e08b8"}
Jan 26 16:47:29 crc kubenswrapper[4896]: I0126 16:47:29.513905 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-948px"
Jan 26 16:47:29 crc kubenswrapper[4896]: I0126 16:47:29.658166 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 16:47:29 crc kubenswrapper[4896]: E0126 16:47:29.843545 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2496a24c_43ae_4ce4_8996_60c6e7282bfa.slice/crio-conmon-ddd4e68d3f61942cdb100a3606518ece797493317bc103b7afddaee236179707.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e0a37ed_b8af_49ae_9c6c_ed7097f46f3b.slice/crio-conmon-e747ef259672c696c03d07757ac863f265d1858dea318930960f5d45872c81c7.scope\": RecentStats: unable to find data in memory cache]"
Jan 26 16:47:29 crc kubenswrapper[4896]: I0126 16:47:29.922716 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-z7j4w"
Jan 26 16:47:30 crc kubenswrapper[4896]: I0126 16:47:30.526323 4896 generic.go:334] "Generic (PLEG): container finished" podID="f20afffa-3480-40b7-a7b8-116bccafaffb" containerID="37931cdbd03d2e22cec7d40fc16a7e397f4e40881a1429b12e3ff38a9e6b816c" exitCode=0
Jan 26 16:47:30 crc kubenswrapper[4896]: I0126 16:47:30.526391 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f20afffa-3480-40b7-a7b8-116bccafaffb","Type":"ContainerDied","Data":"37931cdbd03d2e22cec7d40fc16a7e397f4e40881a1429b12e3ff38a9e6b816c"}
Jan 26 16:47:30 crc kubenswrapper[4896]: I0126 16:47:30.528897 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-fz5qh" event={"ID":"8ac5298a-429c-47d6-9436-34bd2bd1fdec","Type":"ContainerStarted","Data":"3b70b21d8aa97655d7a6064cfde78db3d282bbe127dfbf93a6d219131002161e"}
Jan 26 16:47:30 crc kubenswrapper[4896]: I0126 16:47:30.529053 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-fz5qh"
Jan 26 16:47:30 crc kubenswrapper[4896]: I0126 16:47:30.531031 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mjzqx" event={"ID":"29afb8bf-1d53-45a3-b67c-a1ebc26aa4ab","Type":"ContainerStarted","Data":"f1b7b951211409e5a135b001c393fcffd6ba658084a32108bd82844b17ef12a9"}
Jan 26 16:47:30 crc kubenswrapper[4896]: I0126 16:47:30.531318 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mjzqx"
Jan 26 16:47:30 crc kubenswrapper[4896]: I0126 16:47:30.532759 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-4v8sm" event={"ID":"b8f08a13-e22d-4147-91c2-07c51dbfb83d","Type":"ContainerStarted","Data":"6f2d086c5d6d882d978d2f6a0fb0e12b42d81c95f4cb9a068bb16bfb0ae0fb48"}
Jan 26 16:47:30 crc kubenswrapper[4896]: I0126 16:47:30.533282 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-4v8sm"
Jan 26 16:47:30 crc kubenswrapper[4896]: I0126 16:47:30.539429 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-s2bwr" event={"ID":"61be8fa4-3ad2-4745-88ab-850db16c5707","Type":"ContainerStarted","Data":"7cc2343e5f606321e836aa756df8ead34e3f73e235abbaa0837cbc8116a5003e"}
Jan 26 16:47:30 crc kubenswrapper[4896]: I0126 16:47:30.539688 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-s2bwr"
Jan 26 16:47:30 crc kubenswrapper[4896]: I0126 16:47:30.542532 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-rp5b4" event={"ID":"c44d6ef8-c52f-4a31-8a33-1ee01d7e969a","Type":"ContainerStarted","Data":"6a37911b3dc8d34cc2f959db1a9fd46cba49ea3afcbe21766ea599661e712b03"}
Jan 26 16:47:30 crc kubenswrapper[4896]: I0126 16:47:30.542640 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-rp5b4"
Jan 26 16:47:30 crc kubenswrapper[4896]: I0126 16:47:30.545117 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lz2hg" event={"ID":"fc8a478d-ccdf-4d2b-b27f-58fde92fd7d4","Type":"ContainerStarted","Data":"49a8e3a728936c129c280da5418a51c362984028d3d0ec169193c5ef873974ff"}
Jan 26 16:47:30 crc kubenswrapper[4896]: I0126 16:47:30.545307 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lz2hg"
Jan 26 16:47:30 crc kubenswrapper[4896]: I0126 16:47:30.547537 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-csnd6" event={"ID":"f493b2ea-1515-42db-ac1c-ea1a7121e070","Type":"ContainerStarted","Data":"de0b0579328d1200e285797e3aeccf56645775d7248fea7048f4df20d1bb1fcb"}
Jan 26 16:47:30 crc kubenswrapper[4896]: I0126 16:47:30.547757 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-csnd6"
Jan 26 16:47:30 crc kubenswrapper[4896]: I0126 16:47:30.551190 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7t46g" event={"ID":"8c799412-6936-4161-8d4e-244bc94c0d69","Type":"ContainerStarted","Data":"4445fbb6781d051db5497c9df830992b038e1551ebf45a9a184565fc1352b9e8"}
Jan 26 16:47:30 crc kubenswrapper[4896]: I0126 16:47:30.551324 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7t46g"
Jan 26 16:47:30 crc kubenswrapper[4896]: I0126 16:47:30.553414 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6wv5s" event={"ID":"16c521f5-6f5f-43e3-a670-9f6ab6312d9c","Type":"ContainerStarted","Data":"c03c12ba950d428059ac86df49817c81ee03899362409579d0b90dd1110acfa0"}
Jan 26 16:47:30 crc kubenswrapper[4896]: I0126 16:47:30.554418 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6wv5s"
Jan 26 16:47:30 crc kubenswrapper[4896]: I0126 16:47:30.556818 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-z7j4w" event={"ID":"b3272d78-4dde-4997-9316-24a84c00f4c8","Type":"ContainerStarted","Data":"e1983fccf251dfa00ba553e6567424bb02251e17030b285a8308d94ce076d49c"}
Jan 26 16:47:30 crc kubenswrapper[4896]: I0126 16:47:30.556886 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-z7j4w"
Jan 26 16:47:30 crc kubenswrapper[4896]: I0126 16:47:30.559889 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log"
Jan 26 16:47:30 crc kubenswrapper[4896]: I0126 16:47:30.560258 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"0ce70334e82a7a657263165aa51839c79a08405400b9b17baac98dd4e8794235"}
Jan 26 16:47:30 crc kubenswrapper[4896]: I0126 16:47:30.726129 4896 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 16:47:31 crc kubenswrapper[4896]: I0126 16:47:31.575748 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f20afffa-3480-40b7-a7b8-116bccafaffb","Type":"ContainerStarted","Data":"9b5666d6ebbc8885d6e88b5f4bc726e44380e98f6bfed0bb3c01d72855c1bc2e"}
Jan 26 16:47:31 crc kubenswrapper[4896]: I0126 16:47:31.608113 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lqx5b"]
Jan 26 16:47:31 crc kubenswrapper[4896]: E0126 16:47:31.627709 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bec655b4-80c3-4368-8077-13b6c2a5294b" containerName="collect-profiles"
Jan 26 16:47:31 crc kubenswrapper[4896]: I0126 16:47:31.627750 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="bec655b4-80c3-4368-8077-13b6c2a5294b" containerName="collect-profiles"
Jan 26 16:47:31 crc kubenswrapper[4896]: I0126 16:47:31.628834 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="bec655b4-80c3-4368-8077-13b6c2a5294b" containerName="collect-profiles"
Jan 26 16:47:31 crc kubenswrapper[4896]: I0126 16:47:31.689894 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lqx5b"]
Jan 26 16:47:31 crc kubenswrapper[4896]: I0126 16:47:31.690042 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lqx5b"
Jan 26 16:47:31 crc kubenswrapper[4896]: I0126 16:47:31.794353 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75492d6f-24c8-44e7-8864-a5afc7cd8e7d-utilities\") pod \"community-operators-lqx5b\" (UID: \"75492d6f-24c8-44e7-8864-a5afc7cd8e7d\") " pod="openshift-marketplace/community-operators-lqx5b"
Jan 26 16:47:31 crc kubenswrapper[4896]: I0126 16:47:31.794724 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75492d6f-24c8-44e7-8864-a5afc7cd8e7d-catalog-content\") pod \"community-operators-lqx5b\" (UID: \"75492d6f-24c8-44e7-8864-a5afc7cd8e7d\") " pod="openshift-marketplace/community-operators-lqx5b"
Jan 26 16:47:31 crc kubenswrapper[4896]: I0126 16:47:31.794850 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlrlj\" (UniqueName: \"kubernetes.io/projected/75492d6f-24c8-44e7-8864-a5afc7cd8e7d-kube-api-access-zlrlj\") pod \"community-operators-lqx5b\" (UID: \"75492d6f-24c8-44e7-8864-a5afc7cd8e7d\") " pod="openshift-marketplace/community-operators-lqx5b"
Jan 26 16:47:31 crc kubenswrapper[4896]: I0126 16:47:31.897026 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75492d6f-24c8-44e7-8864-a5afc7cd8e7d-catalog-content\") pod \"community-operators-lqx5b\" (UID: \"75492d6f-24c8-44e7-8864-a5afc7cd8e7d\") " pod="openshift-marketplace/community-operators-lqx5b"
Jan 26 16:47:31 crc kubenswrapper[4896]: I0126 16:47:31.897111 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlrlj\" (UniqueName: \"kubernetes.io/projected/75492d6f-24c8-44e7-8864-a5afc7cd8e7d-kube-api-access-zlrlj\") pod \"community-operators-lqx5b\" (UID: \"75492d6f-24c8-44e7-8864-a5afc7cd8e7d\") " pod="openshift-marketplace/community-operators-lqx5b"
Jan 26 16:47:31 crc kubenswrapper[4896]: I0126 16:47:31.897173 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75492d6f-24c8-44e7-8864-a5afc7cd8e7d-utilities\") pod \"community-operators-lqx5b\" (UID: \"75492d6f-24c8-44e7-8864-a5afc7cd8e7d\") " pod="openshift-marketplace/community-operators-lqx5b"
Jan 26 16:47:31 crc kubenswrapper[4896]: I0126 16:47:31.897900 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75492d6f-24c8-44e7-8864-a5afc7cd8e7d-utilities\") pod \"community-operators-lqx5b\" (UID: \"75492d6f-24c8-44e7-8864-a5afc7cd8e7d\") " pod="openshift-marketplace/community-operators-lqx5b"
Jan 26 16:47:31 crc kubenswrapper[4896]: I0126 16:47:31.898253 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75492d6f-24c8-44e7-8864-a5afc7cd8e7d-catalog-content\") pod \"community-operators-lqx5b\" (UID: \"75492d6f-24c8-44e7-8864-a5afc7cd8e7d\") " pod="openshift-marketplace/community-operators-lqx5b"
Jan 26 16:47:31 crc kubenswrapper[4896]: I0126 16:47:31.923277 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlrlj\" (UniqueName: \"kubernetes.io/projected/75492d6f-24c8-44e7-8864-a5afc7cd8e7d-kube-api-access-zlrlj\") pod \"community-operators-lqx5b\" (UID: \"75492d6f-24c8-44e7-8864-a5afc7cd8e7d\") " pod="openshift-marketplace/community-operators-lqx5b"
Jan 26 16:47:32 crc kubenswrapper[4896]: I0126 16:47:32.074479 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lqx5b"
Jan 26 16:47:32 crc kubenswrapper[4896]: I0126 16:47:32.111714 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 16:47:32 crc kubenswrapper[4896]: I0126 16:47:32.128106 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 16:47:32 crc kubenswrapper[4896]: W0126 16:47:32.950727 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75492d6f_24c8_44e7_8864_a5afc7cd8e7d.slice/crio-5bb3b4bf4e2c9f7fbefc792d0c2346553a51cb49082189f3a3dbcfe4e6532266 WatchSource:0}: Error finding container 5bb3b4bf4e2c9f7fbefc792d0c2346553a51cb49082189f3a3dbcfe4e6532266: Status 404 returned error can't find the container with id 5bb3b4bf4e2c9f7fbefc792d0c2346553a51cb49082189f3a3dbcfe4e6532266
Jan 26 16:47:32 crc kubenswrapper[4896]: I0126 16:47:32.966808 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lqx5b"]
Jan 26 16:47:33 crc kubenswrapper[4896]: I0126 16:47:33.634146 4896 generic.go:334] "Generic (PLEG): container finished" podID="75492d6f-24c8-44e7-8864-a5afc7cd8e7d" containerID="0116ceb5fd682c77a39274528e3df590939170c69b60dff43a71f9ebd12775c5" exitCode=0
Jan 26 16:47:33 crc kubenswrapper[4896]: I0126 16:47:33.634313 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lqx5b" event={"ID":"75492d6f-24c8-44e7-8864-a5afc7cd8e7d","Type":"ContainerDied","Data":"0116ceb5fd682c77a39274528e3df590939170c69b60dff43a71f9ebd12775c5"}
Jan 26 16:47:33 crc kubenswrapper[4896]: I0126 16:47:33.635869 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lqx5b" event={"ID":"75492d6f-24c8-44e7-8864-a5afc7cd8e7d","Type":"ContainerStarted","Data":"5bb3b4bf4e2c9f7fbefc792d0c2346553a51cb49082189f3a3dbcfe4e6532266"}
Jan 26 16:47:35 crc kubenswrapper[4896]: I0126 16:47:35.549493 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-49kq4"
Jan 26 16:47:35 crc kubenswrapper[4896]: I0126 16:47:35.693698 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lqx5b" event={"ID":"75492d6f-24c8-44e7-8864-a5afc7cd8e7d","Type":"ContainerStarted","Data":"758a0c40e00b630a3779fceaf0c437240755d27bcfd77f097b379508d72de65c"}
Jan 26 16:47:35 crc kubenswrapper[4896]: I0126 16:47:35.727376 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-6575bc9f47-rkmnv"
Jan 26 16:47:36 crc kubenswrapper[4896]: E0126 16:47:36.140901 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2496a24c_43ae_4ce4_8996_60c6e7282bfa.slice/crio-conmon-ddd4e68d3f61942cdb100a3606518ece797493317bc103b7afddaee236179707.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e0a37ed_b8af_49ae_9c6c_ed7097f46f3b.slice/crio-conmon-e747ef259672c696c03d07757ac863f265d1858dea318930960f5d45872c81c7.scope\": RecentStats: unable to find data in memory cache]"
Jan 26 16:47:36 crc kubenswrapper[4896]: I0126 16:47:36.348913 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd"
Jan 26 16:47:36 crc kubenswrapper[4896]: I0126 16:47:36.707748 4896 generic.go:334] "Generic (PLEG): container finished" podID="75492d6f-24c8-44e7-8864-a5afc7cd8e7d" containerID="758a0c40e00b630a3779fceaf0c437240755d27bcfd77f097b379508d72de65c" exitCode=0
Jan 26 16:47:36 crc kubenswrapper[4896]: I0126 16:47:36.707869 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lqx5b" event={"ID":"75492d6f-24c8-44e7-8864-a5afc7cd8e7d","Type":"ContainerDied","Data":"758a0c40e00b630a3779fceaf0c437240755d27bcfd77f097b379508d72de65c"}
Jan 26 16:47:37 crc kubenswrapper[4896]: I0126 16:47:37.735311 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lqx5b" event={"ID":"75492d6f-24c8-44e7-8864-a5afc7cd8e7d","Type":"ContainerStarted","Data":"c590f03262decc379992b3bd66d3a22f446a52a996e5f1cc1f73f4141ae84ac2"}
Jan 26 16:47:37 crc kubenswrapper[4896]: I0126 16:47:37.776936 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lqx5b" podStartSLOduration=3.144236406 podStartE2EDuration="6.776895875s" podCreationTimestamp="2026-01-26 16:47:31 +0000 UTC" firstStartedPulling="2026-01-26 16:47:33.637038018 +0000 UTC m=+4411.418918411" lastFinishedPulling="2026-01-26 16:47:37.269697487 +0000 UTC m=+4415.051577880" observedRunningTime="2026-01-26 16:47:37.758153458 +0000 UTC m=+4415.540033861" watchObservedRunningTime="2026-01-26 16:47:37.776895875 +0000 UTC m=+4415.558776268"
Jan 26 16:47:37 crc kubenswrapper[4896]: I0126 16:47:37.985089 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-8f6df5568-rvvb8"
Jan 26 16:47:39 crc kubenswrapper[4896]: I0126 16:47:39.434262 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7t46g"
Jan 26 16:47:39 crc kubenswrapper[4896]: I0126 16:47:39.464860 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6wv5s"
Jan 26 16:47:39 crc kubenswrapper[4896]: I0126 16:47:39.474830 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-rp5b4"
Jan 26 16:47:39 crc kubenswrapper[4896]: I0126 16:47:39.666318 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 16:47:39 crc kubenswrapper[4896]: I0126 16:47:39.810502 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-j92tx"
Jan 26 16:47:39 crc kubenswrapper[4896]: I0126 16:47:39.922454 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-z7j4w"
Jan 26 16:47:39 crc kubenswrapper[4896]: I0126 16:47:39.923977 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-jx95g"
Jan 26 16:47:41 crc kubenswrapper[4896]: I0126 16:47:41.026803 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-948px"
Jan 26 16:47:41 crc kubenswrapper[4896]: I0126 16:47:41.124525 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lz2hg"
Jan 26 16:47:41 crc kubenswrapper[4896]: I0126 16:47:41.172995 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-cwcgv"
Jan 26 16:47:41 crc kubenswrapper[4896]: I0126 16:47:41.192390 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-fz5qh"
Jan 26 16:47:41 crc kubenswrapper[4896]: I0126 16:47:41.881971 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-lvm6z"
Jan 26 16:47:41 crc kubenswrapper[4896]: I0126 16:47:41.906253 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-kvnzb"
Jan 26 16:47:42 crc kubenswrapper[4896]: I0126 16:47:42.074739 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lqx5b"
Jan 26 16:47:42 crc kubenswrapper[4896]: I0126 16:47:42.074794 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lqx5b"
Jan 26 16:47:42 crc kubenswrapper[4896]: I0126 16:47:42.108132 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-s2bwr"
Jan 26 16:47:42 crc kubenswrapper[4896]: I0126 16:47:42.135745 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lqx5b"
Jan 26 16:47:42 crc kubenswrapper[4896]: I0126 16:47:42.137748 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mjzqx"
Jan 26 16:47:42 crc kubenswrapper[4896]: I0126 16:47:42.159088 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-9vwsl"
Jan 26 16:47:42 crc kubenswrapper[4896]: I0126 16:47:42.169722 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4sl4s"
Jan 26 16:47:42 crc kubenswrapper[4896]: I0126 16:47:42.233116 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5fd4748d4d-2sttl"
Jan 26 16:47:42 crc kubenswrapper[4896]: I0126 16:47:42.428705 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-p82px"
Jan 26 16:47:42 crc kubenswrapper[4896]: I0126 16:47:42.923512 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lqx5b"
Jan 26 16:47:42 crc kubenswrapper[4896]: I0126 16:47:42.973601 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-4v8sm"
Jan 26 16:47:43 crc kubenswrapper[4896]: I0126 16:47:43.021804 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lqx5b"]
Jan 26 16:47:43 crc kubenswrapper[4896]: I0126 16:47:43.146351 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7d6b58b596-csnd6"
Jan 26 16:47:44 crc kubenswrapper[4896]: I0126 16:47:44.889999 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lqx5b" podUID="75492d6f-24c8-44e7-8864-a5afc7cd8e7d" containerName="registry-server" containerID="cri-o://c590f03262decc379992b3bd66d3a22f446a52a996e5f1cc1f73f4141ae84ac2" gracePeriod=2
Jan 26 16:47:45 crc kubenswrapper[4896]: E0126 16:47:45.132616 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e0a37ed_b8af_49ae_9c6c_ed7097f46f3b.slice/crio-conmon-e747ef259672c696c03d07757ac863f265d1858dea318930960f5d45872c81c7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2496a24c_43ae_4ce4_8996_60c6e7282bfa.slice/crio-conmon-ddd4e68d3f61942cdb100a3606518ece797493317bc103b7afddaee236179707.scope\": RecentStats: unable to find data in memory cache]"
Jan 26 16:47:45 crc kubenswrapper[4896]: I0126 16:47:45.767873 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lqx5b"
Jan 26 16:47:45 crc kubenswrapper[4896]: I0126 16:47:45.889472 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75492d6f-24c8-44e7-8864-a5afc7cd8e7d-utilities\") pod \"75492d6f-24c8-44e7-8864-a5afc7cd8e7d\" (UID: \"75492d6f-24c8-44e7-8864-a5afc7cd8e7d\") "
Jan 26 16:47:45 crc kubenswrapper[4896]: I0126 16:47:45.889572 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlrlj\" (UniqueName: \"kubernetes.io/projected/75492d6f-24c8-44e7-8864-a5afc7cd8e7d-kube-api-access-zlrlj\") pod \"75492d6f-24c8-44e7-8864-a5afc7cd8e7d\" (UID: \"75492d6f-24c8-44e7-8864-a5afc7cd8e7d\") "
Jan 26 16:47:45 crc kubenswrapper[4896]: I0126 16:47:45.889813 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75492d6f-24c8-44e7-8864-a5afc7cd8e7d-catalog-content\") pod \"75492d6f-24c8-44e7-8864-a5afc7cd8e7d\" (UID: \"75492d6f-24c8-44e7-8864-a5afc7cd8e7d\") "
Jan 26 16:47:45 crc kubenswrapper[4896]: I0126 16:47:45.890591 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75492d6f-24c8-44e7-8864-a5afc7cd8e7d-utilities" (OuterVolumeSpecName: "utilities") pod "75492d6f-24c8-44e7-8864-a5afc7cd8e7d" (UID: "75492d6f-24c8-44e7-8864-a5afc7cd8e7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 16:47:45 crc kubenswrapper[4896]: I0126 16:47:45.891317 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75492d6f-24c8-44e7-8864-a5afc7cd8e7d-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 16:47:45 crc kubenswrapper[4896]: I0126 16:47:45.904411 4896 generic.go:334] "Generic (PLEG): container finished" podID="75492d6f-24c8-44e7-8864-a5afc7cd8e7d" containerID="c590f03262decc379992b3bd66d3a22f446a52a996e5f1cc1f73f4141ae84ac2" exitCode=0
Jan 26 16:47:45 crc kubenswrapper[4896]: I0126 16:47:45.904469 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lqx5b" event={"ID":"75492d6f-24c8-44e7-8864-a5afc7cd8e7d","Type":"ContainerDied","Data":"c590f03262decc379992b3bd66d3a22f446a52a996e5f1cc1f73f4141ae84ac2"}
Jan 26 16:47:45 crc kubenswrapper[4896]: I0126 16:47:45.904566 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lqx5b" event={"ID":"75492d6f-24c8-44e7-8864-a5afc7cd8e7d","Type":"ContainerDied","Data":"5bb3b4bf4e2c9f7fbefc792d0c2346553a51cb49082189f3a3dbcfe4e6532266"}
Jan 26 16:47:45 crc kubenswrapper[4896]: I0126 16:47:45.904625 4896 scope.go:117] "RemoveContainer" containerID="c590f03262decc379992b3bd66d3a22f446a52a996e5f1cc1f73f4141ae84ac2"
Jan 26 16:47:45 crc kubenswrapper[4896]: I0126 16:47:45.904641 4896 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-lqx5b" Jan 26 16:47:45 crc kubenswrapper[4896]: I0126 16:47:45.931957 4896 scope.go:117] "RemoveContainer" containerID="758a0c40e00b630a3779fceaf0c437240755d27bcfd77f097b379508d72de65c" Jan 26 16:47:45 crc kubenswrapper[4896]: I0126 16:47:45.945296 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75492d6f-24c8-44e7-8864-a5afc7cd8e7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "75492d6f-24c8-44e7-8864-a5afc7cd8e7d" (UID: "75492d6f-24c8-44e7-8864-a5afc7cd8e7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:47:45 crc kubenswrapper[4896]: I0126 16:47:45.994589 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75492d6f-24c8-44e7-8864-a5afc7cd8e7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:47:46 crc kubenswrapper[4896]: E0126 16:47:46.187761 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2496a24c_43ae_4ce4_8996_60c6e7282bfa.slice/crio-conmon-ddd4e68d3f61942cdb100a3606518ece797493317bc103b7afddaee236179707.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e0a37ed_b8af_49ae_9c6c_ed7097f46f3b.slice/crio-conmon-e747ef259672c696c03d07757ac863f265d1858dea318930960f5d45872c81c7.scope\": RecentStats: unable to find data in memory cache]" Jan 26 16:47:46 crc kubenswrapper[4896]: I0126 16:47:46.575380 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75492d6f-24c8-44e7-8864-a5afc7cd8e7d-kube-api-access-zlrlj" (OuterVolumeSpecName: "kube-api-access-zlrlj") pod "75492d6f-24c8-44e7-8864-a5afc7cd8e7d" (UID: 
"75492d6f-24c8-44e7-8864-a5afc7cd8e7d"). InnerVolumeSpecName "kube-api-access-zlrlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:47:46 crc kubenswrapper[4896]: I0126 16:47:46.604439 4896 scope.go:117] "RemoveContainer" containerID="0116ceb5fd682c77a39274528e3df590939170c69b60dff43a71f9ebd12775c5" Jan 26 16:47:46 crc kubenswrapper[4896]: I0126 16:47:46.609334 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlrlj\" (UniqueName: \"kubernetes.io/projected/75492d6f-24c8-44e7-8864-a5afc7cd8e7d-kube-api-access-zlrlj\") on node \"crc\" DevicePath \"\"" Jan 26 16:47:46 crc kubenswrapper[4896]: I0126 16:47:46.757021 4896 scope.go:117] "RemoveContainer" containerID="c590f03262decc379992b3bd66d3a22f446a52a996e5f1cc1f73f4141ae84ac2" Jan 26 16:47:46 crc kubenswrapper[4896]: E0126 16:47:46.757537 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c590f03262decc379992b3bd66d3a22f446a52a996e5f1cc1f73f4141ae84ac2\": container with ID starting with c590f03262decc379992b3bd66d3a22f446a52a996e5f1cc1f73f4141ae84ac2 not found: ID does not exist" containerID="c590f03262decc379992b3bd66d3a22f446a52a996e5f1cc1f73f4141ae84ac2" Jan 26 16:47:46 crc kubenswrapper[4896]: I0126 16:47:46.757656 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c590f03262decc379992b3bd66d3a22f446a52a996e5f1cc1f73f4141ae84ac2"} err="failed to get container status \"c590f03262decc379992b3bd66d3a22f446a52a996e5f1cc1f73f4141ae84ac2\": rpc error: code = NotFound desc = could not find container \"c590f03262decc379992b3bd66d3a22f446a52a996e5f1cc1f73f4141ae84ac2\": container with ID starting with c590f03262decc379992b3bd66d3a22f446a52a996e5f1cc1f73f4141ae84ac2 not found: ID does not exist" Jan 26 16:47:46 crc kubenswrapper[4896]: I0126 16:47:46.757694 4896 scope.go:117] "RemoveContainer" 
containerID="758a0c40e00b630a3779fceaf0c437240755d27bcfd77f097b379508d72de65c" Jan 26 16:47:46 crc kubenswrapper[4896]: E0126 16:47:46.758092 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"758a0c40e00b630a3779fceaf0c437240755d27bcfd77f097b379508d72de65c\": container with ID starting with 758a0c40e00b630a3779fceaf0c437240755d27bcfd77f097b379508d72de65c not found: ID does not exist" containerID="758a0c40e00b630a3779fceaf0c437240755d27bcfd77f097b379508d72de65c" Jan 26 16:47:46 crc kubenswrapper[4896]: I0126 16:47:46.758139 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"758a0c40e00b630a3779fceaf0c437240755d27bcfd77f097b379508d72de65c"} err="failed to get container status \"758a0c40e00b630a3779fceaf0c437240755d27bcfd77f097b379508d72de65c\": rpc error: code = NotFound desc = could not find container \"758a0c40e00b630a3779fceaf0c437240755d27bcfd77f097b379508d72de65c\": container with ID starting with 758a0c40e00b630a3779fceaf0c437240755d27bcfd77f097b379508d72de65c not found: ID does not exist" Jan 26 16:47:46 crc kubenswrapper[4896]: I0126 16:47:46.758168 4896 scope.go:117] "RemoveContainer" containerID="0116ceb5fd682c77a39274528e3df590939170c69b60dff43a71f9ebd12775c5" Jan 26 16:47:46 crc kubenswrapper[4896]: E0126 16:47:46.765849 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0116ceb5fd682c77a39274528e3df590939170c69b60dff43a71f9ebd12775c5\": container with ID starting with 0116ceb5fd682c77a39274528e3df590939170c69b60dff43a71f9ebd12775c5 not found: ID does not exist" containerID="0116ceb5fd682c77a39274528e3df590939170c69b60dff43a71f9ebd12775c5" Jan 26 16:47:46 crc kubenswrapper[4896]: I0126 16:47:46.765932 4896 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0116ceb5fd682c77a39274528e3df590939170c69b60dff43a71f9ebd12775c5"} err="failed to get container status \"0116ceb5fd682c77a39274528e3df590939170c69b60dff43a71f9ebd12775c5\": rpc error: code = NotFound desc = could not find container \"0116ceb5fd682c77a39274528e3df590939170c69b60dff43a71f9ebd12775c5\": container with ID starting with 0116ceb5fd682c77a39274528e3df590939170c69b60dff43a71f9ebd12775c5 not found: ID does not exist" Jan 26 16:47:46 crc kubenswrapper[4896]: I0126 16:47:46.836087 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lqx5b"] Jan 26 16:47:46 crc kubenswrapper[4896]: I0126 16:47:46.848161 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lqx5b"] Jan 26 16:47:48 crc kubenswrapper[4896]: E0126 16:47:48.118386 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2496a24c_43ae_4ce4_8996_60c6e7282bfa.slice/crio-conmon-ddd4e68d3f61942cdb100a3606518ece797493317bc103b7afddaee236179707.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e0a37ed_b8af_49ae_9c6c_ed7097f46f3b.slice/crio-conmon-e747ef259672c696c03d07757ac863f265d1858dea318930960f5d45872c81c7.scope\": RecentStats: unable to find data in memory cache]" Jan 26 16:47:48 crc kubenswrapper[4896]: E0126 16:47:48.125421 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e0a37ed_b8af_49ae_9c6c_ed7097f46f3b.slice/crio-conmon-e747ef259672c696c03d07757ac863f265d1858dea318930960f5d45872c81c7.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2496a24c_43ae_4ce4_8996_60c6e7282bfa.slice/crio-conmon-ddd4e68d3f61942cdb100a3606518ece797493317bc103b7afddaee236179707.scope\": RecentStats: unable to find data in memory cache]" Jan 26 16:47:48 crc kubenswrapper[4896]: I0126 16:47:48.775751 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75492d6f-24c8-44e7-8864-a5afc7cd8e7d" path="/var/lib/kubelet/pods/75492d6f-24c8-44e7-8864-a5afc7cd8e7d/volumes" Jan 26 16:47:48 crc kubenswrapper[4896]: I0126 16:47:48.813624 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:47:48 crc kubenswrapper[4896]: I0126 16:47:48.813684 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:47:48 crc kubenswrapper[4896]: I0126 16:47:48.813730 4896 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" Jan 26 16:47:48 crc kubenswrapper[4896]: I0126 16:47:48.814739 4896 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"40dc426aa0b987e6473edb4bdd90d8989ebb10df3bcf2f43fe2403dd99075c70"} pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 16:47:48 crc kubenswrapper[4896]: I0126 16:47:48.814791 4896 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" containerID="cri-o://40dc426aa0b987e6473edb4bdd90d8989ebb10df3bcf2f43fe2403dd99075c70" gracePeriod=600 Jan 26 16:47:49 crc kubenswrapper[4896]: I0126 16:47:49.962063 4896 generic.go:334] "Generic (PLEG): container finished" podID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerID="40dc426aa0b987e6473edb4bdd90d8989ebb10df3bcf2f43fe2403dd99075c70" exitCode=0 Jan 26 16:47:49 crc kubenswrapper[4896]: I0126 16:47:49.962165 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerDied","Data":"40dc426aa0b987e6473edb4bdd90d8989ebb10df3bcf2f43fe2403dd99075c70"} Jan 26 16:47:49 crc kubenswrapper[4896]: I0126 16:47:49.962745 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerStarted","Data":"cf36efca207d093e795c7b422a705befe47507f8b3fbdbe22aba88999b487108"} Jan 26 16:47:49 crc kubenswrapper[4896]: I0126 16:47:49.962774 4896 scope.go:117] "RemoveContainer" containerID="9a1f439c0d18224c2e78758fdca79bf25e95b14f01dfcb6f993dbf4750b7ea36" Jan 26 16:47:56 crc kubenswrapper[4896]: E0126 16:47:56.536885 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e0a37ed_b8af_49ae_9c6c_ed7097f46f3b.slice/crio-conmon-e747ef259672c696c03d07757ac863f265d1858dea318930960f5d45872c81c7.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2496a24c_43ae_4ce4_8996_60c6e7282bfa.slice/crio-conmon-ddd4e68d3f61942cdb100a3606518ece797493317bc103b7afddaee236179707.scope\": RecentStats: unable to find data in memory cache]" Jan 26 16:47:59 crc kubenswrapper[4896]: E0126 16:47:59.807253 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e0a37ed_b8af_49ae_9c6c_ed7097f46f3b.slice/crio-conmon-e747ef259672c696c03d07757ac863f265d1858dea318930960f5d45872c81c7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2496a24c_43ae_4ce4_8996_60c6e7282bfa.slice/crio-conmon-ddd4e68d3f61942cdb100a3606518ece797493317bc103b7afddaee236179707.scope\": RecentStats: unable to find data in memory cache]" Jan 26 16:48:02 crc kubenswrapper[4896]: I0126 16:48:02.756461 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5c8c8d84d-97894" Jan 26 16:49:19 crc kubenswrapper[4896]: I0126 16:49:19.529043 4896 trace.go:236] Trace[1962207025]: "Calculate volume metrics of storage for pod openshift-logging/logging-loki-compactor-0" (26-Jan-2026 16:49:16.142) (total time: 3386ms): Jan 26 16:49:19 crc kubenswrapper[4896]: Trace[1962207025]: [3.386078874s] [3.386078874s] END Jan 26 16:49:19 crc kubenswrapper[4896]: I0126 16:49:19.550030 4896 trace.go:236] Trace[296897239]: "Calculate volume metrics of prometheus-metric-storage-db for pod openstack/prometheus-metric-storage-0" (26-Jan-2026 16:49:17.232) (total time: 2317ms): Jan 26 16:49:19 crc kubenswrapper[4896]: Trace[296897239]: [2.317734981s] [2.317734981s] END Jan 26 16:49:19 crc kubenswrapper[4896]: I0126 16:49:19.597010 4896 trace.go:236] Trace[1657576655]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-0" (26-Jan-2026 
16:49:16.468) (total time: 3128ms): Jan 26 16:49:19 crc kubenswrapper[4896]: Trace[1657576655]: [3.128395109s] [3.128395109s] END Jan 26 16:50:18 crc kubenswrapper[4896]: I0126 16:50:18.813609 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:50:18 crc kubenswrapper[4896]: I0126 16:50:18.814180 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:50:42 crc kubenswrapper[4896]: I0126 16:50:42.008674 4896 trace.go:236] Trace[1702596359]: "Calculate volume metrics of mysql-db for pod openstack/openstack-cell1-galera-0" (26-Jan-2026 16:50:39.521) (total time: 2487ms): Jan 26 16:50:42 crc kubenswrapper[4896]: Trace[1702596359]: [2.487166973s] [2.487166973s] END Jan 26 16:50:48 crc kubenswrapper[4896]: I0126 16:50:48.814186 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:50:48 crc kubenswrapper[4896]: I0126 16:50:48.814840 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:51:18 crc kubenswrapper[4896]: I0126 
16:51:18.818426 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:51:18 crc kubenswrapper[4896]: I0126 16:51:18.819502 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:51:18 crc kubenswrapper[4896]: I0126 16:51:18.819658 4896 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" Jan 26 16:51:18 crc kubenswrapper[4896]: I0126 16:51:18.826985 4896 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cf36efca207d093e795c7b422a705befe47507f8b3fbdbe22aba88999b487108"} pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 16:51:18 crc kubenswrapper[4896]: I0126 16:51:18.827156 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" containerID="cri-o://cf36efca207d093e795c7b422a705befe47507f8b3fbdbe22aba88999b487108" gracePeriod=600 Jan 26 16:51:18 crc kubenswrapper[4896]: E0126 16:51:18.983451 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:51:19 crc kubenswrapper[4896]: I0126 16:51:19.898746 4896 generic.go:334] "Generic (PLEG): container finished" podID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerID="cf36efca207d093e795c7b422a705befe47507f8b3fbdbe22aba88999b487108" exitCode=0 Jan 26 16:51:19 crc kubenswrapper[4896]: I0126 16:51:19.898799 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerDied","Data":"cf36efca207d093e795c7b422a705befe47507f8b3fbdbe22aba88999b487108"} Jan 26 16:51:19 crc kubenswrapper[4896]: I0126 16:51:19.898923 4896 scope.go:117] "RemoveContainer" containerID="40dc426aa0b987e6473edb4bdd90d8989ebb10df3bcf2f43fe2403dd99075c70" Jan 26 16:51:19 crc kubenswrapper[4896]: I0126 16:51:19.899850 4896 scope.go:117] "RemoveContainer" containerID="cf36efca207d093e795c7b422a705befe47507f8b3fbdbe22aba88999b487108" Jan 26 16:51:19 crc kubenswrapper[4896]: E0126 16:51:19.900391 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:51:30 crc kubenswrapper[4896]: I0126 16:51:30.604200 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6jg9d"] Jan 26 16:51:30 crc kubenswrapper[4896]: E0126 16:51:30.605283 4896 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="75492d6f-24c8-44e7-8864-a5afc7cd8e7d" containerName="registry-server" Jan 26 16:51:30 crc kubenswrapper[4896]: I0126 16:51:30.605299 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="75492d6f-24c8-44e7-8864-a5afc7cd8e7d" containerName="registry-server" Jan 26 16:51:30 crc kubenswrapper[4896]: E0126 16:51:30.605324 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75492d6f-24c8-44e7-8864-a5afc7cd8e7d" containerName="extract-content" Jan 26 16:51:30 crc kubenswrapper[4896]: I0126 16:51:30.605330 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="75492d6f-24c8-44e7-8864-a5afc7cd8e7d" containerName="extract-content" Jan 26 16:51:30 crc kubenswrapper[4896]: E0126 16:51:30.605358 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75492d6f-24c8-44e7-8864-a5afc7cd8e7d" containerName="extract-utilities" Jan 26 16:51:30 crc kubenswrapper[4896]: I0126 16:51:30.605365 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="75492d6f-24c8-44e7-8864-a5afc7cd8e7d" containerName="extract-utilities" Jan 26 16:51:30 crc kubenswrapper[4896]: I0126 16:51:30.605674 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="75492d6f-24c8-44e7-8864-a5afc7cd8e7d" containerName="registry-server" Jan 26 16:51:30 crc kubenswrapper[4896]: I0126 16:51:30.607481 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6jg9d" Jan 26 16:51:30 crc kubenswrapper[4896]: I0126 16:51:30.636957 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6jg9d"] Jan 26 16:51:30 crc kubenswrapper[4896]: I0126 16:51:30.759739 4896 scope.go:117] "RemoveContainer" containerID="cf36efca207d093e795c7b422a705befe47507f8b3fbdbe22aba88999b487108" Jan 26 16:51:30 crc kubenswrapper[4896]: E0126 16:51:30.760121 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:51:30 crc kubenswrapper[4896]: I0126 16:51:30.794204 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9t8j\" (UniqueName: \"kubernetes.io/projected/ab636d50-661f-4957-bff5-82423169a66a-kube-api-access-m9t8j\") pod \"redhat-operators-6jg9d\" (UID: \"ab636d50-661f-4957-bff5-82423169a66a\") " pod="openshift-marketplace/redhat-operators-6jg9d" Jan 26 16:51:30 crc kubenswrapper[4896]: I0126 16:51:30.794927 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab636d50-661f-4957-bff5-82423169a66a-catalog-content\") pod \"redhat-operators-6jg9d\" (UID: \"ab636d50-661f-4957-bff5-82423169a66a\") " pod="openshift-marketplace/redhat-operators-6jg9d" Jan 26 16:51:30 crc kubenswrapper[4896]: I0126 16:51:30.795262 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/ab636d50-661f-4957-bff5-82423169a66a-utilities\") pod \"redhat-operators-6jg9d\" (UID: \"ab636d50-661f-4957-bff5-82423169a66a\") " pod="openshift-marketplace/redhat-operators-6jg9d" Jan 26 16:51:30 crc kubenswrapper[4896]: I0126 16:51:30.898523 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9t8j\" (UniqueName: \"kubernetes.io/projected/ab636d50-661f-4957-bff5-82423169a66a-kube-api-access-m9t8j\") pod \"redhat-operators-6jg9d\" (UID: \"ab636d50-661f-4957-bff5-82423169a66a\") " pod="openshift-marketplace/redhat-operators-6jg9d" Jan 26 16:51:30 crc kubenswrapper[4896]: I0126 16:51:30.899164 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab636d50-661f-4957-bff5-82423169a66a-catalog-content\") pod \"redhat-operators-6jg9d\" (UID: \"ab636d50-661f-4957-bff5-82423169a66a\") " pod="openshift-marketplace/redhat-operators-6jg9d" Jan 26 16:51:30 crc kubenswrapper[4896]: I0126 16:51:30.899667 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab636d50-661f-4957-bff5-82423169a66a-catalog-content\") pod \"redhat-operators-6jg9d\" (UID: \"ab636d50-661f-4957-bff5-82423169a66a\") " pod="openshift-marketplace/redhat-operators-6jg9d" Jan 26 16:51:30 crc kubenswrapper[4896]: I0126 16:51:30.899857 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab636d50-661f-4957-bff5-82423169a66a-utilities\") pod \"redhat-operators-6jg9d\" (UID: \"ab636d50-661f-4957-bff5-82423169a66a\") " pod="openshift-marketplace/redhat-operators-6jg9d" Jan 26 16:51:30 crc kubenswrapper[4896]: I0126 16:51:30.901179 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/ab636d50-661f-4957-bff5-82423169a66a-utilities\") pod \"redhat-operators-6jg9d\" (UID: \"ab636d50-661f-4957-bff5-82423169a66a\") " pod="openshift-marketplace/redhat-operators-6jg9d" Jan 26 16:51:30 crc kubenswrapper[4896]: I0126 16:51:30.918175 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9t8j\" (UniqueName: \"kubernetes.io/projected/ab636d50-661f-4957-bff5-82423169a66a-kube-api-access-m9t8j\") pod \"redhat-operators-6jg9d\" (UID: \"ab636d50-661f-4957-bff5-82423169a66a\") " pod="openshift-marketplace/redhat-operators-6jg9d" Jan 26 16:51:30 crc kubenswrapper[4896]: I0126 16:51:30.929229 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6jg9d" Jan 26 16:51:31 crc kubenswrapper[4896]: I0126 16:51:31.860005 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6jg9d"] Jan 26 16:51:35 crc kubenswrapper[4896]: I0126 16:51:35.101710 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jg9d" event={"ID":"ab636d50-661f-4957-bff5-82423169a66a","Type":"ContainerStarted","Data":"18db73dd9c04fda32d9e2f45896ed5256dc8e769d75915708ad3a3dd8e0f89d3"} Jan 26 16:51:36 crc kubenswrapper[4896]: I0126 16:51:36.114430 4896 generic.go:334] "Generic (PLEG): container finished" podID="ab636d50-661f-4957-bff5-82423169a66a" containerID="866a133b803fd9100aea6fc76f44d8cc4102a71d55c5e083ad34645f6bbb492e" exitCode=0 Jan 26 16:51:36 crc kubenswrapper[4896]: I0126 16:51:36.114494 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jg9d" event={"ID":"ab636d50-661f-4957-bff5-82423169a66a","Type":"ContainerDied","Data":"866a133b803fd9100aea6fc76f44d8cc4102a71d55c5e083ad34645f6bbb492e"} Jan 26 16:51:37 crc kubenswrapper[4896]: I0126 16:51:37.132784 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-6jg9d" event={"ID":"ab636d50-661f-4957-bff5-82423169a66a","Type":"ContainerStarted","Data":"4833865cebd7cff0c3fa39d389e703cc6251fdc80d9ee79f2a7e6ead24073b95"} Jan 26 16:51:41 crc kubenswrapper[4896]: I0126 16:51:41.183395 4896 generic.go:334] "Generic (PLEG): container finished" podID="ab636d50-661f-4957-bff5-82423169a66a" containerID="4833865cebd7cff0c3fa39d389e703cc6251fdc80d9ee79f2a7e6ead24073b95" exitCode=0 Jan 26 16:51:41 crc kubenswrapper[4896]: I0126 16:51:41.183485 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jg9d" event={"ID":"ab636d50-661f-4957-bff5-82423169a66a","Type":"ContainerDied","Data":"4833865cebd7cff0c3fa39d389e703cc6251fdc80d9ee79f2a7e6ead24073b95"} Jan 26 16:51:42 crc kubenswrapper[4896]: I0126 16:51:42.211378 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jg9d" event={"ID":"ab636d50-661f-4957-bff5-82423169a66a","Type":"ContainerStarted","Data":"49db73bf26b75a906aa8d6c0cd388a9473c52d79db4a4656373db6dd8360a2e1"} Jan 26 16:51:42 crc kubenswrapper[4896]: I0126 16:51:42.246153 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6jg9d" podStartSLOduration=6.750638899 podStartE2EDuration="12.246113681s" podCreationTimestamp="2026-01-26 16:51:30 +0000 UTC" firstStartedPulling="2026-01-26 16:51:36.116673614 +0000 UTC m=+4653.898554007" lastFinishedPulling="2026-01-26 16:51:41.612148396 +0000 UTC m=+4659.394028789" observedRunningTime="2026-01-26 16:51:42.234960618 +0000 UTC m=+4660.016841001" watchObservedRunningTime="2026-01-26 16:51:42.246113681 +0000 UTC m=+4660.027994064" Jan 26 16:51:45 crc kubenswrapper[4896]: I0126 16:51:45.759930 4896 scope.go:117] "RemoveContainer" containerID="cf36efca207d093e795c7b422a705befe47507f8b3fbdbe22aba88999b487108" Jan 26 16:51:45 crc kubenswrapper[4896]: E0126 16:51:45.761103 4896 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:51:50 crc kubenswrapper[4896]: I0126 16:51:50.930302 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6jg9d" Jan 26 16:51:50 crc kubenswrapper[4896]: I0126 16:51:50.930948 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6jg9d" Jan 26 16:51:51 crc kubenswrapper[4896]: I0126 16:51:51.983563 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6jg9d" podUID="ab636d50-661f-4957-bff5-82423169a66a" containerName="registry-server" probeResult="failure" output=< Jan 26 16:51:51 crc kubenswrapper[4896]: timeout: failed to connect service ":50051" within 1s Jan 26 16:51:51 crc kubenswrapper[4896]: > Jan 26 16:51:57 crc kubenswrapper[4896]: I0126 16:51:57.759212 4896 scope.go:117] "RemoveContainer" containerID="cf36efca207d093e795c7b422a705befe47507f8b3fbdbe22aba88999b487108" Jan 26 16:51:57 crc kubenswrapper[4896]: E0126 16:51:57.760165 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:52:00 crc kubenswrapper[4896]: I0126 16:52:00.983692 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-operators-6jg9d" Jan 26 16:52:01 crc kubenswrapper[4896]: I0126 16:52:01.040306 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6jg9d" Jan 26 16:52:01 crc kubenswrapper[4896]: I0126 16:52:01.800097 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6jg9d"] Jan 26 16:52:02 crc kubenswrapper[4896]: I0126 16:52:02.757073 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6jg9d" podUID="ab636d50-661f-4957-bff5-82423169a66a" containerName="registry-server" containerID="cri-o://49db73bf26b75a906aa8d6c0cd388a9473c52d79db4a4656373db6dd8360a2e1" gracePeriod=2 Jan 26 16:52:03 crc kubenswrapper[4896]: I0126 16:52:03.333425 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6jg9d" Jan 26 16:52:03 crc kubenswrapper[4896]: I0126 16:52:03.450153 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab636d50-661f-4957-bff5-82423169a66a-utilities\") pod \"ab636d50-661f-4957-bff5-82423169a66a\" (UID: \"ab636d50-661f-4957-bff5-82423169a66a\") " Jan 26 16:52:03 crc kubenswrapper[4896]: I0126 16:52:03.450559 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab636d50-661f-4957-bff5-82423169a66a-catalog-content\") pod \"ab636d50-661f-4957-bff5-82423169a66a\" (UID: \"ab636d50-661f-4957-bff5-82423169a66a\") " Jan 26 16:52:03 crc kubenswrapper[4896]: I0126 16:52:03.450839 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9t8j\" (UniqueName: \"kubernetes.io/projected/ab636d50-661f-4957-bff5-82423169a66a-kube-api-access-m9t8j\") pod 
\"ab636d50-661f-4957-bff5-82423169a66a\" (UID: \"ab636d50-661f-4957-bff5-82423169a66a\") " Jan 26 16:52:03 crc kubenswrapper[4896]: I0126 16:52:03.451257 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab636d50-661f-4957-bff5-82423169a66a-utilities" (OuterVolumeSpecName: "utilities") pod "ab636d50-661f-4957-bff5-82423169a66a" (UID: "ab636d50-661f-4957-bff5-82423169a66a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:52:03 crc kubenswrapper[4896]: I0126 16:52:03.472355 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab636d50-661f-4957-bff5-82423169a66a-kube-api-access-m9t8j" (OuterVolumeSpecName: "kube-api-access-m9t8j") pod "ab636d50-661f-4957-bff5-82423169a66a" (UID: "ab636d50-661f-4957-bff5-82423169a66a"). InnerVolumeSpecName "kube-api-access-m9t8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:52:03 crc kubenswrapper[4896]: I0126 16:52:03.555247 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9t8j\" (UniqueName: \"kubernetes.io/projected/ab636d50-661f-4957-bff5-82423169a66a-kube-api-access-m9t8j\") on node \"crc\" DevicePath \"\"" Jan 26 16:52:03 crc kubenswrapper[4896]: I0126 16:52:03.555292 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab636d50-661f-4957-bff5-82423169a66a-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:52:03 crc kubenswrapper[4896]: I0126 16:52:03.583110 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab636d50-661f-4957-bff5-82423169a66a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ab636d50-661f-4957-bff5-82423169a66a" (UID: "ab636d50-661f-4957-bff5-82423169a66a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:52:03 crc kubenswrapper[4896]: I0126 16:52:03.658949 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab636d50-661f-4957-bff5-82423169a66a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:52:03 crc kubenswrapper[4896]: I0126 16:52:03.775027 4896 generic.go:334] "Generic (PLEG): container finished" podID="ab636d50-661f-4957-bff5-82423169a66a" containerID="49db73bf26b75a906aa8d6c0cd388a9473c52d79db4a4656373db6dd8360a2e1" exitCode=0 Jan 26 16:52:03 crc kubenswrapper[4896]: I0126 16:52:03.776202 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6jg9d" Jan 26 16:52:03 crc kubenswrapper[4896]: I0126 16:52:03.776684 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jg9d" event={"ID":"ab636d50-661f-4957-bff5-82423169a66a","Type":"ContainerDied","Data":"49db73bf26b75a906aa8d6c0cd388a9473c52d79db4a4656373db6dd8360a2e1"} Jan 26 16:52:03 crc kubenswrapper[4896]: I0126 16:52:03.776732 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jg9d" event={"ID":"ab636d50-661f-4957-bff5-82423169a66a","Type":"ContainerDied","Data":"18db73dd9c04fda32d9e2f45896ed5256dc8e769d75915708ad3a3dd8e0f89d3"} Jan 26 16:52:03 crc kubenswrapper[4896]: I0126 16:52:03.776755 4896 scope.go:117] "RemoveContainer" containerID="49db73bf26b75a906aa8d6c0cd388a9473c52d79db4a4656373db6dd8360a2e1" Jan 26 16:52:03 crc kubenswrapper[4896]: I0126 16:52:03.814794 4896 scope.go:117] "RemoveContainer" containerID="4833865cebd7cff0c3fa39d389e703cc6251fdc80d9ee79f2a7e6ead24073b95" Jan 26 16:52:03 crc kubenswrapper[4896]: I0126 16:52:03.821894 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6jg9d"] Jan 26 16:52:03 crc kubenswrapper[4896]: I0126 
16:52:03.834591 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6jg9d"] Jan 26 16:52:03 crc kubenswrapper[4896]: I0126 16:52:03.877797 4896 scope.go:117] "RemoveContainer" containerID="866a133b803fd9100aea6fc76f44d8cc4102a71d55c5e083ad34645f6bbb492e" Jan 26 16:52:03 crc kubenswrapper[4896]: I0126 16:52:03.922636 4896 scope.go:117] "RemoveContainer" containerID="49db73bf26b75a906aa8d6c0cd388a9473c52d79db4a4656373db6dd8360a2e1" Jan 26 16:52:03 crc kubenswrapper[4896]: E0126 16:52:03.923197 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49db73bf26b75a906aa8d6c0cd388a9473c52d79db4a4656373db6dd8360a2e1\": container with ID starting with 49db73bf26b75a906aa8d6c0cd388a9473c52d79db4a4656373db6dd8360a2e1 not found: ID does not exist" containerID="49db73bf26b75a906aa8d6c0cd388a9473c52d79db4a4656373db6dd8360a2e1" Jan 26 16:52:03 crc kubenswrapper[4896]: I0126 16:52:03.923248 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49db73bf26b75a906aa8d6c0cd388a9473c52d79db4a4656373db6dd8360a2e1"} err="failed to get container status \"49db73bf26b75a906aa8d6c0cd388a9473c52d79db4a4656373db6dd8360a2e1\": rpc error: code = NotFound desc = could not find container \"49db73bf26b75a906aa8d6c0cd388a9473c52d79db4a4656373db6dd8360a2e1\": container with ID starting with 49db73bf26b75a906aa8d6c0cd388a9473c52d79db4a4656373db6dd8360a2e1 not found: ID does not exist" Jan 26 16:52:03 crc kubenswrapper[4896]: I0126 16:52:03.923282 4896 scope.go:117] "RemoveContainer" containerID="4833865cebd7cff0c3fa39d389e703cc6251fdc80d9ee79f2a7e6ead24073b95" Jan 26 16:52:03 crc kubenswrapper[4896]: E0126 16:52:03.923536 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4833865cebd7cff0c3fa39d389e703cc6251fdc80d9ee79f2a7e6ead24073b95\": container with ID 
starting with 4833865cebd7cff0c3fa39d389e703cc6251fdc80d9ee79f2a7e6ead24073b95 not found: ID does not exist" containerID="4833865cebd7cff0c3fa39d389e703cc6251fdc80d9ee79f2a7e6ead24073b95" Jan 26 16:52:03 crc kubenswrapper[4896]: I0126 16:52:03.923565 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4833865cebd7cff0c3fa39d389e703cc6251fdc80d9ee79f2a7e6ead24073b95"} err="failed to get container status \"4833865cebd7cff0c3fa39d389e703cc6251fdc80d9ee79f2a7e6ead24073b95\": rpc error: code = NotFound desc = could not find container \"4833865cebd7cff0c3fa39d389e703cc6251fdc80d9ee79f2a7e6ead24073b95\": container with ID starting with 4833865cebd7cff0c3fa39d389e703cc6251fdc80d9ee79f2a7e6ead24073b95 not found: ID does not exist" Jan 26 16:52:03 crc kubenswrapper[4896]: I0126 16:52:03.923735 4896 scope.go:117] "RemoveContainer" containerID="866a133b803fd9100aea6fc76f44d8cc4102a71d55c5e083ad34645f6bbb492e" Jan 26 16:52:03 crc kubenswrapper[4896]: E0126 16:52:03.924014 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"866a133b803fd9100aea6fc76f44d8cc4102a71d55c5e083ad34645f6bbb492e\": container with ID starting with 866a133b803fd9100aea6fc76f44d8cc4102a71d55c5e083ad34645f6bbb492e not found: ID does not exist" containerID="866a133b803fd9100aea6fc76f44d8cc4102a71d55c5e083ad34645f6bbb492e" Jan 26 16:52:03 crc kubenswrapper[4896]: I0126 16:52:03.924042 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"866a133b803fd9100aea6fc76f44d8cc4102a71d55c5e083ad34645f6bbb492e"} err="failed to get container status \"866a133b803fd9100aea6fc76f44d8cc4102a71d55c5e083ad34645f6bbb492e\": rpc error: code = NotFound desc = could not find container \"866a133b803fd9100aea6fc76f44d8cc4102a71d55c5e083ad34645f6bbb492e\": container with ID starting with 866a133b803fd9100aea6fc76f44d8cc4102a71d55c5e083ad34645f6bbb492e not found: 
ID does not exist" Jan 26 16:52:04 crc kubenswrapper[4896]: I0126 16:52:04.772879 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab636d50-661f-4957-bff5-82423169a66a" path="/var/lib/kubelet/pods/ab636d50-661f-4957-bff5-82423169a66a/volumes" Jan 26 16:52:09 crc kubenswrapper[4896]: I0126 16:52:09.760010 4896 scope.go:117] "RemoveContainer" containerID="cf36efca207d093e795c7b422a705befe47507f8b3fbdbe22aba88999b487108" Jan 26 16:52:09 crc kubenswrapper[4896]: E0126 16:52:09.761157 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:52:10 crc kubenswrapper[4896]: I0126 16:52:10.564015 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jjgvw"] Jan 26 16:52:10 crc kubenswrapper[4896]: E0126 16:52:10.564696 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab636d50-661f-4957-bff5-82423169a66a" containerName="extract-utilities" Jan 26 16:52:10 crc kubenswrapper[4896]: I0126 16:52:10.564717 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab636d50-661f-4957-bff5-82423169a66a" containerName="extract-utilities" Jan 26 16:52:10 crc kubenswrapper[4896]: E0126 16:52:10.564765 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab636d50-661f-4957-bff5-82423169a66a" containerName="registry-server" Jan 26 16:52:10 crc kubenswrapper[4896]: I0126 16:52:10.564772 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab636d50-661f-4957-bff5-82423169a66a" containerName="registry-server" Jan 26 16:52:10 crc kubenswrapper[4896]: E0126 16:52:10.564787 4896 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="ab636d50-661f-4957-bff5-82423169a66a" containerName="extract-content" Jan 26 16:52:10 crc kubenswrapper[4896]: I0126 16:52:10.564795 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab636d50-661f-4957-bff5-82423169a66a" containerName="extract-content" Jan 26 16:52:10 crc kubenswrapper[4896]: I0126 16:52:10.565061 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab636d50-661f-4957-bff5-82423169a66a" containerName="registry-server" Jan 26 16:52:10 crc kubenswrapper[4896]: I0126 16:52:10.566876 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jjgvw" Jan 26 16:52:10 crc kubenswrapper[4896]: I0126 16:52:10.578855 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jjgvw"] Jan 26 16:52:10 crc kubenswrapper[4896]: I0126 16:52:10.645109 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1c70f9f-7ea0-484b-a6fb-c233dca23610-utilities\") pod \"redhat-marketplace-jjgvw\" (UID: \"d1c70f9f-7ea0-484b-a6fb-c233dca23610\") " pod="openshift-marketplace/redhat-marketplace-jjgvw" Jan 26 16:52:10 crc kubenswrapper[4896]: I0126 16:52:10.645466 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zr66\" (UniqueName: \"kubernetes.io/projected/d1c70f9f-7ea0-484b-a6fb-c233dca23610-kube-api-access-6zr66\") pod \"redhat-marketplace-jjgvw\" (UID: \"d1c70f9f-7ea0-484b-a6fb-c233dca23610\") " pod="openshift-marketplace/redhat-marketplace-jjgvw" Jan 26 16:52:10 crc kubenswrapper[4896]: I0126 16:52:10.645627 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1c70f9f-7ea0-484b-a6fb-c233dca23610-catalog-content\") pod 
\"redhat-marketplace-jjgvw\" (UID: \"d1c70f9f-7ea0-484b-a6fb-c233dca23610\") " pod="openshift-marketplace/redhat-marketplace-jjgvw" Jan 26 16:52:10 crc kubenswrapper[4896]: I0126 16:52:10.748491 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1c70f9f-7ea0-484b-a6fb-c233dca23610-utilities\") pod \"redhat-marketplace-jjgvw\" (UID: \"d1c70f9f-7ea0-484b-a6fb-c233dca23610\") " pod="openshift-marketplace/redhat-marketplace-jjgvw" Jan 26 16:52:10 crc kubenswrapper[4896]: I0126 16:52:10.749030 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zr66\" (UniqueName: \"kubernetes.io/projected/d1c70f9f-7ea0-484b-a6fb-c233dca23610-kube-api-access-6zr66\") pod \"redhat-marketplace-jjgvw\" (UID: \"d1c70f9f-7ea0-484b-a6fb-c233dca23610\") " pod="openshift-marketplace/redhat-marketplace-jjgvw" Jan 26 16:52:10 crc kubenswrapper[4896]: I0126 16:52:10.749092 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1c70f9f-7ea0-484b-a6fb-c233dca23610-catalog-content\") pod \"redhat-marketplace-jjgvw\" (UID: \"d1c70f9f-7ea0-484b-a6fb-c233dca23610\") " pod="openshift-marketplace/redhat-marketplace-jjgvw" Jan 26 16:52:10 crc kubenswrapper[4896]: I0126 16:52:10.750027 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1c70f9f-7ea0-484b-a6fb-c233dca23610-catalog-content\") pod \"redhat-marketplace-jjgvw\" (UID: \"d1c70f9f-7ea0-484b-a6fb-c233dca23610\") " pod="openshift-marketplace/redhat-marketplace-jjgvw" Jan 26 16:52:10 crc kubenswrapper[4896]: I0126 16:52:10.750175 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1c70f9f-7ea0-484b-a6fb-c233dca23610-utilities\") pod \"redhat-marketplace-jjgvw\" (UID: 
\"d1c70f9f-7ea0-484b-a6fb-c233dca23610\") " pod="openshift-marketplace/redhat-marketplace-jjgvw" Jan 26 16:52:10 crc kubenswrapper[4896]: I0126 16:52:10.792166 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zr66\" (UniqueName: \"kubernetes.io/projected/d1c70f9f-7ea0-484b-a6fb-c233dca23610-kube-api-access-6zr66\") pod \"redhat-marketplace-jjgvw\" (UID: \"d1c70f9f-7ea0-484b-a6fb-c233dca23610\") " pod="openshift-marketplace/redhat-marketplace-jjgvw" Jan 26 16:52:10 crc kubenswrapper[4896]: I0126 16:52:10.898445 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jjgvw" Jan 26 16:52:11 crc kubenswrapper[4896]: I0126 16:52:11.442081 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jjgvw"] Jan 26 16:52:11 crc kubenswrapper[4896]: I0126 16:52:11.888721 4896 generic.go:334] "Generic (PLEG): container finished" podID="d1c70f9f-7ea0-484b-a6fb-c233dca23610" containerID="89c3c726718df7d6047159db2207073a427d3f388de10c2833b8a5692945f85a" exitCode=0 Jan 26 16:52:11 crc kubenswrapper[4896]: I0126 16:52:11.894254 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jjgvw" event={"ID":"d1c70f9f-7ea0-484b-a6fb-c233dca23610","Type":"ContainerDied","Data":"89c3c726718df7d6047159db2207073a427d3f388de10c2833b8a5692945f85a"} Jan 26 16:52:11 crc kubenswrapper[4896]: I0126 16:52:11.894589 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jjgvw" event={"ID":"d1c70f9f-7ea0-484b-a6fb-c233dca23610","Type":"ContainerStarted","Data":"1361d4150892203fa019b72714e2947e21645ae8ccc528bacd21fbb45376685a"} Jan 26 16:52:12 crc kubenswrapper[4896]: E0126 16:52:12.350870 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to 
find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab636d50_661f_4957_bff5_82423169a66a.slice/crio-conmon-49db73bf26b75a906aa8d6c0cd388a9473c52d79db4a4656373db6dd8360a2e1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab636d50_661f_4957_bff5_82423169a66a.slice/crio-49db73bf26b75a906aa8d6c0cd388a9473c52d79db4a4656373db6dd8360a2e1.scope\": RecentStats: unable to find data in memory cache]" Jan 26 16:52:12 crc kubenswrapper[4896]: I0126 16:52:12.907335 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jjgvw" event={"ID":"d1c70f9f-7ea0-484b-a6fb-c233dca23610","Type":"ContainerStarted","Data":"9112fd78e4dc1fb195dfcbbc7730ad8b12612183e78cdbcac16c2616c57d4c71"} Jan 26 16:52:13 crc kubenswrapper[4896]: I0126 16:52:13.920672 4896 generic.go:334] "Generic (PLEG): container finished" podID="d1c70f9f-7ea0-484b-a6fb-c233dca23610" containerID="9112fd78e4dc1fb195dfcbbc7730ad8b12612183e78cdbcac16c2616c57d4c71" exitCode=0 Jan 26 16:52:13 crc kubenswrapper[4896]: I0126 16:52:13.920721 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jjgvw" event={"ID":"d1c70f9f-7ea0-484b-a6fb-c233dca23610","Type":"ContainerDied","Data":"9112fd78e4dc1fb195dfcbbc7730ad8b12612183e78cdbcac16c2616c57d4c71"} Jan 26 16:52:14 crc kubenswrapper[4896]: E0126 16:52:14.799520 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab636d50_661f_4957_bff5_82423169a66a.slice/crio-49db73bf26b75a906aa8d6c0cd388a9473c52d79db4a4656373db6dd8360a2e1.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab636d50_661f_4957_bff5_82423169a66a.slice/crio-conmon-49db73bf26b75a906aa8d6c0cd388a9473c52d79db4a4656373db6dd8360a2e1.scope\": RecentStats: unable to find data in memory cache], [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Jan 26 16:52:15 crc kubenswrapper[4896]: I0126 16:52:15.955125 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jjgvw" event={"ID":"d1c70f9f-7ea0-484b-a6fb-c233dca23610","Type":"ContainerStarted","Data":"a88ad77d548a5e6a07241890ee8e9db1aa727aca27397e71ca2d9f2aef54ea99"} Jan 26 16:52:15 crc kubenswrapper[4896]: I0126 16:52:15.978830 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jjgvw" podStartSLOduration=3.48263948 podStartE2EDuration="5.97880564s" podCreationTimestamp="2026-01-26 16:52:10 +0000 UTC" firstStartedPulling="2026-01-26 16:52:11.897766149 +0000 UTC m=+4689.679646542" lastFinishedPulling="2026-01-26 16:52:14.393932309 +0000 UTC m=+4692.175812702" observedRunningTime="2026-01-26 16:52:15.971287416 +0000 UTC m=+4693.753167829" watchObservedRunningTime="2026-01-26 16:52:15.97880564 +0000 UTC m=+4693.760686033" Jan 26 16:52:20 crc kubenswrapper[4896]: I0126 16:52:20.898710 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jjgvw" Jan 26 16:52:20 crc kubenswrapper[4896]: I0126 16:52:20.899261 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jjgvw" Jan 26 16:52:20 crc kubenswrapper[4896]: I0126 16:52:20.950601 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jjgvw" Jan 26 16:52:21 crc kubenswrapper[4896]: I0126 16:52:21.063143 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-jjgvw" Jan 26 16:52:21 crc kubenswrapper[4896]: I0126 16:52:21.204627 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jjgvw"] Jan 26 16:52:21 crc kubenswrapper[4896]: I0126 16:52:21.759711 4896 scope.go:117] "RemoveContainer" containerID="cf36efca207d093e795c7b422a705befe47507f8b3fbdbe22aba88999b487108" Jan 26 16:52:21 crc kubenswrapper[4896]: E0126 16:52:21.760346 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:52:22 crc kubenswrapper[4896]: E0126 16:52:22.676029 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab636d50_661f_4957_bff5_82423169a66a.slice/crio-49db73bf26b75a906aa8d6c0cd388a9473c52d79db4a4656373db6dd8360a2e1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab636d50_661f_4957_bff5_82423169a66a.slice/crio-conmon-49db73bf26b75a906aa8d6c0cd388a9473c52d79db4a4656373db6dd8360a2e1.scope\": RecentStats: unable to find data in memory cache]" Jan 26 16:52:23 crc kubenswrapper[4896]: I0126 16:52:23.025465 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jjgvw" podUID="d1c70f9f-7ea0-484b-a6fb-c233dca23610" containerName="registry-server" 
containerID="cri-o://a88ad77d548a5e6a07241890ee8e9db1aa727aca27397e71ca2d9f2aef54ea99" gracePeriod=2 Jan 26 16:52:24 crc kubenswrapper[4896]: I0126 16:52:24.038624 4896 generic.go:334] "Generic (PLEG): container finished" podID="d1c70f9f-7ea0-484b-a6fb-c233dca23610" containerID="a88ad77d548a5e6a07241890ee8e9db1aa727aca27397e71ca2d9f2aef54ea99" exitCode=0 Jan 26 16:52:24 crc kubenswrapper[4896]: I0126 16:52:24.038737 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jjgvw" event={"ID":"d1c70f9f-7ea0-484b-a6fb-c233dca23610","Type":"ContainerDied","Data":"a88ad77d548a5e6a07241890ee8e9db1aa727aca27397e71ca2d9f2aef54ea99"} Jan 26 16:52:24 crc kubenswrapper[4896]: I0126 16:52:24.602631 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jjgvw" Jan 26 16:52:24 crc kubenswrapper[4896]: I0126 16:52:24.775865 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1c70f9f-7ea0-484b-a6fb-c233dca23610-utilities\") pod \"d1c70f9f-7ea0-484b-a6fb-c233dca23610\" (UID: \"d1c70f9f-7ea0-484b-a6fb-c233dca23610\") " Jan 26 16:52:24 crc kubenswrapper[4896]: I0126 16:52:24.776089 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zr66\" (UniqueName: \"kubernetes.io/projected/d1c70f9f-7ea0-484b-a6fb-c233dca23610-kube-api-access-6zr66\") pod \"d1c70f9f-7ea0-484b-a6fb-c233dca23610\" (UID: \"d1c70f9f-7ea0-484b-a6fb-c233dca23610\") " Jan 26 16:52:24 crc kubenswrapper[4896]: I0126 16:52:24.776411 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1c70f9f-7ea0-484b-a6fb-c233dca23610-catalog-content\") pod \"d1c70f9f-7ea0-484b-a6fb-c233dca23610\" (UID: \"d1c70f9f-7ea0-484b-a6fb-c233dca23610\") " Jan 26 16:52:24 crc kubenswrapper[4896]: I0126 
16:52:24.776733 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1c70f9f-7ea0-484b-a6fb-c233dca23610-utilities" (OuterVolumeSpecName: "utilities") pod "d1c70f9f-7ea0-484b-a6fb-c233dca23610" (UID: "d1c70f9f-7ea0-484b-a6fb-c233dca23610"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:52:24 crc kubenswrapper[4896]: I0126 16:52:24.777142 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1c70f9f-7ea0-484b-a6fb-c233dca23610-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:52:24 crc kubenswrapper[4896]: I0126 16:52:24.784318 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1c70f9f-7ea0-484b-a6fb-c233dca23610-kube-api-access-6zr66" (OuterVolumeSpecName: "kube-api-access-6zr66") pod "d1c70f9f-7ea0-484b-a6fb-c233dca23610" (UID: "d1c70f9f-7ea0-484b-a6fb-c233dca23610"). InnerVolumeSpecName "kube-api-access-6zr66". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:52:24 crc kubenswrapper[4896]: I0126 16:52:24.799981 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1c70f9f-7ea0-484b-a6fb-c233dca23610-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d1c70f9f-7ea0-484b-a6fb-c233dca23610" (UID: "d1c70f9f-7ea0-484b-a6fb-c233dca23610"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:52:24 crc kubenswrapper[4896]: I0126 16:52:24.881168 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1c70f9f-7ea0-484b-a6fb-c233dca23610-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:52:24 crc kubenswrapper[4896]: I0126 16:52:24.881205 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zr66\" (UniqueName: \"kubernetes.io/projected/d1c70f9f-7ea0-484b-a6fb-c233dca23610-kube-api-access-6zr66\") on node \"crc\" DevicePath \"\"" Jan 26 16:52:25 crc kubenswrapper[4896]: I0126 16:52:25.053271 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jjgvw" event={"ID":"d1c70f9f-7ea0-484b-a6fb-c233dca23610","Type":"ContainerDied","Data":"1361d4150892203fa019b72714e2947e21645ae8ccc528bacd21fbb45376685a"} Jan 26 16:52:25 crc kubenswrapper[4896]: I0126 16:52:25.053335 4896 scope.go:117] "RemoveContainer" containerID="a88ad77d548a5e6a07241890ee8e9db1aa727aca27397e71ca2d9f2aef54ea99" Jan 26 16:52:25 crc kubenswrapper[4896]: I0126 16:52:25.053519 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jjgvw" Jan 26 16:52:25 crc kubenswrapper[4896]: I0126 16:52:25.080985 4896 scope.go:117] "RemoveContainer" containerID="9112fd78e4dc1fb195dfcbbc7730ad8b12612183e78cdbcac16c2616c57d4c71" Jan 26 16:52:25 crc kubenswrapper[4896]: I0126 16:52:25.109626 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jjgvw"] Jan 26 16:52:25 crc kubenswrapper[4896]: I0126 16:52:25.113853 4896 scope.go:117] "RemoveContainer" containerID="89c3c726718df7d6047159db2207073a427d3f388de10c2833b8a5692945f85a" Jan 26 16:52:25 crc kubenswrapper[4896]: I0126 16:52:25.135020 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jjgvw"] Jan 26 16:52:26 crc kubenswrapper[4896]: I0126 16:52:26.773283 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1c70f9f-7ea0-484b-a6fb-c233dca23610" path="/var/lib/kubelet/pods/d1c70f9f-7ea0-484b-a6fb-c233dca23610/volumes" Jan 26 16:52:30 crc kubenswrapper[4896]: E0126 16:52:30.042348 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab636d50_661f_4957_bff5_82423169a66a.slice/crio-conmon-49db73bf26b75a906aa8d6c0cd388a9473c52d79db4a4656373db6dd8360a2e1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab636d50_661f_4957_bff5_82423169a66a.slice/crio-49db73bf26b75a906aa8d6c0cd388a9473c52d79db4a4656373db6dd8360a2e1.scope\": RecentStats: unable to find data in memory cache], [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Jan 26 16:52:32 crc kubenswrapper[4896]: E0126 16:52:32.738493 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab636d50_661f_4957_bff5_82423169a66a.slice/crio-conmon-49db73bf26b75a906aa8d6c0cd388a9473c52d79db4a4656373db6dd8360a2e1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab636d50_661f_4957_bff5_82423169a66a.slice/crio-49db73bf26b75a906aa8d6c0cd388a9473c52d79db4a4656373db6dd8360a2e1.scope\": RecentStats: unable to find data in memory cache], [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Jan 26 16:52:32 crc kubenswrapper[4896]: I0126 16:52:32.769502 4896 scope.go:117] "RemoveContainer" containerID="cf36efca207d093e795c7b422a705befe47507f8b3fbdbe22aba88999b487108" Jan 26 16:52:32 crc kubenswrapper[4896]: E0126 16:52:32.769926 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:52:43 crc kubenswrapper[4896]: E0126 16:52:43.076170 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab636d50_661f_4957_bff5_82423169a66a.slice/crio-conmon-49db73bf26b75a906aa8d6c0cd388a9473c52d79db4a4656373db6dd8360a2e1.scope\": RecentStats: unable to find data in memory cache], [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab636d50_661f_4957_bff5_82423169a66a.slice/crio-49db73bf26b75a906aa8d6c0cd388a9473c52d79db4a4656373db6dd8360a2e1.scope\": RecentStats: unable to find data in memory cache]" 
Jan 26 16:52:44 crc kubenswrapper[4896]: E0126 16:52:44.799124 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab636d50_661f_4957_bff5_82423169a66a.slice/crio-conmon-49db73bf26b75a906aa8d6c0cd388a9473c52d79db4a4656373db6dd8360a2e1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab636d50_661f_4957_bff5_82423169a66a.slice/crio-49db73bf26b75a906aa8d6c0cd388a9473c52d79db4a4656373db6dd8360a2e1.scope\": RecentStats: unable to find data in memory cache]" Jan 26 16:52:45 crc kubenswrapper[4896]: I0126 16:52:45.760746 4896 scope.go:117] "RemoveContainer" containerID="cf36efca207d093e795c7b422a705befe47507f8b3fbdbe22aba88999b487108" Jan 26 16:52:45 crc kubenswrapper[4896]: E0126 16:52:45.761412 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:52:48 crc kubenswrapper[4896]: E0126 16:52:48.331260 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab636d50_661f_4957_bff5_82423169a66a.slice/crio-49db73bf26b75a906aa8d6c0cd388a9473c52d79db4a4656373db6dd8360a2e1.scope\": RecentStats: unable to find data in memory cache], [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab636d50_661f_4957_bff5_82423169a66a.slice/crio-conmon-49db73bf26b75a906aa8d6c0cd388a9473c52d79db4a4656373db6dd8360a2e1.scope\": RecentStats: unable to find data in memory cache]" Jan 26 16:52:48 crc kubenswrapper[4896]: E0126 16:52:48.331474 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab636d50_661f_4957_bff5_82423169a66a.slice/crio-49db73bf26b75a906aa8d6c0cd388a9473c52d79db4a4656373db6dd8360a2e1.scope\": RecentStats: unable to find data in memory cache], [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab636d50_661f_4957_bff5_82423169a66a.slice/crio-conmon-49db73bf26b75a906aa8d6c0cd388a9473c52d79db4a4656373db6dd8360a2e1.scope\": RecentStats: unable to find data in memory cache]" Jan 26 16:52:53 crc kubenswrapper[4896]: E0126 16:52:53.239984 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab636d50_661f_4957_bff5_82423169a66a.slice/crio-49db73bf26b75a906aa8d6c0cd388a9473c52d79db4a4656373db6dd8360a2e1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab636d50_661f_4957_bff5_82423169a66a.slice/crio-conmon-49db73bf26b75a906aa8d6c0cd388a9473c52d79db4a4656373db6dd8360a2e1.scope\": RecentStats: unable to find data in memory cache]" Jan 26 16:52:56 crc kubenswrapper[4896]: I0126 16:52:56.760398 4896 scope.go:117] "RemoveContainer" containerID="cf36efca207d093e795c7b422a705befe47507f8b3fbdbe22aba88999b487108" Jan 26 16:52:56 crc kubenswrapper[4896]: E0126 16:52:56.761261 4896 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:53:00 crc kubenswrapper[4896]: E0126 16:53:00.119752 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab636d50_661f_4957_bff5_82423169a66a.slice/crio-conmon-49db73bf26b75a906aa8d6c0cd388a9473c52d79db4a4656373db6dd8360a2e1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab636d50_661f_4957_bff5_82423169a66a.slice/crio-49db73bf26b75a906aa8d6c0cd388a9473c52d79db4a4656373db6dd8360a2e1.scope\": RecentStats: unable to find data in memory cache], [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Jan 26 16:53:07 crc kubenswrapper[4896]: I0126 16:53:07.760648 4896 scope.go:117] "RemoveContainer" containerID="cf36efca207d093e795c7b422a705befe47507f8b3fbdbe22aba88999b487108" Jan 26 16:53:07 crc kubenswrapper[4896]: E0126 16:53:07.764238 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:53:22 crc kubenswrapper[4896]: I0126 16:53:22.771404 4896 scope.go:117] "RemoveContainer" 
containerID="cf36efca207d093e795c7b422a705befe47507f8b3fbdbe22aba88999b487108" Jan 26 16:53:22 crc kubenswrapper[4896]: E0126 16:53:22.772494 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:53:33 crc kubenswrapper[4896]: I0126 16:53:33.760606 4896 scope.go:117] "RemoveContainer" containerID="cf36efca207d093e795c7b422a705befe47507f8b3fbdbe22aba88999b487108" Jan 26 16:53:33 crc kubenswrapper[4896]: E0126 16:53:33.762591 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:53:47 crc kubenswrapper[4896]: I0126 16:53:47.760625 4896 scope.go:117] "RemoveContainer" containerID="cf36efca207d093e795c7b422a705befe47507f8b3fbdbe22aba88999b487108" Jan 26 16:53:47 crc kubenswrapper[4896]: E0126 16:53:47.762037 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:53:59 crc kubenswrapper[4896]: I0126 16:53:59.761390 4896 scope.go:117] 
"RemoveContainer" containerID="cf36efca207d093e795c7b422a705befe47507f8b3fbdbe22aba88999b487108" Jan 26 16:53:59 crc kubenswrapper[4896]: E0126 16:53:59.771132 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:54:12 crc kubenswrapper[4896]: I0126 16:54:12.808362 4896 scope.go:117] "RemoveContainer" containerID="cf36efca207d093e795c7b422a705befe47507f8b3fbdbe22aba88999b487108" Jan 26 16:54:12 crc kubenswrapper[4896]: E0126 16:54:12.809446 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:54:27 crc kubenswrapper[4896]: I0126 16:54:27.853950 4896 scope.go:117] "RemoveContainer" containerID="cf36efca207d093e795c7b422a705befe47507f8b3fbdbe22aba88999b487108" Jan 26 16:54:27 crc kubenswrapper[4896]: E0126 16:54:27.860473 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:54:41 crc kubenswrapper[4896]: I0126 16:54:41.759451 
4896 scope.go:117] "RemoveContainer" containerID="cf36efca207d093e795c7b422a705befe47507f8b3fbdbe22aba88999b487108" Jan 26 16:54:41 crc kubenswrapper[4896]: E0126 16:54:41.760616 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:54:52 crc kubenswrapper[4896]: I0126 16:54:52.108991 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-54n98"] Jan 26 16:54:52 crc kubenswrapper[4896]: E0126 16:54:52.110130 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1c70f9f-7ea0-484b-a6fb-c233dca23610" containerName="extract-content" Jan 26 16:54:52 crc kubenswrapper[4896]: I0126 16:54:52.110148 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1c70f9f-7ea0-484b-a6fb-c233dca23610" containerName="extract-content" Jan 26 16:54:52 crc kubenswrapper[4896]: E0126 16:54:52.110169 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1c70f9f-7ea0-484b-a6fb-c233dca23610" containerName="extract-utilities" Jan 26 16:54:52 crc kubenswrapper[4896]: I0126 16:54:52.110175 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1c70f9f-7ea0-484b-a6fb-c233dca23610" containerName="extract-utilities" Jan 26 16:54:52 crc kubenswrapper[4896]: E0126 16:54:52.110194 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1c70f9f-7ea0-484b-a6fb-c233dca23610" containerName="registry-server" Jan 26 16:54:52 crc kubenswrapper[4896]: I0126 16:54:52.110200 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1c70f9f-7ea0-484b-a6fb-c233dca23610" containerName="registry-server" Jan 26 16:54:52 crc 
kubenswrapper[4896]: I0126 16:54:52.110453 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1c70f9f-7ea0-484b-a6fb-c233dca23610" containerName="registry-server" Jan 26 16:54:52 crc kubenswrapper[4896]: I0126 16:54:52.112898 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-54n98" Jan 26 16:54:52 crc kubenswrapper[4896]: I0126 16:54:52.131150 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-54n98"] Jan 26 16:54:52 crc kubenswrapper[4896]: I0126 16:54:52.287446 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d66301c-d8ea-4314-8eda-ee43b649e4ea-utilities\") pod \"certified-operators-54n98\" (UID: \"0d66301c-d8ea-4314-8eda-ee43b649e4ea\") " pod="openshift-marketplace/certified-operators-54n98" Jan 26 16:54:52 crc kubenswrapper[4896]: I0126 16:54:52.287838 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j72l6\" (UniqueName: \"kubernetes.io/projected/0d66301c-d8ea-4314-8eda-ee43b649e4ea-kube-api-access-j72l6\") pod \"certified-operators-54n98\" (UID: \"0d66301c-d8ea-4314-8eda-ee43b649e4ea\") " pod="openshift-marketplace/certified-operators-54n98" Jan 26 16:54:52 crc kubenswrapper[4896]: I0126 16:54:52.288043 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d66301c-d8ea-4314-8eda-ee43b649e4ea-catalog-content\") pod \"certified-operators-54n98\" (UID: \"0d66301c-d8ea-4314-8eda-ee43b649e4ea\") " pod="openshift-marketplace/certified-operators-54n98" Jan 26 16:54:52 crc kubenswrapper[4896]: I0126 16:54:52.390495 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j72l6\" (UniqueName: 
\"kubernetes.io/projected/0d66301c-d8ea-4314-8eda-ee43b649e4ea-kube-api-access-j72l6\") pod \"certified-operators-54n98\" (UID: \"0d66301c-d8ea-4314-8eda-ee43b649e4ea\") " pod="openshift-marketplace/certified-operators-54n98" Jan 26 16:54:52 crc kubenswrapper[4896]: I0126 16:54:52.390567 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d66301c-d8ea-4314-8eda-ee43b649e4ea-catalog-content\") pod \"certified-operators-54n98\" (UID: \"0d66301c-d8ea-4314-8eda-ee43b649e4ea\") " pod="openshift-marketplace/certified-operators-54n98" Jan 26 16:54:52 crc kubenswrapper[4896]: I0126 16:54:52.390753 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d66301c-d8ea-4314-8eda-ee43b649e4ea-utilities\") pod \"certified-operators-54n98\" (UID: \"0d66301c-d8ea-4314-8eda-ee43b649e4ea\") " pod="openshift-marketplace/certified-operators-54n98" Jan 26 16:54:52 crc kubenswrapper[4896]: I0126 16:54:52.391383 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d66301c-d8ea-4314-8eda-ee43b649e4ea-utilities\") pod \"certified-operators-54n98\" (UID: \"0d66301c-d8ea-4314-8eda-ee43b649e4ea\") " pod="openshift-marketplace/certified-operators-54n98" Jan 26 16:54:52 crc kubenswrapper[4896]: I0126 16:54:52.391383 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d66301c-d8ea-4314-8eda-ee43b649e4ea-catalog-content\") pod \"certified-operators-54n98\" (UID: \"0d66301c-d8ea-4314-8eda-ee43b649e4ea\") " pod="openshift-marketplace/certified-operators-54n98" Jan 26 16:54:52 crc kubenswrapper[4896]: I0126 16:54:52.422750 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j72l6\" (UniqueName: 
\"kubernetes.io/projected/0d66301c-d8ea-4314-8eda-ee43b649e4ea-kube-api-access-j72l6\") pod \"certified-operators-54n98\" (UID: \"0d66301c-d8ea-4314-8eda-ee43b649e4ea\") " pod="openshift-marketplace/certified-operators-54n98" Jan 26 16:54:52 crc kubenswrapper[4896]: I0126 16:54:52.437322 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-54n98" Jan 26 16:54:53 crc kubenswrapper[4896]: I0126 16:54:53.149999 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-54n98"] Jan 26 16:54:54 crc kubenswrapper[4896]: I0126 16:54:54.016047 4896 generic.go:334] "Generic (PLEG): container finished" podID="0d66301c-d8ea-4314-8eda-ee43b649e4ea" containerID="36f3490218ff13216ba47638929c865f255a8fad4f36aec0f1a78cce38510d5d" exitCode=0 Jan 26 16:54:54 crc kubenswrapper[4896]: I0126 16:54:54.016181 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-54n98" event={"ID":"0d66301c-d8ea-4314-8eda-ee43b649e4ea","Type":"ContainerDied","Data":"36f3490218ff13216ba47638929c865f255a8fad4f36aec0f1a78cce38510d5d"} Jan 26 16:54:54 crc kubenswrapper[4896]: I0126 16:54:54.016403 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-54n98" event={"ID":"0d66301c-d8ea-4314-8eda-ee43b649e4ea","Type":"ContainerStarted","Data":"08977c6b6d919ce3b84e16de55fc9bb972266ab67030dba6ed6ed4ff8ac84ca4"} Jan 26 16:54:54 crc kubenswrapper[4896]: I0126 16:54:54.018537 4896 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 16:54:56 crc kubenswrapper[4896]: I0126 16:54:56.760904 4896 scope.go:117] "RemoveContainer" containerID="cf36efca207d093e795c7b422a705befe47507f8b3fbdbe22aba88999b487108" Jan 26 16:54:56 crc kubenswrapper[4896]: E0126 16:54:56.761781 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:54:58 crc kubenswrapper[4896]: I0126 16:54:58.071985 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-54n98" event={"ID":"0d66301c-d8ea-4314-8eda-ee43b649e4ea","Type":"ContainerStarted","Data":"8e8454b81b47bf032190bb2b9016aa1a378ba4b113e936154c5659190a79962b"} Jan 26 16:54:59 crc kubenswrapper[4896]: I0126 16:54:59.087366 4896 generic.go:334] "Generic (PLEG): container finished" podID="0d66301c-d8ea-4314-8eda-ee43b649e4ea" containerID="8e8454b81b47bf032190bb2b9016aa1a378ba4b113e936154c5659190a79962b" exitCode=0 Jan 26 16:54:59 crc kubenswrapper[4896]: I0126 16:54:59.087886 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-54n98" event={"ID":"0d66301c-d8ea-4314-8eda-ee43b649e4ea","Type":"ContainerDied","Data":"8e8454b81b47bf032190bb2b9016aa1a378ba4b113e936154c5659190a79962b"} Jan 26 16:55:00 crc kubenswrapper[4896]: I0126 16:55:00.138504 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-54n98" event={"ID":"0d66301c-d8ea-4314-8eda-ee43b649e4ea","Type":"ContainerStarted","Data":"5f953dfde354e733c1b03f1a8ddfd2353fd2e7fce131bd3de7abdc372f5f17ee"} Jan 26 16:55:00 crc kubenswrapper[4896]: I0126 16:55:00.179233 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-54n98" podStartSLOduration=2.642614283 podStartE2EDuration="8.17918806s" podCreationTimestamp="2026-01-26 16:54:52 +0000 UTC" firstStartedPulling="2026-01-26 16:54:54.018214761 +0000 UTC m=+4851.800095154" lastFinishedPulling="2026-01-26 
16:54:59.554788518 +0000 UTC m=+4857.336668931" observedRunningTime="2026-01-26 16:55:00.162070812 +0000 UTC m=+4857.943951205" watchObservedRunningTime="2026-01-26 16:55:00.17918806 +0000 UTC m=+4857.961068453" Jan 26 16:55:02 crc kubenswrapper[4896]: I0126 16:55:02.437708 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-54n98" Jan 26 16:55:02 crc kubenswrapper[4896]: I0126 16:55:02.438333 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-54n98" Jan 26 16:55:03 crc kubenswrapper[4896]: I0126 16:55:03.185110 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-54n98" Jan 26 16:55:11 crc kubenswrapper[4896]: I0126 16:55:11.760068 4896 scope.go:117] "RemoveContainer" containerID="cf36efca207d093e795c7b422a705befe47507f8b3fbdbe22aba88999b487108" Jan 26 16:55:11 crc kubenswrapper[4896]: E0126 16:55:11.760905 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:55:13 crc kubenswrapper[4896]: I0126 16:55:13.122439 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-54n98" Jan 26 16:55:15 crc kubenswrapper[4896]: I0126 16:55:15.817841 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-54n98"] Jan 26 16:55:15 crc kubenswrapper[4896]: I0126 16:55:15.818515 4896 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-54n98" podUID="0d66301c-d8ea-4314-8eda-ee43b649e4ea" containerName="registry-server" containerID="cri-o://5f953dfde354e733c1b03f1a8ddfd2353fd2e7fce131bd3de7abdc372f5f17ee" gracePeriod=2 Jan 26 16:55:16 crc kubenswrapper[4896]: I0126 16:55:16.339722 4896 generic.go:334] "Generic (PLEG): container finished" podID="0d66301c-d8ea-4314-8eda-ee43b649e4ea" containerID="5f953dfde354e733c1b03f1a8ddfd2353fd2e7fce131bd3de7abdc372f5f17ee" exitCode=0 Jan 26 16:55:16 crc kubenswrapper[4896]: I0126 16:55:16.340237 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-54n98" event={"ID":"0d66301c-d8ea-4314-8eda-ee43b649e4ea","Type":"ContainerDied","Data":"5f953dfde354e733c1b03f1a8ddfd2353fd2e7fce131bd3de7abdc372f5f17ee"} Jan 26 16:55:16 crc kubenswrapper[4896]: I0126 16:55:16.340280 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-54n98" event={"ID":"0d66301c-d8ea-4314-8eda-ee43b649e4ea","Type":"ContainerDied","Data":"08977c6b6d919ce3b84e16de55fc9bb972266ab67030dba6ed6ed4ff8ac84ca4"} Jan 26 16:55:16 crc kubenswrapper[4896]: I0126 16:55:16.340311 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08977c6b6d919ce3b84e16de55fc9bb972266ab67030dba6ed6ed4ff8ac84ca4" Jan 26 16:55:16 crc kubenswrapper[4896]: I0126 16:55:16.442734 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-54n98" Jan 26 16:55:16 crc kubenswrapper[4896]: I0126 16:55:16.545400 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d66301c-d8ea-4314-8eda-ee43b649e4ea-utilities\") pod \"0d66301c-d8ea-4314-8eda-ee43b649e4ea\" (UID: \"0d66301c-d8ea-4314-8eda-ee43b649e4ea\") " Jan 26 16:55:16 crc kubenswrapper[4896]: I0126 16:55:16.545500 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d66301c-d8ea-4314-8eda-ee43b649e4ea-catalog-content\") pod \"0d66301c-d8ea-4314-8eda-ee43b649e4ea\" (UID: \"0d66301c-d8ea-4314-8eda-ee43b649e4ea\") " Jan 26 16:55:16 crc kubenswrapper[4896]: I0126 16:55:16.545671 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j72l6\" (UniqueName: \"kubernetes.io/projected/0d66301c-d8ea-4314-8eda-ee43b649e4ea-kube-api-access-j72l6\") pod \"0d66301c-d8ea-4314-8eda-ee43b649e4ea\" (UID: \"0d66301c-d8ea-4314-8eda-ee43b649e4ea\") " Jan 26 16:55:16 crc kubenswrapper[4896]: I0126 16:55:16.546003 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d66301c-d8ea-4314-8eda-ee43b649e4ea-utilities" (OuterVolumeSpecName: "utilities") pod "0d66301c-d8ea-4314-8eda-ee43b649e4ea" (UID: "0d66301c-d8ea-4314-8eda-ee43b649e4ea"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:55:16 crc kubenswrapper[4896]: I0126 16:55:16.546795 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d66301c-d8ea-4314-8eda-ee43b649e4ea-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:55:16 crc kubenswrapper[4896]: I0126 16:55:16.553971 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d66301c-d8ea-4314-8eda-ee43b649e4ea-kube-api-access-j72l6" (OuterVolumeSpecName: "kube-api-access-j72l6") pod "0d66301c-d8ea-4314-8eda-ee43b649e4ea" (UID: "0d66301c-d8ea-4314-8eda-ee43b649e4ea"). InnerVolumeSpecName "kube-api-access-j72l6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:55:16 crc kubenswrapper[4896]: I0126 16:55:16.606708 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d66301c-d8ea-4314-8eda-ee43b649e4ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d66301c-d8ea-4314-8eda-ee43b649e4ea" (UID: "0d66301c-d8ea-4314-8eda-ee43b649e4ea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:55:16 crc kubenswrapper[4896]: I0126 16:55:16.651221 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d66301c-d8ea-4314-8eda-ee43b649e4ea-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:55:16 crc kubenswrapper[4896]: I0126 16:55:16.651283 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j72l6\" (UniqueName: \"kubernetes.io/projected/0d66301c-d8ea-4314-8eda-ee43b649e4ea-kube-api-access-j72l6\") on node \"crc\" DevicePath \"\"" Jan 26 16:55:17 crc kubenswrapper[4896]: I0126 16:55:17.351105 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-54n98" Jan 26 16:55:17 crc kubenswrapper[4896]: I0126 16:55:17.382030 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-54n98"] Jan 26 16:55:17 crc kubenswrapper[4896]: I0126 16:55:17.392501 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-54n98"] Jan 26 16:55:18 crc kubenswrapper[4896]: I0126 16:55:18.776803 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d66301c-d8ea-4314-8eda-ee43b649e4ea" path="/var/lib/kubelet/pods/0d66301c-d8ea-4314-8eda-ee43b649e4ea/volumes" Jan 26 16:55:24 crc kubenswrapper[4896]: I0126 16:55:24.760530 4896 scope.go:117] "RemoveContainer" containerID="cf36efca207d093e795c7b422a705befe47507f8b3fbdbe22aba88999b487108" Jan 26 16:55:24 crc kubenswrapper[4896]: E0126 16:55:24.761550 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:55:37 crc kubenswrapper[4896]: I0126 16:55:37.761497 4896 scope.go:117] "RemoveContainer" containerID="cf36efca207d093e795c7b422a705befe47507f8b3fbdbe22aba88999b487108" Jan 26 16:55:37 crc kubenswrapper[4896]: E0126 16:55:37.762384 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" 
podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:55:49 crc kubenswrapper[4896]: I0126 16:55:49.760554 4896 scope.go:117] "RemoveContainer" containerID="cf36efca207d093e795c7b422a705befe47507f8b3fbdbe22aba88999b487108" Jan 26 16:55:49 crc kubenswrapper[4896]: E0126 16:55:49.763008 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:56:02 crc kubenswrapper[4896]: I0126 16:56:02.783872 4896 scope.go:117] "RemoveContainer" containerID="cf36efca207d093e795c7b422a705befe47507f8b3fbdbe22aba88999b487108" Jan 26 16:56:02 crc kubenswrapper[4896]: E0126 16:56:02.784961 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:56:13 crc kubenswrapper[4896]: I0126 16:56:13.759759 4896 scope.go:117] "RemoveContainer" containerID="cf36efca207d093e795c7b422a705befe47507f8b3fbdbe22aba88999b487108" Jan 26 16:56:13 crc kubenswrapper[4896]: E0126 16:56:13.760732 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 16:56:25 crc kubenswrapper[4896]: I0126 16:56:25.760570 4896 scope.go:117] "RemoveContainer" containerID="cf36efca207d093e795c7b422a705befe47507f8b3fbdbe22aba88999b487108" Jan 26 16:56:27 crc kubenswrapper[4896]: I0126 16:56:27.394421 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerStarted","Data":"665c0f6f746e30ca4091b8616feae17ae7842d66c0cfd9cb4db5be8886bce04f"} Jan 26 16:57:20 crc kubenswrapper[4896]: I0126 16:57:20.472052 4896 trace.go:236] Trace[1330498868]: "Calculate volume metrics of swift for pod openstack/swift-storage-0" (26-Jan-2026 16:57:18.957) (total time: 1514ms): Jan 26 16:57:20 crc kubenswrapper[4896]: Trace[1330498868]: [1.514904729s] [1.514904729s] END Jan 26 16:57:20 crc kubenswrapper[4896]: I0126 16:57:20.472060 4896 trace.go:236] Trace[1546617517]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-0" (26-Jan-2026 16:57:12.133) (total time: 8338ms): Jan 26 16:57:20 crc kubenswrapper[4896]: Trace[1546617517]: [8.338308678s] [8.338308678s] END Jan 26 16:58:48 crc kubenswrapper[4896]: I0126 16:58:48.814164 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:58:48 crc kubenswrapper[4896]: I0126 16:58:48.814825 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Jan 26 16:59:18 crc kubenswrapper[4896]: I0126 16:59:18.813460 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:59:18 crc kubenswrapper[4896]: I0126 16:59:18.814036 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:59:48 crc kubenswrapper[4896]: I0126 16:59:48.814209 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:59:48 crc kubenswrapper[4896]: I0126 16:59:48.814944 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:59:48 crc kubenswrapper[4896]: I0126 16:59:48.815082 4896 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" Jan 26 16:59:48 crc kubenswrapper[4896]: I0126 16:59:48.816267 4896 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"665c0f6f746e30ca4091b8616feae17ae7842d66c0cfd9cb4db5be8886bce04f"} 
pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 16:59:48 crc kubenswrapper[4896]: I0126 16:59:48.816385 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" containerID="cri-o://665c0f6f746e30ca4091b8616feae17ae7842d66c0cfd9cb4db5be8886bce04f" gracePeriod=600 Jan 26 16:59:49 crc kubenswrapper[4896]: I0126 16:59:49.039449 4896 generic.go:334] "Generic (PLEG): container finished" podID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerID="665c0f6f746e30ca4091b8616feae17ae7842d66c0cfd9cb4db5be8886bce04f" exitCode=0 Jan 26 16:59:49 crc kubenswrapper[4896]: I0126 16:59:49.039503 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerDied","Data":"665c0f6f746e30ca4091b8616feae17ae7842d66c0cfd9cb4db5be8886bce04f"} Jan 26 16:59:49 crc kubenswrapper[4896]: I0126 16:59:49.039564 4896 scope.go:117] "RemoveContainer" containerID="cf36efca207d093e795c7b422a705befe47507f8b3fbdbe22aba88999b487108" Jan 26 16:59:50 crc kubenswrapper[4896]: I0126 16:59:50.053276 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerStarted","Data":"e9193c04619849a141026700b8425683c96854e94445a0c5c9b18d4fda8e95ec"} Jan 26 17:00:00 crc kubenswrapper[4896]: I0126 17:00:00.204606 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490780-96mgz"] Jan 26 17:00:00 crc kubenswrapper[4896]: E0126 17:00:00.207078 4896 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0d66301c-d8ea-4314-8eda-ee43b649e4ea" containerName="extract-utilities" Jan 26 17:00:00 crc kubenswrapper[4896]: I0126 17:00:00.207196 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d66301c-d8ea-4314-8eda-ee43b649e4ea" containerName="extract-utilities" Jan 26 17:00:00 crc kubenswrapper[4896]: E0126 17:00:00.207312 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d66301c-d8ea-4314-8eda-ee43b649e4ea" containerName="extract-content" Jan 26 17:00:00 crc kubenswrapper[4896]: I0126 17:00:00.207398 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d66301c-d8ea-4314-8eda-ee43b649e4ea" containerName="extract-content" Jan 26 17:00:00 crc kubenswrapper[4896]: E0126 17:00:00.207486 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d66301c-d8ea-4314-8eda-ee43b649e4ea" containerName="registry-server" Jan 26 17:00:00 crc kubenswrapper[4896]: I0126 17:00:00.207574 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d66301c-d8ea-4314-8eda-ee43b649e4ea" containerName="registry-server" Jan 26 17:00:00 crc kubenswrapper[4896]: I0126 17:00:00.208031 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d66301c-d8ea-4314-8eda-ee43b649e4ea" containerName="registry-server" Jan 26 17:00:00 crc kubenswrapper[4896]: I0126 17:00:00.209284 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-96mgz" Jan 26 17:00:00 crc kubenswrapper[4896]: I0126 17:00:00.233864 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 17:00:00 crc kubenswrapper[4896]: I0126 17:00:00.234168 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 17:00:00 crc kubenswrapper[4896]: I0126 17:00:00.274643 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490780-96mgz"] Jan 26 17:00:00 crc kubenswrapper[4896]: I0126 17:00:00.314425 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7cbba2f-5285-46a0-8655-59ebc7ad2f21-secret-volume\") pod \"collect-profiles-29490780-96mgz\" (UID: \"b7cbba2f-5285-46a0-8655-59ebc7ad2f21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-96mgz" Jan 26 17:00:00 crc kubenswrapper[4896]: I0126 17:00:00.314523 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k5bz\" (UniqueName: \"kubernetes.io/projected/b7cbba2f-5285-46a0-8655-59ebc7ad2f21-kube-api-access-5k5bz\") pod \"collect-profiles-29490780-96mgz\" (UID: \"b7cbba2f-5285-46a0-8655-59ebc7ad2f21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-96mgz" Jan 26 17:00:00 crc kubenswrapper[4896]: I0126 17:00:00.314656 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7cbba2f-5285-46a0-8655-59ebc7ad2f21-config-volume\") pod \"collect-profiles-29490780-96mgz\" (UID: \"b7cbba2f-5285-46a0-8655-59ebc7ad2f21\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-96mgz" Jan 26 17:00:00 crc kubenswrapper[4896]: I0126 17:00:00.417059 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7cbba2f-5285-46a0-8655-59ebc7ad2f21-config-volume\") pod \"collect-profiles-29490780-96mgz\" (UID: \"b7cbba2f-5285-46a0-8655-59ebc7ad2f21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-96mgz" Jan 26 17:00:00 crc kubenswrapper[4896]: I0126 17:00:00.417345 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7cbba2f-5285-46a0-8655-59ebc7ad2f21-secret-volume\") pod \"collect-profiles-29490780-96mgz\" (UID: \"b7cbba2f-5285-46a0-8655-59ebc7ad2f21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-96mgz" Jan 26 17:00:00 crc kubenswrapper[4896]: I0126 17:00:00.418099 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7cbba2f-5285-46a0-8655-59ebc7ad2f21-config-volume\") pod \"collect-profiles-29490780-96mgz\" (UID: \"b7cbba2f-5285-46a0-8655-59ebc7ad2f21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-96mgz" Jan 26 17:00:00 crc kubenswrapper[4896]: I0126 17:00:00.418222 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k5bz\" (UniqueName: \"kubernetes.io/projected/b7cbba2f-5285-46a0-8655-59ebc7ad2f21-kube-api-access-5k5bz\") pod \"collect-profiles-29490780-96mgz\" (UID: \"b7cbba2f-5285-46a0-8655-59ebc7ad2f21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-96mgz" Jan 26 17:00:00 crc kubenswrapper[4896]: I0126 17:00:00.575632 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/b7cbba2f-5285-46a0-8655-59ebc7ad2f21-secret-volume\") pod \"collect-profiles-29490780-96mgz\" (UID: \"b7cbba2f-5285-46a0-8655-59ebc7ad2f21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-96mgz" Jan 26 17:00:00 crc kubenswrapper[4896]: I0126 17:00:00.575697 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k5bz\" (UniqueName: \"kubernetes.io/projected/b7cbba2f-5285-46a0-8655-59ebc7ad2f21-kube-api-access-5k5bz\") pod \"collect-profiles-29490780-96mgz\" (UID: \"b7cbba2f-5285-46a0-8655-59ebc7ad2f21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-96mgz" Jan 26 17:00:00 crc kubenswrapper[4896]: I0126 17:00:00.866696 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-96mgz" Jan 26 17:00:01 crc kubenswrapper[4896]: I0126 17:00:01.436608 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490780-96mgz"] Jan 26 17:00:02 crc kubenswrapper[4896]: I0126 17:00:02.226400 4896 generic.go:334] "Generic (PLEG): container finished" podID="b7cbba2f-5285-46a0-8655-59ebc7ad2f21" containerID="fef96db1a261eab85aa2f02da72f78f53889f64d0701636e4547197512427954" exitCode=0 Jan 26 17:00:02 crc kubenswrapper[4896]: I0126 17:00:02.226534 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-96mgz" event={"ID":"b7cbba2f-5285-46a0-8655-59ebc7ad2f21","Type":"ContainerDied","Data":"fef96db1a261eab85aa2f02da72f78f53889f64d0701636e4547197512427954"} Jan 26 17:00:02 crc kubenswrapper[4896]: I0126 17:00:02.226855 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-96mgz" 
event={"ID":"b7cbba2f-5285-46a0-8655-59ebc7ad2f21","Type":"ContainerStarted","Data":"cd118d06c947dd6a64534de7b9585cee3c485601c68d374781a08131674aa881"} Jan 26 17:00:03 crc kubenswrapper[4896]: I0126 17:00:03.889106 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-96mgz" Jan 26 17:00:04 crc kubenswrapper[4896]: I0126 17:00:04.017326 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7cbba2f-5285-46a0-8655-59ebc7ad2f21-secret-volume\") pod \"b7cbba2f-5285-46a0-8655-59ebc7ad2f21\" (UID: \"b7cbba2f-5285-46a0-8655-59ebc7ad2f21\") " Jan 26 17:00:04 crc kubenswrapper[4896]: I0126 17:00:04.017371 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7cbba2f-5285-46a0-8655-59ebc7ad2f21-config-volume\") pod \"b7cbba2f-5285-46a0-8655-59ebc7ad2f21\" (UID: \"b7cbba2f-5285-46a0-8655-59ebc7ad2f21\") " Jan 26 17:00:04 crc kubenswrapper[4896]: I0126 17:00:04.017650 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5k5bz\" (UniqueName: \"kubernetes.io/projected/b7cbba2f-5285-46a0-8655-59ebc7ad2f21-kube-api-access-5k5bz\") pod \"b7cbba2f-5285-46a0-8655-59ebc7ad2f21\" (UID: \"b7cbba2f-5285-46a0-8655-59ebc7ad2f21\") " Jan 26 17:00:04 crc kubenswrapper[4896]: I0126 17:00:04.018202 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7cbba2f-5285-46a0-8655-59ebc7ad2f21-config-volume" (OuterVolumeSpecName: "config-volume") pod "b7cbba2f-5285-46a0-8655-59ebc7ad2f21" (UID: "b7cbba2f-5285-46a0-8655-59ebc7ad2f21"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:00:04 crc kubenswrapper[4896]: I0126 17:00:04.018359 4896 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7cbba2f-5285-46a0-8655-59ebc7ad2f21-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 17:00:04 crc kubenswrapper[4896]: I0126 17:00:04.023348 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7cbba2f-5285-46a0-8655-59ebc7ad2f21-kube-api-access-5k5bz" (OuterVolumeSpecName: "kube-api-access-5k5bz") pod "b7cbba2f-5285-46a0-8655-59ebc7ad2f21" (UID: "b7cbba2f-5285-46a0-8655-59ebc7ad2f21"). InnerVolumeSpecName "kube-api-access-5k5bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:00:04 crc kubenswrapper[4896]: I0126 17:00:04.023690 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7cbba2f-5285-46a0-8655-59ebc7ad2f21-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b7cbba2f-5285-46a0-8655-59ebc7ad2f21" (UID: "b7cbba2f-5285-46a0-8655-59ebc7ad2f21"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:00:04 crc kubenswrapper[4896]: I0126 17:00:04.121691 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5k5bz\" (UniqueName: \"kubernetes.io/projected/b7cbba2f-5285-46a0-8655-59ebc7ad2f21-kube-api-access-5k5bz\") on node \"crc\" DevicePath \"\"" Jan 26 17:00:04 crc kubenswrapper[4896]: I0126 17:00:04.121729 4896 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7cbba2f-5285-46a0-8655-59ebc7ad2f21-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 17:00:04 crc kubenswrapper[4896]: I0126 17:00:04.254184 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-96mgz" event={"ID":"b7cbba2f-5285-46a0-8655-59ebc7ad2f21","Type":"ContainerDied","Data":"cd118d06c947dd6a64534de7b9585cee3c485601c68d374781a08131674aa881"} Jan 26 17:00:04 crc kubenswrapper[4896]: I0126 17:00:04.254284 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd118d06c947dd6a64534de7b9585cee3c485601c68d374781a08131674aa881" Jan 26 17:00:04 crc kubenswrapper[4896]: I0126 17:00:04.254349 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-96mgz" Jan 26 17:00:04 crc kubenswrapper[4896]: I0126 17:00:04.997802 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490735-cs5ck"] Jan 26 17:00:05 crc kubenswrapper[4896]: I0126 17:00:05.007761 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490735-cs5ck"] Jan 26 17:00:06 crc kubenswrapper[4896]: I0126 17:00:06.775533 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07f8abe2-9470-4da3-9a2e-b0d73355d416" path="/var/lib/kubelet/pods/07f8abe2-9470-4da3-9a2e-b0d73355d416/volumes" Jan 26 17:00:26 crc kubenswrapper[4896]: I0126 17:00:26.396523 4896 patch_prober.go:28] interesting pod/oauth-openshift-6d4bd77db6-j8xrv container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:00:26 crc kubenswrapper[4896]: I0126 17:00:26.398276 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-6d4bd77db6-j8xrv" podUID="855da462-519a-4fe1-b51b-ae4e1adfdb62" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 17:00:28 crc kubenswrapper[4896]: I0126 17:00:28.525463 4896 scope.go:117] "RemoveContainer" containerID="c2a7b37c49a1c7d9a1de5c402ecba7c2fb9d975ab723abae6ad0687616eb36aa" Jan 26 17:01:00 crc kubenswrapper[4896]: I0126 17:01:00.164032 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29490781-5xj4d"] Jan 26 17:01:00 crc kubenswrapper[4896]: E0126 17:01:00.165361 4896 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7cbba2f-5285-46a0-8655-59ebc7ad2f21" containerName="collect-profiles" Jan 26 17:01:00 crc kubenswrapper[4896]: I0126 17:01:00.165382 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7cbba2f-5285-46a0-8655-59ebc7ad2f21" containerName="collect-profiles" Jan 26 17:01:00 crc kubenswrapper[4896]: I0126 17:01:00.165760 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7cbba2f-5285-46a0-8655-59ebc7ad2f21" containerName="collect-profiles" Jan 26 17:01:00 crc kubenswrapper[4896]: I0126 17:01:00.166862 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29490781-5xj4d" Jan 26 17:01:00 crc kubenswrapper[4896]: I0126 17:01:00.176925 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29490781-5xj4d"] Jan 26 17:01:00 crc kubenswrapper[4896]: I0126 17:01:00.283075 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d0c2d03-9a63-45ae-b70f-bca3910ddb9b-combined-ca-bundle\") pod \"keystone-cron-29490781-5xj4d\" (UID: \"7d0c2d03-9a63-45ae-b70f-bca3910ddb9b\") " pod="openstack/keystone-cron-29490781-5xj4d" Jan 26 17:01:00 crc kubenswrapper[4896]: I0126 17:01:00.283177 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7d0c2d03-9a63-45ae-b70f-bca3910ddb9b-fernet-keys\") pod \"keystone-cron-29490781-5xj4d\" (UID: \"7d0c2d03-9a63-45ae-b70f-bca3910ddb9b\") " pod="openstack/keystone-cron-29490781-5xj4d" Jan 26 17:01:00 crc kubenswrapper[4896]: I0126 17:01:00.283208 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qztpp\" (UniqueName: \"kubernetes.io/projected/7d0c2d03-9a63-45ae-b70f-bca3910ddb9b-kube-api-access-qztpp\") pod 
\"keystone-cron-29490781-5xj4d\" (UID: \"7d0c2d03-9a63-45ae-b70f-bca3910ddb9b\") " pod="openstack/keystone-cron-29490781-5xj4d" Jan 26 17:01:00 crc kubenswrapper[4896]: I0126 17:01:00.284344 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d0c2d03-9a63-45ae-b70f-bca3910ddb9b-config-data\") pod \"keystone-cron-29490781-5xj4d\" (UID: \"7d0c2d03-9a63-45ae-b70f-bca3910ddb9b\") " pod="openstack/keystone-cron-29490781-5xj4d" Jan 26 17:01:00 crc kubenswrapper[4896]: I0126 17:01:00.386699 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d0c2d03-9a63-45ae-b70f-bca3910ddb9b-combined-ca-bundle\") pod \"keystone-cron-29490781-5xj4d\" (UID: \"7d0c2d03-9a63-45ae-b70f-bca3910ddb9b\") " pod="openstack/keystone-cron-29490781-5xj4d" Jan 26 17:01:00 crc kubenswrapper[4896]: I0126 17:01:00.386854 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7d0c2d03-9a63-45ae-b70f-bca3910ddb9b-fernet-keys\") pod \"keystone-cron-29490781-5xj4d\" (UID: \"7d0c2d03-9a63-45ae-b70f-bca3910ddb9b\") " pod="openstack/keystone-cron-29490781-5xj4d" Jan 26 17:01:00 crc kubenswrapper[4896]: I0126 17:01:00.386886 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qztpp\" (UniqueName: \"kubernetes.io/projected/7d0c2d03-9a63-45ae-b70f-bca3910ddb9b-kube-api-access-qztpp\") pod \"keystone-cron-29490781-5xj4d\" (UID: \"7d0c2d03-9a63-45ae-b70f-bca3910ddb9b\") " pod="openstack/keystone-cron-29490781-5xj4d" Jan 26 17:01:00 crc kubenswrapper[4896]: I0126 17:01:00.386932 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d0c2d03-9a63-45ae-b70f-bca3910ddb9b-config-data\") pod \"keystone-cron-29490781-5xj4d\" (UID: 
\"7d0c2d03-9a63-45ae-b70f-bca3910ddb9b\") " pod="openstack/keystone-cron-29490781-5xj4d" Jan 26 17:01:00 crc kubenswrapper[4896]: I0126 17:01:00.393917 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7d0c2d03-9a63-45ae-b70f-bca3910ddb9b-fernet-keys\") pod \"keystone-cron-29490781-5xj4d\" (UID: \"7d0c2d03-9a63-45ae-b70f-bca3910ddb9b\") " pod="openstack/keystone-cron-29490781-5xj4d" Jan 26 17:01:00 crc kubenswrapper[4896]: I0126 17:01:00.396779 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d0c2d03-9a63-45ae-b70f-bca3910ddb9b-config-data\") pod \"keystone-cron-29490781-5xj4d\" (UID: \"7d0c2d03-9a63-45ae-b70f-bca3910ddb9b\") " pod="openstack/keystone-cron-29490781-5xj4d" Jan 26 17:01:00 crc kubenswrapper[4896]: I0126 17:01:00.402541 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d0c2d03-9a63-45ae-b70f-bca3910ddb9b-combined-ca-bundle\") pod \"keystone-cron-29490781-5xj4d\" (UID: \"7d0c2d03-9a63-45ae-b70f-bca3910ddb9b\") " pod="openstack/keystone-cron-29490781-5xj4d" Jan 26 17:01:00 crc kubenswrapper[4896]: I0126 17:01:00.405469 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qztpp\" (UniqueName: \"kubernetes.io/projected/7d0c2d03-9a63-45ae-b70f-bca3910ddb9b-kube-api-access-qztpp\") pod \"keystone-cron-29490781-5xj4d\" (UID: \"7d0c2d03-9a63-45ae-b70f-bca3910ddb9b\") " pod="openstack/keystone-cron-29490781-5xj4d" Jan 26 17:01:00 crc kubenswrapper[4896]: I0126 17:01:00.513172 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29490781-5xj4d" Jan 26 17:01:01 crc kubenswrapper[4896]: I0126 17:01:01.017507 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29490781-5xj4d"] Jan 26 17:01:01 crc kubenswrapper[4896]: I0126 17:01:01.061280 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490781-5xj4d" event={"ID":"7d0c2d03-9a63-45ae-b70f-bca3910ddb9b","Type":"ContainerStarted","Data":"220a90e81a5e29ca1816e0d9ecbb938ef9a171b9a7fd7ab2e1e5df0621a7381e"} Jan 26 17:01:02 crc kubenswrapper[4896]: I0126 17:01:02.074435 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490781-5xj4d" event={"ID":"7d0c2d03-9a63-45ae-b70f-bca3910ddb9b","Type":"ContainerStarted","Data":"5bacbd375c26e4245820672571bcb08ca259daeada91952f7daab95d47bd1b22"} Jan 26 17:01:02 crc kubenswrapper[4896]: I0126 17:01:02.102074 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29490781-5xj4d" podStartSLOduration=2.102038955 podStartE2EDuration="2.102038955s" podCreationTimestamp="2026-01-26 17:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:01:02.092187755 +0000 UTC m=+5219.874068138" watchObservedRunningTime="2026-01-26 17:01:02.102038955 +0000 UTC m=+5219.883919338" Jan 26 17:01:05 crc kubenswrapper[4896]: I0126 17:01:05.110813 4896 generic.go:334] "Generic (PLEG): container finished" podID="7d0c2d03-9a63-45ae-b70f-bca3910ddb9b" containerID="5bacbd375c26e4245820672571bcb08ca259daeada91952f7daab95d47bd1b22" exitCode=0 Jan 26 17:01:05 crc kubenswrapper[4896]: I0126 17:01:05.110897 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490781-5xj4d" 
event={"ID":"7d0c2d03-9a63-45ae-b70f-bca3910ddb9b","Type":"ContainerDied","Data":"5bacbd375c26e4245820672571bcb08ca259daeada91952f7daab95d47bd1b22"} Jan 26 17:01:07 crc kubenswrapper[4896]: I0126 17:01:07.429284 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29490781-5xj4d" Jan 26 17:01:07 crc kubenswrapper[4896]: I0126 17:01:07.561031 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qztpp\" (UniqueName: \"kubernetes.io/projected/7d0c2d03-9a63-45ae-b70f-bca3910ddb9b-kube-api-access-qztpp\") pod \"7d0c2d03-9a63-45ae-b70f-bca3910ddb9b\" (UID: \"7d0c2d03-9a63-45ae-b70f-bca3910ddb9b\") " Jan 26 17:01:07 crc kubenswrapper[4896]: I0126 17:01:07.561303 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d0c2d03-9a63-45ae-b70f-bca3910ddb9b-combined-ca-bundle\") pod \"7d0c2d03-9a63-45ae-b70f-bca3910ddb9b\" (UID: \"7d0c2d03-9a63-45ae-b70f-bca3910ddb9b\") " Jan 26 17:01:07 crc kubenswrapper[4896]: I0126 17:01:07.561424 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d0c2d03-9a63-45ae-b70f-bca3910ddb9b-config-data\") pod \"7d0c2d03-9a63-45ae-b70f-bca3910ddb9b\" (UID: \"7d0c2d03-9a63-45ae-b70f-bca3910ddb9b\") " Jan 26 17:01:07 crc kubenswrapper[4896]: I0126 17:01:07.561683 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7d0c2d03-9a63-45ae-b70f-bca3910ddb9b-fernet-keys\") pod \"7d0c2d03-9a63-45ae-b70f-bca3910ddb9b\" (UID: \"7d0c2d03-9a63-45ae-b70f-bca3910ddb9b\") " Jan 26 17:01:07 crc kubenswrapper[4896]: I0126 17:01:07.569052 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d0c2d03-9a63-45ae-b70f-bca3910ddb9b-kube-api-access-qztpp" 
(OuterVolumeSpecName: "kube-api-access-qztpp") pod "7d0c2d03-9a63-45ae-b70f-bca3910ddb9b" (UID: "7d0c2d03-9a63-45ae-b70f-bca3910ddb9b"). InnerVolumeSpecName "kube-api-access-qztpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:01:07 crc kubenswrapper[4896]: I0126 17:01:07.569818 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d0c2d03-9a63-45ae-b70f-bca3910ddb9b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7d0c2d03-9a63-45ae-b70f-bca3910ddb9b" (UID: "7d0c2d03-9a63-45ae-b70f-bca3910ddb9b"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:01:07 crc kubenswrapper[4896]: I0126 17:01:07.614068 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d0c2d03-9a63-45ae-b70f-bca3910ddb9b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d0c2d03-9a63-45ae-b70f-bca3910ddb9b" (UID: "7d0c2d03-9a63-45ae-b70f-bca3910ddb9b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:01:07 crc kubenswrapper[4896]: I0126 17:01:07.642820 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d0c2d03-9a63-45ae-b70f-bca3910ddb9b-config-data" (OuterVolumeSpecName: "config-data") pod "7d0c2d03-9a63-45ae-b70f-bca3910ddb9b" (UID: "7d0c2d03-9a63-45ae-b70f-bca3910ddb9b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:01:07 crc kubenswrapper[4896]: I0126 17:01:07.666717 4896 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7d0c2d03-9a63-45ae-b70f-bca3910ddb9b-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 17:01:07 crc kubenswrapper[4896]: I0126 17:01:07.666765 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qztpp\" (UniqueName: \"kubernetes.io/projected/7d0c2d03-9a63-45ae-b70f-bca3910ddb9b-kube-api-access-qztpp\") on node \"crc\" DevicePath \"\"" Jan 26 17:01:07 crc kubenswrapper[4896]: I0126 17:01:07.666777 4896 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d0c2d03-9a63-45ae-b70f-bca3910ddb9b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 17:01:07 crc kubenswrapper[4896]: I0126 17:01:07.666788 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d0c2d03-9a63-45ae-b70f-bca3910ddb9b-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 17:01:08 crc kubenswrapper[4896]: I0126 17:01:08.289020 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490781-5xj4d" event={"ID":"7d0c2d03-9a63-45ae-b70f-bca3910ddb9b","Type":"ContainerDied","Data":"220a90e81a5e29ca1816e0d9ecbb938ef9a171b9a7fd7ab2e1e5df0621a7381e"} Jan 26 17:01:08 crc kubenswrapper[4896]: I0126 17:01:08.289080 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="220a90e81a5e29ca1816e0d9ecbb938ef9a171b9a7fd7ab2e1e5df0621a7381e" Jan 26 17:01:08 crc kubenswrapper[4896]: I0126 17:01:08.289212 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29490781-5xj4d" Jan 26 17:01:28 crc kubenswrapper[4896]: I0126 17:01:28.608235 4896 scope.go:117] "RemoveContainer" containerID="5f953dfde354e733c1b03f1a8ddfd2353fd2e7fce131bd3de7abdc372f5f17ee" Jan 26 17:01:28 crc kubenswrapper[4896]: I0126 17:01:28.634684 4896 scope.go:117] "RemoveContainer" containerID="8e8454b81b47bf032190bb2b9016aa1a378ba4b113e936154c5659190a79962b" Jan 26 17:01:28 crc kubenswrapper[4896]: I0126 17:01:28.679746 4896 scope.go:117] "RemoveContainer" containerID="36f3490218ff13216ba47638929c865f255a8fad4f36aec0f1a78cce38510d5d" Jan 26 17:01:58 crc kubenswrapper[4896]: I0126 17:01:58.626328 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sfgln"] Jan 26 17:01:58 crc kubenswrapper[4896]: E0126 17:01:58.627660 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d0c2d03-9a63-45ae-b70f-bca3910ddb9b" containerName="keystone-cron" Jan 26 17:01:58 crc kubenswrapper[4896]: I0126 17:01:58.627679 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d0c2d03-9a63-45ae-b70f-bca3910ddb9b" containerName="keystone-cron" Jan 26 17:01:58 crc kubenswrapper[4896]: I0126 17:01:58.627994 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d0c2d03-9a63-45ae-b70f-bca3910ddb9b" containerName="keystone-cron" Jan 26 17:01:58 crc kubenswrapper[4896]: I0126 17:01:58.630357 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sfgln" Jan 26 17:01:58 crc kubenswrapper[4896]: I0126 17:01:58.673307 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sfgln"] Jan 26 17:01:58 crc kubenswrapper[4896]: I0126 17:01:58.988265 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gwvm\" (UniqueName: \"kubernetes.io/projected/f1599b27-0bd8-4b26-9959-f52f89939f97-kube-api-access-8gwvm\") pod \"redhat-operators-sfgln\" (UID: \"f1599b27-0bd8-4b26-9959-f52f89939f97\") " pod="openshift-marketplace/redhat-operators-sfgln" Jan 26 17:01:58 crc kubenswrapper[4896]: I0126 17:01:58.988370 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1599b27-0bd8-4b26-9959-f52f89939f97-catalog-content\") pod \"redhat-operators-sfgln\" (UID: \"f1599b27-0bd8-4b26-9959-f52f89939f97\") " pod="openshift-marketplace/redhat-operators-sfgln" Jan 26 17:01:58 crc kubenswrapper[4896]: I0126 17:01:58.988453 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1599b27-0bd8-4b26-9959-f52f89939f97-utilities\") pod \"redhat-operators-sfgln\" (UID: \"f1599b27-0bd8-4b26-9959-f52f89939f97\") " pod="openshift-marketplace/redhat-operators-sfgln" Jan 26 17:01:59 crc kubenswrapper[4896]: I0126 17:01:59.091417 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gwvm\" (UniqueName: \"kubernetes.io/projected/f1599b27-0bd8-4b26-9959-f52f89939f97-kube-api-access-8gwvm\") pod \"redhat-operators-sfgln\" (UID: \"f1599b27-0bd8-4b26-9959-f52f89939f97\") " pod="openshift-marketplace/redhat-operators-sfgln" Jan 26 17:01:59 crc kubenswrapper[4896]: I0126 17:01:59.091532 4896 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1599b27-0bd8-4b26-9959-f52f89939f97-catalog-content\") pod \"redhat-operators-sfgln\" (UID: \"f1599b27-0bd8-4b26-9959-f52f89939f97\") " pod="openshift-marketplace/redhat-operators-sfgln" Jan 26 17:01:59 crc kubenswrapper[4896]: I0126 17:01:59.091654 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1599b27-0bd8-4b26-9959-f52f89939f97-utilities\") pod \"redhat-operators-sfgln\" (UID: \"f1599b27-0bd8-4b26-9959-f52f89939f97\") " pod="openshift-marketplace/redhat-operators-sfgln" Jan 26 17:01:59 crc kubenswrapper[4896]: I0126 17:01:59.092251 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1599b27-0bd8-4b26-9959-f52f89939f97-catalog-content\") pod \"redhat-operators-sfgln\" (UID: \"f1599b27-0bd8-4b26-9959-f52f89939f97\") " pod="openshift-marketplace/redhat-operators-sfgln" Jan 26 17:01:59 crc kubenswrapper[4896]: I0126 17:01:59.092286 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1599b27-0bd8-4b26-9959-f52f89939f97-utilities\") pod \"redhat-operators-sfgln\" (UID: \"f1599b27-0bd8-4b26-9959-f52f89939f97\") " pod="openshift-marketplace/redhat-operators-sfgln" Jan 26 17:01:59 crc kubenswrapper[4896]: I0126 17:01:59.125178 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gwvm\" (UniqueName: \"kubernetes.io/projected/f1599b27-0bd8-4b26-9959-f52f89939f97-kube-api-access-8gwvm\") pod \"redhat-operators-sfgln\" (UID: \"f1599b27-0bd8-4b26-9959-f52f89939f97\") " pod="openshift-marketplace/redhat-operators-sfgln" Jan 26 17:01:59 crc kubenswrapper[4896]: I0126 17:01:59.282226 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sfgln" Jan 26 17:01:59 crc kubenswrapper[4896]: I0126 17:01:59.831166 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sfgln"] Jan 26 17:02:00 crc kubenswrapper[4896]: I0126 17:02:00.002258 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sfgln" event={"ID":"f1599b27-0bd8-4b26-9959-f52f89939f97","Type":"ContainerStarted","Data":"d6b006bf8e083e594f53ed2d5162733f588d164cc8fdde4ef83c9ca8449ddf6e"} Jan 26 17:02:01 crc kubenswrapper[4896]: I0126 17:02:01.017869 4896 generic.go:334] "Generic (PLEG): container finished" podID="f1599b27-0bd8-4b26-9959-f52f89939f97" containerID="7fcfb1ae1c13145618ab0f1f40e1da81873c6eb0415f899a096205ab407f7556" exitCode=0 Jan 26 17:02:01 crc kubenswrapper[4896]: I0126 17:02:01.017986 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sfgln" event={"ID":"f1599b27-0bd8-4b26-9959-f52f89939f97","Type":"ContainerDied","Data":"7fcfb1ae1c13145618ab0f1f40e1da81873c6eb0415f899a096205ab407f7556"} Jan 26 17:02:01 crc kubenswrapper[4896]: I0126 17:02:01.022312 4896 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 17:02:03 crc kubenswrapper[4896]: I0126 17:02:03.043649 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sfgln" event={"ID":"f1599b27-0bd8-4b26-9959-f52f89939f97","Type":"ContainerStarted","Data":"d631ed46216a7a09dd7c1453ac2c70c32c1b68c669438069a245120e00b3f286"} Jan 26 17:02:08 crc kubenswrapper[4896]: I0126 17:02:08.134482 4896 generic.go:334] "Generic (PLEG): container finished" podID="f1599b27-0bd8-4b26-9959-f52f89939f97" containerID="d631ed46216a7a09dd7c1453ac2c70c32c1b68c669438069a245120e00b3f286" exitCode=0 Jan 26 17:02:08 crc kubenswrapper[4896]: I0126 17:02:08.134559 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-sfgln" event={"ID":"f1599b27-0bd8-4b26-9959-f52f89939f97","Type":"ContainerDied","Data":"d631ed46216a7a09dd7c1453ac2c70c32c1b68c669438069a245120e00b3f286"} Jan 26 17:02:09 crc kubenswrapper[4896]: I0126 17:02:09.148555 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sfgln" event={"ID":"f1599b27-0bd8-4b26-9959-f52f89939f97","Type":"ContainerStarted","Data":"95dfd1af2dc2ea21022eaf4a9bd887bf831176bbc9f008029871b1341023fa51"} Jan 26 17:02:09 crc kubenswrapper[4896]: I0126 17:02:09.181456 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sfgln" podStartSLOduration=3.67187939 podStartE2EDuration="11.181426293s" podCreationTimestamp="2026-01-26 17:01:58 +0000 UTC" firstStartedPulling="2026-01-26 17:02:01.021981921 +0000 UTC m=+5278.803862314" lastFinishedPulling="2026-01-26 17:02:08.531528824 +0000 UTC m=+5286.313409217" observedRunningTime="2026-01-26 17:02:09.170713082 +0000 UTC m=+5286.952593485" watchObservedRunningTime="2026-01-26 17:02:09.181426293 +0000 UTC m=+5286.963306686" Jan 26 17:02:09 crc kubenswrapper[4896]: I0126 17:02:09.283864 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sfgln" Jan 26 17:02:09 crc kubenswrapper[4896]: I0126 17:02:09.283953 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sfgln" Jan 26 17:02:10 crc kubenswrapper[4896]: I0126 17:02:10.412799 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sfgln" podUID="f1599b27-0bd8-4b26-9959-f52f89939f97" containerName="registry-server" probeResult="failure" output=< Jan 26 17:02:10 crc kubenswrapper[4896]: timeout: failed to connect service ":50051" within 1s Jan 26 17:02:10 crc kubenswrapper[4896]: > Jan 26 17:02:18 crc kubenswrapper[4896]: I0126 
17:02:18.813967 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:02:18 crc kubenswrapper[4896]: I0126 17:02:18.814432 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:02:20 crc kubenswrapper[4896]: I0126 17:02:20.351670 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sfgln" podUID="f1599b27-0bd8-4b26-9959-f52f89939f97" containerName="registry-server" probeResult="failure" output=< Jan 26 17:02:20 crc kubenswrapper[4896]: timeout: failed to connect service ":50051" within 1s Jan 26 17:02:20 crc kubenswrapper[4896]: > Jan 26 17:02:29 crc kubenswrapper[4896]: I0126 17:02:29.337139 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sfgln" Jan 26 17:02:29 crc kubenswrapper[4896]: I0126 17:02:29.389991 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sfgln" Jan 26 17:02:29 crc kubenswrapper[4896]: I0126 17:02:29.850938 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sfgln"] Jan 26 17:02:30 crc kubenswrapper[4896]: I0126 17:02:30.608026 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sfgln" podUID="f1599b27-0bd8-4b26-9959-f52f89939f97" containerName="registry-server" 
containerID="cri-o://95dfd1af2dc2ea21022eaf4a9bd887bf831176bbc9f008029871b1341023fa51" gracePeriod=2 Jan 26 17:02:31 crc kubenswrapper[4896]: I0126 17:02:31.144341 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sfgln" Jan 26 17:02:31 crc kubenswrapper[4896]: I0126 17:02:31.314879 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gwvm\" (UniqueName: \"kubernetes.io/projected/f1599b27-0bd8-4b26-9959-f52f89939f97-kube-api-access-8gwvm\") pod \"f1599b27-0bd8-4b26-9959-f52f89939f97\" (UID: \"f1599b27-0bd8-4b26-9959-f52f89939f97\") " Jan 26 17:02:31 crc kubenswrapper[4896]: I0126 17:02:31.315303 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1599b27-0bd8-4b26-9959-f52f89939f97-catalog-content\") pod \"f1599b27-0bd8-4b26-9959-f52f89939f97\" (UID: \"f1599b27-0bd8-4b26-9959-f52f89939f97\") " Jan 26 17:02:31 crc kubenswrapper[4896]: I0126 17:02:31.315542 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1599b27-0bd8-4b26-9959-f52f89939f97-utilities\") pod \"f1599b27-0bd8-4b26-9959-f52f89939f97\" (UID: \"f1599b27-0bd8-4b26-9959-f52f89939f97\") " Jan 26 17:02:31 crc kubenswrapper[4896]: I0126 17:02:31.316524 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1599b27-0bd8-4b26-9959-f52f89939f97-utilities" (OuterVolumeSpecName: "utilities") pod "f1599b27-0bd8-4b26-9959-f52f89939f97" (UID: "f1599b27-0bd8-4b26-9959-f52f89939f97"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:02:31 crc kubenswrapper[4896]: I0126 17:02:31.337384 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1599b27-0bd8-4b26-9959-f52f89939f97-kube-api-access-8gwvm" (OuterVolumeSpecName: "kube-api-access-8gwvm") pod "f1599b27-0bd8-4b26-9959-f52f89939f97" (UID: "f1599b27-0bd8-4b26-9959-f52f89939f97"). InnerVolumeSpecName "kube-api-access-8gwvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:02:31 crc kubenswrapper[4896]: I0126 17:02:31.418111 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1599b27-0bd8-4b26-9959-f52f89939f97-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:02:31 crc kubenswrapper[4896]: I0126 17:02:31.418161 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gwvm\" (UniqueName: \"kubernetes.io/projected/f1599b27-0bd8-4b26-9959-f52f89939f97-kube-api-access-8gwvm\") on node \"crc\" DevicePath \"\"" Jan 26 17:02:31 crc kubenswrapper[4896]: I0126 17:02:31.440872 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1599b27-0bd8-4b26-9959-f52f89939f97-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f1599b27-0bd8-4b26-9959-f52f89939f97" (UID: "f1599b27-0bd8-4b26-9959-f52f89939f97"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:02:31 crc kubenswrapper[4896]: I0126 17:02:31.521562 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1599b27-0bd8-4b26-9959-f52f89939f97-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:02:31 crc kubenswrapper[4896]: I0126 17:02:31.541128 4896 generic.go:334] "Generic (PLEG): container finished" podID="f1599b27-0bd8-4b26-9959-f52f89939f97" containerID="95dfd1af2dc2ea21022eaf4a9bd887bf831176bbc9f008029871b1341023fa51" exitCode=0 Jan 26 17:02:31 crc kubenswrapper[4896]: I0126 17:02:31.541180 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sfgln" event={"ID":"f1599b27-0bd8-4b26-9959-f52f89939f97","Type":"ContainerDied","Data":"95dfd1af2dc2ea21022eaf4a9bd887bf831176bbc9f008029871b1341023fa51"} Jan 26 17:02:31 crc kubenswrapper[4896]: I0126 17:02:31.541209 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sfgln" event={"ID":"f1599b27-0bd8-4b26-9959-f52f89939f97","Type":"ContainerDied","Data":"d6b006bf8e083e594f53ed2d5162733f588d164cc8fdde4ef83c9ca8449ddf6e"} Jan 26 17:02:31 crc kubenswrapper[4896]: I0126 17:02:31.541234 4896 scope.go:117] "RemoveContainer" containerID="95dfd1af2dc2ea21022eaf4a9bd887bf831176bbc9f008029871b1341023fa51" Jan 26 17:02:31 crc kubenswrapper[4896]: I0126 17:02:31.541759 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sfgln" Jan 26 17:02:31 crc kubenswrapper[4896]: I0126 17:02:31.566718 4896 scope.go:117] "RemoveContainer" containerID="d631ed46216a7a09dd7c1453ac2c70c32c1b68c669438069a245120e00b3f286" Jan 26 17:02:31 crc kubenswrapper[4896]: I0126 17:02:31.584718 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sfgln"] Jan 26 17:02:31 crc kubenswrapper[4896]: I0126 17:02:31.597520 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sfgln"] Jan 26 17:02:31 crc kubenswrapper[4896]: I0126 17:02:31.606282 4896 scope.go:117] "RemoveContainer" containerID="7fcfb1ae1c13145618ab0f1f40e1da81873c6eb0415f899a096205ab407f7556" Jan 26 17:02:31 crc kubenswrapper[4896]: I0126 17:02:31.684779 4896 scope.go:117] "RemoveContainer" containerID="95dfd1af2dc2ea21022eaf4a9bd887bf831176bbc9f008029871b1341023fa51" Jan 26 17:02:31 crc kubenswrapper[4896]: E0126 17:02:31.685438 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95dfd1af2dc2ea21022eaf4a9bd887bf831176bbc9f008029871b1341023fa51\": container with ID starting with 95dfd1af2dc2ea21022eaf4a9bd887bf831176bbc9f008029871b1341023fa51 not found: ID does not exist" containerID="95dfd1af2dc2ea21022eaf4a9bd887bf831176bbc9f008029871b1341023fa51" Jan 26 17:02:31 crc kubenswrapper[4896]: I0126 17:02:31.685481 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95dfd1af2dc2ea21022eaf4a9bd887bf831176bbc9f008029871b1341023fa51"} err="failed to get container status \"95dfd1af2dc2ea21022eaf4a9bd887bf831176bbc9f008029871b1341023fa51\": rpc error: code = NotFound desc = could not find container \"95dfd1af2dc2ea21022eaf4a9bd887bf831176bbc9f008029871b1341023fa51\": container with ID starting with 95dfd1af2dc2ea21022eaf4a9bd887bf831176bbc9f008029871b1341023fa51 not found: ID does 
not exist" Jan 26 17:02:31 crc kubenswrapper[4896]: I0126 17:02:31.685508 4896 scope.go:117] "RemoveContainer" containerID="d631ed46216a7a09dd7c1453ac2c70c32c1b68c669438069a245120e00b3f286" Jan 26 17:02:31 crc kubenswrapper[4896]: E0126 17:02:31.685911 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d631ed46216a7a09dd7c1453ac2c70c32c1b68c669438069a245120e00b3f286\": container with ID starting with d631ed46216a7a09dd7c1453ac2c70c32c1b68c669438069a245120e00b3f286 not found: ID does not exist" containerID="d631ed46216a7a09dd7c1453ac2c70c32c1b68c669438069a245120e00b3f286" Jan 26 17:02:31 crc kubenswrapper[4896]: I0126 17:02:31.685932 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d631ed46216a7a09dd7c1453ac2c70c32c1b68c669438069a245120e00b3f286"} err="failed to get container status \"d631ed46216a7a09dd7c1453ac2c70c32c1b68c669438069a245120e00b3f286\": rpc error: code = NotFound desc = could not find container \"d631ed46216a7a09dd7c1453ac2c70c32c1b68c669438069a245120e00b3f286\": container with ID starting with d631ed46216a7a09dd7c1453ac2c70c32c1b68c669438069a245120e00b3f286 not found: ID does not exist" Jan 26 17:02:31 crc kubenswrapper[4896]: I0126 17:02:31.685948 4896 scope.go:117] "RemoveContainer" containerID="7fcfb1ae1c13145618ab0f1f40e1da81873c6eb0415f899a096205ab407f7556" Jan 26 17:02:31 crc kubenswrapper[4896]: E0126 17:02:31.686211 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fcfb1ae1c13145618ab0f1f40e1da81873c6eb0415f899a096205ab407f7556\": container with ID starting with 7fcfb1ae1c13145618ab0f1f40e1da81873c6eb0415f899a096205ab407f7556 not found: ID does not exist" containerID="7fcfb1ae1c13145618ab0f1f40e1da81873c6eb0415f899a096205ab407f7556" Jan 26 17:02:31 crc kubenswrapper[4896]: I0126 17:02:31.686231 4896 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fcfb1ae1c13145618ab0f1f40e1da81873c6eb0415f899a096205ab407f7556"} err="failed to get container status \"7fcfb1ae1c13145618ab0f1f40e1da81873c6eb0415f899a096205ab407f7556\": rpc error: code = NotFound desc = could not find container \"7fcfb1ae1c13145618ab0f1f40e1da81873c6eb0415f899a096205ab407f7556\": container with ID starting with 7fcfb1ae1c13145618ab0f1f40e1da81873c6eb0415f899a096205ab407f7556 not found: ID does not exist" Jan 26 17:02:32 crc kubenswrapper[4896]: I0126 17:02:32.774734 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1599b27-0bd8-4b26-9959-f52f89939f97" path="/var/lib/kubelet/pods/f1599b27-0bd8-4b26-9959-f52f89939f97/volumes" Jan 26 17:02:44 crc kubenswrapper[4896]: I0126 17:02:44.575136 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qvzxd"] Jan 26 17:02:44 crc kubenswrapper[4896]: E0126 17:02:44.576113 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1599b27-0bd8-4b26-9959-f52f89939f97" containerName="extract-content" Jan 26 17:02:44 crc kubenswrapper[4896]: I0126 17:02:44.576126 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1599b27-0bd8-4b26-9959-f52f89939f97" containerName="extract-content" Jan 26 17:02:44 crc kubenswrapper[4896]: E0126 17:02:44.576172 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1599b27-0bd8-4b26-9959-f52f89939f97" containerName="registry-server" Jan 26 17:02:44 crc kubenswrapper[4896]: I0126 17:02:44.576180 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1599b27-0bd8-4b26-9959-f52f89939f97" containerName="registry-server" Jan 26 17:02:44 crc kubenswrapper[4896]: E0126 17:02:44.576212 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1599b27-0bd8-4b26-9959-f52f89939f97" containerName="extract-utilities" Jan 26 17:02:44 crc kubenswrapper[4896]: I0126 17:02:44.576219 4896 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f1599b27-0bd8-4b26-9959-f52f89939f97" containerName="extract-utilities" Jan 26 17:02:44 crc kubenswrapper[4896]: I0126 17:02:44.576434 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1599b27-0bd8-4b26-9959-f52f89939f97" containerName="registry-server" Jan 26 17:02:44 crc kubenswrapper[4896]: I0126 17:02:44.578403 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qvzxd" Jan 26 17:02:44 crc kubenswrapper[4896]: I0126 17:02:44.586744 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qvzxd"] Jan 26 17:02:44 crc kubenswrapper[4896]: I0126 17:02:44.673120 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c-catalog-content\") pod \"redhat-marketplace-qvzxd\" (UID: \"bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c\") " pod="openshift-marketplace/redhat-marketplace-qvzxd" Jan 26 17:02:44 crc kubenswrapper[4896]: I0126 17:02:44.673774 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c-utilities\") pod \"redhat-marketplace-qvzxd\" (UID: \"bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c\") " pod="openshift-marketplace/redhat-marketplace-qvzxd" Jan 26 17:02:44 crc kubenswrapper[4896]: I0126 17:02:44.673950 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8wpn\" (UniqueName: \"kubernetes.io/projected/bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c-kube-api-access-s8wpn\") pod \"redhat-marketplace-qvzxd\" (UID: \"bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c\") " pod="openshift-marketplace/redhat-marketplace-qvzxd" Jan 26 17:02:44 crc kubenswrapper[4896]: I0126 17:02:44.777585 
4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8wpn\" (UniqueName: \"kubernetes.io/projected/bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c-kube-api-access-s8wpn\") pod \"redhat-marketplace-qvzxd\" (UID: \"bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c\") " pod="openshift-marketplace/redhat-marketplace-qvzxd" Jan 26 17:02:44 crc kubenswrapper[4896]: I0126 17:02:44.777918 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c-catalog-content\") pod \"redhat-marketplace-qvzxd\" (UID: \"bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c\") " pod="openshift-marketplace/redhat-marketplace-qvzxd" Jan 26 17:02:44 crc kubenswrapper[4896]: I0126 17:02:44.778085 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c-utilities\") pod \"redhat-marketplace-qvzxd\" (UID: \"bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c\") " pod="openshift-marketplace/redhat-marketplace-qvzxd" Jan 26 17:02:44 crc kubenswrapper[4896]: I0126 17:02:44.778720 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c-catalog-content\") pod \"redhat-marketplace-qvzxd\" (UID: \"bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c\") " pod="openshift-marketplace/redhat-marketplace-qvzxd" Jan 26 17:02:44 crc kubenswrapper[4896]: I0126 17:02:44.778770 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c-utilities\") pod \"redhat-marketplace-qvzxd\" (UID: \"bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c\") " pod="openshift-marketplace/redhat-marketplace-qvzxd" Jan 26 17:02:44 crc kubenswrapper[4896]: I0126 17:02:44.811975 4896 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-s8wpn\" (UniqueName: \"kubernetes.io/projected/bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c-kube-api-access-s8wpn\") pod \"redhat-marketplace-qvzxd\" (UID: \"bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c\") " pod="openshift-marketplace/redhat-marketplace-qvzxd" Jan 26 17:02:44 crc kubenswrapper[4896]: I0126 17:02:44.898980 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qvzxd" Jan 26 17:02:45 crc kubenswrapper[4896]: I0126 17:02:45.561875 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qvzxd"] Jan 26 17:02:45 crc kubenswrapper[4896]: I0126 17:02:45.901720 4896 generic.go:334] "Generic (PLEG): container finished" podID="bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c" containerID="6922c51a8745ae2c92085b2241aff915da99a5eee1d22dbeb4487cf1c1587ab0" exitCode=0 Jan 26 17:02:45 crc kubenswrapper[4896]: I0126 17:02:45.901776 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qvzxd" event={"ID":"bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c","Type":"ContainerDied","Data":"6922c51a8745ae2c92085b2241aff915da99a5eee1d22dbeb4487cf1c1587ab0"} Jan 26 17:02:45 crc kubenswrapper[4896]: I0126 17:02:45.901799 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qvzxd" event={"ID":"bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c","Type":"ContainerStarted","Data":"2ebe07a9ca379964292dc3b65ba26b5fdc949249af710fe38d8ceee5b1e4f0f3"} Jan 26 17:02:47 crc kubenswrapper[4896]: I0126 17:02:47.930092 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qvzxd" event={"ID":"bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c","Type":"ContainerStarted","Data":"5c23237fa17ecc1ab5b09d4c76ba8e7f3b445bc55ee71de43915c33b1cd33096"} Jan 26 17:02:48 crc kubenswrapper[4896]: I0126 17:02:48.814148 4896 patch_prober.go:28] interesting 
pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:02:48 crc kubenswrapper[4896]: I0126 17:02:48.814535 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:02:48 crc kubenswrapper[4896]: I0126 17:02:48.945545 4896 generic.go:334] "Generic (PLEG): container finished" podID="bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c" containerID="5c23237fa17ecc1ab5b09d4c76ba8e7f3b445bc55ee71de43915c33b1cd33096" exitCode=0 Jan 26 17:02:48 crc kubenswrapper[4896]: I0126 17:02:48.945614 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qvzxd" event={"ID":"bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c","Type":"ContainerDied","Data":"5c23237fa17ecc1ab5b09d4c76ba8e7f3b445bc55ee71de43915c33b1cd33096"} Jan 26 17:02:50 crc kubenswrapper[4896]: I0126 17:02:50.970929 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qvzxd" event={"ID":"bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c","Type":"ContainerStarted","Data":"093430ac57e2e9bd1235460bef12036f919b97d3cf5964de59ee42b95e26f072"} Jan 26 17:02:51 crc kubenswrapper[4896]: I0126 17:02:51.005125 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qvzxd" podStartSLOduration=3.558608919 podStartE2EDuration="7.005100837s" podCreationTimestamp="2026-01-26 17:02:44 +0000 UTC" firstStartedPulling="2026-01-26 17:02:45.9034994 +0000 UTC m=+5323.685379793" lastFinishedPulling="2026-01-26 17:02:49.349991318 +0000 UTC 
m=+5327.131871711" observedRunningTime="2026-01-26 17:02:50.993212207 +0000 UTC m=+5328.775092600" watchObservedRunningTime="2026-01-26 17:02:51.005100837 +0000 UTC m=+5328.786981230" Jan 26 17:02:52 crc kubenswrapper[4896]: I0126 17:02:52.948496 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kh2jb"] Jan 26 17:02:52 crc kubenswrapper[4896]: I0126 17:02:52.954336 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kh2jb" Jan 26 17:02:52 crc kubenswrapper[4896]: I0126 17:02:52.967289 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33d029ee-5906-420d-856a-41bf6ae6c65b-catalog-content\") pod \"community-operators-kh2jb\" (UID: \"33d029ee-5906-420d-856a-41bf6ae6c65b\") " pod="openshift-marketplace/community-operators-kh2jb" Jan 26 17:02:52 crc kubenswrapper[4896]: I0126 17:02:52.967473 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33d029ee-5906-420d-856a-41bf6ae6c65b-utilities\") pod \"community-operators-kh2jb\" (UID: \"33d029ee-5906-420d-856a-41bf6ae6c65b\") " pod="openshift-marketplace/community-operators-kh2jb" Jan 26 17:02:52 crc kubenswrapper[4896]: I0126 17:02:52.967659 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7snhz\" (UniqueName: \"kubernetes.io/projected/33d029ee-5906-420d-856a-41bf6ae6c65b-kube-api-access-7snhz\") pod \"community-operators-kh2jb\" (UID: \"33d029ee-5906-420d-856a-41bf6ae6c65b\") " pod="openshift-marketplace/community-operators-kh2jb" Jan 26 17:02:52 crc kubenswrapper[4896]: I0126 17:02:52.969443 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kh2jb"] Jan 26 17:02:53 crc 
kubenswrapper[4896]: I0126 17:02:53.070599 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33d029ee-5906-420d-856a-41bf6ae6c65b-utilities\") pod \"community-operators-kh2jb\" (UID: \"33d029ee-5906-420d-856a-41bf6ae6c65b\") " pod="openshift-marketplace/community-operators-kh2jb"
Jan 26 17:02:53 crc kubenswrapper[4896]: I0126 17:02:53.070708 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7snhz\" (UniqueName: \"kubernetes.io/projected/33d029ee-5906-420d-856a-41bf6ae6c65b-kube-api-access-7snhz\") pod \"community-operators-kh2jb\" (UID: \"33d029ee-5906-420d-856a-41bf6ae6c65b\") " pod="openshift-marketplace/community-operators-kh2jb"
Jan 26 17:02:53 crc kubenswrapper[4896]: I0126 17:02:53.070849 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33d029ee-5906-420d-856a-41bf6ae6c65b-catalog-content\") pod \"community-operators-kh2jb\" (UID: \"33d029ee-5906-420d-856a-41bf6ae6c65b\") " pod="openshift-marketplace/community-operators-kh2jb"
Jan 26 17:02:53 crc kubenswrapper[4896]: I0126 17:02:53.071389 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33d029ee-5906-420d-856a-41bf6ae6c65b-utilities\") pod \"community-operators-kh2jb\" (UID: \"33d029ee-5906-420d-856a-41bf6ae6c65b\") " pod="openshift-marketplace/community-operators-kh2jb"
Jan 26 17:02:53 crc kubenswrapper[4896]: I0126 17:02:53.071453 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33d029ee-5906-420d-856a-41bf6ae6c65b-catalog-content\") pod \"community-operators-kh2jb\" (UID: \"33d029ee-5906-420d-856a-41bf6ae6c65b\") " pod="openshift-marketplace/community-operators-kh2jb"
Jan 26 17:02:53 crc kubenswrapper[4896]: I0126 17:02:53.091462 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7snhz\" (UniqueName: \"kubernetes.io/projected/33d029ee-5906-420d-856a-41bf6ae6c65b-kube-api-access-7snhz\") pod \"community-operators-kh2jb\" (UID: \"33d029ee-5906-420d-856a-41bf6ae6c65b\") " pod="openshift-marketplace/community-operators-kh2jb"
Jan 26 17:02:53 crc kubenswrapper[4896]: I0126 17:02:53.284378 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kh2jb"
Jan 26 17:02:53 crc kubenswrapper[4896]: I0126 17:02:53.980046 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kh2jb"]
Jan 26 17:02:54 crc kubenswrapper[4896]: I0126 17:02:54.899846 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qvzxd"
Jan 26 17:02:54 crc kubenswrapper[4896]: I0126 17:02:54.900443 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qvzxd"
Jan 26 17:02:54 crc kubenswrapper[4896]: I0126 17:02:54.959610 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qvzxd"
Jan 26 17:02:55 crc kubenswrapper[4896]: I0126 17:02:55.021445 4896 generic.go:334] "Generic (PLEG): container finished" podID="33d029ee-5906-420d-856a-41bf6ae6c65b" containerID="eab9d3f98c003e6c415738417907b3e2fa188640b86bf7b9bac01e2511d22894" exitCode=0
Jan 26 17:02:55 crc kubenswrapper[4896]: I0126 17:02:55.021715 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kh2jb" event={"ID":"33d029ee-5906-420d-856a-41bf6ae6c65b","Type":"ContainerDied","Data":"eab9d3f98c003e6c415738417907b3e2fa188640b86bf7b9bac01e2511d22894"}
Jan 26 17:02:55 crc kubenswrapper[4896]: I0126 17:02:55.021760 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kh2jb" event={"ID":"33d029ee-5906-420d-856a-41bf6ae6c65b","Type":"ContainerStarted","Data":"ead83a998ce7cdd450948dfe6304fcd8221cf47c4d82849f98c0ca421a376368"}
Jan 26 17:02:55 crc kubenswrapper[4896]: I0126 17:02:55.082673 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qvzxd"
Jan 26 17:02:57 crc kubenswrapper[4896]: I0126 17:02:57.055316 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kh2jb" event={"ID":"33d029ee-5906-420d-856a-41bf6ae6c65b","Type":"ContainerStarted","Data":"d46a0f489921f4d1a6b510870718542ecdd83bc14280c97dddbd0a9165152ae4"}
Jan 26 17:02:57 crc kubenswrapper[4896]: I0126 17:02:57.343702 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qvzxd"]
Jan 26 17:02:57 crc kubenswrapper[4896]: I0126 17:02:57.344007 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qvzxd" podUID="bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c" containerName="registry-server" containerID="cri-o://093430ac57e2e9bd1235460bef12036f919b97d3cf5964de59ee42b95e26f072" gracePeriod=2
Jan 26 17:02:58 crc kubenswrapper[4896]: I0126 17:02:58.072833 4896 generic.go:334] "Generic (PLEG): container finished" podID="bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c" containerID="093430ac57e2e9bd1235460bef12036f919b97d3cf5964de59ee42b95e26f072" exitCode=0
Jan 26 17:02:58 crc kubenswrapper[4896]: I0126 17:02:58.072868 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qvzxd" event={"ID":"bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c","Type":"ContainerDied","Data":"093430ac57e2e9bd1235460bef12036f919b97d3cf5964de59ee42b95e26f072"}
Jan 26 17:02:58 crc kubenswrapper[4896]: I0126 17:02:58.085856 4896 generic.go:334] "Generic (PLEG): container finished" podID="33d029ee-5906-420d-856a-41bf6ae6c65b" containerID="d46a0f489921f4d1a6b510870718542ecdd83bc14280c97dddbd0a9165152ae4" exitCode=0
Jan 26 17:02:58 crc kubenswrapper[4896]: I0126 17:02:58.085905 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kh2jb" event={"ID":"33d029ee-5906-420d-856a-41bf6ae6c65b","Type":"ContainerDied","Data":"d46a0f489921f4d1a6b510870718542ecdd83bc14280c97dddbd0a9165152ae4"}
Jan 26 17:02:58 crc kubenswrapper[4896]: I0126 17:02:58.919779 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qvzxd"
Jan 26 17:02:59 crc kubenswrapper[4896]: I0126 17:02:59.018252 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c-catalog-content\") pod \"bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c\" (UID: \"bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c\") "
Jan 26 17:02:59 crc kubenswrapper[4896]: I0126 17:02:59.018542 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8wpn\" (UniqueName: \"kubernetes.io/projected/bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c-kube-api-access-s8wpn\") pod \"bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c\" (UID: \"bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c\") "
Jan 26 17:02:59 crc kubenswrapper[4896]: I0126 17:02:59.018624 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c-utilities\") pod \"bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c\" (UID: \"bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c\") "
Jan 26 17:02:59 crc kubenswrapper[4896]: I0126 17:02:59.019804 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c-utilities" (OuterVolumeSpecName: "utilities") pod "bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c" (UID: "bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 17:02:59 crc kubenswrapper[4896]: I0126 17:02:59.026181 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c-kube-api-access-s8wpn" (OuterVolumeSpecName: "kube-api-access-s8wpn") pod "bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c" (UID: "bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c"). InnerVolumeSpecName "kube-api-access-s8wpn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 17:02:59 crc kubenswrapper[4896]: I0126 17:02:59.048177 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c" (UID: "bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 17:02:59 crc kubenswrapper[4896]: I0126 17:02:59.111380 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qvzxd" event={"ID":"bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c","Type":"ContainerDied","Data":"2ebe07a9ca379964292dc3b65ba26b5fdc949249af710fe38d8ceee5b1e4f0f3"}
Jan 26 17:02:59 crc kubenswrapper[4896]: I0126 17:02:59.111435 4896 scope.go:117] "RemoveContainer" containerID="093430ac57e2e9bd1235460bef12036f919b97d3cf5964de59ee42b95e26f072"
Jan 26 17:02:59 crc kubenswrapper[4896]: I0126 17:02:59.111631 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qvzxd"
Jan 26 17:02:59 crc kubenswrapper[4896]: I0126 17:02:59.123423 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 17:02:59 crc kubenswrapper[4896]: I0126 17:02:59.123454 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8wpn\" (UniqueName: \"kubernetes.io/projected/bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c-kube-api-access-s8wpn\") on node \"crc\" DevicePath \"\""
Jan 26 17:02:59 crc kubenswrapper[4896]: I0126 17:02:59.123464 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 17:02:59 crc kubenswrapper[4896]: I0126 17:02:59.155085 4896 scope.go:117] "RemoveContainer" containerID="5c23237fa17ecc1ab5b09d4c76ba8e7f3b445bc55ee71de43915c33b1cd33096"
Jan 26 17:02:59 crc kubenswrapper[4896]: I0126 17:02:59.173998 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qvzxd"]
Jan 26 17:02:59 crc kubenswrapper[4896]: I0126 17:02:59.186530 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qvzxd"]
Jan 26 17:02:59 crc kubenswrapper[4896]: I0126 17:02:59.193429 4896 scope.go:117] "RemoveContainer" containerID="6922c51a8745ae2c92085b2241aff915da99a5eee1d22dbeb4487cf1c1587ab0"
Jan 26 17:03:00 crc kubenswrapper[4896]: I0126 17:03:00.124277 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kh2jb" event={"ID":"33d029ee-5906-420d-856a-41bf6ae6c65b","Type":"ContainerStarted","Data":"7ec0b22cbc45ff556bb8b7057678bdc250dd1e9e1e322c95b88234b4dac4f5c8"}
Jan 26 17:03:00 crc kubenswrapper[4896]: I0126 17:03:00.152719 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kh2jb" podStartSLOduration=3.648869767 podStartE2EDuration="8.152699714s" podCreationTimestamp="2026-01-26 17:02:52 +0000 UTC" firstStartedPulling="2026-01-26 17:02:55.024339424 +0000 UTC m=+5332.806219817" lastFinishedPulling="2026-01-26 17:02:59.528169371 +0000 UTC m=+5337.310049764" observedRunningTime="2026-01-26 17:03:00.15008611 +0000 UTC m=+5337.931966523" watchObservedRunningTime="2026-01-26 17:03:00.152699714 +0000 UTC m=+5337.934580107"
Jan 26 17:03:00 crc kubenswrapper[4896]: I0126 17:03:00.776541 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c" path="/var/lib/kubelet/pods/bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c/volumes"
Jan 26 17:03:03 crc kubenswrapper[4896]: I0126 17:03:03.285480 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kh2jb"
Jan 26 17:03:03 crc kubenswrapper[4896]: I0126 17:03:03.285837 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kh2jb"
Jan 26 17:03:03 crc kubenswrapper[4896]: I0126 17:03:03.343409 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kh2jb"
Jan 26 17:03:04 crc kubenswrapper[4896]: I0126 17:03:04.245400 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kh2jb"
Jan 26 17:03:04 crc kubenswrapper[4896]: I0126 17:03:04.747210 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kh2jb"]
Jan 26 17:03:06 crc kubenswrapper[4896]: I0126 17:03:06.210450 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kh2jb" podUID="33d029ee-5906-420d-856a-41bf6ae6c65b" containerName="registry-server" containerID="cri-o://7ec0b22cbc45ff556bb8b7057678bdc250dd1e9e1e322c95b88234b4dac4f5c8" gracePeriod=2
Jan 26 17:03:06 crc kubenswrapper[4896]: I0126 17:03:06.790611 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kh2jb"
Jan 26 17:03:06 crc kubenswrapper[4896]: I0126 17:03:06.937606 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33d029ee-5906-420d-856a-41bf6ae6c65b-catalog-content\") pod \"33d029ee-5906-420d-856a-41bf6ae6c65b\" (UID: \"33d029ee-5906-420d-856a-41bf6ae6c65b\") "
Jan 26 17:03:06 crc kubenswrapper[4896]: I0126 17:03:06.937855 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33d029ee-5906-420d-856a-41bf6ae6c65b-utilities\") pod \"33d029ee-5906-420d-856a-41bf6ae6c65b\" (UID: \"33d029ee-5906-420d-856a-41bf6ae6c65b\") "
Jan 26 17:03:06 crc kubenswrapper[4896]: I0126 17:03:06.937879 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7snhz\" (UniqueName: \"kubernetes.io/projected/33d029ee-5906-420d-856a-41bf6ae6c65b-kube-api-access-7snhz\") pod \"33d029ee-5906-420d-856a-41bf6ae6c65b\" (UID: \"33d029ee-5906-420d-856a-41bf6ae6c65b\") "
Jan 26 17:03:06 crc kubenswrapper[4896]: I0126 17:03:06.940922 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33d029ee-5906-420d-856a-41bf6ae6c65b-utilities" (OuterVolumeSpecName: "utilities") pod "33d029ee-5906-420d-856a-41bf6ae6c65b" (UID: "33d029ee-5906-420d-856a-41bf6ae6c65b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 17:03:06 crc kubenswrapper[4896]: I0126 17:03:06.944819 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33d029ee-5906-420d-856a-41bf6ae6c65b-kube-api-access-7snhz" (OuterVolumeSpecName: "kube-api-access-7snhz") pod "33d029ee-5906-420d-856a-41bf6ae6c65b" (UID: "33d029ee-5906-420d-856a-41bf6ae6c65b"). InnerVolumeSpecName "kube-api-access-7snhz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 17:03:07 crc kubenswrapper[4896]: I0126 17:03:07.010659 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33d029ee-5906-420d-856a-41bf6ae6c65b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "33d029ee-5906-420d-856a-41bf6ae6c65b" (UID: "33d029ee-5906-420d-856a-41bf6ae6c65b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 17:03:07 crc kubenswrapper[4896]: I0126 17:03:07.041870 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33d029ee-5906-420d-856a-41bf6ae6c65b-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 17:03:07 crc kubenswrapper[4896]: I0126 17:03:07.041908 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7snhz\" (UniqueName: \"kubernetes.io/projected/33d029ee-5906-420d-856a-41bf6ae6c65b-kube-api-access-7snhz\") on node \"crc\" DevicePath \"\""
Jan 26 17:03:07 crc kubenswrapper[4896]: I0126 17:03:07.041919 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33d029ee-5906-420d-856a-41bf6ae6c65b-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 17:03:07 crc kubenswrapper[4896]: I0126 17:03:07.227643 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kh2jb" event={"ID":"33d029ee-5906-420d-856a-41bf6ae6c65b","Type":"ContainerDied","Data":"7ec0b22cbc45ff556bb8b7057678bdc250dd1e9e1e322c95b88234b4dac4f5c8"}
Jan 26 17:03:07 crc kubenswrapper[4896]: I0126 17:03:07.227701 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kh2jb"
Jan 26 17:03:07 crc kubenswrapper[4896]: I0126 17:03:07.227731 4896 scope.go:117] "RemoveContainer" containerID="7ec0b22cbc45ff556bb8b7057678bdc250dd1e9e1e322c95b88234b4dac4f5c8"
Jan 26 17:03:07 crc kubenswrapper[4896]: I0126 17:03:07.227550 4896 generic.go:334] "Generic (PLEG): container finished" podID="33d029ee-5906-420d-856a-41bf6ae6c65b" containerID="7ec0b22cbc45ff556bb8b7057678bdc250dd1e9e1e322c95b88234b4dac4f5c8" exitCode=0
Jan 26 17:03:07 crc kubenswrapper[4896]: I0126 17:03:07.227791 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kh2jb" event={"ID":"33d029ee-5906-420d-856a-41bf6ae6c65b","Type":"ContainerDied","Data":"ead83a998ce7cdd450948dfe6304fcd8221cf47c4d82849f98c0ca421a376368"}
Jan 26 17:03:07 crc kubenswrapper[4896]: I0126 17:03:07.275349 4896 scope.go:117] "RemoveContainer" containerID="d46a0f489921f4d1a6b510870718542ecdd83bc14280c97dddbd0a9165152ae4"
Jan 26 17:03:07 crc kubenswrapper[4896]: I0126 17:03:07.278921 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kh2jb"]
Jan 26 17:03:07 crc kubenswrapper[4896]: I0126 17:03:07.291252 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kh2jb"]
Jan 26 17:03:07 crc kubenswrapper[4896]: I0126 17:03:07.307931 4896 scope.go:117] "RemoveContainer" containerID="eab9d3f98c003e6c415738417907b3e2fa188640b86bf7b9bac01e2511d22894"
Jan 26 17:03:07 crc kubenswrapper[4896]: I0126 17:03:07.358880 4896 scope.go:117] "RemoveContainer" containerID="7ec0b22cbc45ff556bb8b7057678bdc250dd1e9e1e322c95b88234b4dac4f5c8"
Jan 26 17:03:07 crc kubenswrapper[4896]: E0126 17:03:07.359417 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ec0b22cbc45ff556bb8b7057678bdc250dd1e9e1e322c95b88234b4dac4f5c8\": container with ID starting with 7ec0b22cbc45ff556bb8b7057678bdc250dd1e9e1e322c95b88234b4dac4f5c8 not found: ID does not exist" containerID="7ec0b22cbc45ff556bb8b7057678bdc250dd1e9e1e322c95b88234b4dac4f5c8"
Jan 26 17:03:07 crc kubenswrapper[4896]: I0126 17:03:07.359475 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ec0b22cbc45ff556bb8b7057678bdc250dd1e9e1e322c95b88234b4dac4f5c8"} err="failed to get container status \"7ec0b22cbc45ff556bb8b7057678bdc250dd1e9e1e322c95b88234b4dac4f5c8\": rpc error: code = NotFound desc = could not find container \"7ec0b22cbc45ff556bb8b7057678bdc250dd1e9e1e322c95b88234b4dac4f5c8\": container with ID starting with 7ec0b22cbc45ff556bb8b7057678bdc250dd1e9e1e322c95b88234b4dac4f5c8 not found: ID does not exist"
Jan 26 17:03:07 crc kubenswrapper[4896]: I0126 17:03:07.359509 4896 scope.go:117] "RemoveContainer" containerID="d46a0f489921f4d1a6b510870718542ecdd83bc14280c97dddbd0a9165152ae4"
Jan 26 17:03:07 crc kubenswrapper[4896]: E0126 17:03:07.360866 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d46a0f489921f4d1a6b510870718542ecdd83bc14280c97dddbd0a9165152ae4\": container with ID starting with d46a0f489921f4d1a6b510870718542ecdd83bc14280c97dddbd0a9165152ae4 not found: ID does not exist" containerID="d46a0f489921f4d1a6b510870718542ecdd83bc14280c97dddbd0a9165152ae4"
Jan 26 17:03:07 crc kubenswrapper[4896]: I0126 17:03:07.360894 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d46a0f489921f4d1a6b510870718542ecdd83bc14280c97dddbd0a9165152ae4"} err="failed to get container status \"d46a0f489921f4d1a6b510870718542ecdd83bc14280c97dddbd0a9165152ae4\": rpc error: code = NotFound desc = could not find container \"d46a0f489921f4d1a6b510870718542ecdd83bc14280c97dddbd0a9165152ae4\": container with ID starting with d46a0f489921f4d1a6b510870718542ecdd83bc14280c97dddbd0a9165152ae4 not found: ID does not exist"
Jan 26 17:03:07 crc kubenswrapper[4896]: I0126 17:03:07.360909 4896 scope.go:117] "RemoveContainer" containerID="eab9d3f98c003e6c415738417907b3e2fa188640b86bf7b9bac01e2511d22894"
Jan 26 17:03:07 crc kubenswrapper[4896]: E0126 17:03:07.361683 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eab9d3f98c003e6c415738417907b3e2fa188640b86bf7b9bac01e2511d22894\": container with ID starting with eab9d3f98c003e6c415738417907b3e2fa188640b86bf7b9bac01e2511d22894 not found: ID does not exist" containerID="eab9d3f98c003e6c415738417907b3e2fa188640b86bf7b9bac01e2511d22894"
Jan 26 17:03:07 crc kubenswrapper[4896]: I0126 17:03:07.361725 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eab9d3f98c003e6c415738417907b3e2fa188640b86bf7b9bac01e2511d22894"} err="failed to get container status \"eab9d3f98c003e6c415738417907b3e2fa188640b86bf7b9bac01e2511d22894\": rpc error: code = NotFound desc = could not find container \"eab9d3f98c003e6c415738417907b3e2fa188640b86bf7b9bac01e2511d22894\": container with ID starting with eab9d3f98c003e6c415738417907b3e2fa188640b86bf7b9bac01e2511d22894 not found: ID does not exist"
Jan 26 17:03:08 crc kubenswrapper[4896]: I0126 17:03:08.772658 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33d029ee-5906-420d-856a-41bf6ae6c65b" path="/var/lib/kubelet/pods/33d029ee-5906-420d-856a-41bf6ae6c65b/volumes"
Jan 26 17:03:13 crc kubenswrapper[4896]: I0126 17:03:13.685604 4896 trace.go:236] Trace[1329862358]: "Calculate volume metrics of storage for pod openshift-logging/logging-loki-compactor-0" (26-Jan-2026 17:03:12.602) (total time: 1082ms):
Jan 26 17:03:13 crc kubenswrapper[4896]: Trace[1329862358]: [1.082877402s] [1.082877402s] END
Jan 26 17:03:18 crc kubenswrapper[4896]: I0126 17:03:18.813529 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 17:03:18 crc kubenswrapper[4896]: I0126 17:03:18.814134 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 17:03:18 crc kubenswrapper[4896]: I0126 17:03:18.814190 4896 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw"
Jan 26 17:03:19 crc kubenswrapper[4896]: I0126 17:03:19.379090 4896 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e9193c04619849a141026700b8425683c96854e94445a0c5c9b18d4fda8e95ec"} pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 17:03:19 crc kubenswrapper[4896]: I0126 17:03:19.379185 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" containerID="cri-o://e9193c04619849a141026700b8425683c96854e94445a0c5c9b18d4fda8e95ec" gracePeriod=600
Jan 26 17:03:19 crc kubenswrapper[4896]: E0126 17:03:19.514682 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:03:20 crc kubenswrapper[4896]: I0126 17:03:20.391384 4896 generic.go:334] "Generic (PLEG): container finished" podID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerID="e9193c04619849a141026700b8425683c96854e94445a0c5c9b18d4fda8e95ec" exitCode=0
Jan 26 17:03:20 crc kubenswrapper[4896]: I0126 17:03:20.391467 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerDied","Data":"e9193c04619849a141026700b8425683c96854e94445a0c5c9b18d4fda8e95ec"}
Jan 26 17:03:20 crc kubenswrapper[4896]: I0126 17:03:20.393178 4896 scope.go:117] "RemoveContainer" containerID="665c0f6f746e30ca4091b8616feae17ae7842d66c0cfd9cb4db5be8886bce04f"
Jan 26 17:03:20 crc kubenswrapper[4896]: I0126 17:03:20.394225 4896 scope.go:117] "RemoveContainer" containerID="e9193c04619849a141026700b8425683c96854e94445a0c5c9b18d4fda8e95ec"
Jan 26 17:03:20 crc kubenswrapper[4896]: E0126 17:03:20.394676 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:03:31 crc kubenswrapper[4896]: I0126 17:03:31.764769 4896 scope.go:117] "RemoveContainer" containerID="e9193c04619849a141026700b8425683c96854e94445a0c5c9b18d4fda8e95ec"
Jan 26 17:03:31 crc kubenswrapper[4896]: E0126 17:03:31.765923 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:03:44 crc kubenswrapper[4896]: I0126 17:03:44.759637 4896 scope.go:117] "RemoveContainer" containerID="e9193c04619849a141026700b8425683c96854e94445a0c5c9b18d4fda8e95ec"
Jan 26 17:03:44 crc kubenswrapper[4896]: E0126 17:03:44.760796 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:03:56 crc kubenswrapper[4896]: I0126 17:03:56.761398 4896 scope.go:117] "RemoveContainer" containerID="e9193c04619849a141026700b8425683c96854e94445a0c5c9b18d4fda8e95ec"
Jan 26 17:03:56 crc kubenswrapper[4896]: E0126 17:03:56.762466 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:04:11 crc kubenswrapper[4896]: I0126 17:04:11.759434 4896 scope.go:117] "RemoveContainer" containerID="e9193c04619849a141026700b8425683c96854e94445a0c5c9b18d4fda8e95ec"
Jan 26 17:04:11 crc kubenswrapper[4896]: E0126 17:04:11.760241 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:04:24 crc kubenswrapper[4896]: I0126 17:04:24.759210 4896 scope.go:117] "RemoveContainer" containerID="e9193c04619849a141026700b8425683c96854e94445a0c5c9b18d4fda8e95ec"
Jan 26 17:04:24 crc kubenswrapper[4896]: E0126 17:04:24.761100 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:04:36 crc kubenswrapper[4896]: I0126 17:04:36.760271 4896 scope.go:117] "RemoveContainer" containerID="e9193c04619849a141026700b8425683c96854e94445a0c5c9b18d4fda8e95ec"
Jan 26 17:04:36 crc kubenswrapper[4896]: E0126 17:04:36.761076 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:04:47 crc kubenswrapper[4896]: I0126 17:04:47.759975 4896 scope.go:117] "RemoveContainer" containerID="e9193c04619849a141026700b8425683c96854e94445a0c5c9b18d4fda8e95ec"
Jan 26 17:04:47 crc kubenswrapper[4896]: E0126 17:04:47.760797 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:05:02 crc kubenswrapper[4896]: I0126 17:05:02.780180 4896 scope.go:117] "RemoveContainer" containerID="e9193c04619849a141026700b8425683c96854e94445a0c5c9b18d4fda8e95ec"
Jan 26 17:05:02 crc kubenswrapper[4896]: E0126 17:05:02.781465 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:05:17 crc kubenswrapper[4896]: I0126 17:05:17.759962 4896 scope.go:117] "RemoveContainer" containerID="e9193c04619849a141026700b8425683c96854e94445a0c5c9b18d4fda8e95ec"
Jan 26 17:05:17 crc kubenswrapper[4896]: E0126 17:05:17.760908 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:05:21 crc kubenswrapper[4896]: I0126 17:05:21.086138 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dmwdt"]
Jan 26 17:05:21 crc kubenswrapper[4896]: E0126 17:05:21.087443 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c" containerName="registry-server"
Jan 26 17:05:21 crc kubenswrapper[4896]: I0126 17:05:21.087461 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c" containerName="registry-server"
Jan 26 17:05:21 crc kubenswrapper[4896]: E0126 17:05:21.087480 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33d029ee-5906-420d-856a-41bf6ae6c65b" containerName="registry-server"
Jan 26 17:05:21 crc kubenswrapper[4896]: I0126 17:05:21.087488 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="33d029ee-5906-420d-856a-41bf6ae6c65b" containerName="registry-server"
Jan 26 17:05:21 crc kubenswrapper[4896]: E0126 17:05:21.087512 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c" containerName="extract-content"
Jan 26 17:05:21 crc kubenswrapper[4896]: I0126 17:05:21.087521 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c" containerName="extract-content"
Jan 26 17:05:21 crc kubenswrapper[4896]: E0126 17:05:21.087548 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33d029ee-5906-420d-856a-41bf6ae6c65b" containerName="extract-content"
Jan 26 17:05:21 crc kubenswrapper[4896]: I0126 17:05:21.087557 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="33d029ee-5906-420d-856a-41bf6ae6c65b" containerName="extract-content"
Jan 26 17:05:21 crc kubenswrapper[4896]: E0126 17:05:21.087592 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c" containerName="extract-utilities"
Jan 26 17:05:21 crc kubenswrapper[4896]: I0126 17:05:21.087601 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c" containerName="extract-utilities"
Jan 26 17:05:21 crc kubenswrapper[4896]: E0126 17:05:21.087613 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33d029ee-5906-420d-856a-41bf6ae6c65b" containerName="extract-utilities"
Jan 26 17:05:21 crc kubenswrapper[4896]: I0126 17:05:21.087621 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="33d029ee-5906-420d-856a-41bf6ae6c65b" containerName="extract-utilities"
Jan 26 17:05:21 crc kubenswrapper[4896]: I0126 17:05:21.087923 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd8e9b4e-1ca9-4910-aea9-b8a2c69fb79c" containerName="registry-server"
Jan 26 17:05:21 crc kubenswrapper[4896]: I0126 17:05:21.087939 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="33d029ee-5906-420d-856a-41bf6ae6c65b" containerName="registry-server"
Jan 26 17:05:21 crc kubenswrapper[4896]: I0126 17:05:21.090140 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dmwdt"
Jan 26 17:05:21 crc kubenswrapper[4896]: I0126 17:05:21.099969 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dmwdt"]
Jan 26 17:05:21 crc kubenswrapper[4896]: I0126 17:05:21.104647 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa59069f-b54b-40ce-bdf9-1b268bd5e8f5-utilities\") pod \"certified-operators-dmwdt\" (UID: \"fa59069f-b54b-40ce-bdf9-1b268bd5e8f5\") " pod="openshift-marketplace/certified-operators-dmwdt"
Jan 26 17:05:21 crc kubenswrapper[4896]: I0126 17:05:21.104748 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa59069f-b54b-40ce-bdf9-1b268bd5e8f5-catalog-content\") pod \"certified-operators-dmwdt\" (UID: \"fa59069f-b54b-40ce-bdf9-1b268bd5e8f5\") " pod="openshift-marketplace/certified-operators-dmwdt"
Jan 26 17:05:21 crc kubenswrapper[4896]: I0126 17:05:21.104819 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2qhc\" (UniqueName: \"kubernetes.io/projected/fa59069f-b54b-40ce-bdf9-1b268bd5e8f5-kube-api-access-x2qhc\") pod \"certified-operators-dmwdt\" (UID: \"fa59069f-b54b-40ce-bdf9-1b268bd5e8f5\") " pod="openshift-marketplace/certified-operators-dmwdt"
Jan 26 17:05:21 crc kubenswrapper[4896]: I0126 17:05:21.207234 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2qhc\" (UniqueName: \"kubernetes.io/projected/fa59069f-b54b-40ce-bdf9-1b268bd5e8f5-kube-api-access-x2qhc\") pod \"certified-operators-dmwdt\" (UID: \"fa59069f-b54b-40ce-bdf9-1b268bd5e8f5\") " pod="openshift-marketplace/certified-operators-dmwdt"
Jan 26 17:05:21 crc kubenswrapper[4896]: I0126 17:05:21.207819 4896
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa59069f-b54b-40ce-bdf9-1b268bd5e8f5-utilities\") pod \"certified-operators-dmwdt\" (UID: \"fa59069f-b54b-40ce-bdf9-1b268bd5e8f5\") " pod="openshift-marketplace/certified-operators-dmwdt" Jan 26 17:05:21 crc kubenswrapper[4896]: I0126 17:05:21.207885 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa59069f-b54b-40ce-bdf9-1b268bd5e8f5-catalog-content\") pod \"certified-operators-dmwdt\" (UID: \"fa59069f-b54b-40ce-bdf9-1b268bd5e8f5\") " pod="openshift-marketplace/certified-operators-dmwdt" Jan 26 17:05:21 crc kubenswrapper[4896]: I0126 17:05:21.208330 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa59069f-b54b-40ce-bdf9-1b268bd5e8f5-utilities\") pod \"certified-operators-dmwdt\" (UID: \"fa59069f-b54b-40ce-bdf9-1b268bd5e8f5\") " pod="openshift-marketplace/certified-operators-dmwdt" Jan 26 17:05:21 crc kubenswrapper[4896]: I0126 17:05:21.208373 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa59069f-b54b-40ce-bdf9-1b268bd5e8f5-catalog-content\") pod \"certified-operators-dmwdt\" (UID: \"fa59069f-b54b-40ce-bdf9-1b268bd5e8f5\") " pod="openshift-marketplace/certified-operators-dmwdt" Jan 26 17:05:21 crc kubenswrapper[4896]: I0126 17:05:21.224719 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2qhc\" (UniqueName: \"kubernetes.io/projected/fa59069f-b54b-40ce-bdf9-1b268bd5e8f5-kube-api-access-x2qhc\") pod \"certified-operators-dmwdt\" (UID: \"fa59069f-b54b-40ce-bdf9-1b268bd5e8f5\") " pod="openshift-marketplace/certified-operators-dmwdt" Jan 26 17:05:21 crc kubenswrapper[4896]: I0126 17:05:21.429143 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dmwdt" Jan 26 17:05:22 crc kubenswrapper[4896]: I0126 17:05:22.067862 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dmwdt"] Jan 26 17:05:22 crc kubenswrapper[4896]: I0126 17:05:22.927934 4896 generic.go:334] "Generic (PLEG): container finished" podID="fa59069f-b54b-40ce-bdf9-1b268bd5e8f5" containerID="23014a20248a30ee1ba5bd2132153161c8ad1a87296bb2280fe8da0eb24b8a3f" exitCode=0 Jan 26 17:05:22 crc kubenswrapper[4896]: I0126 17:05:22.928332 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dmwdt" event={"ID":"fa59069f-b54b-40ce-bdf9-1b268bd5e8f5","Type":"ContainerDied","Data":"23014a20248a30ee1ba5bd2132153161c8ad1a87296bb2280fe8da0eb24b8a3f"} Jan 26 17:05:22 crc kubenswrapper[4896]: I0126 17:05:22.928360 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dmwdt" event={"ID":"fa59069f-b54b-40ce-bdf9-1b268bd5e8f5","Type":"ContainerStarted","Data":"d68f974a0c9649356eef6af07281cf4d26f593bce3eb16f03651117a32bad4a6"} Jan 26 17:05:25 crc kubenswrapper[4896]: I0126 17:05:25.145914 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dmwdt" event={"ID":"fa59069f-b54b-40ce-bdf9-1b268bd5e8f5","Type":"ContainerStarted","Data":"2bf5305dd185ec3682e0a3bf40575f4a5ba3cd4fc2565812a3414ac64a3a83c5"} Jan 26 17:05:26 crc kubenswrapper[4896]: I0126 17:05:26.157999 4896 generic.go:334] "Generic (PLEG): container finished" podID="fa59069f-b54b-40ce-bdf9-1b268bd5e8f5" containerID="2bf5305dd185ec3682e0a3bf40575f4a5ba3cd4fc2565812a3414ac64a3a83c5" exitCode=0 Jan 26 17:05:26 crc kubenswrapper[4896]: I0126 17:05:26.158106 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dmwdt" 
event={"ID":"fa59069f-b54b-40ce-bdf9-1b268bd5e8f5","Type":"ContainerDied","Data":"2bf5305dd185ec3682e0a3bf40575f4a5ba3cd4fc2565812a3414ac64a3a83c5"} Jan 26 17:05:27 crc kubenswrapper[4896]: I0126 17:05:27.178431 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dmwdt" event={"ID":"fa59069f-b54b-40ce-bdf9-1b268bd5e8f5","Type":"ContainerStarted","Data":"fa913f609b8ff1a8092efc711cd355baac6a91f42c5650e5d901cb86fd69b65b"} Jan 26 17:05:27 crc kubenswrapper[4896]: I0126 17:05:27.204871 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dmwdt" podStartSLOduration=2.577915021 podStartE2EDuration="6.204848296s" podCreationTimestamp="2026-01-26 17:05:21 +0000 UTC" firstStartedPulling="2026-01-26 17:05:22.930612081 +0000 UTC m=+5480.712492474" lastFinishedPulling="2026-01-26 17:05:26.557545356 +0000 UTC m=+5484.339425749" observedRunningTime="2026-01-26 17:05:27.195447507 +0000 UTC m=+5484.977327920" watchObservedRunningTime="2026-01-26 17:05:27.204848296 +0000 UTC m=+5484.986728699" Jan 26 17:05:28 crc kubenswrapper[4896]: I0126 17:05:28.759319 4896 scope.go:117] "RemoveContainer" containerID="e9193c04619849a141026700b8425683c96854e94445a0c5c9b18d4fda8e95ec" Jan 26 17:05:28 crc kubenswrapper[4896]: E0126 17:05:28.759968 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:05:31 crc kubenswrapper[4896]: I0126 17:05:31.430489 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dmwdt" Jan 26 17:05:31 crc 
kubenswrapper[4896]: I0126 17:05:31.431047 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dmwdt" Jan 26 17:05:31 crc kubenswrapper[4896]: I0126 17:05:31.539540 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dmwdt" Jan 26 17:05:32 crc kubenswrapper[4896]: I0126 17:05:32.538977 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dmwdt" Jan 26 17:05:32 crc kubenswrapper[4896]: I0126 17:05:32.593566 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dmwdt"] Jan 26 17:05:34 crc kubenswrapper[4896]: I0126 17:05:34.268353 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dmwdt" podUID="fa59069f-b54b-40ce-bdf9-1b268bd5e8f5" containerName="registry-server" containerID="cri-o://fa913f609b8ff1a8092efc711cd355baac6a91f42c5650e5d901cb86fd69b65b" gracePeriod=2 Jan 26 17:05:34 crc kubenswrapper[4896]: I0126 17:05:34.949559 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dmwdt" Jan 26 17:05:35 crc kubenswrapper[4896]: I0126 17:05:35.061148 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa59069f-b54b-40ce-bdf9-1b268bd5e8f5-catalog-content\") pod \"fa59069f-b54b-40ce-bdf9-1b268bd5e8f5\" (UID: \"fa59069f-b54b-40ce-bdf9-1b268bd5e8f5\") " Jan 26 17:05:35 crc kubenswrapper[4896]: I0126 17:05:35.061458 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa59069f-b54b-40ce-bdf9-1b268bd5e8f5-utilities\") pod \"fa59069f-b54b-40ce-bdf9-1b268bd5e8f5\" (UID: \"fa59069f-b54b-40ce-bdf9-1b268bd5e8f5\") " Jan 26 17:05:35 crc kubenswrapper[4896]: I0126 17:05:35.061491 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2qhc\" (UniqueName: \"kubernetes.io/projected/fa59069f-b54b-40ce-bdf9-1b268bd5e8f5-kube-api-access-x2qhc\") pod \"fa59069f-b54b-40ce-bdf9-1b268bd5e8f5\" (UID: \"fa59069f-b54b-40ce-bdf9-1b268bd5e8f5\") " Jan 26 17:05:35 crc kubenswrapper[4896]: I0126 17:05:35.062232 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa59069f-b54b-40ce-bdf9-1b268bd5e8f5-utilities" (OuterVolumeSpecName: "utilities") pod "fa59069f-b54b-40ce-bdf9-1b268bd5e8f5" (UID: "fa59069f-b54b-40ce-bdf9-1b268bd5e8f5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:05:35 crc kubenswrapper[4896]: I0126 17:05:35.062354 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa59069f-b54b-40ce-bdf9-1b268bd5e8f5-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:05:35 crc kubenswrapper[4896]: I0126 17:05:35.067943 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa59069f-b54b-40ce-bdf9-1b268bd5e8f5-kube-api-access-x2qhc" (OuterVolumeSpecName: "kube-api-access-x2qhc") pod "fa59069f-b54b-40ce-bdf9-1b268bd5e8f5" (UID: "fa59069f-b54b-40ce-bdf9-1b268bd5e8f5"). InnerVolumeSpecName "kube-api-access-x2qhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:05:35 crc kubenswrapper[4896]: I0126 17:05:35.110620 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa59069f-b54b-40ce-bdf9-1b268bd5e8f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa59069f-b54b-40ce-bdf9-1b268bd5e8f5" (UID: "fa59069f-b54b-40ce-bdf9-1b268bd5e8f5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:05:35 crc kubenswrapper[4896]: I0126 17:05:35.166553 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa59069f-b54b-40ce-bdf9-1b268bd5e8f5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:05:35 crc kubenswrapper[4896]: I0126 17:05:35.166962 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2qhc\" (UniqueName: \"kubernetes.io/projected/fa59069f-b54b-40ce-bdf9-1b268bd5e8f5-kube-api-access-x2qhc\") on node \"crc\" DevicePath \"\"" Jan 26 17:05:35 crc kubenswrapper[4896]: I0126 17:05:35.285524 4896 generic.go:334] "Generic (PLEG): container finished" podID="fa59069f-b54b-40ce-bdf9-1b268bd5e8f5" containerID="fa913f609b8ff1a8092efc711cd355baac6a91f42c5650e5d901cb86fd69b65b" exitCode=0 Jan 26 17:05:35 crc kubenswrapper[4896]: I0126 17:05:35.285593 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dmwdt" event={"ID":"fa59069f-b54b-40ce-bdf9-1b268bd5e8f5","Type":"ContainerDied","Data":"fa913f609b8ff1a8092efc711cd355baac6a91f42c5650e5d901cb86fd69b65b"} Jan 26 17:05:35 crc kubenswrapper[4896]: I0126 17:05:35.285625 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dmwdt" event={"ID":"fa59069f-b54b-40ce-bdf9-1b268bd5e8f5","Type":"ContainerDied","Data":"d68f974a0c9649356eef6af07281cf4d26f593bce3eb16f03651117a32bad4a6"} Jan 26 17:05:35 crc kubenswrapper[4896]: I0126 17:05:35.285646 4896 scope.go:117] "RemoveContainer" containerID="fa913f609b8ff1a8092efc711cd355baac6a91f42c5650e5d901cb86fd69b65b" Jan 26 17:05:35 crc kubenswrapper[4896]: I0126 17:05:35.285835 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dmwdt" Jan 26 17:05:35 crc kubenswrapper[4896]: I0126 17:05:35.329273 4896 scope.go:117] "RemoveContainer" containerID="2bf5305dd185ec3682e0a3bf40575f4a5ba3cd4fc2565812a3414ac64a3a83c5" Jan 26 17:05:35 crc kubenswrapper[4896]: I0126 17:05:35.333283 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dmwdt"] Jan 26 17:05:35 crc kubenswrapper[4896]: I0126 17:05:35.345113 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dmwdt"] Jan 26 17:05:35 crc kubenswrapper[4896]: I0126 17:05:35.357323 4896 scope.go:117] "RemoveContainer" containerID="23014a20248a30ee1ba5bd2132153161c8ad1a87296bb2280fe8da0eb24b8a3f" Jan 26 17:05:35 crc kubenswrapper[4896]: I0126 17:05:35.410937 4896 scope.go:117] "RemoveContainer" containerID="fa913f609b8ff1a8092efc711cd355baac6a91f42c5650e5d901cb86fd69b65b" Jan 26 17:05:35 crc kubenswrapper[4896]: E0126 17:05:35.417371 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa913f609b8ff1a8092efc711cd355baac6a91f42c5650e5d901cb86fd69b65b\": container with ID starting with fa913f609b8ff1a8092efc711cd355baac6a91f42c5650e5d901cb86fd69b65b not found: ID does not exist" containerID="fa913f609b8ff1a8092efc711cd355baac6a91f42c5650e5d901cb86fd69b65b" Jan 26 17:05:35 crc kubenswrapper[4896]: I0126 17:05:35.417437 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa913f609b8ff1a8092efc711cd355baac6a91f42c5650e5d901cb86fd69b65b"} err="failed to get container status \"fa913f609b8ff1a8092efc711cd355baac6a91f42c5650e5d901cb86fd69b65b\": rpc error: code = NotFound desc = could not find container \"fa913f609b8ff1a8092efc711cd355baac6a91f42c5650e5d901cb86fd69b65b\": container with ID starting with fa913f609b8ff1a8092efc711cd355baac6a91f42c5650e5d901cb86fd69b65b not 
found: ID does not exist" Jan 26 17:05:35 crc kubenswrapper[4896]: I0126 17:05:35.417467 4896 scope.go:117] "RemoveContainer" containerID="2bf5305dd185ec3682e0a3bf40575f4a5ba3cd4fc2565812a3414ac64a3a83c5" Jan 26 17:05:35 crc kubenswrapper[4896]: E0126 17:05:35.418173 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bf5305dd185ec3682e0a3bf40575f4a5ba3cd4fc2565812a3414ac64a3a83c5\": container with ID starting with 2bf5305dd185ec3682e0a3bf40575f4a5ba3cd4fc2565812a3414ac64a3a83c5 not found: ID does not exist" containerID="2bf5305dd185ec3682e0a3bf40575f4a5ba3cd4fc2565812a3414ac64a3a83c5" Jan 26 17:05:35 crc kubenswrapper[4896]: I0126 17:05:35.418214 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bf5305dd185ec3682e0a3bf40575f4a5ba3cd4fc2565812a3414ac64a3a83c5"} err="failed to get container status \"2bf5305dd185ec3682e0a3bf40575f4a5ba3cd4fc2565812a3414ac64a3a83c5\": rpc error: code = NotFound desc = could not find container \"2bf5305dd185ec3682e0a3bf40575f4a5ba3cd4fc2565812a3414ac64a3a83c5\": container with ID starting with 2bf5305dd185ec3682e0a3bf40575f4a5ba3cd4fc2565812a3414ac64a3a83c5 not found: ID does not exist" Jan 26 17:05:35 crc kubenswrapper[4896]: I0126 17:05:35.418243 4896 scope.go:117] "RemoveContainer" containerID="23014a20248a30ee1ba5bd2132153161c8ad1a87296bb2280fe8da0eb24b8a3f" Jan 26 17:05:35 crc kubenswrapper[4896]: E0126 17:05:35.418644 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23014a20248a30ee1ba5bd2132153161c8ad1a87296bb2280fe8da0eb24b8a3f\": container with ID starting with 23014a20248a30ee1ba5bd2132153161c8ad1a87296bb2280fe8da0eb24b8a3f not found: ID does not exist" containerID="23014a20248a30ee1ba5bd2132153161c8ad1a87296bb2280fe8da0eb24b8a3f" Jan 26 17:05:35 crc kubenswrapper[4896]: I0126 17:05:35.418686 4896 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23014a20248a30ee1ba5bd2132153161c8ad1a87296bb2280fe8da0eb24b8a3f"} err="failed to get container status \"23014a20248a30ee1ba5bd2132153161c8ad1a87296bb2280fe8da0eb24b8a3f\": rpc error: code = NotFound desc = could not find container \"23014a20248a30ee1ba5bd2132153161c8ad1a87296bb2280fe8da0eb24b8a3f\": container with ID starting with 23014a20248a30ee1ba5bd2132153161c8ad1a87296bb2280fe8da0eb24b8a3f not found: ID does not exist" Jan 26 17:05:36 crc kubenswrapper[4896]: I0126 17:05:36.780815 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa59069f-b54b-40ce-bdf9-1b268bd5e8f5" path="/var/lib/kubelet/pods/fa59069f-b54b-40ce-bdf9-1b268bd5e8f5/volumes" Jan 26 17:05:40 crc kubenswrapper[4896]: I0126 17:05:40.866328 4896 scope.go:117] "RemoveContainer" containerID="e9193c04619849a141026700b8425683c96854e94445a0c5c9b18d4fda8e95ec" Jan 26 17:05:40 crc kubenswrapper[4896]: E0126 17:05:40.867233 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:05:52 crc kubenswrapper[4896]: I0126 17:05:52.772725 4896 scope.go:117] "RemoveContainer" containerID="e9193c04619849a141026700b8425683c96854e94445a0c5c9b18d4fda8e95ec" Jan 26 17:05:52 crc kubenswrapper[4896]: E0126 17:05:52.773820 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:06:06 crc kubenswrapper[4896]: I0126 17:06:06.760931 4896 scope.go:117] "RemoveContainer" containerID="e9193c04619849a141026700b8425683c96854e94445a0c5c9b18d4fda8e95ec" Jan 26 17:06:06 crc kubenswrapper[4896]: E0126 17:06:06.762006 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:06:19 crc kubenswrapper[4896]: I0126 17:06:19.760434 4896 scope.go:117] "RemoveContainer" containerID="e9193c04619849a141026700b8425683c96854e94445a0c5c9b18d4fda8e95ec" Jan 26 17:06:19 crc kubenswrapper[4896]: E0126 17:06:19.761247 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:06:34 crc kubenswrapper[4896]: I0126 17:06:34.761071 4896 scope.go:117] "RemoveContainer" containerID="e9193c04619849a141026700b8425683c96854e94445a0c5c9b18d4fda8e95ec" Jan 26 17:06:34 crc kubenswrapper[4896]: E0126 17:06:34.762177 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:06:46 crc kubenswrapper[4896]: I0126 17:06:46.760677 4896 scope.go:117] "RemoveContainer" containerID="e9193c04619849a141026700b8425683c96854e94445a0c5c9b18d4fda8e95ec" Jan 26 17:06:46 crc kubenswrapper[4896]: E0126 17:06:46.761417 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:06:59 crc kubenswrapper[4896]: I0126 17:06:59.760358 4896 scope.go:117] "RemoveContainer" containerID="e9193c04619849a141026700b8425683c96854e94445a0c5c9b18d4fda8e95ec" Jan 26 17:06:59 crc kubenswrapper[4896]: E0126 17:06:59.761615 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:07:13 crc kubenswrapper[4896]: I0126 17:07:13.760355 4896 scope.go:117] "RemoveContainer" containerID="e9193c04619849a141026700b8425683c96854e94445a0c5c9b18d4fda8e95ec" Jan 26 17:07:13 crc kubenswrapper[4896]: E0126 17:07:13.761369 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:07:26 crc kubenswrapper[4896]: I0126 17:07:26.223976 4896 trace.go:236] Trace[742393475]: "Calculate volume metrics of storage for pod openshift-logging/logging-loki-ingester-0" (26-Jan-2026 17:07:13.903) (total time: 12320ms): Jan 26 17:07:26 crc kubenswrapper[4896]: Trace[742393475]: [12.320253038s] [12.320253038s] END Jan 26 17:07:26 crc kubenswrapper[4896]: I0126 17:07:26.244951 4896 trace.go:236] Trace[1032903438]: "Calculate volume metrics of swift for pod openstack/swift-storage-0" (26-Jan-2026 17:07:10.347) (total time: 15897ms): Jan 26 17:07:26 crc kubenswrapper[4896]: Trace[1032903438]: [15.89746112s] [15.89746112s] END Jan 26 17:07:26 crc kubenswrapper[4896]: I0126 17:07:26.280064 4896 trace.go:236] Trace[824564268]: "Calculate volume metrics of ovndbcluster-nb-etc-ovn for pod openstack/ovsdbserver-nb-0" (26-Jan-2026 17:07:20.468) (total time: 5811ms): Jan 26 17:07:26 crc kubenswrapper[4896]: Trace[824564268]: [5.811850334s] [5.811850334s] END Jan 26 17:07:26 crc kubenswrapper[4896]: I0126 17:07:26.290058 4896 trace.go:236] Trace[1130649718]: "Calculate volume metrics of prometheus-metric-storage-db for pod openstack/prometheus-metric-storage-0" (26-Jan-2026 17:07:19.740) (total time: 6549ms): Jan 26 17:07:26 crc kubenswrapper[4896]: Trace[1130649718]: [6.549497106s] [6.549497106s] END Jan 26 17:07:26 crc kubenswrapper[4896]: I0126 17:07:26.760008 4896 scope.go:117] "RemoveContainer" containerID="e9193c04619849a141026700b8425683c96854e94445a0c5c9b18d4fda8e95ec" Jan 26 17:07:26 crc kubenswrapper[4896]: E0126 17:07:26.760848 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:07:38 crc kubenswrapper[4896]: I0126 17:07:38.760635 4896 scope.go:117] "RemoveContainer" containerID="e9193c04619849a141026700b8425683c96854e94445a0c5c9b18d4fda8e95ec" Jan 26 17:07:38 crc kubenswrapper[4896]: E0126 17:07:38.761535 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:07:53 crc kubenswrapper[4896]: I0126 17:07:53.760061 4896 scope.go:117] "RemoveContainer" containerID="e9193c04619849a141026700b8425683c96854e94445a0c5c9b18d4fda8e95ec" Jan 26 17:07:53 crc kubenswrapper[4896]: E0126 17:07:53.761084 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:08:04 crc kubenswrapper[4896]: I0126 17:08:04.759788 4896 scope.go:117] "RemoveContainer" containerID="e9193c04619849a141026700b8425683c96854e94445a0c5c9b18d4fda8e95ec" Jan 26 17:08:04 crc kubenswrapper[4896]: E0126 17:08:04.760672 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:08:18 crc kubenswrapper[4896]: I0126 17:08:18.763073 4896 scope.go:117] "RemoveContainer" containerID="e9193c04619849a141026700b8425683c96854e94445a0c5c9b18d4fda8e95ec" Jan 26 17:08:18 crc kubenswrapper[4896]: E0126 17:08:18.764029 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:08:33 crc kubenswrapper[4896]: I0126 17:08:33.760074 4896 scope.go:117] "RemoveContainer" containerID="e9193c04619849a141026700b8425683c96854e94445a0c5c9b18d4fda8e95ec" Jan 26 17:08:34 crc kubenswrapper[4896]: I0126 17:08:34.271050 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerStarted","Data":"ca9bd86fb9571d769b2484a91a4041bce17b389aaaedca46bd94dc71689604f8"} Jan 26 17:10:48 crc kubenswrapper[4896]: I0126 17:10:48.813198 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:10:48 crc kubenswrapper[4896]: I0126 17:10:48.813784 4896 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:11:18 crc kubenswrapper[4896]: I0126 17:11:18.814140 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:11:18 crc kubenswrapper[4896]: I0126 17:11:18.814571 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:11:48 crc kubenswrapper[4896]: I0126 17:11:48.814208 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:11:48 crc kubenswrapper[4896]: I0126 17:11:48.814966 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:11:48 crc kubenswrapper[4896]: I0126 17:11:48.815074 4896 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" Jan 26 17:11:48 crc 
kubenswrapper[4896]: I0126 17:11:48.816740 4896 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ca9bd86fb9571d769b2484a91a4041bce17b389aaaedca46bd94dc71689604f8"} pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 17:11:48 crc kubenswrapper[4896]: I0126 17:11:48.816854 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" containerID="cri-o://ca9bd86fb9571d769b2484a91a4041bce17b389aaaedca46bd94dc71689604f8" gracePeriod=600 Jan 26 17:11:49 crc kubenswrapper[4896]: I0126 17:11:49.767065 4896 generic.go:334] "Generic (PLEG): container finished" podID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerID="ca9bd86fb9571d769b2484a91a4041bce17b389aaaedca46bd94dc71689604f8" exitCode=0 Jan 26 17:11:49 crc kubenswrapper[4896]: I0126 17:11:49.767134 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerDied","Data":"ca9bd86fb9571d769b2484a91a4041bce17b389aaaedca46bd94dc71689604f8"} Jan 26 17:11:49 crc kubenswrapper[4896]: I0126 17:11:49.767745 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerStarted","Data":"fef8e649f3d3008e11bf5bf492f0177bb3bcf49bf01ef1e253e803d57f696c5f"} Jan 26 17:11:49 crc kubenswrapper[4896]: I0126 17:11:49.767778 4896 scope.go:117] "RemoveContainer" containerID="e9193c04619849a141026700b8425683c96854e94445a0c5c9b18d4fda8e95ec" Jan 26 17:13:11 crc kubenswrapper[4896]: I0126 17:13:11.634267 4896 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bj7vn"] Jan 26 17:13:11 crc kubenswrapper[4896]: E0126 17:13:11.635867 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa59069f-b54b-40ce-bdf9-1b268bd5e8f5" containerName="extract-content" Jan 26 17:13:11 crc kubenswrapper[4896]: I0126 17:13:11.635891 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa59069f-b54b-40ce-bdf9-1b268bd5e8f5" containerName="extract-content" Jan 26 17:13:11 crc kubenswrapper[4896]: E0126 17:13:11.635939 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa59069f-b54b-40ce-bdf9-1b268bd5e8f5" containerName="extract-utilities" Jan 26 17:13:11 crc kubenswrapper[4896]: I0126 17:13:11.635951 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa59069f-b54b-40ce-bdf9-1b268bd5e8f5" containerName="extract-utilities" Jan 26 17:13:11 crc kubenswrapper[4896]: E0126 17:13:11.636009 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa59069f-b54b-40ce-bdf9-1b268bd5e8f5" containerName="registry-server" Jan 26 17:13:11 crc kubenswrapper[4896]: I0126 17:13:11.636021 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa59069f-b54b-40ce-bdf9-1b268bd5e8f5" containerName="registry-server" Jan 26 17:13:11 crc kubenswrapper[4896]: I0126 17:13:11.636477 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa59069f-b54b-40ce-bdf9-1b268bd5e8f5" containerName="registry-server" Jan 26 17:13:11 crc kubenswrapper[4896]: I0126 17:13:11.639891 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bj7vn" Jan 26 17:13:11 crc kubenswrapper[4896]: I0126 17:13:11.672419 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bj7vn"] Jan 26 17:13:11 crc kubenswrapper[4896]: I0126 17:13:11.764195 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a23765e-261d-49cc-a21f-5548e62c4b41-catalog-content\") pod \"redhat-marketplace-bj7vn\" (UID: \"3a23765e-261d-49cc-a21f-5548e62c4b41\") " pod="openshift-marketplace/redhat-marketplace-bj7vn" Jan 26 17:13:11 crc kubenswrapper[4896]: I0126 17:13:11.764313 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a23765e-261d-49cc-a21f-5548e62c4b41-utilities\") pod \"redhat-marketplace-bj7vn\" (UID: \"3a23765e-261d-49cc-a21f-5548e62c4b41\") " pod="openshift-marketplace/redhat-marketplace-bj7vn" Jan 26 17:13:11 crc kubenswrapper[4896]: I0126 17:13:11.764998 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7k72\" (UniqueName: \"kubernetes.io/projected/3a23765e-261d-49cc-a21f-5548e62c4b41-kube-api-access-k7k72\") pod \"redhat-marketplace-bj7vn\" (UID: \"3a23765e-261d-49cc-a21f-5548e62c4b41\") " pod="openshift-marketplace/redhat-marketplace-bj7vn" Jan 26 17:13:11 crc kubenswrapper[4896]: I0126 17:13:11.867315 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7k72\" (UniqueName: \"kubernetes.io/projected/3a23765e-261d-49cc-a21f-5548e62c4b41-kube-api-access-k7k72\") pod \"redhat-marketplace-bj7vn\" (UID: \"3a23765e-261d-49cc-a21f-5548e62c4b41\") " pod="openshift-marketplace/redhat-marketplace-bj7vn" Jan 26 17:13:11 crc kubenswrapper[4896]: I0126 17:13:11.867553 4896 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a23765e-261d-49cc-a21f-5548e62c4b41-catalog-content\") pod \"redhat-marketplace-bj7vn\" (UID: \"3a23765e-261d-49cc-a21f-5548e62c4b41\") " pod="openshift-marketplace/redhat-marketplace-bj7vn" Jan 26 17:13:11 crc kubenswrapper[4896]: I0126 17:13:11.867596 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a23765e-261d-49cc-a21f-5548e62c4b41-utilities\") pod \"redhat-marketplace-bj7vn\" (UID: \"3a23765e-261d-49cc-a21f-5548e62c4b41\") " pod="openshift-marketplace/redhat-marketplace-bj7vn" Jan 26 17:13:11 crc kubenswrapper[4896]: I0126 17:13:11.868659 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a23765e-261d-49cc-a21f-5548e62c4b41-catalog-content\") pod \"redhat-marketplace-bj7vn\" (UID: \"3a23765e-261d-49cc-a21f-5548e62c4b41\") " pod="openshift-marketplace/redhat-marketplace-bj7vn" Jan 26 17:13:11 crc kubenswrapper[4896]: I0126 17:13:11.868814 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a23765e-261d-49cc-a21f-5548e62c4b41-utilities\") pod \"redhat-marketplace-bj7vn\" (UID: \"3a23765e-261d-49cc-a21f-5548e62c4b41\") " pod="openshift-marketplace/redhat-marketplace-bj7vn" Jan 26 17:13:11 crc kubenswrapper[4896]: I0126 17:13:11.890535 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7k72\" (UniqueName: \"kubernetes.io/projected/3a23765e-261d-49cc-a21f-5548e62c4b41-kube-api-access-k7k72\") pod \"redhat-marketplace-bj7vn\" (UID: \"3a23765e-261d-49cc-a21f-5548e62c4b41\") " pod="openshift-marketplace/redhat-marketplace-bj7vn" Jan 26 17:13:11 crc kubenswrapper[4896]: I0126 17:13:11.979879 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bj7vn" Jan 26 17:13:12 crc kubenswrapper[4896]: I0126 17:13:12.521741 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bj7vn"] Jan 26 17:13:12 crc kubenswrapper[4896]: I0126 17:13:12.971247 4896 generic.go:334] "Generic (PLEG): container finished" podID="3a23765e-261d-49cc-a21f-5548e62c4b41" containerID="98101a14c346a743be80b851bb4ab3c57c2fe4080c02822959ba67da86f1a3fc" exitCode=0 Jan 26 17:13:12 crc kubenswrapper[4896]: I0126 17:13:12.971707 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bj7vn" event={"ID":"3a23765e-261d-49cc-a21f-5548e62c4b41","Type":"ContainerDied","Data":"98101a14c346a743be80b851bb4ab3c57c2fe4080c02822959ba67da86f1a3fc"} Jan 26 17:13:12 crc kubenswrapper[4896]: I0126 17:13:12.971761 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bj7vn" event={"ID":"3a23765e-261d-49cc-a21f-5548e62c4b41","Type":"ContainerStarted","Data":"21e29c8ce1e883e25adee9ca79372da8e4b27c8514fa8275654c3775b27c4931"} Jan 26 17:13:12 crc kubenswrapper[4896]: I0126 17:13:12.974729 4896 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 17:13:19 crc kubenswrapper[4896]: I0126 17:13:19.060198 4896 generic.go:334] "Generic (PLEG): container finished" podID="3a23765e-261d-49cc-a21f-5548e62c4b41" containerID="85d5b5dc22724eff84e44bfc39e1b75ee9d196b1ee4f02b0a781d370b80b276d" exitCode=0 Jan 26 17:13:19 crc kubenswrapper[4896]: I0126 17:13:19.060255 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bj7vn" event={"ID":"3a23765e-261d-49cc-a21f-5548e62c4b41","Type":"ContainerDied","Data":"85d5b5dc22724eff84e44bfc39e1b75ee9d196b1ee4f02b0a781d370b80b276d"} Jan 26 17:13:20 crc kubenswrapper[4896]: I0126 17:13:20.080655 4896 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-bj7vn" event={"ID":"3a23765e-261d-49cc-a21f-5548e62c4b41","Type":"ContainerStarted","Data":"e40ad40025b34ea43681ac8c3312793a68ac73fbf07fd6ba17d01651c9571950"} Jan 26 17:13:21 crc kubenswrapper[4896]: I0126 17:13:21.123452 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-54rwz"] Jan 26 17:13:21 crc kubenswrapper[4896]: I0126 17:13:21.125994 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bj7vn" podStartSLOduration=3.486926646 podStartE2EDuration="10.125943008s" podCreationTimestamp="2026-01-26 17:13:11 +0000 UTC" firstStartedPulling="2026-01-26 17:13:12.97432803 +0000 UTC m=+5950.756208423" lastFinishedPulling="2026-01-26 17:13:19.613344372 +0000 UTC m=+5957.395224785" observedRunningTime="2026-01-26 17:13:21.115150694 +0000 UTC m=+5958.897031077" watchObservedRunningTime="2026-01-26 17:13:21.125943008 +0000 UTC m=+5958.907823401" Jan 26 17:13:21 crc kubenswrapper[4896]: I0126 17:13:21.126220 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-54rwz" Jan 26 17:13:21 crc kubenswrapper[4896]: I0126 17:13:21.174153 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-54rwz"] Jan 26 17:13:21 crc kubenswrapper[4896]: I0126 17:13:21.257144 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tqzr\" (UniqueName: \"kubernetes.io/projected/1ca85b85-2af6-40e8-bf39-5a708d8ccad1-kube-api-access-5tqzr\") pod \"redhat-operators-54rwz\" (UID: \"1ca85b85-2af6-40e8-bf39-5a708d8ccad1\") " pod="openshift-marketplace/redhat-operators-54rwz" Jan 26 17:13:21 crc kubenswrapper[4896]: I0126 17:13:21.257620 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ca85b85-2af6-40e8-bf39-5a708d8ccad1-utilities\") pod \"redhat-operators-54rwz\" (UID: \"1ca85b85-2af6-40e8-bf39-5a708d8ccad1\") " pod="openshift-marketplace/redhat-operators-54rwz" Jan 26 17:13:21 crc kubenswrapper[4896]: I0126 17:13:21.257698 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ca85b85-2af6-40e8-bf39-5a708d8ccad1-catalog-content\") pod \"redhat-operators-54rwz\" (UID: \"1ca85b85-2af6-40e8-bf39-5a708d8ccad1\") " pod="openshift-marketplace/redhat-operators-54rwz" Jan 26 17:13:21 crc kubenswrapper[4896]: I0126 17:13:21.360791 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ca85b85-2af6-40e8-bf39-5a708d8ccad1-utilities\") pod \"redhat-operators-54rwz\" (UID: \"1ca85b85-2af6-40e8-bf39-5a708d8ccad1\") " pod="openshift-marketplace/redhat-operators-54rwz" Jan 26 17:13:21 crc kubenswrapper[4896]: I0126 17:13:21.361405 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ca85b85-2af6-40e8-bf39-5a708d8ccad1-catalog-content\") pod \"redhat-operators-54rwz\" (UID: \"1ca85b85-2af6-40e8-bf39-5a708d8ccad1\") " pod="openshift-marketplace/redhat-operators-54rwz" Jan 26 17:13:21 crc kubenswrapper[4896]: I0126 17:13:21.361347 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ca85b85-2af6-40e8-bf39-5a708d8ccad1-utilities\") pod \"redhat-operators-54rwz\" (UID: \"1ca85b85-2af6-40e8-bf39-5a708d8ccad1\") " pod="openshift-marketplace/redhat-operators-54rwz" Jan 26 17:13:21 crc kubenswrapper[4896]: I0126 17:13:21.361756 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ca85b85-2af6-40e8-bf39-5a708d8ccad1-catalog-content\") pod \"redhat-operators-54rwz\" (UID: \"1ca85b85-2af6-40e8-bf39-5a708d8ccad1\") " pod="openshift-marketplace/redhat-operators-54rwz" Jan 26 17:13:21 crc kubenswrapper[4896]: I0126 17:13:21.362132 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tqzr\" (UniqueName: \"kubernetes.io/projected/1ca85b85-2af6-40e8-bf39-5a708d8ccad1-kube-api-access-5tqzr\") pod \"redhat-operators-54rwz\" (UID: \"1ca85b85-2af6-40e8-bf39-5a708d8ccad1\") " pod="openshift-marketplace/redhat-operators-54rwz" Jan 26 17:13:21 crc kubenswrapper[4896]: I0126 17:13:21.386672 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tqzr\" (UniqueName: \"kubernetes.io/projected/1ca85b85-2af6-40e8-bf39-5a708d8ccad1-kube-api-access-5tqzr\") pod \"redhat-operators-54rwz\" (UID: \"1ca85b85-2af6-40e8-bf39-5a708d8ccad1\") " pod="openshift-marketplace/redhat-operators-54rwz" Jan 26 17:13:21 crc kubenswrapper[4896]: I0126 17:13:21.456801 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-54rwz" Jan 26 17:13:21 crc kubenswrapper[4896]: I0126 17:13:21.981023 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bj7vn" Jan 26 17:13:21 crc kubenswrapper[4896]: I0126 17:13:21.981947 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bj7vn" Jan 26 17:13:22 crc kubenswrapper[4896]: I0126 17:13:22.056103 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bj7vn" Jan 26 17:13:22 crc kubenswrapper[4896]: I0126 17:13:22.087298 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-54rwz"] Jan 26 17:13:23 crc kubenswrapper[4896]: I0126 17:13:23.133145 4896 generic.go:334] "Generic (PLEG): container finished" podID="1ca85b85-2af6-40e8-bf39-5a708d8ccad1" containerID="5c3b2069b65e39e29cffd1d4d7394c9a0333cc5146f4bef795abbc8529aebb92" exitCode=0 Jan 26 17:13:23 crc kubenswrapper[4896]: I0126 17:13:23.133255 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-54rwz" event={"ID":"1ca85b85-2af6-40e8-bf39-5a708d8ccad1","Type":"ContainerDied","Data":"5c3b2069b65e39e29cffd1d4d7394c9a0333cc5146f4bef795abbc8529aebb92"} Jan 26 17:13:23 crc kubenswrapper[4896]: I0126 17:13:23.133813 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-54rwz" event={"ID":"1ca85b85-2af6-40e8-bf39-5a708d8ccad1","Type":"ContainerStarted","Data":"0a7f66ef075e37bcf959d127c9e8f36f174e57edf1b0479a902bddd1389c46d3"} Jan 26 17:13:25 crc kubenswrapper[4896]: I0126 17:13:25.164035 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-54rwz" 
event={"ID":"1ca85b85-2af6-40e8-bf39-5a708d8ccad1","Type":"ContainerStarted","Data":"fb37b530d0d0c3a2075a7b09719729b174bd6e29102c040672c3408a220cc199"} Jan 26 17:13:28 crc kubenswrapper[4896]: I0126 17:13:28.205475 4896 generic.go:334] "Generic (PLEG): container finished" podID="1ca85b85-2af6-40e8-bf39-5a708d8ccad1" containerID="fb37b530d0d0c3a2075a7b09719729b174bd6e29102c040672c3408a220cc199" exitCode=0 Jan 26 17:13:28 crc kubenswrapper[4896]: I0126 17:13:28.205570 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-54rwz" event={"ID":"1ca85b85-2af6-40e8-bf39-5a708d8ccad1","Type":"ContainerDied","Data":"fb37b530d0d0c3a2075a7b09719729b174bd6e29102c040672c3408a220cc199"} Jan 26 17:13:29 crc kubenswrapper[4896]: I0126 17:13:29.219933 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-54rwz" event={"ID":"1ca85b85-2af6-40e8-bf39-5a708d8ccad1","Type":"ContainerStarted","Data":"3eea0b83d31396c416e60c4f112de0c94f6fa39dff0258b08cec1bfce498ed91"} Jan 26 17:13:29 crc kubenswrapper[4896]: I0126 17:13:29.243790 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-54rwz" podStartSLOduration=2.523810605 podStartE2EDuration="8.24375292s" podCreationTimestamp="2026-01-26 17:13:21 +0000 UTC" firstStartedPulling="2026-01-26 17:13:23.135083225 +0000 UTC m=+5960.916963618" lastFinishedPulling="2026-01-26 17:13:28.85502554 +0000 UTC m=+5966.636905933" observedRunningTime="2026-01-26 17:13:29.240625104 +0000 UTC m=+5967.022505507" watchObservedRunningTime="2026-01-26 17:13:29.24375292 +0000 UTC m=+5967.025633303" Jan 26 17:13:31 crc kubenswrapper[4896]: I0126 17:13:31.457412 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-54rwz" Jan 26 17:13:31 crc kubenswrapper[4896]: I0126 17:13:31.457914 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-54rwz" Jan 26 17:13:32 crc kubenswrapper[4896]: I0126 17:13:32.434464 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bj7vn" Jan 26 17:13:32 crc kubenswrapper[4896]: I0126 17:13:32.523474 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-54rwz" podUID="1ca85b85-2af6-40e8-bf39-5a708d8ccad1" containerName="registry-server" probeResult="failure" output=< Jan 26 17:13:32 crc kubenswrapper[4896]: timeout: failed to connect service ":50051" within 1s Jan 26 17:13:32 crc kubenswrapper[4896]: > Jan 26 17:13:32 crc kubenswrapper[4896]: I0126 17:13:32.546182 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bj7vn"] Jan 26 17:13:32 crc kubenswrapper[4896]: I0126 17:13:32.606826 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rw8kj"] Jan 26 17:13:32 crc kubenswrapper[4896]: I0126 17:13:32.607160 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rw8kj" podUID="4bc33533-266c-4be5-8b8a-314312fbf12c" containerName="registry-server" containerID="cri-o://82625c5f4fc9915d63fa8fcca443d45d6e022e1e22c1aeaf60a3fd09f53a43c2" gracePeriod=2 Jan 26 17:13:33 crc kubenswrapper[4896]: I0126 17:13:33.187611 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rw8kj" Jan 26 17:13:33 crc kubenswrapper[4896]: I0126 17:13:33.298010 4896 generic.go:334] "Generic (PLEG): container finished" podID="4bc33533-266c-4be5-8b8a-314312fbf12c" containerID="82625c5f4fc9915d63fa8fcca443d45d6e022e1e22c1aeaf60a3fd09f53a43c2" exitCode=0 Jan 26 17:13:33 crc kubenswrapper[4896]: I0126 17:13:33.298388 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rw8kj" Jan 26 17:13:33 crc kubenswrapper[4896]: I0126 17:13:33.298368 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rw8kj" event={"ID":"4bc33533-266c-4be5-8b8a-314312fbf12c","Type":"ContainerDied","Data":"82625c5f4fc9915d63fa8fcca443d45d6e022e1e22c1aeaf60a3fd09f53a43c2"} Jan 26 17:13:33 crc kubenswrapper[4896]: I0126 17:13:33.298488 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rw8kj" event={"ID":"4bc33533-266c-4be5-8b8a-314312fbf12c","Type":"ContainerDied","Data":"91386ead0dc0ba565e3c2f0abdaa9c46133b86502d187f8fc8d33eab3d643ac7"} Jan 26 17:13:33 crc kubenswrapper[4896]: I0126 17:13:33.298509 4896 scope.go:117] "RemoveContainer" containerID="82625c5f4fc9915d63fa8fcca443d45d6e022e1e22c1aeaf60a3fd09f53a43c2" Jan 26 17:13:33 crc kubenswrapper[4896]: I0126 17:13:33.322666 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bc33533-266c-4be5-8b8a-314312fbf12c-catalog-content\") pod \"4bc33533-266c-4be5-8b8a-314312fbf12c\" (UID: \"4bc33533-266c-4be5-8b8a-314312fbf12c\") " Jan 26 17:13:33 crc kubenswrapper[4896]: I0126 17:13:33.322743 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6z9q2\" (UniqueName: \"kubernetes.io/projected/4bc33533-266c-4be5-8b8a-314312fbf12c-kube-api-access-6z9q2\") pod \"4bc33533-266c-4be5-8b8a-314312fbf12c\" (UID: \"4bc33533-266c-4be5-8b8a-314312fbf12c\") " Jan 26 17:13:33 crc kubenswrapper[4896]: I0126 17:13:33.322892 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bc33533-266c-4be5-8b8a-314312fbf12c-utilities\") pod \"4bc33533-266c-4be5-8b8a-314312fbf12c\" (UID: \"4bc33533-266c-4be5-8b8a-314312fbf12c\") " Jan 26 17:13:33 crc 
kubenswrapper[4896]: I0126 17:13:33.324173 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bc33533-266c-4be5-8b8a-314312fbf12c-utilities" (OuterVolumeSpecName: "utilities") pod "4bc33533-266c-4be5-8b8a-314312fbf12c" (UID: "4bc33533-266c-4be5-8b8a-314312fbf12c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:13:33 crc kubenswrapper[4896]: I0126 17:13:33.345866 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bc33533-266c-4be5-8b8a-314312fbf12c-kube-api-access-6z9q2" (OuterVolumeSpecName: "kube-api-access-6z9q2") pod "4bc33533-266c-4be5-8b8a-314312fbf12c" (UID: "4bc33533-266c-4be5-8b8a-314312fbf12c"). InnerVolumeSpecName "kube-api-access-6z9q2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:13:33 crc kubenswrapper[4896]: I0126 17:13:33.350770 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bc33533-266c-4be5-8b8a-314312fbf12c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4bc33533-266c-4be5-8b8a-314312fbf12c" (UID: "4bc33533-266c-4be5-8b8a-314312fbf12c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:13:33 crc kubenswrapper[4896]: I0126 17:13:33.357733 4896 scope.go:117] "RemoveContainer" containerID="ea78ec9bb4d7c0292940e927abf49f566f1d5fe3829e07e712b6c7af9d6da4aa" Jan 26 17:13:33 crc kubenswrapper[4896]: I0126 17:13:33.426075 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bc33533-266c-4be5-8b8a-314312fbf12c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:13:33 crc kubenswrapper[4896]: I0126 17:13:33.426115 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6z9q2\" (UniqueName: \"kubernetes.io/projected/4bc33533-266c-4be5-8b8a-314312fbf12c-kube-api-access-6z9q2\") on node \"crc\" DevicePath \"\"" Jan 26 17:13:33 crc kubenswrapper[4896]: I0126 17:13:33.426127 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bc33533-266c-4be5-8b8a-314312fbf12c-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:13:33 crc kubenswrapper[4896]: I0126 17:13:33.440117 4896 scope.go:117] "RemoveContainer" containerID="768419189de2308019a1b1607d2b73d077f0b07b946f247b72c5166b101e2fc4" Jan 26 17:13:33 crc kubenswrapper[4896]: I0126 17:13:33.507486 4896 scope.go:117] "RemoveContainer" containerID="82625c5f4fc9915d63fa8fcca443d45d6e022e1e22c1aeaf60a3fd09f53a43c2" Jan 26 17:13:33 crc kubenswrapper[4896]: E0126 17:13:33.508179 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82625c5f4fc9915d63fa8fcca443d45d6e022e1e22c1aeaf60a3fd09f53a43c2\": container with ID starting with 82625c5f4fc9915d63fa8fcca443d45d6e022e1e22c1aeaf60a3fd09f53a43c2 not found: ID does not exist" containerID="82625c5f4fc9915d63fa8fcca443d45d6e022e1e22c1aeaf60a3fd09f53a43c2" Jan 26 17:13:33 crc kubenswrapper[4896]: I0126 17:13:33.508235 4896 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"82625c5f4fc9915d63fa8fcca443d45d6e022e1e22c1aeaf60a3fd09f53a43c2"} err="failed to get container status \"82625c5f4fc9915d63fa8fcca443d45d6e022e1e22c1aeaf60a3fd09f53a43c2\": rpc error: code = NotFound desc = could not find container \"82625c5f4fc9915d63fa8fcca443d45d6e022e1e22c1aeaf60a3fd09f53a43c2\": container with ID starting with 82625c5f4fc9915d63fa8fcca443d45d6e022e1e22c1aeaf60a3fd09f53a43c2 not found: ID does not exist" Jan 26 17:13:33 crc kubenswrapper[4896]: I0126 17:13:33.508266 4896 scope.go:117] "RemoveContainer" containerID="ea78ec9bb4d7c0292940e927abf49f566f1d5fe3829e07e712b6c7af9d6da4aa" Jan 26 17:13:33 crc kubenswrapper[4896]: E0126 17:13:33.508823 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea78ec9bb4d7c0292940e927abf49f566f1d5fe3829e07e712b6c7af9d6da4aa\": container with ID starting with ea78ec9bb4d7c0292940e927abf49f566f1d5fe3829e07e712b6c7af9d6da4aa not found: ID does not exist" containerID="ea78ec9bb4d7c0292940e927abf49f566f1d5fe3829e07e712b6c7af9d6da4aa" Jan 26 17:13:33 crc kubenswrapper[4896]: I0126 17:13:33.508869 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea78ec9bb4d7c0292940e927abf49f566f1d5fe3829e07e712b6c7af9d6da4aa"} err="failed to get container status \"ea78ec9bb4d7c0292940e927abf49f566f1d5fe3829e07e712b6c7af9d6da4aa\": rpc error: code = NotFound desc = could not find container \"ea78ec9bb4d7c0292940e927abf49f566f1d5fe3829e07e712b6c7af9d6da4aa\": container with ID starting with ea78ec9bb4d7c0292940e927abf49f566f1d5fe3829e07e712b6c7af9d6da4aa not found: ID does not exist" Jan 26 17:13:33 crc kubenswrapper[4896]: I0126 17:13:33.508906 4896 scope.go:117] "RemoveContainer" containerID="768419189de2308019a1b1607d2b73d077f0b07b946f247b72c5166b101e2fc4" Jan 26 17:13:33 crc kubenswrapper[4896]: E0126 17:13:33.509326 4896 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"768419189de2308019a1b1607d2b73d077f0b07b946f247b72c5166b101e2fc4\": container with ID starting with 768419189de2308019a1b1607d2b73d077f0b07b946f247b72c5166b101e2fc4 not found: ID does not exist" containerID="768419189de2308019a1b1607d2b73d077f0b07b946f247b72c5166b101e2fc4" Jan 26 17:13:33 crc kubenswrapper[4896]: I0126 17:13:33.509391 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"768419189de2308019a1b1607d2b73d077f0b07b946f247b72c5166b101e2fc4"} err="failed to get container status \"768419189de2308019a1b1607d2b73d077f0b07b946f247b72c5166b101e2fc4\": rpc error: code = NotFound desc = could not find container \"768419189de2308019a1b1607d2b73d077f0b07b946f247b72c5166b101e2fc4\": container with ID starting with 768419189de2308019a1b1607d2b73d077f0b07b946f247b72c5166b101e2fc4 not found: ID does not exist" Jan 26 17:13:33 crc kubenswrapper[4896]: I0126 17:13:33.645866 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rw8kj"] Jan 26 17:13:33 crc kubenswrapper[4896]: I0126 17:13:33.654286 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rw8kj"] Jan 26 17:13:34 crc kubenswrapper[4896]: I0126 17:13:34.824296 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bc33533-266c-4be5-8b8a-314312fbf12c" path="/var/lib/kubelet/pods/4bc33533-266c-4be5-8b8a-314312fbf12c/volumes" Jan 26 17:13:41 crc kubenswrapper[4896]: I0126 17:13:41.528807 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-54rwz" Jan 26 17:13:41 crc kubenswrapper[4896]: I0126 17:13:41.590731 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-54rwz" Jan 26 17:13:41 crc kubenswrapper[4896]: I0126 17:13:41.782521 4896 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-54rwz"]
Jan 26 17:13:43 crc kubenswrapper[4896]: I0126 17:13:43.411449 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-54rwz" podUID="1ca85b85-2af6-40e8-bf39-5a708d8ccad1" containerName="registry-server" containerID="cri-o://3eea0b83d31396c416e60c4f112de0c94f6fa39dff0258b08cec1bfce498ed91" gracePeriod=2
Jan 26 17:13:44 crc kubenswrapper[4896]: I0126 17:13:44.024254 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-54rwz"
Jan 26 17:13:44 crc kubenswrapper[4896]: I0126 17:13:44.148849 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tqzr\" (UniqueName: \"kubernetes.io/projected/1ca85b85-2af6-40e8-bf39-5a708d8ccad1-kube-api-access-5tqzr\") pod \"1ca85b85-2af6-40e8-bf39-5a708d8ccad1\" (UID: \"1ca85b85-2af6-40e8-bf39-5a708d8ccad1\") "
Jan 26 17:13:44 crc kubenswrapper[4896]: I0126 17:13:44.149248 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ca85b85-2af6-40e8-bf39-5a708d8ccad1-catalog-content\") pod \"1ca85b85-2af6-40e8-bf39-5a708d8ccad1\" (UID: \"1ca85b85-2af6-40e8-bf39-5a708d8ccad1\") "
Jan 26 17:13:44 crc kubenswrapper[4896]: I0126 17:13:44.149445 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ca85b85-2af6-40e8-bf39-5a708d8ccad1-utilities\") pod \"1ca85b85-2af6-40e8-bf39-5a708d8ccad1\" (UID: \"1ca85b85-2af6-40e8-bf39-5a708d8ccad1\") "
Jan 26 17:13:44 crc kubenswrapper[4896]: I0126 17:13:44.151278 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ca85b85-2af6-40e8-bf39-5a708d8ccad1-utilities" (OuterVolumeSpecName: "utilities") pod "1ca85b85-2af6-40e8-bf39-5a708d8ccad1" (UID: "1ca85b85-2af6-40e8-bf39-5a708d8ccad1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 17:13:44 crc kubenswrapper[4896]: I0126 17:13:44.162463 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ca85b85-2af6-40e8-bf39-5a708d8ccad1-kube-api-access-5tqzr" (OuterVolumeSpecName: "kube-api-access-5tqzr") pod "1ca85b85-2af6-40e8-bf39-5a708d8ccad1" (UID: "1ca85b85-2af6-40e8-bf39-5a708d8ccad1"). InnerVolumeSpecName "kube-api-access-5tqzr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 17:13:44 crc kubenswrapper[4896]: I0126 17:13:44.253305 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ca85b85-2af6-40e8-bf39-5a708d8ccad1-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 17:13:44 crc kubenswrapper[4896]: I0126 17:13:44.253611 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tqzr\" (UniqueName: \"kubernetes.io/projected/1ca85b85-2af6-40e8-bf39-5a708d8ccad1-kube-api-access-5tqzr\") on node \"crc\" DevicePath \"\""
Jan 26 17:13:44 crc kubenswrapper[4896]: I0126 17:13:44.284666 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ca85b85-2af6-40e8-bf39-5a708d8ccad1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ca85b85-2af6-40e8-bf39-5a708d8ccad1" (UID: "1ca85b85-2af6-40e8-bf39-5a708d8ccad1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 17:13:44 crc kubenswrapper[4896]: I0126 17:13:44.356533 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ca85b85-2af6-40e8-bf39-5a708d8ccad1-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 17:13:44 crc kubenswrapper[4896]: I0126 17:13:44.425186 4896 generic.go:334] "Generic (PLEG): container finished" podID="1ca85b85-2af6-40e8-bf39-5a708d8ccad1" containerID="3eea0b83d31396c416e60c4f112de0c94f6fa39dff0258b08cec1bfce498ed91" exitCode=0
Jan 26 17:13:44 crc kubenswrapper[4896]: I0126 17:13:44.425247 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-54rwz" event={"ID":"1ca85b85-2af6-40e8-bf39-5a708d8ccad1","Type":"ContainerDied","Data":"3eea0b83d31396c416e60c4f112de0c94f6fa39dff0258b08cec1bfce498ed91"}
Jan 26 17:13:44 crc kubenswrapper[4896]: I0126 17:13:44.425282 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-54rwz"
Jan 26 17:13:44 crc kubenswrapper[4896]: I0126 17:13:44.425316 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-54rwz" event={"ID":"1ca85b85-2af6-40e8-bf39-5a708d8ccad1","Type":"ContainerDied","Data":"0a7f66ef075e37bcf959d127c9e8f36f174e57edf1b0479a902bddd1389c46d3"}
Jan 26 17:13:44 crc kubenswrapper[4896]: I0126 17:13:44.425348 4896 scope.go:117] "RemoveContainer" containerID="3eea0b83d31396c416e60c4f112de0c94f6fa39dff0258b08cec1bfce498ed91"
Jan 26 17:13:44 crc kubenswrapper[4896]: I0126 17:13:44.455374 4896 scope.go:117] "RemoveContainer" containerID="fb37b530d0d0c3a2075a7b09719729b174bd6e29102c040672c3408a220cc199"
Jan 26 17:13:44 crc kubenswrapper[4896]: I0126 17:13:44.476762 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-54rwz"]
Jan 26 17:13:44 crc kubenswrapper[4896]: I0126 17:13:44.493306 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-54rwz"]
Jan 26 17:13:44 crc kubenswrapper[4896]: I0126 17:13:44.782990 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ca85b85-2af6-40e8-bf39-5a708d8ccad1" path="/var/lib/kubelet/pods/1ca85b85-2af6-40e8-bf39-5a708d8ccad1/volumes"
Jan 26 17:13:45 crc kubenswrapper[4896]: I0126 17:13:45.143168 4896 scope.go:117] "RemoveContainer" containerID="5c3b2069b65e39e29cffd1d4d7394c9a0333cc5146f4bef795abbc8529aebb92"
Jan 26 17:13:45 crc kubenswrapper[4896]: I0126 17:13:45.189222 4896 scope.go:117] "RemoveContainer" containerID="3eea0b83d31396c416e60c4f112de0c94f6fa39dff0258b08cec1bfce498ed91"
Jan 26 17:13:45 crc kubenswrapper[4896]: E0126 17:13:45.189866 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3eea0b83d31396c416e60c4f112de0c94f6fa39dff0258b08cec1bfce498ed91\": container with ID starting with 3eea0b83d31396c416e60c4f112de0c94f6fa39dff0258b08cec1bfce498ed91 not found: ID does not exist" containerID="3eea0b83d31396c416e60c4f112de0c94f6fa39dff0258b08cec1bfce498ed91"
Jan 26 17:13:45 crc kubenswrapper[4896]: I0126 17:13:45.189906 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3eea0b83d31396c416e60c4f112de0c94f6fa39dff0258b08cec1bfce498ed91"} err="failed to get container status \"3eea0b83d31396c416e60c4f112de0c94f6fa39dff0258b08cec1bfce498ed91\": rpc error: code = NotFound desc = could not find container \"3eea0b83d31396c416e60c4f112de0c94f6fa39dff0258b08cec1bfce498ed91\": container with ID starting with 3eea0b83d31396c416e60c4f112de0c94f6fa39dff0258b08cec1bfce498ed91 not found: ID does not exist"
Jan 26 17:13:45 crc kubenswrapper[4896]: I0126 17:13:45.189935 4896 scope.go:117] "RemoveContainer" containerID="fb37b530d0d0c3a2075a7b09719729b174bd6e29102c040672c3408a220cc199"
Jan 26 17:13:45 crc kubenswrapper[4896]: E0126 17:13:45.190274 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb37b530d0d0c3a2075a7b09719729b174bd6e29102c040672c3408a220cc199\": container with ID starting with fb37b530d0d0c3a2075a7b09719729b174bd6e29102c040672c3408a220cc199 not found: ID does not exist" containerID="fb37b530d0d0c3a2075a7b09719729b174bd6e29102c040672c3408a220cc199"
Jan 26 17:13:45 crc kubenswrapper[4896]: I0126 17:13:45.190326 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb37b530d0d0c3a2075a7b09719729b174bd6e29102c040672c3408a220cc199"} err="failed to get container status \"fb37b530d0d0c3a2075a7b09719729b174bd6e29102c040672c3408a220cc199\": rpc error: code = NotFound desc = could not find container \"fb37b530d0d0c3a2075a7b09719729b174bd6e29102c040672c3408a220cc199\": container with ID starting with fb37b530d0d0c3a2075a7b09719729b174bd6e29102c040672c3408a220cc199 not found: ID does not exist"
Jan 26 17:13:45 crc kubenswrapper[4896]: I0126 17:13:45.190366 4896 scope.go:117] "RemoveContainer" containerID="5c3b2069b65e39e29cffd1d4d7394c9a0333cc5146f4bef795abbc8529aebb92"
Jan 26 17:13:45 crc kubenswrapper[4896]: E0126 17:13:45.190669 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c3b2069b65e39e29cffd1d4d7394c9a0333cc5146f4bef795abbc8529aebb92\": container with ID starting with 5c3b2069b65e39e29cffd1d4d7394c9a0333cc5146f4bef795abbc8529aebb92 not found: ID does not exist" containerID="5c3b2069b65e39e29cffd1d4d7394c9a0333cc5146f4bef795abbc8529aebb92"
Jan 26 17:13:45 crc kubenswrapper[4896]: I0126 17:13:45.190690 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c3b2069b65e39e29cffd1d4d7394c9a0333cc5146f4bef795abbc8529aebb92"} err="failed to get container status \"5c3b2069b65e39e29cffd1d4d7394c9a0333cc5146f4bef795abbc8529aebb92\": rpc error: code = NotFound desc = could not find container \"5c3b2069b65e39e29cffd1d4d7394c9a0333cc5146f4bef795abbc8529aebb92\": container with ID starting with 5c3b2069b65e39e29cffd1d4d7394c9a0333cc5146f4bef795abbc8529aebb92 not found: ID does not exist"
Jan 26 17:14:17 crc kubenswrapper[4896]: I0126 17:14:17.264228 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q8tsx"]
Jan 26 17:14:17 crc kubenswrapper[4896]: E0126 17:14:17.265721 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ca85b85-2af6-40e8-bf39-5a708d8ccad1" containerName="extract-content"
Jan 26 17:14:17 crc kubenswrapper[4896]: I0126 17:14:17.265740 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ca85b85-2af6-40e8-bf39-5a708d8ccad1" containerName="extract-content"
Jan 26 17:14:17 crc kubenswrapper[4896]: E0126 17:14:17.265762 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ca85b85-2af6-40e8-bf39-5a708d8ccad1" containerName="registry-server"
Jan 26 17:14:17 crc kubenswrapper[4896]: I0126 17:14:17.265770 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ca85b85-2af6-40e8-bf39-5a708d8ccad1" containerName="registry-server"
Jan 26 17:14:17 crc kubenswrapper[4896]: E0126 17:14:17.265808 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bc33533-266c-4be5-8b8a-314312fbf12c" containerName="registry-server"
Jan 26 17:14:17 crc kubenswrapper[4896]: I0126 17:14:17.265815 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bc33533-266c-4be5-8b8a-314312fbf12c" containerName="registry-server"
Jan 26 17:14:17 crc kubenswrapper[4896]: E0126 17:14:17.265829 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bc33533-266c-4be5-8b8a-314312fbf12c" containerName="extract-utilities"
Jan 26 17:14:17 crc kubenswrapper[4896]: I0126 17:14:17.265837 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bc33533-266c-4be5-8b8a-314312fbf12c" containerName="extract-utilities"
Jan 26 17:14:17 crc kubenswrapper[4896]: E0126 17:14:17.265856 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ca85b85-2af6-40e8-bf39-5a708d8ccad1" containerName="extract-utilities"
Jan 26 17:14:17 crc kubenswrapper[4896]: I0126 17:14:17.265864 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ca85b85-2af6-40e8-bf39-5a708d8ccad1" containerName="extract-utilities"
Jan 26 17:14:17 crc kubenswrapper[4896]: E0126 17:14:17.265878 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bc33533-266c-4be5-8b8a-314312fbf12c" containerName="extract-content"
Jan 26 17:14:17 crc kubenswrapper[4896]: I0126 17:14:17.265886 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bc33533-266c-4be5-8b8a-314312fbf12c" containerName="extract-content"
Jan 26 17:14:17 crc kubenswrapper[4896]: I0126 17:14:17.266367 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bc33533-266c-4be5-8b8a-314312fbf12c" containerName="registry-server"
Jan 26 17:14:17 crc kubenswrapper[4896]: I0126 17:14:17.266424 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ca85b85-2af6-40e8-bf39-5a708d8ccad1" containerName="registry-server"
Jan 26 17:14:17 crc kubenswrapper[4896]: I0126 17:14:17.285069 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q8tsx"
Jan 26 17:14:17 crc kubenswrapper[4896]: I0126 17:14:17.287670 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q8tsx"]
Jan 26 17:14:17 crc kubenswrapper[4896]: I0126 17:14:17.369695 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/deeec1f4-b28a-4083-8edb-3ad57e4e7466-catalog-content\") pod \"community-operators-q8tsx\" (UID: \"deeec1f4-b28a-4083-8edb-3ad57e4e7466\") " pod="openshift-marketplace/community-operators-q8tsx"
Jan 26 17:14:17 crc kubenswrapper[4896]: I0126 17:14:17.369743 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/deeec1f4-b28a-4083-8edb-3ad57e4e7466-utilities\") pod \"community-operators-q8tsx\" (UID: \"deeec1f4-b28a-4083-8edb-3ad57e4e7466\") " pod="openshift-marketplace/community-operators-q8tsx"
Jan 26 17:14:17 crc kubenswrapper[4896]: I0126 17:14:17.369789 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjn8d\" (UniqueName: \"kubernetes.io/projected/deeec1f4-b28a-4083-8edb-3ad57e4e7466-kube-api-access-rjn8d\") pod \"community-operators-q8tsx\" (UID: \"deeec1f4-b28a-4083-8edb-3ad57e4e7466\") " pod="openshift-marketplace/community-operators-q8tsx"
Jan 26 17:14:17 crc kubenswrapper[4896]: I0126 17:14:17.471504 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjn8d\" (UniqueName: \"kubernetes.io/projected/deeec1f4-b28a-4083-8edb-3ad57e4e7466-kube-api-access-rjn8d\") pod \"community-operators-q8tsx\" (UID: \"deeec1f4-b28a-4083-8edb-3ad57e4e7466\") " pod="openshift-marketplace/community-operators-q8tsx"
Jan 26 17:14:17 crc kubenswrapper[4896]: I0126 17:14:17.471900 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/deeec1f4-b28a-4083-8edb-3ad57e4e7466-catalog-content\") pod \"community-operators-q8tsx\" (UID: \"deeec1f4-b28a-4083-8edb-3ad57e4e7466\") " pod="openshift-marketplace/community-operators-q8tsx"
Jan 26 17:14:17 crc kubenswrapper[4896]: I0126 17:14:17.471946 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/deeec1f4-b28a-4083-8edb-3ad57e4e7466-utilities\") pod \"community-operators-q8tsx\" (UID: \"deeec1f4-b28a-4083-8edb-3ad57e4e7466\") " pod="openshift-marketplace/community-operators-q8tsx"
Jan 26 17:14:17 crc kubenswrapper[4896]: I0126 17:14:17.472366 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/deeec1f4-b28a-4083-8edb-3ad57e4e7466-catalog-content\") pod \"community-operators-q8tsx\" (UID: \"deeec1f4-b28a-4083-8edb-3ad57e4e7466\") " pod="openshift-marketplace/community-operators-q8tsx"
Jan 26 17:14:17 crc kubenswrapper[4896]: I0126 17:14:17.472440 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/deeec1f4-b28a-4083-8edb-3ad57e4e7466-utilities\") pod \"community-operators-q8tsx\" (UID: \"deeec1f4-b28a-4083-8edb-3ad57e4e7466\") " pod="openshift-marketplace/community-operators-q8tsx"
Jan 26 17:14:17 crc kubenswrapper[4896]: I0126 17:14:17.498532 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjn8d\" (UniqueName: \"kubernetes.io/projected/deeec1f4-b28a-4083-8edb-3ad57e4e7466-kube-api-access-rjn8d\") pod \"community-operators-q8tsx\" (UID: \"deeec1f4-b28a-4083-8edb-3ad57e4e7466\") " pod="openshift-marketplace/community-operators-q8tsx"
Jan 26 17:14:17 crc kubenswrapper[4896]: I0126 17:14:17.627975 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q8tsx"
Jan 26 17:14:18 crc kubenswrapper[4896]: I0126 17:14:18.305946 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q8tsx"]
Jan 26 17:14:18 crc kubenswrapper[4896]: I0126 17:14:18.813870 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 17:14:18 crc kubenswrapper[4896]: I0126 17:14:18.814511 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 17:14:19 crc kubenswrapper[4896]: I0126 17:14:19.144409 4896 generic.go:334] "Generic (PLEG): container finished" podID="deeec1f4-b28a-4083-8edb-3ad57e4e7466" containerID="50b379fa3dffcaf08ef4bbbd34367ad81552080d5202b91fbd35cc2881ab91b8" exitCode=0
Jan 26 17:14:19 crc kubenswrapper[4896]: I0126 17:14:19.144502 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8tsx" event={"ID":"deeec1f4-b28a-4083-8edb-3ad57e4e7466","Type":"ContainerDied","Data":"50b379fa3dffcaf08ef4bbbd34367ad81552080d5202b91fbd35cc2881ab91b8"}
Jan 26 17:14:19 crc kubenswrapper[4896]: I0126 17:14:19.144608 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8tsx" event={"ID":"deeec1f4-b28a-4083-8edb-3ad57e4e7466","Type":"ContainerStarted","Data":"f40719752703b31dca2937f7a83428b1d3c857fce83df0c62ba68af51a25a9ed"}
Jan 26 17:14:21 crc kubenswrapper[4896]: I0126 17:14:21.174357 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8tsx" event={"ID":"deeec1f4-b28a-4083-8edb-3ad57e4e7466","Type":"ContainerStarted","Data":"1633126bd41797324521e863158ebc9f6452d8fd58f3cd37de9e90bd84e35510"}
Jan 26 17:14:22 crc kubenswrapper[4896]: I0126 17:14:22.284919 4896 generic.go:334] "Generic (PLEG): container finished" podID="deeec1f4-b28a-4083-8edb-3ad57e4e7466" containerID="1633126bd41797324521e863158ebc9f6452d8fd58f3cd37de9e90bd84e35510" exitCode=0
Jan 26 17:14:22 crc kubenswrapper[4896]: I0126 17:14:22.285443 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8tsx" event={"ID":"deeec1f4-b28a-4083-8edb-3ad57e4e7466","Type":"ContainerDied","Data":"1633126bd41797324521e863158ebc9f6452d8fd58f3cd37de9e90bd84e35510"}
Jan 26 17:14:23 crc kubenswrapper[4896]: I0126 17:14:23.326268 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8tsx" event={"ID":"deeec1f4-b28a-4083-8edb-3ad57e4e7466","Type":"ContainerStarted","Data":"7149e6c94ad3c0f894273cf1b167c9a1b2e0126d67d265d9c2d84d7513e9dbc1"}
Jan 26 17:14:23 crc kubenswrapper[4896]: I0126 17:14:23.354540 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q8tsx" podStartSLOduration=2.74948509 podStartE2EDuration="6.354523227s" podCreationTimestamp="2026-01-26 17:14:17 +0000 UTC" firstStartedPulling="2026-01-26 17:14:19.147606757 +0000 UTC m=+6016.929487160" lastFinishedPulling="2026-01-26 17:14:22.752644904 +0000 UTC m=+6020.534525297" observedRunningTime="2026-01-26 17:14:23.351919783 +0000 UTC m=+6021.133800176" watchObservedRunningTime="2026-01-26 17:14:23.354523227 +0000 UTC m=+6021.136403620"
Jan 26 17:14:27 crc kubenswrapper[4896]: I0126 17:14:27.628844 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q8tsx"
Jan 26 17:14:27 crc kubenswrapper[4896]: I0126 17:14:27.629551 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q8tsx"
Jan 26 17:14:27 crc kubenswrapper[4896]: I0126 17:14:27.701683 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q8tsx"
Jan 26 17:14:28 crc kubenswrapper[4896]: I0126 17:14:28.438116 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q8tsx"
Jan 26 17:14:28 crc kubenswrapper[4896]: I0126 17:14:28.515159 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q8tsx"]
Jan 26 17:14:30 crc kubenswrapper[4896]: I0126 17:14:30.408457 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q8tsx" podUID="deeec1f4-b28a-4083-8edb-3ad57e4e7466" containerName="registry-server" containerID="cri-o://7149e6c94ad3c0f894273cf1b167c9a1b2e0126d67d265d9c2d84d7513e9dbc1" gracePeriod=2
Jan 26 17:14:30 crc kubenswrapper[4896]: I0126 17:14:30.997277 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q8tsx"
Jan 26 17:14:31 crc kubenswrapper[4896]: I0126 17:14:31.121609 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjn8d\" (UniqueName: \"kubernetes.io/projected/deeec1f4-b28a-4083-8edb-3ad57e4e7466-kube-api-access-rjn8d\") pod \"deeec1f4-b28a-4083-8edb-3ad57e4e7466\" (UID: \"deeec1f4-b28a-4083-8edb-3ad57e4e7466\") "
Jan 26 17:14:31 crc kubenswrapper[4896]: I0126 17:14:31.122152 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/deeec1f4-b28a-4083-8edb-3ad57e4e7466-utilities\") pod \"deeec1f4-b28a-4083-8edb-3ad57e4e7466\" (UID: \"deeec1f4-b28a-4083-8edb-3ad57e4e7466\") "
Jan 26 17:14:31 crc kubenswrapper[4896]: I0126 17:14:31.122239 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/deeec1f4-b28a-4083-8edb-3ad57e4e7466-catalog-content\") pod \"deeec1f4-b28a-4083-8edb-3ad57e4e7466\" (UID: \"deeec1f4-b28a-4083-8edb-3ad57e4e7466\") "
Jan 26 17:14:31 crc kubenswrapper[4896]: I0126 17:14:31.123148 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/deeec1f4-b28a-4083-8edb-3ad57e4e7466-utilities" (OuterVolumeSpecName: "utilities") pod "deeec1f4-b28a-4083-8edb-3ad57e4e7466" (UID: "deeec1f4-b28a-4083-8edb-3ad57e4e7466"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 17:14:31 crc kubenswrapper[4896]: I0126 17:14:31.124080 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/deeec1f4-b28a-4083-8edb-3ad57e4e7466-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 17:14:31 crc kubenswrapper[4896]: I0126 17:14:31.132922 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deeec1f4-b28a-4083-8edb-3ad57e4e7466-kube-api-access-rjn8d" (OuterVolumeSpecName: "kube-api-access-rjn8d") pod "deeec1f4-b28a-4083-8edb-3ad57e4e7466" (UID: "deeec1f4-b28a-4083-8edb-3ad57e4e7466"). InnerVolumeSpecName "kube-api-access-rjn8d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 17:14:31 crc kubenswrapper[4896]: I0126 17:14:31.175873 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/deeec1f4-b28a-4083-8edb-3ad57e4e7466-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "deeec1f4-b28a-4083-8edb-3ad57e4e7466" (UID: "deeec1f4-b28a-4083-8edb-3ad57e4e7466"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 17:14:31 crc kubenswrapper[4896]: I0126 17:14:31.226429 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjn8d\" (UniqueName: \"kubernetes.io/projected/deeec1f4-b28a-4083-8edb-3ad57e4e7466-kube-api-access-rjn8d\") on node \"crc\" DevicePath \"\""
Jan 26 17:14:31 crc kubenswrapper[4896]: I0126 17:14:31.226471 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/deeec1f4-b28a-4083-8edb-3ad57e4e7466-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 17:14:31 crc kubenswrapper[4896]: I0126 17:14:31.423547 4896 generic.go:334] "Generic (PLEG): container finished" podID="deeec1f4-b28a-4083-8edb-3ad57e4e7466" containerID="7149e6c94ad3c0f894273cf1b167c9a1b2e0126d67d265d9c2d84d7513e9dbc1" exitCode=0
Jan 26 17:14:31 crc kubenswrapper[4896]: I0126 17:14:31.423625 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8tsx" event={"ID":"deeec1f4-b28a-4083-8edb-3ad57e4e7466","Type":"ContainerDied","Data":"7149e6c94ad3c0f894273cf1b167c9a1b2e0126d67d265d9c2d84d7513e9dbc1"}
Jan 26 17:14:31 crc kubenswrapper[4896]: I0126 17:14:31.423664 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q8tsx"
Jan 26 17:14:31 crc kubenswrapper[4896]: I0126 17:14:31.423688 4896 scope.go:117] "RemoveContainer" containerID="7149e6c94ad3c0f894273cf1b167c9a1b2e0126d67d265d9c2d84d7513e9dbc1"
Jan 26 17:14:31 crc kubenswrapper[4896]: I0126 17:14:31.423675 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8tsx" event={"ID":"deeec1f4-b28a-4083-8edb-3ad57e4e7466","Type":"ContainerDied","Data":"f40719752703b31dca2937f7a83428b1d3c857fce83df0c62ba68af51a25a9ed"}
Jan 26 17:14:31 crc kubenswrapper[4896]: I0126 17:14:31.449798 4896 scope.go:117] "RemoveContainer" containerID="1633126bd41797324521e863158ebc9f6452d8fd58f3cd37de9e90bd84e35510"
Jan 26 17:14:31 crc kubenswrapper[4896]: I0126 17:14:31.474011 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q8tsx"]
Jan 26 17:14:31 crc kubenswrapper[4896]: I0126 17:14:31.487270 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q8tsx"]
Jan 26 17:14:31 crc kubenswrapper[4896]: I0126 17:14:31.508272 4896 scope.go:117] "RemoveContainer" containerID="50b379fa3dffcaf08ef4bbbd34367ad81552080d5202b91fbd35cc2881ab91b8"
Jan 26 17:14:31 crc kubenswrapper[4896]: I0126 17:14:31.541389 4896 scope.go:117] "RemoveContainer" containerID="7149e6c94ad3c0f894273cf1b167c9a1b2e0126d67d265d9c2d84d7513e9dbc1"
Jan 26 17:14:31 crc kubenswrapper[4896]: E0126 17:14:31.543415 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7149e6c94ad3c0f894273cf1b167c9a1b2e0126d67d265d9c2d84d7513e9dbc1\": container with ID starting with 7149e6c94ad3c0f894273cf1b167c9a1b2e0126d67d265d9c2d84d7513e9dbc1 not found: ID does not exist" containerID="7149e6c94ad3c0f894273cf1b167c9a1b2e0126d67d265d9c2d84d7513e9dbc1"
Jan 26 17:14:31 crc kubenswrapper[4896]: I0126 17:14:31.543622 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7149e6c94ad3c0f894273cf1b167c9a1b2e0126d67d265d9c2d84d7513e9dbc1"} err="failed to get container status \"7149e6c94ad3c0f894273cf1b167c9a1b2e0126d67d265d9c2d84d7513e9dbc1\": rpc error: code = NotFound desc = could not find container \"7149e6c94ad3c0f894273cf1b167c9a1b2e0126d67d265d9c2d84d7513e9dbc1\": container with ID starting with 7149e6c94ad3c0f894273cf1b167c9a1b2e0126d67d265d9c2d84d7513e9dbc1 not found: ID does not exist"
Jan 26 17:14:31 crc kubenswrapper[4896]: I0126 17:14:31.543740 4896 scope.go:117] "RemoveContainer" containerID="1633126bd41797324521e863158ebc9f6452d8fd58f3cd37de9e90bd84e35510"
Jan 26 17:14:31 crc kubenswrapper[4896]: E0126 17:14:31.544374 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1633126bd41797324521e863158ebc9f6452d8fd58f3cd37de9e90bd84e35510\": container with ID starting with 1633126bd41797324521e863158ebc9f6452d8fd58f3cd37de9e90bd84e35510 not found: ID does not exist" containerID="1633126bd41797324521e863158ebc9f6452d8fd58f3cd37de9e90bd84e35510"
Jan 26 17:14:31 crc kubenswrapper[4896]: I0126 17:14:31.544430 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1633126bd41797324521e863158ebc9f6452d8fd58f3cd37de9e90bd84e35510"} err="failed to get container status \"1633126bd41797324521e863158ebc9f6452d8fd58f3cd37de9e90bd84e35510\": rpc error: code = NotFound desc = could not find container \"1633126bd41797324521e863158ebc9f6452d8fd58f3cd37de9e90bd84e35510\": container with ID starting with 1633126bd41797324521e863158ebc9f6452d8fd58f3cd37de9e90bd84e35510 not found: ID does not exist"
Jan 26 17:14:31 crc kubenswrapper[4896]: I0126 17:14:31.544472 4896 scope.go:117] "RemoveContainer" containerID="50b379fa3dffcaf08ef4bbbd34367ad81552080d5202b91fbd35cc2881ab91b8"
Jan 26 17:14:31 crc kubenswrapper[4896]: E0126 17:14:31.544917 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50b379fa3dffcaf08ef4bbbd34367ad81552080d5202b91fbd35cc2881ab91b8\": container with ID starting with 50b379fa3dffcaf08ef4bbbd34367ad81552080d5202b91fbd35cc2881ab91b8 not found: ID does not exist" containerID="50b379fa3dffcaf08ef4bbbd34367ad81552080d5202b91fbd35cc2881ab91b8"
Jan 26 17:14:31 crc kubenswrapper[4896]: I0126 17:14:31.544951 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50b379fa3dffcaf08ef4bbbd34367ad81552080d5202b91fbd35cc2881ab91b8"} err="failed to get container status \"50b379fa3dffcaf08ef4bbbd34367ad81552080d5202b91fbd35cc2881ab91b8\": rpc error: code = NotFound desc = could not find container \"50b379fa3dffcaf08ef4bbbd34367ad81552080d5202b91fbd35cc2881ab91b8\": container with ID starting with 50b379fa3dffcaf08ef4bbbd34367ad81552080d5202b91fbd35cc2881ab91b8 not found: ID does not exist"
Jan 26 17:14:32 crc kubenswrapper[4896]: I0126 17:14:32.772846 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deeec1f4-b28a-4083-8edb-3ad57e4e7466" path="/var/lib/kubelet/pods/deeec1f4-b28a-4083-8edb-3ad57e4e7466/volumes"
Jan 26 17:14:48 crc kubenswrapper[4896]: I0126 17:14:48.850601 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 17:14:48 crc kubenswrapper[4896]: I0126 17:14:48.851317 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 17:15:00 crc kubenswrapper[4896]: I0126 17:15:00.165776 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490795-b2564"]
Jan 26 17:15:00 crc kubenswrapper[4896]: E0126 17:15:00.166724 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deeec1f4-b28a-4083-8edb-3ad57e4e7466" containerName="extract-content"
Jan 26 17:15:00 crc kubenswrapper[4896]: I0126 17:15:00.166740 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="deeec1f4-b28a-4083-8edb-3ad57e4e7466" containerName="extract-content"
Jan 26 17:15:00 crc kubenswrapper[4896]: E0126 17:15:00.166787 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deeec1f4-b28a-4083-8edb-3ad57e4e7466" containerName="registry-server"
Jan 26 17:15:00 crc kubenswrapper[4896]: I0126 17:15:00.166795 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="deeec1f4-b28a-4083-8edb-3ad57e4e7466" containerName="registry-server"
Jan 26 17:15:00 crc kubenswrapper[4896]: E0126 17:15:00.166813 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deeec1f4-b28a-4083-8edb-3ad57e4e7466" containerName="extract-utilities"
Jan 26 17:15:00 crc kubenswrapper[4896]: I0126 17:15:00.166819 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="deeec1f4-b28a-4083-8edb-3ad57e4e7466" containerName="extract-utilities"
Jan 26 17:15:00 crc kubenswrapper[4896]: I0126 17:15:00.167056 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="deeec1f4-b28a-4083-8edb-3ad57e4e7466" containerName="registry-server"
Jan 26 17:15:00 crc kubenswrapper[4896]: I0126 17:15:00.168065 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-b2564"
Jan 26 17:15:00 crc kubenswrapper[4896]: I0126 17:15:00.171936 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 26 17:15:00 crc kubenswrapper[4896]: I0126 17:15:00.177874 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 26 17:15:00 crc kubenswrapper[4896]: I0126 17:15:00.180512 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490795-b2564"]
Jan 26 17:15:00 crc kubenswrapper[4896]: I0126 17:15:00.304427 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8-config-volume\") pod \"collect-profiles-29490795-b2564\" (UID: \"a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-b2564"
Jan 26 17:15:00 crc kubenswrapper[4896]: I0126 17:15:00.304974 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwks6\" (UniqueName: \"kubernetes.io/projected/a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8-kube-api-access-rwks6\") pod \"collect-profiles-29490795-b2564\" (UID: \"a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-b2564"
Jan 26 17:15:00 crc kubenswrapper[4896]: I0126 17:15:00.305131 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8-secret-volume\") pod \"collect-profiles-29490795-b2564\" (UID: \"a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-b2564"
Jan 26 17:15:00 crc kubenswrapper[4896]: I0126 17:15:00.407515 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8-config-volume\") pod \"collect-profiles-29490795-b2564\" (UID: \"a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-b2564"
Jan 26 17:15:00 crc kubenswrapper[4896]: I0126 17:15:00.407613 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwks6\" (UniqueName: \"kubernetes.io/projected/a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8-kube-api-access-rwks6\") pod \"collect-profiles-29490795-b2564\" (UID: \"a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-b2564"
Jan 26 17:15:00 crc kubenswrapper[4896]: I0126 17:15:00.407679 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8-secret-volume\") pod \"collect-profiles-29490795-b2564\" (UID: \"a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-b2564"
Jan 26 17:15:00 crc kubenswrapper[4896]: I0126 17:15:00.408512 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8-config-volume\") pod \"collect-profiles-29490795-b2564\" (UID: \"a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-b2564"
Jan 26 17:15:00 crc kubenswrapper[4896]: I0126 17:15:00.422470 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8-secret-volume\") pod \"collect-profiles-29490795-b2564\" (UID: \"a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-b2564"
Jan 26 17:15:00 crc kubenswrapper[4896]: I0126 17:15:00.426330 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwks6\" (UniqueName: \"kubernetes.io/projected/a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8-kube-api-access-rwks6\") pod \"collect-profiles-29490795-b2564\" (UID: \"a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-b2564"
Jan 26 17:15:00 crc kubenswrapper[4896]: I0126 17:15:00.494413 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-b2564"
Jan 26 17:15:01 crc kubenswrapper[4896]: I0126 17:15:01.302819 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490795-b2564"]
Jan 26 17:15:02 crc kubenswrapper[4896]: I0126 17:15:02.271711 4896 generic.go:334] "Generic (PLEG): container finished" podID="a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8" containerID="4e489435729605d4006125ddb3b9f3e7126edf04153230f04a6691c34c87a675" exitCode=0
Jan 26 17:15:02 crc kubenswrapper[4896]: I0126 17:15:02.271823 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-b2564" event={"ID":"a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8","Type":"ContainerDied","Data":"4e489435729605d4006125ddb3b9f3e7126edf04153230f04a6691c34c87a675"}
Jan 26 17:15:02 crc kubenswrapper[4896]: I0126 17:15:02.272226 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-b2564"
event={"ID":"a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8","Type":"ContainerStarted","Data":"ceddb213519994ba721a26c2bbf53aff335664fbde53a3aedc041a876d8fd52b"} Jan 26 17:15:03 crc kubenswrapper[4896]: I0126 17:15:03.786819 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-b2564" Jan 26 17:15:03 crc kubenswrapper[4896]: I0126 17:15:03.940826 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwks6\" (UniqueName: \"kubernetes.io/projected/a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8-kube-api-access-rwks6\") pod \"a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8\" (UID: \"a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8\") " Jan 26 17:15:03 crc kubenswrapper[4896]: I0126 17:15:03.940931 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8-secret-volume\") pod \"a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8\" (UID: \"a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8\") " Jan 26 17:15:03 crc kubenswrapper[4896]: I0126 17:15:03.941323 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8-config-volume\") pod \"a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8\" (UID: \"a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8\") " Jan 26 17:15:03 crc kubenswrapper[4896]: I0126 17:15:03.943090 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8-config-volume" (OuterVolumeSpecName: "config-volume") pod "a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8" (UID: "a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:15:03 crc kubenswrapper[4896]: I0126 17:15:03.949946 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8" (UID: "a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:15:03 crc kubenswrapper[4896]: I0126 17:15:03.972936 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8-kube-api-access-rwks6" (OuterVolumeSpecName: "kube-api-access-rwks6") pod "a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8" (UID: "a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8"). InnerVolumeSpecName "kube-api-access-rwks6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:15:04 crc kubenswrapper[4896]: I0126 17:15:04.044349 4896 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 17:15:04 crc kubenswrapper[4896]: I0126 17:15:04.044385 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwks6\" (UniqueName: \"kubernetes.io/projected/a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8-kube-api-access-rwks6\") on node \"crc\" DevicePath \"\"" Jan 26 17:15:04 crc kubenswrapper[4896]: I0126 17:15:04.044395 4896 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 17:15:04 crc kubenswrapper[4896]: I0126 17:15:04.338448 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-b2564" 
event={"ID":"a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8","Type":"ContainerDied","Data":"ceddb213519994ba721a26c2bbf53aff335664fbde53a3aedc041a876d8fd52b"} Jan 26 17:15:04 crc kubenswrapper[4896]: I0126 17:15:04.338496 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-b2564" Jan 26 17:15:04 crc kubenswrapper[4896]: I0126 17:15:04.338509 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ceddb213519994ba721a26c2bbf53aff335664fbde53a3aedc041a876d8fd52b" Jan 26 17:15:04 crc kubenswrapper[4896]: I0126 17:15:04.880094 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490750-bg7lh"] Jan 26 17:15:04 crc kubenswrapper[4896]: I0126 17:15:04.894364 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490750-bg7lh"] Jan 26 17:15:06 crc kubenswrapper[4896]: I0126 17:15:06.779851 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8b634c1-bd6a-40fa-96d6-8a92a521b18e" path="/var/lib/kubelet/pods/c8b634c1-bd6a-40fa-96d6-8a92a521b18e/volumes" Jan 26 17:15:18 crc kubenswrapper[4896]: I0126 17:15:18.813947 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:15:18 crc kubenswrapper[4896]: I0126 17:15:18.814537 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:15:18 crc 
kubenswrapper[4896]: I0126 17:15:18.814614 4896 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" Jan 26 17:15:18 crc kubenswrapper[4896]: I0126 17:15:18.815728 4896 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fef8e649f3d3008e11bf5bf492f0177bb3bcf49bf01ef1e253e803d57f696c5f"} pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 17:15:18 crc kubenswrapper[4896]: I0126 17:15:18.815784 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" containerID="cri-o://fef8e649f3d3008e11bf5bf492f0177bb3bcf49bf01ef1e253e803d57f696c5f" gracePeriod=600 Jan 26 17:15:18 crc kubenswrapper[4896]: E0126 17:15:18.959533 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:15:19 crc kubenswrapper[4896]: I0126 17:15:19.618776 4896 generic.go:334] "Generic (PLEG): container finished" podID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerID="fef8e649f3d3008e11bf5bf492f0177bb3bcf49bf01ef1e253e803d57f696c5f" exitCode=0 Jan 26 17:15:19 crc kubenswrapper[4896]: I0126 17:15:19.618840 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" 
event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerDied","Data":"fef8e649f3d3008e11bf5bf492f0177bb3bcf49bf01ef1e253e803d57f696c5f"} Jan 26 17:15:19 crc kubenswrapper[4896]: I0126 17:15:19.618907 4896 scope.go:117] "RemoveContainer" containerID="ca9bd86fb9571d769b2484a91a4041bce17b389aaaedca46bd94dc71689604f8" Jan 26 17:15:19 crc kubenswrapper[4896]: I0126 17:15:19.619989 4896 scope.go:117] "RemoveContainer" containerID="fef8e649f3d3008e11bf5bf492f0177bb3bcf49bf01ef1e253e803d57f696c5f" Jan 26 17:15:19 crc kubenswrapper[4896]: E0126 17:15:19.620471 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:15:25 crc kubenswrapper[4896]: I0126 17:15:25.010026 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rb9mx"] Jan 26 17:15:25 crc kubenswrapper[4896]: E0126 17:15:25.011023 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8" containerName="collect-profiles" Jan 26 17:15:25 crc kubenswrapper[4896]: I0126 17:15:25.011038 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8" containerName="collect-profiles" Jan 26 17:15:25 crc kubenswrapper[4896]: I0126 17:15:25.011277 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="a34fe57f-2492-4dc6-b5b7-34a0bfcd95f8" containerName="collect-profiles" Jan 26 17:15:25 crc kubenswrapper[4896]: I0126 17:15:25.015443 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rb9mx" Jan 26 17:15:25 crc kubenswrapper[4896]: I0126 17:15:25.036271 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rb9mx"] Jan 26 17:15:25 crc kubenswrapper[4896]: I0126 17:15:25.148587 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8-utilities\") pod \"certified-operators-rb9mx\" (UID: \"c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8\") " pod="openshift-marketplace/certified-operators-rb9mx" Jan 26 17:15:25 crc kubenswrapper[4896]: I0126 17:15:25.148736 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78hwl\" (UniqueName: \"kubernetes.io/projected/c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8-kube-api-access-78hwl\") pod \"certified-operators-rb9mx\" (UID: \"c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8\") " pod="openshift-marketplace/certified-operators-rb9mx" Jan 26 17:15:25 crc kubenswrapper[4896]: I0126 17:15:25.148806 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8-catalog-content\") pod \"certified-operators-rb9mx\" (UID: \"c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8\") " pod="openshift-marketplace/certified-operators-rb9mx" Jan 26 17:15:25 crc kubenswrapper[4896]: I0126 17:15:25.251082 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78hwl\" (UniqueName: \"kubernetes.io/projected/c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8-kube-api-access-78hwl\") pod \"certified-operators-rb9mx\" (UID: \"c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8\") " pod="openshift-marketplace/certified-operators-rb9mx" Jan 26 17:15:25 crc kubenswrapper[4896]: I0126 17:15:25.251206 4896 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8-catalog-content\") pod \"certified-operators-rb9mx\" (UID: \"c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8\") " pod="openshift-marketplace/certified-operators-rb9mx" Jan 26 17:15:25 crc kubenswrapper[4896]: I0126 17:15:25.251405 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8-utilities\") pod \"certified-operators-rb9mx\" (UID: \"c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8\") " pod="openshift-marketplace/certified-operators-rb9mx" Jan 26 17:15:25 crc kubenswrapper[4896]: I0126 17:15:25.251967 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8-utilities\") pod \"certified-operators-rb9mx\" (UID: \"c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8\") " pod="openshift-marketplace/certified-operators-rb9mx" Jan 26 17:15:25 crc kubenswrapper[4896]: I0126 17:15:25.252513 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8-catalog-content\") pod \"certified-operators-rb9mx\" (UID: \"c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8\") " pod="openshift-marketplace/certified-operators-rb9mx" Jan 26 17:15:25 crc kubenswrapper[4896]: I0126 17:15:25.273918 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78hwl\" (UniqueName: \"kubernetes.io/projected/c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8-kube-api-access-78hwl\") pod \"certified-operators-rb9mx\" (UID: \"c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8\") " pod="openshift-marketplace/certified-operators-rb9mx" Jan 26 17:15:25 crc kubenswrapper[4896]: I0126 17:15:25.346287 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rb9mx" Jan 26 17:15:26 crc kubenswrapper[4896]: I0126 17:15:26.182644 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rb9mx"] Jan 26 17:15:26 crc kubenswrapper[4896]: I0126 17:15:26.925380 4896 generic.go:334] "Generic (PLEG): container finished" podID="c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8" containerID="2af0358891982369b3ec18b256d7ef535cb72ae06c82b9f5c3a3b381d9149bde" exitCode=0 Jan 26 17:15:26 crc kubenswrapper[4896]: I0126 17:15:26.925554 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rb9mx" event={"ID":"c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8","Type":"ContainerDied","Data":"2af0358891982369b3ec18b256d7ef535cb72ae06c82b9f5c3a3b381d9149bde"} Jan 26 17:15:26 crc kubenswrapper[4896]: I0126 17:15:26.926244 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rb9mx" event={"ID":"c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8","Type":"ContainerStarted","Data":"b4315497893ff9cb12bbe7a8903abc730dffa3bc9ea9a5f56d17b948a0617420"} Jan 26 17:15:29 crc kubenswrapper[4896]: I0126 17:15:29.047603 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rb9mx" event={"ID":"c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8","Type":"ContainerStarted","Data":"eb2e1b0584bc8e4898c870607897dccd92bede382bb851f80e4b86fc7afd88c0"} Jan 26 17:15:29 crc kubenswrapper[4896]: I0126 17:15:29.233793 4896 scope.go:117] "RemoveContainer" containerID="ce38006309df248e42ef3840084ca5e951ed698b8acad43ed1e86dce5139c515" Jan 26 17:15:30 crc kubenswrapper[4896]: I0126 17:15:30.091594 4896 generic.go:334] "Generic (PLEG): container finished" podID="c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8" containerID="eb2e1b0584bc8e4898c870607897dccd92bede382bb851f80e4b86fc7afd88c0" exitCode=0 Jan 26 17:15:30 crc kubenswrapper[4896]: I0126 17:15:30.091921 4896 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rb9mx" event={"ID":"c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8","Type":"ContainerDied","Data":"eb2e1b0584bc8e4898c870607897dccd92bede382bb851f80e4b86fc7afd88c0"} Jan 26 17:15:30 crc kubenswrapper[4896]: E0126 17:15:30.274935 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc04d9f5e_f9ad_4ed6_b12d_ceb7027a39b8.slice/crio-eb2e1b0584bc8e4898c870607897dccd92bede382bb851f80e4b86fc7afd88c0.scope\": RecentStats: unable to find data in memory cache]" Jan 26 17:15:30 crc kubenswrapper[4896]: I0126 17:15:30.760762 4896 scope.go:117] "RemoveContainer" containerID="fef8e649f3d3008e11bf5bf492f0177bb3bcf49bf01ef1e253e803d57f696c5f" Jan 26 17:15:30 crc kubenswrapper[4896]: E0126 17:15:30.761333 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:15:31 crc kubenswrapper[4896]: I0126 17:15:31.105223 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rb9mx" event={"ID":"c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8","Type":"ContainerStarted","Data":"4b77fe44e380f3b2ffd9daf732122f3ab30828a83953e59a1b9f77fd04511ba0"} Jan 26 17:15:35 crc kubenswrapper[4896]: I0126 17:15:35.346682 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rb9mx" Jan 26 17:15:35 crc kubenswrapper[4896]: I0126 17:15:35.347273 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-rb9mx" Jan 26 17:15:35 crc kubenswrapper[4896]: I0126 17:15:35.408855 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rb9mx" Jan 26 17:15:35 crc kubenswrapper[4896]: I0126 17:15:35.441082 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rb9mx" podStartSLOduration=7.813558034 podStartE2EDuration="11.441043638s" podCreationTimestamp="2026-01-26 17:15:24 +0000 UTC" firstStartedPulling="2026-01-26 17:15:26.928138791 +0000 UTC m=+6084.710019184" lastFinishedPulling="2026-01-26 17:15:30.555624395 +0000 UTC m=+6088.337504788" observedRunningTime="2026-01-26 17:15:31.13955044 +0000 UTC m=+6088.921430833" watchObservedRunningTime="2026-01-26 17:15:35.441043638 +0000 UTC m=+6093.222924041" Jan 26 17:15:35 crc kubenswrapper[4896]: I0126 17:15:35.654326 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rb9mx" Jan 26 17:15:35 crc kubenswrapper[4896]: I0126 17:15:35.744697 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rb9mx"] Jan 26 17:15:37 crc kubenswrapper[4896]: I0126 17:15:37.623384 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rb9mx" podUID="c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8" containerName="registry-server" containerID="cri-o://4b77fe44e380f3b2ffd9daf732122f3ab30828a83953e59a1b9f77fd04511ba0" gracePeriod=2 Jan 26 17:15:38 crc kubenswrapper[4896]: I0126 17:15:38.639143 4896 generic.go:334] "Generic (PLEG): container finished" podID="c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8" containerID="4b77fe44e380f3b2ffd9daf732122f3ab30828a83953e59a1b9f77fd04511ba0" exitCode=0 Jan 26 17:15:38 crc kubenswrapper[4896]: I0126 17:15:38.639220 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-rb9mx" event={"ID":"c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8","Type":"ContainerDied","Data":"4b77fe44e380f3b2ffd9daf732122f3ab30828a83953e59a1b9f77fd04511ba0"} Jan 26 17:15:38 crc kubenswrapper[4896]: I0126 17:15:38.862241 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rb9mx" Jan 26 17:15:38 crc kubenswrapper[4896]: I0126 17:15:38.900194 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8-utilities\") pod \"c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8\" (UID: \"c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8\") " Jan 26 17:15:38 crc kubenswrapper[4896]: I0126 17:15:38.900361 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78hwl\" (UniqueName: \"kubernetes.io/projected/c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8-kube-api-access-78hwl\") pod \"c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8\" (UID: \"c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8\") " Jan 26 17:15:38 crc kubenswrapper[4896]: I0126 17:15:38.900414 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8-catalog-content\") pod \"c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8\" (UID: \"c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8\") " Jan 26 17:15:38 crc kubenswrapper[4896]: I0126 17:15:38.907718 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8-kube-api-access-78hwl" (OuterVolumeSpecName: "kube-api-access-78hwl") pod "c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8" (UID: "c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8"). InnerVolumeSpecName "kube-api-access-78hwl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:15:38 crc kubenswrapper[4896]: I0126 17:15:38.911799 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8-utilities" (OuterVolumeSpecName: "utilities") pod "c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8" (UID: "c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:15:38 crc kubenswrapper[4896]: I0126 17:15:38.967992 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8" (UID: "c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:15:39 crc kubenswrapper[4896]: I0126 17:15:39.003806 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:15:39 crc kubenswrapper[4896]: I0126 17:15:39.003842 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:15:39 crc kubenswrapper[4896]: I0126 17:15:39.003854 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78hwl\" (UniqueName: \"kubernetes.io/projected/c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8-kube-api-access-78hwl\") on node \"crc\" DevicePath \"\"" Jan 26 17:15:39 crc kubenswrapper[4896]: I0126 17:15:39.655283 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rb9mx" 
event={"ID":"c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8","Type":"ContainerDied","Data":"b4315497893ff9cb12bbe7a8903abc730dffa3bc9ea9a5f56d17b948a0617420"} Jan 26 17:15:39 crc kubenswrapper[4896]: I0126 17:15:39.655356 4896 scope.go:117] "RemoveContainer" containerID="4b77fe44e380f3b2ffd9daf732122f3ab30828a83953e59a1b9f77fd04511ba0" Jan 26 17:15:39 crc kubenswrapper[4896]: I0126 17:15:39.655434 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rb9mx" Jan 26 17:15:39 crc kubenswrapper[4896]: I0126 17:15:39.694827 4896 scope.go:117] "RemoveContainer" containerID="eb2e1b0584bc8e4898c870607897dccd92bede382bb851f80e4b86fc7afd88c0" Jan 26 17:15:39 crc kubenswrapper[4896]: I0126 17:15:39.700775 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rb9mx"] Jan 26 17:15:39 crc kubenswrapper[4896]: I0126 17:15:39.717359 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rb9mx"] Jan 26 17:15:39 crc kubenswrapper[4896]: I0126 17:15:39.732226 4896 scope.go:117] "RemoveContainer" containerID="2af0358891982369b3ec18b256d7ef535cb72ae06c82b9f5c3a3b381d9149bde" Jan 26 17:15:40 crc kubenswrapper[4896]: I0126 17:15:40.772739 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8" path="/var/lib/kubelet/pods/c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8/volumes" Jan 26 17:15:42 crc kubenswrapper[4896]: I0126 17:15:42.790600 4896 scope.go:117] "RemoveContainer" containerID="fef8e649f3d3008e11bf5bf492f0177bb3bcf49bf01ef1e253e803d57f696c5f" Jan 26 17:15:42 crc kubenswrapper[4896]: E0126 17:15:42.791768 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:15:57 crc kubenswrapper[4896]: I0126 17:15:57.760013 4896 scope.go:117] "RemoveContainer" containerID="fef8e649f3d3008e11bf5bf492f0177bb3bcf49bf01ef1e253e803d57f696c5f"
Jan 26 17:15:57 crc kubenswrapper[4896]: E0126 17:15:57.760699 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:16:02 crc kubenswrapper[4896]: E0126 17:16:02.997743 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]"
Jan 26 17:16:11 crc kubenswrapper[4896]: I0126 17:16:11.760087 4896 scope.go:117] "RemoveContainer" containerID="fef8e649f3d3008e11bf5bf492f0177bb3bcf49bf01ef1e253e803d57f696c5f"
Jan 26 17:16:11 crc kubenswrapper[4896]: E0126 17:16:11.762573 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:16:13 crc kubenswrapper[4896]: E0126 17:16:13.339330 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]"
Jan 26 17:16:14 crc kubenswrapper[4896]: E0126 17:16:14.795284 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]"
Jan 26 17:16:23 crc kubenswrapper[4896]: E0126 17:16:23.665261 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]"
Jan 26 17:16:24 crc kubenswrapper[4896]: I0126 17:16:24.759522 4896 scope.go:117] "RemoveContainer" containerID="fef8e649f3d3008e11bf5bf492f0177bb3bcf49bf01ef1e253e803d57f696c5f"
Jan 26 17:16:24 crc kubenswrapper[4896]: E0126 17:16:24.760596 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:16:30 crc kubenswrapper[4896]: E0126 17:16:30.059159 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]"
Jan 26 17:16:33 crc kubenswrapper[4896]: E0126 17:16:33.716724 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]"
Jan 26 17:16:35 crc kubenswrapper[4896]: I0126 17:16:35.759482 4896 scope.go:117] "RemoveContainer" containerID="fef8e649f3d3008e11bf5bf492f0177bb3bcf49bf01ef1e253e803d57f696c5f"
Jan 26 17:16:35 crc kubenswrapper[4896]: E0126 17:16:35.760449 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:16:44 crc kubenswrapper[4896]: E0126 17:16:44.051933 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]"
Jan 26 17:16:44 crc kubenswrapper[4896]: E0126 17:16:44.794918 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]"
Jan 26 17:16:48 crc kubenswrapper[4896]: E0126 17:16:48.105206 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]"
Jan 26 17:16:48 crc kubenswrapper[4896]: E0126 17:16:48.105661 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]"
Jan 26 17:16:50 crc kubenswrapper[4896]: I0126 17:16:50.759635 4896 scope.go:117] "RemoveContainer" containerID="fef8e649f3d3008e11bf5bf492f0177bb3bcf49bf01ef1e253e803d57f696c5f"
Jan 26 17:16:50 crc kubenswrapper[4896]: E0126 17:16:50.760290 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:16:54 crc kubenswrapper[4896]: E0126 17:16:54.372610 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]"
Jan 26 17:17:00 crc kubenswrapper[4896]: E0126 17:17:00.052589 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]"
Jan 26 17:17:04 crc kubenswrapper[4896]: I0126 17:17:04.769197 4896 scope.go:117] "RemoveContainer" containerID="fef8e649f3d3008e11bf5bf492f0177bb3bcf49bf01ef1e253e803d57f696c5f"
Jan 26 17:17:04 crc kubenswrapper[4896]: E0126 17:17:04.791731 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:17:18 crc kubenswrapper[4896]: I0126 17:17:18.761264 4896 scope.go:117] "RemoveContainer" containerID="fef8e649f3d3008e11bf5bf492f0177bb3bcf49bf01ef1e253e803d57f696c5f"
Jan 26 17:17:18 crc kubenswrapper[4896]: E0126 17:17:18.763079 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:17:30 crc kubenswrapper[4896]: I0126 17:17:30.772096 4896 scope.go:117] "RemoveContainer" containerID="fef8e649f3d3008e11bf5bf492f0177bb3bcf49bf01ef1e253e803d57f696c5f"
Jan 26 17:17:30 crc kubenswrapper[4896]: E0126 17:17:30.777487 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:17:43 crc kubenswrapper[4896]: I0126 17:17:43.759281 4896 scope.go:117] "RemoveContainer" containerID="fef8e649f3d3008e11bf5bf492f0177bb3bcf49bf01ef1e253e803d57f696c5f"
Jan 26 17:17:43 crc kubenswrapper[4896]: E0126 17:17:43.760250 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:17:54 crc kubenswrapper[4896]: I0126 17:17:54.765568 4896 scope.go:117] "RemoveContainer" containerID="fef8e649f3d3008e11bf5bf492f0177bb3bcf49bf01ef1e253e803d57f696c5f"
Jan 26 17:17:54 crc kubenswrapper[4896]: E0126 17:17:54.766327 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:18:07 crc kubenswrapper[4896]: I0126 17:18:07.760469 4896 scope.go:117] "RemoveContainer" containerID="fef8e649f3d3008e11bf5bf492f0177bb3bcf49bf01ef1e253e803d57f696c5f"
Jan 26 17:18:07 crc kubenswrapper[4896]: E0126 17:18:07.761830 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:18:21 crc kubenswrapper[4896]: I0126 17:18:21.759457 4896 scope.go:117] "RemoveContainer" containerID="fef8e649f3d3008e11bf5bf492f0177bb3bcf49bf01ef1e253e803d57f696c5f"
Jan 26 17:18:21 crc kubenswrapper[4896]: E0126 17:18:21.760381 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:18:36 crc kubenswrapper[4896]: I0126 17:18:36.763796 4896 scope.go:117] "RemoveContainer" containerID="fef8e649f3d3008e11bf5bf492f0177bb3bcf49bf01ef1e253e803d57f696c5f"
Jan 26 17:18:36 crc kubenswrapper[4896]: E0126 17:18:36.764784 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:18:48 crc kubenswrapper[4896]: I0126 17:18:48.760063 4896 scope.go:117] "RemoveContainer" containerID="fef8e649f3d3008e11bf5bf492f0177bb3bcf49bf01ef1e253e803d57f696c5f"
Jan 26 17:18:48 crc kubenswrapper[4896]: E0126 17:18:48.761246 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:18:59 crc kubenswrapper[4896]: I0126 17:18:59.760852 4896 scope.go:117] "RemoveContainer" containerID="fef8e649f3d3008e11bf5bf492f0177bb3bcf49bf01ef1e253e803d57f696c5f"
Jan 26 17:18:59 crc kubenswrapper[4896]: E0126 17:18:59.761906 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:19:10 crc kubenswrapper[4896]: I0126 17:19:10.762901 4896 scope.go:117] "RemoveContainer" containerID="fef8e649f3d3008e11bf5bf492f0177bb3bcf49bf01ef1e253e803d57f696c5f"
Jan 26 17:19:10 crc kubenswrapper[4896]: E0126 17:19:10.763986 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:19:15 crc kubenswrapper[4896]: I0126 17:19:15.341862 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 26 17:19:15 crc kubenswrapper[4896]: E0126 17:19:15.342872 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8" containerName="registry-server"
Jan 26 17:19:15 crc kubenswrapper[4896]: I0126 17:19:15.342886 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8" containerName="registry-server"
Jan 26 17:19:15 crc kubenswrapper[4896]: E0126 17:19:15.342904 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8" containerName="extract-utilities"
Jan 26 17:19:15 crc kubenswrapper[4896]: I0126 17:19:15.342910 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8" containerName="extract-utilities"
Jan 26 17:19:15 crc kubenswrapper[4896]: E0126 17:19:15.342930 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8" containerName="extract-content"
Jan 26 17:19:15 crc kubenswrapper[4896]: I0126 17:19:15.342935 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8" containerName="extract-content"
Jan 26 17:19:15 crc kubenswrapper[4896]: I0126 17:19:15.343168 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="c04d9f5e-f9ad-4ed6-b12d-ceb7027a39b8" containerName="registry-server"
Jan 26 17:19:15 crc kubenswrapper[4896]: I0126 17:19:15.344010 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 26 17:19:15 crc kubenswrapper[4896]: I0126 17:19:15.347755 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0"
Jan 26 17:19:15 crc kubenswrapper[4896]: I0126 17:19:15.347933 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Jan 26 17:19:15 crc kubenswrapper[4896]: I0126 17:19:15.348107 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-fcbdm"
Jan 26 17:19:15 crc kubenswrapper[4896]: I0126 17:19:15.348269 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key"
Jan 26 17:19:15 crc kubenswrapper[4896]: I0126 17:19:15.362181 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/75e8efe4-ddea-48ed-b018-c952f346b635-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"75e8efe4-ddea-48ed-b018-c952f346b635\") " pod="openstack/tempest-tests-tempest"
Jan 26 17:19:15 crc kubenswrapper[4896]: I0126 17:19:15.362299 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/75e8efe4-ddea-48ed-b018-c952f346b635-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"75e8efe4-ddea-48ed-b018-c952f346b635\") " pod="openstack/tempest-tests-tempest"
Jan 26 17:19:15 crc kubenswrapper[4896]: I0126 17:19:15.362475 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/75e8efe4-ddea-48ed-b018-c952f346b635-config-data\") pod \"tempest-tests-tempest\" (UID: \"75e8efe4-ddea-48ed-b018-c952f346b635\") " pod="openstack/tempest-tests-tempest"
Jan 26 17:19:15 crc kubenswrapper[4896]: I0126 17:19:15.365672 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 26 17:19:15 crc kubenswrapper[4896]: I0126 17:19:15.465346 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/75e8efe4-ddea-48ed-b018-c952f346b635-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"75e8efe4-ddea-48ed-b018-c952f346b635\") " pod="openstack/tempest-tests-tempest"
Jan 26 17:19:15 crc kubenswrapper[4896]: I0126 17:19:15.465567 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/75e8efe4-ddea-48ed-b018-c952f346b635-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"75e8efe4-ddea-48ed-b018-c952f346b635\") " pod="openstack/tempest-tests-tempest"
Jan 26 17:19:15 crc kubenswrapper[4896]: I0126 17:19:15.465607 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/75e8efe4-ddea-48ed-b018-c952f346b635-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"75e8efe4-ddea-48ed-b018-c952f346b635\") " pod="openstack/tempest-tests-tempest"
Jan 26 17:19:15 crc kubenswrapper[4896]: I0126 17:19:15.465644 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cfwn\" (UniqueName: \"kubernetes.io/projected/75e8efe4-ddea-48ed-b018-c952f346b635-kube-api-access-2cfwn\") pod \"tempest-tests-tempest\" (UID: \"75e8efe4-ddea-48ed-b018-c952f346b635\") " pod="openstack/tempest-tests-tempest"
Jan 26 17:19:15 crc kubenswrapper[4896]: I0126 17:19:15.465692 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/75e8efe4-ddea-48ed-b018-c952f346b635-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"75e8efe4-ddea-48ed-b018-c952f346b635\") " pod="openstack/tempest-tests-tempest"
Jan 26 17:19:15 crc kubenswrapper[4896]: I0126 17:19:15.465709 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/75e8efe4-ddea-48ed-b018-c952f346b635-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"75e8efe4-ddea-48ed-b018-c952f346b635\") " pod="openstack/tempest-tests-tempest"
Jan 26 17:19:15 crc kubenswrapper[4896]: I0126 17:19:15.465824 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/75e8efe4-ddea-48ed-b018-c952f346b635-config-data\") pod \"tempest-tests-tempest\" (UID: \"75e8efe4-ddea-48ed-b018-c952f346b635\") " pod="openstack/tempest-tests-tempest"
Jan 26 17:19:15 crc kubenswrapper[4896]: I0126 17:19:15.465885 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"75e8efe4-ddea-48ed-b018-c952f346b635\") " pod="openstack/tempest-tests-tempest"
Jan 26 17:19:15 crc kubenswrapper[4896]: I0126 17:19:15.465990 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/75e8efe4-ddea-48ed-b018-c952f346b635-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"75e8efe4-ddea-48ed-b018-c952f346b635\") " pod="openstack/tempest-tests-tempest"
Jan 26 17:19:15 crc kubenswrapper[4896]: I0126 17:19:15.467366 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/75e8efe4-ddea-48ed-b018-c952f346b635-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"75e8efe4-ddea-48ed-b018-c952f346b635\") " pod="openstack/tempest-tests-tempest"
Jan 26 17:19:15 crc kubenswrapper[4896]: I0126 17:19:15.467466 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/75e8efe4-ddea-48ed-b018-c952f346b635-config-data\") pod \"tempest-tests-tempest\" (UID: \"75e8efe4-ddea-48ed-b018-c952f346b635\") " pod="openstack/tempest-tests-tempest"
Jan 26 17:19:15 crc kubenswrapper[4896]: I0126 17:19:15.471162 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/75e8efe4-ddea-48ed-b018-c952f346b635-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"75e8efe4-ddea-48ed-b018-c952f346b635\") " pod="openstack/tempest-tests-tempest"
Jan 26 17:19:15 crc kubenswrapper[4896]: I0126 17:19:15.567966 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"75e8efe4-ddea-48ed-b018-c952f346b635\") " pod="openstack/tempest-tests-tempest"
Jan 26 17:19:15 crc kubenswrapper[4896]: I0126 17:19:15.568059 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/75e8efe4-ddea-48ed-b018-c952f346b635-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"75e8efe4-ddea-48ed-b018-c952f346b635\") " pod="openstack/tempest-tests-tempest"
Jan 26 17:19:15 crc kubenswrapper[4896]: I0126 17:19:15.568129 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/75e8efe4-ddea-48ed-b018-c952f346b635-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"75e8efe4-ddea-48ed-b018-c952f346b635\") " pod="openstack/tempest-tests-tempest"
Jan 26 17:19:15 crc kubenswrapper[4896]: I0126 17:19:15.568146 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/75e8efe4-ddea-48ed-b018-c952f346b635-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"75e8efe4-ddea-48ed-b018-c952f346b635\") " pod="openstack/tempest-tests-tempest"
Jan 26 17:19:15 crc kubenswrapper[4896]: I0126 17:19:15.568174 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cfwn\" (UniqueName: \"kubernetes.io/projected/75e8efe4-ddea-48ed-b018-c952f346b635-kube-api-access-2cfwn\") pod \"tempest-tests-tempest\" (UID: \"75e8efe4-ddea-48ed-b018-c952f346b635\") " pod="openstack/tempest-tests-tempest"
Jan 26 17:19:15 crc kubenswrapper[4896]: I0126 17:19:15.568212 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/75e8efe4-ddea-48ed-b018-c952f346b635-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"75e8efe4-ddea-48ed-b018-c952f346b635\") " pod="openstack/tempest-tests-tempest"
Jan 26 17:19:15 crc kubenswrapper[4896]: I0126 17:19:15.568811 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/75e8efe4-ddea-48ed-b018-c952f346b635-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"75e8efe4-ddea-48ed-b018-c952f346b635\") " pod="openstack/tempest-tests-tempest"
Jan 26 17:19:15 crc kubenswrapper[4896]: I0126 17:19:15.569076 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/75e8efe4-ddea-48ed-b018-c952f346b635-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"75e8efe4-ddea-48ed-b018-c952f346b635\") " pod="openstack/tempest-tests-tempest"
Jan 26 17:19:15 crc kubenswrapper[4896]: I0126 17:19:15.573300 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/75e8efe4-ddea-48ed-b018-c952f346b635-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"75e8efe4-ddea-48ed-b018-c952f346b635\") " pod="openstack/tempest-tests-tempest"
Jan 26 17:19:15 crc kubenswrapper[4896]: I0126 17:19:15.573674 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/75e8efe4-ddea-48ed-b018-c952f346b635-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"75e8efe4-ddea-48ed-b018-c952f346b635\") " pod="openstack/tempest-tests-tempest"
Jan 26 17:19:15 crc kubenswrapper[4896]: I0126 17:19:15.579681 4896 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"75e8efe4-ddea-48ed-b018-c952f346b635\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/tempest-tests-tempest"
Jan 26 17:19:15 crc kubenswrapper[4896]: I0126 17:19:15.591455 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cfwn\" (UniqueName: \"kubernetes.io/projected/75e8efe4-ddea-48ed-b018-c952f346b635-kube-api-access-2cfwn\") pod \"tempest-tests-tempest\" (UID: \"75e8efe4-ddea-48ed-b018-c952f346b635\") " pod="openstack/tempest-tests-tempest"
Jan 26 17:19:15 crc kubenswrapper[4896]: I0126 17:19:15.646484 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"75e8efe4-ddea-48ed-b018-c952f346b635\") " pod="openstack/tempest-tests-tempest"
Jan 26 17:19:15 crc kubenswrapper[4896]: I0126 17:19:15.673645 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 26 17:19:16 crc kubenswrapper[4896]: I0126 17:19:16.248712 4896 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 17:19:16 crc kubenswrapper[4896]: I0126 17:19:16.249004 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 26 17:19:16 crc kubenswrapper[4896]: I0126 17:19:16.755682 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"75e8efe4-ddea-48ed-b018-c952f346b635","Type":"ContainerStarted","Data":"606a1bb56f33476625882d52b46dacf144e2df3a852d88af49feafb0221c8cdf"}
Jan 26 17:19:23 crc kubenswrapper[4896]: I0126 17:19:23.760349 4896 scope.go:117] "RemoveContainer" containerID="fef8e649f3d3008e11bf5bf492f0177bb3bcf49bf01ef1e253e803d57f696c5f"
Jan 26 17:19:23 crc kubenswrapper[4896]: E0126 17:19:23.761243 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:19:34 crc kubenswrapper[4896]: I0126 17:19:34.787790 4896 scope.go:117] "RemoveContainer" containerID="fef8e649f3d3008e11bf5bf492f0177bb3bcf49bf01ef1e253e803d57f696c5f"
Jan 26 17:19:34 crc kubenswrapper[4896]: E0126 17:19:34.791362 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:19:46 crc kubenswrapper[4896]: I0126 17:19:46.961274 4896 patch_prober.go:28] interesting pod/router-default-5444994796-ms78m container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 26 17:19:46 crc kubenswrapper[4896]: I0126 17:19:46.961760 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-ms78m" podUID="5aecf14a-cf97-41d8-b037-58f39a0a19bf" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 17:19:46 crc kubenswrapper[4896]: I0126 17:19:46.962032 4896 patch_prober.go:28] interesting pod/router-default-5444994796-ms78m container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 26 17:19:46 crc kubenswrapper[4896]: I0126 17:19:46.962090 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-ms78m" podUID="5aecf14a-cf97-41d8-b037-58f39a0a19bf" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 17:19:47 crc kubenswrapper[4896]: I0126 17:19:47.760390 4896 scope.go:117] "RemoveContainer" containerID="fef8e649f3d3008e11bf5bf492f0177bb3bcf49bf01ef1e253e803d57f696c5f"
Jan 26 17:19:47 crc kubenswrapper[4896]: E0126 17:19:47.760960 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:20:02 crc kubenswrapper[4896]: I0126 17:20:02.769823 4896 scope.go:117] "RemoveContainer" containerID="fef8e649f3d3008e11bf5bf492f0177bb3bcf49bf01ef1e253e803d57f696c5f"
Jan 26 17:20:02 crc kubenswrapper[4896]: E0126 17:20:02.770621 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:20:06 crc kubenswrapper[4896]: E0126 17:20:06.923164 4896 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified"
Jan 26 17:20:06 crc kubenswrapper[4896]: E0126 17:20:06.925049 4896 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2cfwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(75e8efe4-ddea-48ed-b018-c952f346b635): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 17:20:06 crc kubenswrapper[4896]: E0126 17:20:06.926344 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="75e8efe4-ddea-48ed-b018-c952f346b635"
Jan 26 17:20:07 crc kubenswrapper[4896]: E0126 17:20:07.561507 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="75e8efe4-ddea-48ed-b018-c952f346b635"
Jan 26 17:20:13 crc kubenswrapper[4896]: I0126 17:20:13.773825 4896 scope.go:117] "RemoveContainer" containerID="fef8e649f3d3008e11bf5bf492f0177bb3bcf49bf01ef1e253e803d57f696c5f"
Jan 26 17:20:13 crc kubenswrapper[4896]: E0126 17:20:13.787784 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:20:21 crc kubenswrapper[4896]: I0126 17:20:21.447325 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Jan 26 17:20:25 crc kubenswrapper[4896]: I0126 17:20:25.760213 4896 scope.go:117] "RemoveContainer" containerID="fef8e649f3d3008e11bf5bf492f0177bb3bcf49bf01ef1e253e803d57f696c5f"
Jan 26 17:20:26 crc kubenswrapper[4896]: I0126 17:20:26.000516 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"75e8efe4-ddea-48ed-b018-c952f346b635","Type":"ContainerStarted","Data":"4b0ba393ceb3f4845476bd72ef3914428acad5903483a33d49532d3401315baf"}
Jan 26 17:20:26 crc kubenswrapper[4896]: I0126 17:20:26.028384 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=6.832785831 podStartE2EDuration="1m12.028347677s" podCreationTimestamp="2026-01-26 17:19:14 +0000 UTC" firstStartedPulling="2026-01-26 17:19:16.248436478 +0000 UTC m=+6314.030316871" lastFinishedPulling="2026-01-26 17:20:21.443998304 +0000 UTC m=+6379.225878717" observedRunningTime="2026-01-26 17:20:26.020053044 +0000 UTC m=+6383.801933437" watchObservedRunningTime="2026-01-26 17:20:26.028347677 +0000 UTC m=+6383.810228070"
Jan 26 17:20:27 crc kubenswrapper[4896]: I0126 17:20:27.018228 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerStarted","Data":"5498b7b06ed7156c181b13fe4996b8248646474dc37ef36b3f5ec3c4b29ceafe"}
Jan 26 17:22:48 crc kubenswrapper[4896]: I0126 17:22:48.815290 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 17:22:48 crc kubenswrapper[4896]: I0126 17:22:48.819272 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 17:23:18 crc kubenswrapper[4896]: I0126 17:23:18.815356 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 17:23:18 crc kubenswrapper[4896]: I0126 17:23:18.816061 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 17:23:23 crc kubenswrapper[4896]: I0126 17:23:23.654843 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w7x5g"]
Jan 26 17:23:23 crc kubenswrapper[4896]:
I0126 17:23:23.665335 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w7x5g" Jan 26 17:23:23 crc kubenswrapper[4896]: I0126 17:23:23.822917 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a795e42-3c2f-4bab-ba8a-310c32ddc785-catalog-content\") pod \"redhat-marketplace-w7x5g\" (UID: \"4a795e42-3c2f-4bab-ba8a-310c32ddc785\") " pod="openshift-marketplace/redhat-marketplace-w7x5g" Jan 26 17:23:23 crc kubenswrapper[4896]: I0126 17:23:23.823018 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a795e42-3c2f-4bab-ba8a-310c32ddc785-utilities\") pod \"redhat-marketplace-w7x5g\" (UID: \"4a795e42-3c2f-4bab-ba8a-310c32ddc785\") " pod="openshift-marketplace/redhat-marketplace-w7x5g" Jan 26 17:23:23 crc kubenswrapper[4896]: I0126 17:23:23.823053 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ksz4\" (UniqueName: \"kubernetes.io/projected/4a795e42-3c2f-4bab-ba8a-310c32ddc785-kube-api-access-6ksz4\") pod \"redhat-marketplace-w7x5g\" (UID: \"4a795e42-3c2f-4bab-ba8a-310c32ddc785\") " pod="openshift-marketplace/redhat-marketplace-w7x5g" Jan 26 17:23:23 crc kubenswrapper[4896]: I0126 17:23:23.925436 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ksz4\" (UniqueName: \"kubernetes.io/projected/4a795e42-3c2f-4bab-ba8a-310c32ddc785-kube-api-access-6ksz4\") pod \"redhat-marketplace-w7x5g\" (UID: \"4a795e42-3c2f-4bab-ba8a-310c32ddc785\") " pod="openshift-marketplace/redhat-marketplace-w7x5g" Jan 26 17:23:23 crc kubenswrapper[4896]: I0126 17:23:23.926317 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/4a795e42-3c2f-4bab-ba8a-310c32ddc785-catalog-content\") pod \"redhat-marketplace-w7x5g\" (UID: \"4a795e42-3c2f-4bab-ba8a-310c32ddc785\") " pod="openshift-marketplace/redhat-marketplace-w7x5g" Jan 26 17:23:23 crc kubenswrapper[4896]: I0126 17:23:23.926416 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a795e42-3c2f-4bab-ba8a-310c32ddc785-utilities\") pod \"redhat-marketplace-w7x5g\" (UID: \"4a795e42-3c2f-4bab-ba8a-310c32ddc785\") " pod="openshift-marketplace/redhat-marketplace-w7x5g" Jan 26 17:23:23 crc kubenswrapper[4896]: I0126 17:23:23.930551 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a795e42-3c2f-4bab-ba8a-310c32ddc785-catalog-content\") pod \"redhat-marketplace-w7x5g\" (UID: \"4a795e42-3c2f-4bab-ba8a-310c32ddc785\") " pod="openshift-marketplace/redhat-marketplace-w7x5g" Jan 26 17:23:23 crc kubenswrapper[4896]: I0126 17:23:23.930867 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a795e42-3c2f-4bab-ba8a-310c32ddc785-utilities\") pod \"redhat-marketplace-w7x5g\" (UID: \"4a795e42-3c2f-4bab-ba8a-310c32ddc785\") " pod="openshift-marketplace/redhat-marketplace-w7x5g" Jan 26 17:23:24 crc kubenswrapper[4896]: I0126 17:23:24.041824 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ksz4\" (UniqueName: \"kubernetes.io/projected/4a795e42-3c2f-4bab-ba8a-310c32ddc785-kube-api-access-6ksz4\") pod \"redhat-marketplace-w7x5g\" (UID: \"4a795e42-3c2f-4bab-ba8a-310c32ddc785\") " pod="openshift-marketplace/redhat-marketplace-w7x5g" Jan 26 17:23:24 crc kubenswrapper[4896]: I0126 17:23:24.046249 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7x5g"] Jan 26 17:23:24 crc kubenswrapper[4896]: I0126 17:23:24.300236 
4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w7x5g" Jan 26 17:23:25 crc kubenswrapper[4896]: I0126 17:23:25.718883 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7x5g"] Jan 26 17:23:25 crc kubenswrapper[4896]: I0126 17:23:25.868155 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7x5g" event={"ID":"4a795e42-3c2f-4bab-ba8a-310c32ddc785","Type":"ContainerStarted","Data":"20fedd4ffe59ab6f9976fa151dcc99c713c5b3b8e9ef12c75fefd75ee3cc1b43"} Jan 26 17:23:26 crc kubenswrapper[4896]: I0126 17:23:26.884122 4896 generic.go:334] "Generic (PLEG): container finished" podID="4a795e42-3c2f-4bab-ba8a-310c32ddc785" containerID="89e75e369b5e0e0f89539c90f8d600c6991d12a4b03eb941c96a5ca57549dc87" exitCode=0 Jan 26 17:23:26 crc kubenswrapper[4896]: I0126 17:23:26.884197 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7x5g" event={"ID":"4a795e42-3c2f-4bab-ba8a-310c32ddc785","Type":"ContainerDied","Data":"89e75e369b5e0e0f89539c90f8d600c6991d12a4b03eb941c96a5ca57549dc87"} Jan 26 17:23:28 crc kubenswrapper[4896]: I0126 17:23:28.937776 4896 generic.go:334] "Generic (PLEG): container finished" podID="4a795e42-3c2f-4bab-ba8a-310c32ddc785" containerID="c6d2f8518a45a721942b7fb6a0857c98d74f5a2ef350371e4c1c6858f6572d41" exitCode=0 Jan 26 17:23:28 crc kubenswrapper[4896]: I0126 17:23:28.937868 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7x5g" event={"ID":"4a795e42-3c2f-4bab-ba8a-310c32ddc785","Type":"ContainerDied","Data":"c6d2f8518a45a721942b7fb6a0857c98d74f5a2ef350371e4c1c6858f6572d41"} Jan 26 17:23:29 crc kubenswrapper[4896]: I0126 17:23:29.952232 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7x5g" 
event={"ID":"4a795e42-3c2f-4bab-ba8a-310c32ddc785","Type":"ContainerStarted","Data":"69898ca66aaf9180313592b0746a8cd893e46ed3f4cb8dc2a5589829b0a5e9e1"} Jan 26 17:23:30 crc kubenswrapper[4896]: I0126 17:23:30.006301 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w7x5g" podStartSLOduration=5.522697036 podStartE2EDuration="7.986742778s" podCreationTimestamp="2026-01-26 17:23:22 +0000 UTC" firstStartedPulling="2026-01-26 17:23:26.887546025 +0000 UTC m=+6564.669426418" lastFinishedPulling="2026-01-26 17:23:29.351591767 +0000 UTC m=+6567.133472160" observedRunningTime="2026-01-26 17:23:29.974711073 +0000 UTC m=+6567.756591466" watchObservedRunningTime="2026-01-26 17:23:29.986742778 +0000 UTC m=+6567.768623181" Jan 26 17:23:34 crc kubenswrapper[4896]: I0126 17:23:34.300860 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w7x5g" Jan 26 17:23:34 crc kubenswrapper[4896]: I0126 17:23:34.301444 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w7x5g" Jan 26 17:23:35 crc kubenswrapper[4896]: I0126 17:23:35.372502 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-w7x5g" podUID="4a795e42-3c2f-4bab-ba8a-310c32ddc785" containerName="registry-server" probeResult="failure" output=< Jan 26 17:23:35 crc kubenswrapper[4896]: timeout: failed to connect service ":50051" within 1s Jan 26 17:23:35 crc kubenswrapper[4896]: > Jan 26 17:23:41 crc kubenswrapper[4896]: I0126 17:23:41.774992 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="7a3e4fe3-b61e-4200-acf9-9ba170d68402" containerName="galera" probeResult="failure" output="command timed out" Jan 26 17:23:41 crc kubenswrapper[4896]: I0126 17:23:41.778459 4896 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack/openstack-cell1-galera-0" podUID="7a3e4fe3-b61e-4200-acf9-9ba170d68402" containerName="galera" probeResult="failure" output="command timed out" Jan 26 17:23:44 crc kubenswrapper[4896]: I0126 17:23:44.371882 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w7x5g" Jan 26 17:23:44 crc kubenswrapper[4896]: I0126 17:23:44.480392 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w7x5g" Jan 26 17:23:44 crc kubenswrapper[4896]: I0126 17:23:44.716457 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7x5g"] Jan 26 17:23:46 crc kubenswrapper[4896]: I0126 17:23:46.137005 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w7x5g" podUID="4a795e42-3c2f-4bab-ba8a-310c32ddc785" containerName="registry-server" containerID="cri-o://69898ca66aaf9180313592b0746a8cd893e46ed3f4cb8dc2a5589829b0a5e9e1" gracePeriod=2 Jan 26 17:23:47 crc kubenswrapper[4896]: I0126 17:23:47.158879 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7x5g" event={"ID":"4a795e42-3c2f-4bab-ba8a-310c32ddc785","Type":"ContainerDied","Data":"69898ca66aaf9180313592b0746a8cd893e46ed3f4cb8dc2a5589829b0a5e9e1"} Jan 26 17:23:47 crc kubenswrapper[4896]: I0126 17:23:47.158933 4896 generic.go:334] "Generic (PLEG): container finished" podID="4a795e42-3c2f-4bab-ba8a-310c32ddc785" containerID="69898ca66aaf9180313592b0746a8cd893e46ed3f4cb8dc2a5589829b0a5e9e1" exitCode=0 Jan 26 17:23:47 crc kubenswrapper[4896]: I0126 17:23:47.159563 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7x5g" event={"ID":"4a795e42-3c2f-4bab-ba8a-310c32ddc785","Type":"ContainerDied","Data":"20fedd4ffe59ab6f9976fa151dcc99c713c5b3b8e9ef12c75fefd75ee3cc1b43"} Jan 26 17:23:47 crc 
kubenswrapper[4896]: I0126 17:23:47.160496 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20fedd4ffe59ab6f9976fa151dcc99c713c5b3b8e9ef12c75fefd75ee3cc1b43" Jan 26 17:23:47 crc kubenswrapper[4896]: I0126 17:23:47.268769 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w7x5g" Jan 26 17:23:47 crc kubenswrapper[4896]: I0126 17:23:47.350553 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ksz4\" (UniqueName: \"kubernetes.io/projected/4a795e42-3c2f-4bab-ba8a-310c32ddc785-kube-api-access-6ksz4\") pod \"4a795e42-3c2f-4bab-ba8a-310c32ddc785\" (UID: \"4a795e42-3c2f-4bab-ba8a-310c32ddc785\") " Jan 26 17:23:47 crc kubenswrapper[4896]: I0126 17:23:47.351049 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a795e42-3c2f-4bab-ba8a-310c32ddc785-catalog-content\") pod \"4a795e42-3c2f-4bab-ba8a-310c32ddc785\" (UID: \"4a795e42-3c2f-4bab-ba8a-310c32ddc785\") " Jan 26 17:23:47 crc kubenswrapper[4896]: I0126 17:23:47.351305 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a795e42-3c2f-4bab-ba8a-310c32ddc785-utilities\") pod \"4a795e42-3c2f-4bab-ba8a-310c32ddc785\" (UID: \"4a795e42-3c2f-4bab-ba8a-310c32ddc785\") " Jan 26 17:23:47 crc kubenswrapper[4896]: I0126 17:23:47.352952 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a795e42-3c2f-4bab-ba8a-310c32ddc785-utilities" (OuterVolumeSpecName: "utilities") pod "4a795e42-3c2f-4bab-ba8a-310c32ddc785" (UID: "4a795e42-3c2f-4bab-ba8a-310c32ddc785"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:23:47 crc kubenswrapper[4896]: I0126 17:23:47.364932 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a795e42-3c2f-4bab-ba8a-310c32ddc785-kube-api-access-6ksz4" (OuterVolumeSpecName: "kube-api-access-6ksz4") pod "4a795e42-3c2f-4bab-ba8a-310c32ddc785" (UID: "4a795e42-3c2f-4bab-ba8a-310c32ddc785"). InnerVolumeSpecName "kube-api-access-6ksz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:23:47 crc kubenswrapper[4896]: I0126 17:23:47.399319 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a795e42-3c2f-4bab-ba8a-310c32ddc785-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4a795e42-3c2f-4bab-ba8a-310c32ddc785" (UID: "4a795e42-3c2f-4bab-ba8a-310c32ddc785"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:23:47 crc kubenswrapper[4896]: I0126 17:23:47.456686 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ksz4\" (UniqueName: \"kubernetes.io/projected/4a795e42-3c2f-4bab-ba8a-310c32ddc785-kube-api-access-6ksz4\") on node \"crc\" DevicePath \"\"" Jan 26 17:23:47 crc kubenswrapper[4896]: I0126 17:23:47.457058 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a795e42-3c2f-4bab-ba8a-310c32ddc785-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:23:47 crc kubenswrapper[4896]: I0126 17:23:47.457072 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a795e42-3c2f-4bab-ba8a-310c32ddc785-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:23:48 crc kubenswrapper[4896]: I0126 17:23:48.171005 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w7x5g" Jan 26 17:23:48 crc kubenswrapper[4896]: I0126 17:23:48.210267 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7x5g"] Jan 26 17:23:48 crc kubenswrapper[4896]: I0126 17:23:48.226191 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7x5g"] Jan 26 17:23:48 crc kubenswrapper[4896]: I0126 17:23:48.777664 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a795e42-3c2f-4bab-ba8a-310c32ddc785" path="/var/lib/kubelet/pods/4a795e42-3c2f-4bab-ba8a-310c32ddc785/volumes" Jan 26 17:23:48 crc kubenswrapper[4896]: I0126 17:23:48.813496 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:23:48 crc kubenswrapper[4896]: I0126 17:23:48.813756 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:23:48 crc kubenswrapper[4896]: I0126 17:23:48.813810 4896 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" Jan 26 17:23:48 crc kubenswrapper[4896]: I0126 17:23:48.815159 4896 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5498b7b06ed7156c181b13fe4996b8248646474dc37ef36b3f5ec3c4b29ceafe"} pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Jan 26 17:23:48 crc kubenswrapper[4896]: I0126 17:23:48.815242 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" containerID="cri-o://5498b7b06ed7156c181b13fe4996b8248646474dc37ef36b3f5ec3c4b29ceafe" gracePeriod=600 Jan 26 17:23:49 crc kubenswrapper[4896]: I0126 17:23:49.183626 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerDied","Data":"5498b7b06ed7156c181b13fe4996b8248646474dc37ef36b3f5ec3c4b29ceafe"} Jan 26 17:23:49 crc kubenswrapper[4896]: I0126 17:23:49.183557 4896 generic.go:334] "Generic (PLEG): container finished" podID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerID="5498b7b06ed7156c181b13fe4996b8248646474dc37ef36b3f5ec3c4b29ceafe" exitCode=0 Jan 26 17:23:49 crc kubenswrapper[4896]: I0126 17:23:49.184033 4896 scope.go:117] "RemoveContainer" containerID="fef8e649f3d3008e11bf5bf492f0177bb3bcf49bf01ef1e253e803d57f696c5f" Jan 26 17:23:50 crc kubenswrapper[4896]: I0126 17:23:50.198150 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerStarted","Data":"94dd37e0db0e325f10b0641524cfd61f5025a0c0cafea55935f4eb3516c93bee"} Jan 26 17:24:04 crc kubenswrapper[4896]: I0126 17:24:04.027550 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rqbhx"] Jan 26 17:24:04 crc kubenswrapper[4896]: E0126 17:24:04.028980 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a795e42-3c2f-4bab-ba8a-310c32ddc785" containerName="extract-content" Jan 26 17:24:04 crc kubenswrapper[4896]: I0126 17:24:04.029005 
4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a795e42-3c2f-4bab-ba8a-310c32ddc785" containerName="extract-content" Jan 26 17:24:04 crc kubenswrapper[4896]: E0126 17:24:04.029162 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a795e42-3c2f-4bab-ba8a-310c32ddc785" containerName="extract-utilities" Jan 26 17:24:04 crc kubenswrapper[4896]: I0126 17:24:04.029184 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a795e42-3c2f-4bab-ba8a-310c32ddc785" containerName="extract-utilities" Jan 26 17:24:04 crc kubenswrapper[4896]: E0126 17:24:04.029207 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a795e42-3c2f-4bab-ba8a-310c32ddc785" containerName="registry-server" Jan 26 17:24:04 crc kubenswrapper[4896]: I0126 17:24:04.029214 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a795e42-3c2f-4bab-ba8a-310c32ddc785" containerName="registry-server" Jan 26 17:24:04 crc kubenswrapper[4896]: I0126 17:24:04.029718 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a795e42-3c2f-4bab-ba8a-310c32ddc785" containerName="registry-server" Jan 26 17:24:04 crc kubenswrapper[4896]: I0126 17:24:04.033096 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rqbhx" Jan 26 17:24:04 crc kubenswrapper[4896]: I0126 17:24:04.041869 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edb25050-da83-4c2d-a3ac-a1eb2dee6ed2-utilities\") pod \"redhat-operators-rqbhx\" (UID: \"edb25050-da83-4c2d-a3ac-a1eb2dee6ed2\") " pod="openshift-marketplace/redhat-operators-rqbhx" Jan 26 17:24:04 crc kubenswrapper[4896]: I0126 17:24:04.042158 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr682\" (UniqueName: \"kubernetes.io/projected/edb25050-da83-4c2d-a3ac-a1eb2dee6ed2-kube-api-access-lr682\") pod \"redhat-operators-rqbhx\" (UID: \"edb25050-da83-4c2d-a3ac-a1eb2dee6ed2\") " pod="openshift-marketplace/redhat-operators-rqbhx" Jan 26 17:24:04 crc kubenswrapper[4896]: I0126 17:24:04.042298 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edb25050-da83-4c2d-a3ac-a1eb2dee6ed2-catalog-content\") pod \"redhat-operators-rqbhx\" (UID: \"edb25050-da83-4c2d-a3ac-a1eb2dee6ed2\") " pod="openshift-marketplace/redhat-operators-rqbhx" Jan 26 17:24:04 crc kubenswrapper[4896]: I0126 17:24:04.042594 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rqbhx"] Jan 26 17:24:04 crc kubenswrapper[4896]: I0126 17:24:04.146637 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr682\" (UniqueName: \"kubernetes.io/projected/edb25050-da83-4c2d-a3ac-a1eb2dee6ed2-kube-api-access-lr682\") pod \"redhat-operators-rqbhx\" (UID: \"edb25050-da83-4c2d-a3ac-a1eb2dee6ed2\") " pod="openshift-marketplace/redhat-operators-rqbhx" Jan 26 17:24:04 crc kubenswrapper[4896]: I0126 17:24:04.146772 4896 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edb25050-da83-4c2d-a3ac-a1eb2dee6ed2-catalog-content\") pod \"redhat-operators-rqbhx\" (UID: \"edb25050-da83-4c2d-a3ac-a1eb2dee6ed2\") " pod="openshift-marketplace/redhat-operators-rqbhx" Jan 26 17:24:04 crc kubenswrapper[4896]: I0126 17:24:04.146975 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edb25050-da83-4c2d-a3ac-a1eb2dee6ed2-utilities\") pod \"redhat-operators-rqbhx\" (UID: \"edb25050-da83-4c2d-a3ac-a1eb2dee6ed2\") " pod="openshift-marketplace/redhat-operators-rqbhx" Jan 26 17:24:04 crc kubenswrapper[4896]: I0126 17:24:04.148994 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edb25050-da83-4c2d-a3ac-a1eb2dee6ed2-utilities\") pod \"redhat-operators-rqbhx\" (UID: \"edb25050-da83-4c2d-a3ac-a1eb2dee6ed2\") " pod="openshift-marketplace/redhat-operators-rqbhx" Jan 26 17:24:04 crc kubenswrapper[4896]: I0126 17:24:04.149044 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edb25050-da83-4c2d-a3ac-a1eb2dee6ed2-catalog-content\") pod \"redhat-operators-rqbhx\" (UID: \"edb25050-da83-4c2d-a3ac-a1eb2dee6ed2\") " pod="openshift-marketplace/redhat-operators-rqbhx" Jan 26 17:24:04 crc kubenswrapper[4896]: I0126 17:24:04.193392 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr682\" (UniqueName: \"kubernetes.io/projected/edb25050-da83-4c2d-a3ac-a1eb2dee6ed2-kube-api-access-lr682\") pod \"redhat-operators-rqbhx\" (UID: \"edb25050-da83-4c2d-a3ac-a1eb2dee6ed2\") " pod="openshift-marketplace/redhat-operators-rqbhx" Jan 26 17:24:04 crc kubenswrapper[4896]: I0126 17:24:04.362364 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rqbhx" Jan 26 17:24:05 crc kubenswrapper[4896]: I0126 17:24:05.465944 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rqbhx"] Jan 26 17:24:05 crc kubenswrapper[4896]: W0126 17:24:05.492173 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedb25050_da83_4c2d_a3ac_a1eb2dee6ed2.slice/crio-200e898448d002102dc270b3ee26beaae5f6fabb9c3266903febc2e3dbb678b0 WatchSource:0}: Error finding container 200e898448d002102dc270b3ee26beaae5f6fabb9c3266903febc2e3dbb678b0: Status 404 returned error can't find the container with id 200e898448d002102dc270b3ee26beaae5f6fabb9c3266903febc2e3dbb678b0 Jan 26 17:24:06 crc kubenswrapper[4896]: I0126 17:24:06.389075 4896 generic.go:334] "Generic (PLEG): container finished" podID="edb25050-da83-4c2d-a3ac-a1eb2dee6ed2" containerID="f638992c306b69bfed1368ff0ae5e4d673676e782b04641a27231d1521012ad2" exitCode=0 Jan 26 17:24:06 crc kubenswrapper[4896]: I0126 17:24:06.389159 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rqbhx" event={"ID":"edb25050-da83-4c2d-a3ac-a1eb2dee6ed2","Type":"ContainerDied","Data":"f638992c306b69bfed1368ff0ae5e4d673676e782b04641a27231d1521012ad2"} Jan 26 17:24:06 crc kubenswrapper[4896]: I0126 17:24:06.389531 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rqbhx" event={"ID":"edb25050-da83-4c2d-a3ac-a1eb2dee6ed2","Type":"ContainerStarted","Data":"200e898448d002102dc270b3ee26beaae5f6fabb9c3266903febc2e3dbb678b0"} Jan 26 17:24:07 crc kubenswrapper[4896]: I0126 17:24:07.404391 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rqbhx" 
event={"ID":"edb25050-da83-4c2d-a3ac-a1eb2dee6ed2","Type":"ContainerStarted","Data":"29f47eec24488afb2a9a7f332831504b0863cd206c64a2aa7b870e02db78ae47"}
Jan 26 17:24:12 crc kubenswrapper[4896]: I0126 17:24:12.618050 4896 generic.go:334] "Generic (PLEG): container finished" podID="edb25050-da83-4c2d-a3ac-a1eb2dee6ed2" containerID="29f47eec24488afb2a9a7f332831504b0863cd206c64a2aa7b870e02db78ae47" exitCode=0
Jan 26 17:24:12 crc kubenswrapper[4896]: I0126 17:24:12.618665 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rqbhx" event={"ID":"edb25050-da83-4c2d-a3ac-a1eb2dee6ed2","Type":"ContainerDied","Data":"29f47eec24488afb2a9a7f332831504b0863cd206c64a2aa7b870e02db78ae47"}
Jan 26 17:24:13 crc kubenswrapper[4896]: I0126 17:24:13.633799 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rqbhx" event={"ID":"edb25050-da83-4c2d-a3ac-a1eb2dee6ed2","Type":"ContainerStarted","Data":"1e24e560d7d72d2602417d2f4be1f7a077c15a9518837e5082985ea9bdcb3a68"}
Jan 26 17:24:13 crc kubenswrapper[4896]: I0126 17:24:13.664653 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rqbhx" podStartSLOduration=4.023628881 podStartE2EDuration="10.663839467s" podCreationTimestamp="2026-01-26 17:24:03 +0000 UTC" firstStartedPulling="2026-01-26 17:24:06.39132575 +0000 UTC m=+6604.173206143" lastFinishedPulling="2026-01-26 17:24:13.031536336 +0000 UTC m=+6610.813416729" observedRunningTime="2026-01-26 17:24:13.65245833 +0000 UTC m=+6611.434338723" watchObservedRunningTime="2026-01-26 17:24:13.663839467 +0000 UTC m=+6611.445719850"
Jan 26 17:24:14 crc kubenswrapper[4896]: I0126 17:24:14.363589 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rqbhx"
Jan 26 17:24:14 crc kubenswrapper[4896]: I0126 17:24:14.363650 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rqbhx"
Jan 26 17:24:15 crc kubenswrapper[4896]: I0126 17:24:15.427748 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rqbhx" podUID="edb25050-da83-4c2d-a3ac-a1eb2dee6ed2" containerName="registry-server" probeResult="failure" output=<
Jan 26 17:24:15 crc kubenswrapper[4896]: 	timeout: failed to connect service ":50051" within 1s
Jan 26 17:24:15 crc kubenswrapper[4896]:  >
Jan 26 17:24:26 crc kubenswrapper[4896]: I0126 17:24:26.375481 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rqbhx" podUID="edb25050-da83-4c2d-a3ac-a1eb2dee6ed2" containerName="registry-server" probeResult="failure" output=<
Jan 26 17:24:26 crc kubenswrapper[4896]: 	timeout: failed to connect service ":50051" within 1s
Jan 26 17:24:26 crc kubenswrapper[4896]:  >
Jan 26 17:24:35 crc kubenswrapper[4896]: I0126 17:24:35.427419 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rqbhx" podUID="edb25050-da83-4c2d-a3ac-a1eb2dee6ed2" containerName="registry-server" probeResult="failure" output=<
Jan 26 17:24:35 crc kubenswrapper[4896]: 	timeout: failed to connect service ":50051" within 1s
Jan 26 17:24:35 crc kubenswrapper[4896]:  >
Jan 26 17:24:44 crc kubenswrapper[4896]: I0126 17:24:44.676342 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rqbhx"
Jan 26 17:24:44 crc kubenswrapper[4896]: I0126 17:24:44.775688 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rqbhx"
Jan 26 17:24:45 crc kubenswrapper[4896]: I0126 17:24:45.086645 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rqbhx"]
Jan 26 17:24:46 crc kubenswrapper[4896]: I0126 17:24:46.496092 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rqbhx" podUID="edb25050-da83-4c2d-a3ac-a1eb2dee6ed2" containerName="registry-server" containerID="cri-o://1e24e560d7d72d2602417d2f4be1f7a077c15a9518837e5082985ea9bdcb3a68" gracePeriod=2
Jan 26 17:24:47 crc kubenswrapper[4896]: I0126 17:24:47.489160 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rqbhx"
Jan 26 17:24:47 crc kubenswrapper[4896]: I0126 17:24:47.491991 4896 generic.go:334] "Generic (PLEG): container finished" podID="edb25050-da83-4c2d-a3ac-a1eb2dee6ed2" containerID="1e24e560d7d72d2602417d2f4be1f7a077c15a9518837e5082985ea9bdcb3a68" exitCode=0
Jan 26 17:24:47 crc kubenswrapper[4896]: I0126 17:24:47.492047 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rqbhx" event={"ID":"edb25050-da83-4c2d-a3ac-a1eb2dee6ed2","Type":"ContainerDied","Data":"1e24e560d7d72d2602417d2f4be1f7a077c15a9518837e5082985ea9bdcb3a68"}
Jan 26 17:24:47 crc kubenswrapper[4896]: I0126 17:24:47.492268 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rqbhx" event={"ID":"edb25050-da83-4c2d-a3ac-a1eb2dee6ed2","Type":"ContainerDied","Data":"200e898448d002102dc270b3ee26beaae5f6fabb9c3266903febc2e3dbb678b0"}
Jan 26 17:24:47 crc kubenswrapper[4896]: I0126 17:24:47.492499 4896 scope.go:117] "RemoveContainer" containerID="1e24e560d7d72d2602417d2f4be1f7a077c15a9518837e5082985ea9bdcb3a68"
Jan 26 17:24:47 crc kubenswrapper[4896]: I0126 17:24:47.515636 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edb25050-da83-4c2d-a3ac-a1eb2dee6ed2-utilities\") pod \"edb25050-da83-4c2d-a3ac-a1eb2dee6ed2\" (UID: \"edb25050-da83-4c2d-a3ac-a1eb2dee6ed2\") "
Jan 26 17:24:47 crc kubenswrapper[4896]: I0126 17:24:47.515749 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edb25050-da83-4c2d-a3ac-a1eb2dee6ed2-catalog-content\") pod \"edb25050-da83-4c2d-a3ac-a1eb2dee6ed2\" (UID: \"edb25050-da83-4c2d-a3ac-a1eb2dee6ed2\") "
Jan 26 17:24:47 crc kubenswrapper[4896]: I0126 17:24:47.515819 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lr682\" (UniqueName: \"kubernetes.io/projected/edb25050-da83-4c2d-a3ac-a1eb2dee6ed2-kube-api-access-lr682\") pod \"edb25050-da83-4c2d-a3ac-a1eb2dee6ed2\" (UID: \"edb25050-da83-4c2d-a3ac-a1eb2dee6ed2\") "
Jan 26 17:24:47 crc kubenswrapper[4896]: I0126 17:24:47.524792 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edb25050-da83-4c2d-a3ac-a1eb2dee6ed2-utilities" (OuterVolumeSpecName: "utilities") pod "edb25050-da83-4c2d-a3ac-a1eb2dee6ed2" (UID: "edb25050-da83-4c2d-a3ac-a1eb2dee6ed2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 17:24:47 crc kubenswrapper[4896]: I0126 17:24:47.534985 4896 scope.go:117] "RemoveContainer" containerID="29f47eec24488afb2a9a7f332831504b0863cd206c64a2aa7b870e02db78ae47"
Jan 26 17:24:47 crc kubenswrapper[4896]: I0126 17:24:47.537768 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edb25050-da83-4c2d-a3ac-a1eb2dee6ed2-kube-api-access-lr682" (OuterVolumeSpecName: "kube-api-access-lr682") pod "edb25050-da83-4c2d-a3ac-a1eb2dee6ed2" (UID: "edb25050-da83-4c2d-a3ac-a1eb2dee6ed2"). InnerVolumeSpecName "kube-api-access-lr682".
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 17:24:47 crc kubenswrapper[4896]: I0126 17:24:47.619108 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edb25050-da83-4c2d-a3ac-a1eb2dee6ed2-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 17:24:47 crc kubenswrapper[4896]: I0126 17:24:47.619147 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lr682\" (UniqueName: \"kubernetes.io/projected/edb25050-da83-4c2d-a3ac-a1eb2dee6ed2-kube-api-access-lr682\") on node \"crc\" DevicePath \"\""
Jan 26 17:24:47 crc kubenswrapper[4896]: I0126 17:24:47.677469 4896 scope.go:117] "RemoveContainer" containerID="f638992c306b69bfed1368ff0ae5e4d673676e782b04641a27231d1521012ad2"
Jan 26 17:24:47 crc kubenswrapper[4896]: I0126 17:24:47.702320 4896 scope.go:117] "RemoveContainer" containerID="1e24e560d7d72d2602417d2f4be1f7a077c15a9518837e5082985ea9bdcb3a68"
Jan 26 17:24:47 crc kubenswrapper[4896]: E0126 17:24:47.705035 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e24e560d7d72d2602417d2f4be1f7a077c15a9518837e5082985ea9bdcb3a68\": container with ID starting with 1e24e560d7d72d2602417d2f4be1f7a077c15a9518837e5082985ea9bdcb3a68 not found: ID does not exist" containerID="1e24e560d7d72d2602417d2f4be1f7a077c15a9518837e5082985ea9bdcb3a68"
Jan 26 17:24:47 crc kubenswrapper[4896]: I0126 17:24:47.705121 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e24e560d7d72d2602417d2f4be1f7a077c15a9518837e5082985ea9bdcb3a68"} err="failed to get container status \"1e24e560d7d72d2602417d2f4be1f7a077c15a9518837e5082985ea9bdcb3a68\": rpc error: code = NotFound desc = could not find container \"1e24e560d7d72d2602417d2f4be1f7a077c15a9518837e5082985ea9bdcb3a68\": container with ID starting with 1e24e560d7d72d2602417d2f4be1f7a077c15a9518837e5082985ea9bdcb3a68 not found: ID does not exist"
Jan 26 17:24:47 crc kubenswrapper[4896]: I0126 17:24:47.705150 4896 scope.go:117] "RemoveContainer" containerID="29f47eec24488afb2a9a7f332831504b0863cd206c64a2aa7b870e02db78ae47"
Jan 26 17:24:47 crc kubenswrapper[4896]: E0126 17:24:47.705758 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29f47eec24488afb2a9a7f332831504b0863cd206c64a2aa7b870e02db78ae47\": container with ID starting with 29f47eec24488afb2a9a7f332831504b0863cd206c64a2aa7b870e02db78ae47 not found: ID does not exist" containerID="29f47eec24488afb2a9a7f332831504b0863cd206c64a2aa7b870e02db78ae47"
Jan 26 17:24:47 crc kubenswrapper[4896]: I0126 17:24:47.705800 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29f47eec24488afb2a9a7f332831504b0863cd206c64a2aa7b870e02db78ae47"} err="failed to get container status \"29f47eec24488afb2a9a7f332831504b0863cd206c64a2aa7b870e02db78ae47\": rpc error: code = NotFound desc = could not find container \"29f47eec24488afb2a9a7f332831504b0863cd206c64a2aa7b870e02db78ae47\": container with ID starting with 29f47eec24488afb2a9a7f332831504b0863cd206c64a2aa7b870e02db78ae47 not found: ID does not exist"
Jan 26 17:24:47 crc kubenswrapper[4896]: I0126 17:24:47.705818 4896 scope.go:117] "RemoveContainer" containerID="f638992c306b69bfed1368ff0ae5e4d673676e782b04641a27231d1521012ad2"
Jan 26 17:24:47 crc kubenswrapper[4896]: E0126 17:24:47.706250 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f638992c306b69bfed1368ff0ae5e4d673676e782b04641a27231d1521012ad2\": container with ID starting with f638992c306b69bfed1368ff0ae5e4d673676e782b04641a27231d1521012ad2 not found: ID does not exist" containerID="f638992c306b69bfed1368ff0ae5e4d673676e782b04641a27231d1521012ad2"
Jan 26 17:24:47 crc kubenswrapper[4896]: I0126 17:24:47.706295 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f638992c306b69bfed1368ff0ae5e4d673676e782b04641a27231d1521012ad2"} err="failed to get container status \"f638992c306b69bfed1368ff0ae5e4d673676e782b04641a27231d1521012ad2\": rpc error: code = NotFound desc = could not find container \"f638992c306b69bfed1368ff0ae5e4d673676e782b04641a27231d1521012ad2\": container with ID starting with f638992c306b69bfed1368ff0ae5e4d673676e782b04641a27231d1521012ad2 not found: ID does not exist"
Jan 26 17:24:47 crc kubenswrapper[4896]: I0126 17:24:47.712070 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edb25050-da83-4c2d-a3ac-a1eb2dee6ed2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "edb25050-da83-4c2d-a3ac-a1eb2dee6ed2" (UID: "edb25050-da83-4c2d-a3ac-a1eb2dee6ed2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 17:24:47 crc kubenswrapper[4896]: I0126 17:24:47.722694 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edb25050-da83-4c2d-a3ac-a1eb2dee6ed2-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 17:24:48 crc kubenswrapper[4896]: I0126 17:24:48.503152 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rqbhx"
Jan 26 17:24:48 crc kubenswrapper[4896]: I0126 17:24:48.548741 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rqbhx"]
Jan 26 17:24:48 crc kubenswrapper[4896]: I0126 17:24:48.559979 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rqbhx"]
Jan 26 17:24:48 crc kubenswrapper[4896]: I0126 17:24:48.776772 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edb25050-da83-4c2d-a3ac-a1eb2dee6ed2" path="/var/lib/kubelet/pods/edb25050-da83-4c2d-a3ac-a1eb2dee6ed2/volumes"
Jan 26 17:25:41 crc kubenswrapper[4896]: I0126 17:25:41.352720 4896 trace.go:236] Trace[1168552560]: "Calculate volume metrics of reloader for pod metallb-system/frr-k8s-klnvj" (26-Jan-2026 17:25:39.927) (total time: 1424ms):
Jan 26 17:25:41 crc kubenswrapper[4896]: Trace[1168552560]: [1.424107277s] [1.424107277s] END
Jan 26 17:25:41 crc kubenswrapper[4896]: I0126 17:25:41.383480 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-z7j4w" podUID="b3272d78-4dde-4997-9316-24a84c00f4c8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 17:25:45 crc kubenswrapper[4896]: I0126 17:25:45.329667 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-znpp5"]
Jan 26 17:25:45 crc kubenswrapper[4896]: E0126 17:25:45.332254 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edb25050-da83-4c2d-a3ac-a1eb2dee6ed2" containerName="registry-server"
Jan 26 17:25:45 crc kubenswrapper[4896]: I0126 17:25:45.332288 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="edb25050-da83-4c2d-a3ac-a1eb2dee6ed2" containerName="registry-server"
Jan 26 17:25:45 crc kubenswrapper[4896]: E0126 17:25:45.332321 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edb25050-da83-4c2d-a3ac-a1eb2dee6ed2" containerName="extract-utilities"
Jan 26 17:25:45 crc kubenswrapper[4896]: I0126 17:25:45.332327 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="edb25050-da83-4c2d-a3ac-a1eb2dee6ed2" containerName="extract-utilities"
Jan 26 17:25:45 crc kubenswrapper[4896]: E0126 17:25:45.332338 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edb25050-da83-4c2d-a3ac-a1eb2dee6ed2" containerName="extract-content"
Jan 26 17:25:45 crc kubenswrapper[4896]: I0126 17:25:45.332345 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="edb25050-da83-4c2d-a3ac-a1eb2dee6ed2" containerName="extract-content"
Jan 26 17:25:45 crc kubenswrapper[4896]: I0126 17:25:45.335960 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="edb25050-da83-4c2d-a3ac-a1eb2dee6ed2" containerName="registry-server"
Jan 26 17:25:45 crc kubenswrapper[4896]: I0126 17:25:45.339763 4896 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-znpp5"
Jan 26 17:25:45 crc kubenswrapper[4896]: I0126 17:25:45.353210 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-znpp5"]
Jan 26 17:25:45 crc kubenswrapper[4896]: I0126 17:25:45.389573 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkhtp\" (UniqueName: \"kubernetes.io/projected/1023048a-f996-448b-b930-ffe5a97baf8e-kube-api-access-bkhtp\") pod \"community-operators-znpp5\" (UID: \"1023048a-f996-448b-b930-ffe5a97baf8e\") " pod="openshift-marketplace/community-operators-znpp5"
Jan 26 17:25:45 crc kubenswrapper[4896]: I0126 17:25:45.389777 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1023048a-f996-448b-b930-ffe5a97baf8e-catalog-content\") pod \"community-operators-znpp5\" (UID: \"1023048a-f996-448b-b930-ffe5a97baf8e\") " pod="openshift-marketplace/community-operators-znpp5"
Jan 26 17:25:45 crc kubenswrapper[4896]: I0126 17:25:45.389821 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1023048a-f996-448b-b930-ffe5a97baf8e-utilities\") pod \"community-operators-znpp5\" (UID: \"1023048a-f996-448b-b930-ffe5a97baf8e\") " pod="openshift-marketplace/community-operators-znpp5"
Jan 26 17:25:45 crc kubenswrapper[4896]: I0126 17:25:45.492268 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1023048a-f996-448b-b930-ffe5a97baf8e-catalog-content\") pod \"community-operators-znpp5\" (UID: \"1023048a-f996-448b-b930-ffe5a97baf8e\") " pod="openshift-marketplace/community-operators-znpp5"
Jan 26 17:25:45 crc kubenswrapper[4896]: I0126 17:25:45.492702 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1023048a-f996-448b-b930-ffe5a97baf8e-utilities\") pod \"community-operators-znpp5\" (UID: \"1023048a-f996-448b-b930-ffe5a97baf8e\") " pod="openshift-marketplace/community-operators-znpp5"
Jan 26 17:25:45 crc kubenswrapper[4896]: I0126 17:25:45.492924 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkhtp\" (UniqueName: \"kubernetes.io/projected/1023048a-f996-448b-b930-ffe5a97baf8e-kube-api-access-bkhtp\") pod \"community-operators-znpp5\" (UID: \"1023048a-f996-448b-b930-ffe5a97baf8e\") " pod="openshift-marketplace/community-operators-znpp5"
Jan 26 17:25:45 crc kubenswrapper[4896]: I0126 17:25:45.498340 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1023048a-f996-448b-b930-ffe5a97baf8e-utilities\") pod \"community-operators-znpp5\" (UID: \"1023048a-f996-448b-b930-ffe5a97baf8e\") " pod="openshift-marketplace/community-operators-znpp5"
Jan 26 17:25:45 crc kubenswrapper[4896]: I0126 17:25:45.498342 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1023048a-f996-448b-b930-ffe5a97baf8e-catalog-content\") pod \"community-operators-znpp5\" (UID: \"1023048a-f996-448b-b930-ffe5a97baf8e\") " pod="openshift-marketplace/community-operators-znpp5"
Jan 26 17:25:45 crc kubenswrapper[4896]: I0126 17:25:45.640480 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkhtp\" (UniqueName: \"kubernetes.io/projected/1023048a-f996-448b-b930-ffe5a97baf8e-kube-api-access-bkhtp\") pod \"community-operators-znpp5\" (UID: \"1023048a-f996-448b-b930-ffe5a97baf8e\") " pod="openshift-marketplace/community-operators-znpp5"
Jan 26 17:25:45 crc kubenswrapper[4896]: I0126 17:25:45.674314 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-znpp5"
Jan 26 17:25:46 crc kubenswrapper[4896]: I0126 17:25:46.372301 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-znpp5"]
Jan 26 17:25:47 crc kubenswrapper[4896]: I0126 17:25:47.035305 4896 generic.go:334] "Generic (PLEG): container finished" podID="1023048a-f996-448b-b930-ffe5a97baf8e" containerID="1cbab29a9ccc74f0d37cde4eed6993343c4de63fbccb592814664994ae606b6c" exitCode=0
Jan 26 17:25:47 crc kubenswrapper[4896]: I0126 17:25:47.035407 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-znpp5" event={"ID":"1023048a-f996-448b-b930-ffe5a97baf8e","Type":"ContainerDied","Data":"1cbab29a9ccc74f0d37cde4eed6993343c4de63fbccb592814664994ae606b6c"}
Jan 26 17:25:47 crc kubenswrapper[4896]: I0126 17:25:47.035802 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-znpp5" event={"ID":"1023048a-f996-448b-b930-ffe5a97baf8e","Type":"ContainerStarted","Data":"0a727cc0990134a2fed6bd007c6713ff57e4f36edb7a7a135ab1b410379d12b0"}
Jan 26 17:25:47 crc kubenswrapper[4896]: I0126 17:25:47.039703 4896 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 17:25:49 crc kubenswrapper[4896]: I0126 17:25:49.060523 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-znpp5" event={"ID":"1023048a-f996-448b-b930-ffe5a97baf8e","Type":"ContainerStarted","Data":"6425a0ac98b845cc3683f317c990dd58d2a416e38cdabbe1cd17a9f85b1ca0c3"}
Jan 26 17:25:51 crc kubenswrapper[4896]: I0126 17:25:51.096568 4896 generic.go:334] "Generic (PLEG): container finished" podID="1023048a-f996-448b-b930-ffe5a97baf8e" containerID="6425a0ac98b845cc3683f317c990dd58d2a416e38cdabbe1cd17a9f85b1ca0c3" exitCode=0
Jan 26 17:25:51 crc kubenswrapper[4896]: I0126 17:25:51.096786 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-znpp5" event={"ID":"1023048a-f996-448b-b930-ffe5a97baf8e","Type":"ContainerDied","Data":"6425a0ac98b845cc3683f317c990dd58d2a416e38cdabbe1cd17a9f85b1ca0c3"}
Jan 26 17:25:52 crc kubenswrapper[4896]: I0126 17:25:52.114382 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-znpp5" event={"ID":"1023048a-f996-448b-b930-ffe5a97baf8e","Type":"ContainerStarted","Data":"9166a434d07765f288b4abc36ac50f21522b6c659954ffef6606af26aa408f18"}
Jan 26 17:25:53 crc kubenswrapper[4896]: I0126 17:25:53.160551 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-znpp5" podStartSLOduration=3.442154955 podStartE2EDuration="8.16052375s" podCreationTimestamp="2026-01-26 17:25:45 +0000 UTC" firstStartedPulling="2026-01-26 17:25:47.038347324 +0000 UTC m=+6704.820227717" lastFinishedPulling="2026-01-26 17:25:51.756716119 +0000 UTC m=+6709.538596512" observedRunningTime="2026-01-26 17:25:53.148140108 +0000 UTC m=+6710.930020511" watchObservedRunningTime="2026-01-26 17:25:53.16052375 +0000 UTC m=+6710.942404143"
Jan 26 17:25:55 crc kubenswrapper[4896]: I0126 17:25:55.675729 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-znpp5"
Jan 26 17:25:55 crc kubenswrapper[4896]: I0126 17:25:55.676340 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-znpp5"
Jan 26 17:25:56 crc kubenswrapper[4896]: I0126 17:25:56.736828 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-znpp5" podUID="1023048a-f996-448b-b930-ffe5a97baf8e" containerName="registry-server" probeResult="failure" output=<
Jan 26 17:25:56 crc kubenswrapper[4896]: 	timeout: failed to connect service ":50051" within 1s
Jan 26 17:25:56 crc kubenswrapper[4896]:  >
Jan 26 17:26:05 crc
kubenswrapper[4896]: I0126 17:26:05.760136 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-znpp5"
Jan 26 17:26:05 crc kubenswrapper[4896]: I0126 17:26:05.823606 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-znpp5"
Jan 26 17:26:06 crc kubenswrapper[4896]: I0126 17:26:06.011497 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-znpp5"]
Jan 26 17:26:07 crc kubenswrapper[4896]: I0126 17:26:07.312619 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-znpp5" podUID="1023048a-f996-448b-b930-ffe5a97baf8e" containerName="registry-server" containerID="cri-o://9166a434d07765f288b4abc36ac50f21522b6c659954ffef6606af26aa408f18" gracePeriod=2
Jan 26 17:26:08 crc kubenswrapper[4896]: I0126 17:26:08.040874 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-znpp5"
Jan 26 17:26:08 crc kubenswrapper[4896]: I0126 17:26:08.196838 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1023048a-f996-448b-b930-ffe5a97baf8e-catalog-content\") pod \"1023048a-f996-448b-b930-ffe5a97baf8e\" (UID: \"1023048a-f996-448b-b930-ffe5a97baf8e\") "
Jan 26 17:26:08 crc kubenswrapper[4896]: I0126 17:26:08.197040 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkhtp\" (UniqueName: \"kubernetes.io/projected/1023048a-f996-448b-b930-ffe5a97baf8e-kube-api-access-bkhtp\") pod \"1023048a-f996-448b-b930-ffe5a97baf8e\" (UID: \"1023048a-f996-448b-b930-ffe5a97baf8e\") "
Jan 26 17:26:08 crc kubenswrapper[4896]: I0126 17:26:08.197093 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1023048a-f996-448b-b930-ffe5a97baf8e-utilities\") pod \"1023048a-f996-448b-b930-ffe5a97baf8e\" (UID: \"1023048a-f996-448b-b930-ffe5a97baf8e\") "
Jan 26 17:26:08 crc kubenswrapper[4896]: I0126 17:26:08.198282 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1023048a-f996-448b-b930-ffe5a97baf8e-utilities" (OuterVolumeSpecName: "utilities") pod "1023048a-f996-448b-b930-ffe5a97baf8e" (UID: "1023048a-f996-448b-b930-ffe5a97baf8e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 17:26:08 crc kubenswrapper[4896]: I0126 17:26:08.204192 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1023048a-f996-448b-b930-ffe5a97baf8e-kube-api-access-bkhtp" (OuterVolumeSpecName: "kube-api-access-bkhtp") pod "1023048a-f996-448b-b930-ffe5a97baf8e" (UID: "1023048a-f996-448b-b930-ffe5a97baf8e"). InnerVolumeSpecName "kube-api-access-bkhtp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 17:26:08 crc kubenswrapper[4896]: I0126 17:26:08.263268 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1023048a-f996-448b-b930-ffe5a97baf8e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1023048a-f996-448b-b930-ffe5a97baf8e" (UID: "1023048a-f996-448b-b930-ffe5a97baf8e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 17:26:08 crc kubenswrapper[4896]: I0126 17:26:08.300697 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1023048a-f996-448b-b930-ffe5a97baf8e-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 17:26:08 crc kubenswrapper[4896]: I0126 17:26:08.300733 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1023048a-f996-448b-b930-ffe5a97baf8e-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 17:26:08 crc kubenswrapper[4896]: I0126 17:26:08.300744 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkhtp\" (UniqueName: \"kubernetes.io/projected/1023048a-f996-448b-b930-ffe5a97baf8e-kube-api-access-bkhtp\") on node \"crc\" DevicePath \"\""
Jan 26 17:26:08 crc kubenswrapper[4896]: I0126 17:26:08.325882 4896 generic.go:334] "Generic (PLEG): container finished" podID="1023048a-f996-448b-b930-ffe5a97baf8e" containerID="9166a434d07765f288b4abc36ac50f21522b6c659954ffef6606af26aa408f18" exitCode=0
Jan 26 17:26:08 crc kubenswrapper[4896]: I0126 17:26:08.325949 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-znpp5"
Jan 26 17:26:08 crc kubenswrapper[4896]: I0126 17:26:08.325946 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-znpp5" event={"ID":"1023048a-f996-448b-b930-ffe5a97baf8e","Type":"ContainerDied","Data":"9166a434d07765f288b4abc36ac50f21522b6c659954ffef6606af26aa408f18"}
Jan 26 17:26:08 crc kubenswrapper[4896]: I0126 17:26:08.326078 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-znpp5" event={"ID":"1023048a-f996-448b-b930-ffe5a97baf8e","Type":"ContainerDied","Data":"0a727cc0990134a2fed6bd007c6713ff57e4f36edb7a7a135ab1b410379d12b0"}
Jan 26 17:26:08 crc kubenswrapper[4896]: I0126 17:26:08.326585 4896 scope.go:117] "RemoveContainer" containerID="9166a434d07765f288b4abc36ac50f21522b6c659954ffef6606af26aa408f18"
Jan 26 17:26:08 crc kubenswrapper[4896]: I0126 17:26:08.372470 4896 scope.go:117] "RemoveContainer" containerID="6425a0ac98b845cc3683f317c990dd58d2a416e38cdabbe1cd17a9f85b1ca0c3"
Jan 26 17:26:08 crc kubenswrapper[4896]: I0126 17:26:08.378365 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-znpp5"]
Jan 26 17:26:08 crc kubenswrapper[4896]: I0126 17:26:08.393513 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-znpp5"]
Jan 26 17:26:08 crc kubenswrapper[4896]: I0126 17:26:08.403189 4896 scope.go:117] "RemoveContainer" containerID="1cbab29a9ccc74f0d37cde4eed6993343c4de63fbccb592814664994ae606b6c"
Jan 26 17:26:08 crc kubenswrapper[4896]: I0126 17:26:08.457194 4896 scope.go:117] "RemoveContainer" containerID="9166a434d07765f288b4abc36ac50f21522b6c659954ffef6606af26aa408f18"
Jan 26 17:26:08 crc kubenswrapper[4896]: E0126 17:26:08.458591 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container
\"9166a434d07765f288b4abc36ac50f21522b6c659954ffef6606af26aa408f18\": container with ID starting with 9166a434d07765f288b4abc36ac50f21522b6c659954ffef6606af26aa408f18 not found: ID does not exist" containerID="9166a434d07765f288b4abc36ac50f21522b6c659954ffef6606af26aa408f18"
Jan 26 17:26:08 crc kubenswrapper[4896]: I0126 17:26:08.458880 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9166a434d07765f288b4abc36ac50f21522b6c659954ffef6606af26aa408f18"} err="failed to get container status \"9166a434d07765f288b4abc36ac50f21522b6c659954ffef6606af26aa408f18\": rpc error: code = NotFound desc = could not find container \"9166a434d07765f288b4abc36ac50f21522b6c659954ffef6606af26aa408f18\": container with ID starting with 9166a434d07765f288b4abc36ac50f21522b6c659954ffef6606af26aa408f18 not found: ID does not exist"
Jan 26 17:26:08 crc kubenswrapper[4896]: I0126 17:26:08.458916 4896 scope.go:117] "RemoveContainer" containerID="6425a0ac98b845cc3683f317c990dd58d2a416e38cdabbe1cd17a9f85b1ca0c3"
Jan 26 17:26:08 crc kubenswrapper[4896]: E0126 17:26:08.459247 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6425a0ac98b845cc3683f317c990dd58d2a416e38cdabbe1cd17a9f85b1ca0c3\": container with ID starting with 6425a0ac98b845cc3683f317c990dd58d2a416e38cdabbe1cd17a9f85b1ca0c3 not found: ID does not exist" containerID="6425a0ac98b845cc3683f317c990dd58d2a416e38cdabbe1cd17a9f85b1ca0c3"
Jan 26 17:26:08 crc kubenswrapper[4896]: I0126 17:26:08.459372 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6425a0ac98b845cc3683f317c990dd58d2a416e38cdabbe1cd17a9f85b1ca0c3"} err="failed to get container status \"6425a0ac98b845cc3683f317c990dd58d2a416e38cdabbe1cd17a9f85b1ca0c3\": rpc error: code = NotFound desc = could not find container \"6425a0ac98b845cc3683f317c990dd58d2a416e38cdabbe1cd17a9f85b1ca0c3\": container with ID starting with 6425a0ac98b845cc3683f317c990dd58d2a416e38cdabbe1cd17a9f85b1ca0c3 not found: ID does not exist"
Jan 26 17:26:08 crc kubenswrapper[4896]: I0126 17:26:08.459457 4896 scope.go:117] "RemoveContainer" containerID="1cbab29a9ccc74f0d37cde4eed6993343c4de63fbccb592814664994ae606b6c"
Jan 26 17:26:08 crc kubenswrapper[4896]: E0126 17:26:08.460040 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cbab29a9ccc74f0d37cde4eed6993343c4de63fbccb592814664994ae606b6c\": container with ID starting with 1cbab29a9ccc74f0d37cde4eed6993343c4de63fbccb592814664994ae606b6c not found: ID does not exist" containerID="1cbab29a9ccc74f0d37cde4eed6993343c4de63fbccb592814664994ae606b6c"
Jan 26 17:26:08 crc kubenswrapper[4896]: I0126 17:26:08.460074 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cbab29a9ccc74f0d37cde4eed6993343c4de63fbccb592814664994ae606b6c"} err="failed to get container status \"1cbab29a9ccc74f0d37cde4eed6993343c4de63fbccb592814664994ae606b6c\": rpc error: code = NotFound desc = could not find container \"1cbab29a9ccc74f0d37cde4eed6993343c4de63fbccb592814664994ae606b6c\": container with ID starting with 1cbab29a9ccc74f0d37cde4eed6993343c4de63fbccb592814664994ae606b6c not found: ID does not exist"
Jan 26 17:26:08 crc kubenswrapper[4896]: I0126 17:26:08.777706 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1023048a-f996-448b-b930-ffe5a97baf8e" path="/var/lib/kubelet/pods/1023048a-f996-448b-b930-ffe5a97baf8e/volumes"
Jan 26 17:26:18 crc kubenswrapper[4896]: I0126 17:26:18.813659 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 17:26:18 crc kubenswrapper[4896]: I0126 17:26:18.814289 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 17:26:36 crc kubenswrapper[4896]: I0126 17:26:36.225848 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6s2j9"]
Jan 26 17:26:36 crc kubenswrapper[4896]: E0126 17:26:36.226927 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1023048a-f996-448b-b930-ffe5a97baf8e" containerName="registry-server"
Jan 26 17:26:36 crc kubenswrapper[4896]: I0126 17:26:36.226943 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="1023048a-f996-448b-b930-ffe5a97baf8e" containerName="registry-server"
Jan 26 17:26:36 crc kubenswrapper[4896]: E0126 17:26:36.226984 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1023048a-f996-448b-b930-ffe5a97baf8e" containerName="extract-content"
Jan 26 17:26:36 crc kubenswrapper[4896]: I0126 17:26:36.226993 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="1023048a-f996-448b-b930-ffe5a97baf8e" containerName="extract-content"
Jan 26 17:26:36 crc kubenswrapper[4896]: E0126 17:26:36.227012 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1023048a-f996-448b-b930-ffe5a97baf8e" containerName="extract-utilities"
Jan 26 17:26:36 crc kubenswrapper[4896]: I0126 17:26:36.227019 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="1023048a-f996-448b-b930-ffe5a97baf8e" containerName="extract-utilities"
Jan 26 17:26:36 crc kubenswrapper[4896]: I0126 17:26:36.233086 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="1023048a-f996-448b-b930-ffe5a97baf8e" containerName="registry-server"
Jan 26 17:26:36 crc kubenswrapper[4896]: I0126 17:26:36.239141 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6s2j9"
Jan 26 17:26:36 crc kubenswrapper[4896]: I0126 17:26:36.248246 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6s2j9"]
Jan 26 17:26:36 crc kubenswrapper[4896]: I0126 17:26:36.307987 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvhxt\" (UniqueName: \"kubernetes.io/projected/caa51786-7a20-4cf3-9d57-bc54eb8ca9e9-kube-api-access-bvhxt\") pod \"certified-operators-6s2j9\" (UID: \"caa51786-7a20-4cf3-9d57-bc54eb8ca9e9\") " pod="openshift-marketplace/certified-operators-6s2j9"
Jan 26 17:26:36 crc kubenswrapper[4896]: I0126 17:26:36.308107 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/caa51786-7a20-4cf3-9d57-bc54eb8ca9e9-utilities\") pod \"certified-operators-6s2j9\" (UID: \"caa51786-7a20-4cf3-9d57-bc54eb8ca9e9\") " pod="openshift-marketplace/certified-operators-6s2j9"
Jan 26 17:26:36 crc kubenswrapper[4896]: I0126 17:26:36.308183 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/caa51786-7a20-4cf3-9d57-bc54eb8ca9e9-catalog-content\") pod \"certified-operators-6s2j9\" (UID: \"caa51786-7a20-4cf3-9d57-bc54eb8ca9e9\") " pod="openshift-marketplace/certified-operators-6s2j9"
Jan 26 17:26:36 crc kubenswrapper[4896]: I0126 17:26:36.411094 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/caa51786-7a20-4cf3-9d57-bc54eb8ca9e9-catalog-content\") pod \"certified-operators-6s2j9\" (UID: \"caa51786-7a20-4cf3-9d57-bc54eb8ca9e9\") " pod="openshift-marketplace/certified-operators-6s2j9"
Jan 26 17:26:36 crc kubenswrapper[4896]: I0126 17:26:36.411466 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvhxt\" (UniqueName: \"kubernetes.io/projected/caa51786-7a20-4cf3-9d57-bc54eb8ca9e9-kube-api-access-bvhxt\") pod \"certified-operators-6s2j9\" (UID: \"caa51786-7a20-4cf3-9d57-bc54eb8ca9e9\") " pod="openshift-marketplace/certified-operators-6s2j9"
Jan 26 17:26:36 crc kubenswrapper[4896]: I0126 17:26:36.411621 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/caa51786-7a20-4cf3-9d57-bc54eb8ca9e9-utilities\") pod \"certified-operators-6s2j9\" (UID: \"caa51786-7a20-4cf3-9d57-bc54eb8ca9e9\") " pod="openshift-marketplace/certified-operators-6s2j9"
Jan 26 17:26:36 crc kubenswrapper[4896]: I0126 17:26:36.412317 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/caa51786-7a20-4cf3-9d57-bc54eb8ca9e9-catalog-content\") pod \"certified-operators-6s2j9\" (UID: \"caa51786-7a20-4cf3-9d57-bc54eb8ca9e9\") " pod="openshift-marketplace/certified-operators-6s2j9"
Jan 26 17:26:36 crc kubenswrapper[4896]: I0126 17:26:36.412339 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/caa51786-7a20-4cf3-9d57-bc54eb8ca9e9-utilities\") pod \"certified-operators-6s2j9\" (UID: \"caa51786-7a20-4cf3-9d57-bc54eb8ca9e9\") " pod="openshift-marketplace/certified-operators-6s2j9"
Jan 26 17:26:36 crc kubenswrapper[4896]: I0126 17:26:36.436811 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvhxt\" (UniqueName: \"kubernetes.io/projected/caa51786-7a20-4cf3-9d57-bc54eb8ca9e9-kube-api-access-bvhxt\") pod \"certified-operators-6s2j9\" (UID: \"caa51786-7a20-4cf3-9d57-bc54eb8ca9e9\") " pod="openshift-marketplace/certified-operators-6s2j9"
Jan 26 17:26:36 crc kubenswrapper[4896]: I0126 17:26:36.571566 4896 util.go:30] "No sandbox for pod can be
found. Need to start a new one" pod="openshift-marketplace/certified-operators-6s2j9" Jan 26 17:26:37 crc kubenswrapper[4896]: I0126 17:26:37.225763 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6s2j9"] Jan 26 17:26:38 crc kubenswrapper[4896]: I0126 17:26:38.189953 4896 generic.go:334] "Generic (PLEG): container finished" podID="caa51786-7a20-4cf3-9d57-bc54eb8ca9e9" containerID="f6affd4c9c8bc8d0ae87fdf847f87a2237f3d3c210ce6706815e40ff5891a770" exitCode=0 Jan 26 17:26:38 crc kubenswrapper[4896]: I0126 17:26:38.190035 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6s2j9" event={"ID":"caa51786-7a20-4cf3-9d57-bc54eb8ca9e9","Type":"ContainerDied","Data":"f6affd4c9c8bc8d0ae87fdf847f87a2237f3d3c210ce6706815e40ff5891a770"} Jan 26 17:26:38 crc kubenswrapper[4896]: I0126 17:26:38.190259 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6s2j9" event={"ID":"caa51786-7a20-4cf3-9d57-bc54eb8ca9e9","Type":"ContainerStarted","Data":"f23e19acd583bc50d549130babea23b7f9a3560dec40b21d364dbf7836649469"} Jan 26 17:26:44 crc kubenswrapper[4896]: I0126 17:26:44.323451 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6s2j9" event={"ID":"caa51786-7a20-4cf3-9d57-bc54eb8ca9e9","Type":"ContainerStarted","Data":"9b57a856c5f6dcacada23cab51d5c215fd07bf02ea21c06462ee094794dd2777"} Jan 26 17:26:46 crc kubenswrapper[4896]: I0126 17:26:46.348336 4896 generic.go:334] "Generic (PLEG): container finished" podID="caa51786-7a20-4cf3-9d57-bc54eb8ca9e9" containerID="9b57a856c5f6dcacada23cab51d5c215fd07bf02ea21c06462ee094794dd2777" exitCode=0 Jan 26 17:26:46 crc kubenswrapper[4896]: I0126 17:26:46.348410 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6s2j9" 
event={"ID":"caa51786-7a20-4cf3-9d57-bc54eb8ca9e9","Type":"ContainerDied","Data":"9b57a856c5f6dcacada23cab51d5c215fd07bf02ea21c06462ee094794dd2777"} Jan 26 17:26:47 crc kubenswrapper[4896]: I0126 17:26:47.365047 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6s2j9" event={"ID":"caa51786-7a20-4cf3-9d57-bc54eb8ca9e9","Type":"ContainerStarted","Data":"0a90138bedd288559fdb179b28a84cee955ba872288bd926a52a71499665aaf8"} Jan 26 17:26:47 crc kubenswrapper[4896]: I0126 17:26:47.388236 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6s2j9" podStartSLOduration=2.546794551 podStartE2EDuration="11.38821433s" podCreationTimestamp="2026-01-26 17:26:36 +0000 UTC" firstStartedPulling="2026-01-26 17:26:38.193198255 +0000 UTC m=+6755.975078668" lastFinishedPulling="2026-01-26 17:26:47.034618044 +0000 UTC m=+6764.816498447" observedRunningTime="2026-01-26 17:26:47.38622517 +0000 UTC m=+6765.168105603" watchObservedRunningTime="2026-01-26 17:26:47.38821433 +0000 UTC m=+6765.170094733" Jan 26 17:26:48 crc kubenswrapper[4896]: I0126 17:26:48.813625 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:26:48 crc kubenswrapper[4896]: I0126 17:26:48.814136 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:26:56 crc kubenswrapper[4896]: I0126 17:26:56.573919 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-6s2j9" Jan 26 17:26:56 crc kubenswrapper[4896]: I0126 17:26:56.575563 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6s2j9" Jan 26 17:26:56 crc kubenswrapper[4896]: I0126 17:26:56.644488 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6s2j9" Jan 26 17:26:57 crc kubenswrapper[4896]: I0126 17:26:57.538886 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6s2j9" Jan 26 17:26:57 crc kubenswrapper[4896]: I0126 17:26:57.629875 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6s2j9"] Jan 26 17:26:57 crc kubenswrapper[4896]: I0126 17:26:57.676948 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6hnsk"] Jan 26 17:26:57 crc kubenswrapper[4896]: I0126 17:26:57.678363 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6hnsk" podUID="1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6" containerName="registry-server" containerID="cri-o://2d90e19bed6c56e5529799d34677220fe45100aebf7dd2c5b120ef262e68c85c" gracePeriod=2 Jan 26 17:26:57 crc kubenswrapper[4896]: E0126 17:26:57.852949 4896 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2d90e19bed6c56e5529799d34677220fe45100aebf7dd2c5b120ef262e68c85c is running failed: container process not found" containerID="2d90e19bed6c56e5529799d34677220fe45100aebf7dd2c5b120ef262e68c85c" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 17:26:57 crc kubenswrapper[4896]: E0126 17:26:57.853809 4896 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if 
PID of 2d90e19bed6c56e5529799d34677220fe45100aebf7dd2c5b120ef262e68c85c is running failed: container process not found" containerID="2d90e19bed6c56e5529799d34677220fe45100aebf7dd2c5b120ef262e68c85c" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 17:26:57 crc kubenswrapper[4896]: E0126 17:26:57.854909 4896 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2d90e19bed6c56e5529799d34677220fe45100aebf7dd2c5b120ef262e68c85c is running failed: container process not found" containerID="2d90e19bed6c56e5529799d34677220fe45100aebf7dd2c5b120ef262e68c85c" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 17:26:57 crc kubenswrapper[4896]: E0126 17:26:57.854970 4896 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2d90e19bed6c56e5529799d34677220fe45100aebf7dd2c5b120ef262e68c85c is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-6hnsk" podUID="1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6" containerName="registry-server" Jan 26 17:26:58 crc kubenswrapper[4896]: I0126 17:26:58.503461 4896 generic.go:334] "Generic (PLEG): container finished" podID="1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6" containerID="2d90e19bed6c56e5529799d34677220fe45100aebf7dd2c5b120ef262e68c85c" exitCode=0 Jan 26 17:26:58 crc kubenswrapper[4896]: I0126 17:26:58.503634 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hnsk" event={"ID":"1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6","Type":"ContainerDied","Data":"2d90e19bed6c56e5529799d34677220fe45100aebf7dd2c5b120ef262e68c85c"} Jan 26 17:26:58 crc kubenswrapper[4896]: I0126 17:26:58.504204 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hnsk" 
event={"ID":"1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6","Type":"ContainerDied","Data":"d9f339b15cce5ae474e12ff7b1877b44c2baa0a973a838da8e70465534ac28d3"} Jan 26 17:26:58 crc kubenswrapper[4896]: I0126 17:26:58.504563 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9f339b15cce5ae474e12ff7b1877b44c2baa0a973a838da8e70465534ac28d3" Jan 26 17:26:58 crc kubenswrapper[4896]: I0126 17:26:58.504833 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6hnsk" Jan 26 17:26:58 crc kubenswrapper[4896]: I0126 17:26:58.590438 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6-catalog-content\") pod \"1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6\" (UID: \"1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6\") " Jan 26 17:26:58 crc kubenswrapper[4896]: I0126 17:26:58.590709 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mscqv\" (UniqueName: \"kubernetes.io/projected/1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6-kube-api-access-mscqv\") pod \"1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6\" (UID: \"1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6\") " Jan 26 17:26:58 crc kubenswrapper[4896]: I0126 17:26:58.590803 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6-utilities\") pod \"1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6\" (UID: \"1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6\") " Jan 26 17:26:58 crc kubenswrapper[4896]: I0126 17:26:58.592782 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6-utilities" (OuterVolumeSpecName: "utilities") pod "1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6" (UID: "1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:26:58 crc kubenswrapper[4896]: I0126 17:26:58.602487 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6-kube-api-access-mscqv" (OuterVolumeSpecName: "kube-api-access-mscqv") pod "1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6" (UID: "1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6"). InnerVolumeSpecName "kube-api-access-mscqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:26:58 crc kubenswrapper[4896]: I0126 17:26:58.694346 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mscqv\" (UniqueName: \"kubernetes.io/projected/1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6-kube-api-access-mscqv\") on node \"crc\" DevicePath \"\"" Jan 26 17:26:58 crc kubenswrapper[4896]: I0126 17:26:58.694383 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:26:58 crc kubenswrapper[4896]: I0126 17:26:58.695182 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6" (UID: "1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:26:58 crc kubenswrapper[4896]: I0126 17:26:58.797131 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:26:59 crc kubenswrapper[4896]: I0126 17:26:59.514710 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6hnsk" Jan 26 17:26:59 crc kubenswrapper[4896]: I0126 17:26:59.546477 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6hnsk"] Jan 26 17:26:59 crc kubenswrapper[4896]: I0126 17:26:59.569202 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6hnsk"] Jan 26 17:27:00 crc kubenswrapper[4896]: I0126 17:27:00.780555 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6" path="/var/lib/kubelet/pods/1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6/volumes" Jan 26 17:27:18 crc kubenswrapper[4896]: I0126 17:27:18.815270 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:27:18 crc kubenswrapper[4896]: I0126 17:27:18.815983 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:27:18 crc kubenswrapper[4896]: I0126 17:27:18.816048 4896 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" Jan 26 17:27:18 crc kubenswrapper[4896]: I0126 17:27:18.817286 4896 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"94dd37e0db0e325f10b0641524cfd61f5025a0c0cafea55935f4eb3516c93bee"} pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Jan 26 17:27:18 crc kubenswrapper[4896]: I0126 17:27:18.817351 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" containerID="cri-o://94dd37e0db0e325f10b0641524cfd61f5025a0c0cafea55935f4eb3516c93bee" gracePeriod=600 Jan 26 17:27:18 crc kubenswrapper[4896]: E0126 17:27:18.940891 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:27:19 crc kubenswrapper[4896]: I0126 17:27:19.778431 4896 generic.go:334] "Generic (PLEG): container finished" podID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerID="94dd37e0db0e325f10b0641524cfd61f5025a0c0cafea55935f4eb3516c93bee" exitCode=0 Jan 26 17:27:19 crc kubenswrapper[4896]: I0126 17:27:19.778492 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerDied","Data":"94dd37e0db0e325f10b0641524cfd61f5025a0c0cafea55935f4eb3516c93bee"} Jan 26 17:27:19 crc kubenswrapper[4896]: I0126 17:27:19.778535 4896 scope.go:117] "RemoveContainer" containerID="5498b7b06ed7156c181b13fe4996b8248646474dc37ef36b3f5ec3c4b29ceafe" Jan 26 17:27:19 crc kubenswrapper[4896]: I0126 17:27:19.779507 4896 scope.go:117] "RemoveContainer" containerID="94dd37e0db0e325f10b0641524cfd61f5025a0c0cafea55935f4eb3516c93bee" Jan 26 17:27:19 crc kubenswrapper[4896]: E0126 17:27:19.780053 4896 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:27:30 crc kubenswrapper[4896]: I0126 17:27:30.733694 4896 scope.go:117] "RemoveContainer" containerID="fef2e9e0f1b515f63b0944b839640aa089fbfc7585945412e60f6e9ea96ee846" Jan 26 17:27:30 crc kubenswrapper[4896]: I0126 17:27:30.814258 4896 scope.go:117] "RemoveContainer" containerID="2d90e19bed6c56e5529799d34677220fe45100aebf7dd2c5b120ef262e68c85c" Jan 26 17:27:30 crc kubenswrapper[4896]: I0126 17:27:30.846379 4896 scope.go:117] "RemoveContainer" containerID="bda57c4830acfac1d49dac7c148ddc8094aed9118e79c85b987105f6559bd201" Jan 26 17:27:31 crc kubenswrapper[4896]: I0126 17:27:31.760188 4896 scope.go:117] "RemoveContainer" containerID="94dd37e0db0e325f10b0641524cfd61f5025a0c0cafea55935f4eb3516c93bee" Jan 26 17:27:31 crc kubenswrapper[4896]: E0126 17:27:31.760741 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:27:42 crc kubenswrapper[4896]: I0126 17:27:42.768215 4896 scope.go:117] "RemoveContainer" containerID="94dd37e0db0e325f10b0641524cfd61f5025a0c0cafea55935f4eb3516c93bee" Jan 26 17:27:42 crc kubenswrapper[4896]: E0126 17:27:42.769154 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:27:57 crc kubenswrapper[4896]: I0126 17:27:57.759831 4896 scope.go:117] "RemoveContainer" containerID="94dd37e0db0e325f10b0641524cfd61f5025a0c0cafea55935f4eb3516c93bee" Jan 26 17:27:57 crc kubenswrapper[4896]: E0126 17:27:57.760644 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:28:08 crc kubenswrapper[4896]: I0126 17:28:08.759832 4896 scope.go:117] "RemoveContainer" containerID="94dd37e0db0e325f10b0641524cfd61f5025a0c0cafea55935f4eb3516c93bee" Jan 26 17:28:08 crc kubenswrapper[4896]: E0126 17:28:08.760869 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:28:19 crc kubenswrapper[4896]: I0126 17:28:19.760675 4896 scope.go:117] "RemoveContainer" containerID="94dd37e0db0e325f10b0641524cfd61f5025a0c0cafea55935f4eb3516c93bee" Jan 26 17:28:19 crc kubenswrapper[4896]: E0126 17:28:19.761769 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:28:32 crc kubenswrapper[4896]: I0126 17:28:32.783701 4896 scope.go:117] "RemoveContainer" containerID="94dd37e0db0e325f10b0641524cfd61f5025a0c0cafea55935f4eb3516c93bee" Jan 26 17:28:32 crc kubenswrapper[4896]: E0126 17:28:32.784541 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:28:43 crc kubenswrapper[4896]: I0126 17:28:43.759238 4896 scope.go:117] "RemoveContainer" containerID="94dd37e0db0e325f10b0641524cfd61f5025a0c0cafea55935f4eb3516c93bee" Jan 26 17:28:43 crc kubenswrapper[4896]: E0126 17:28:43.760132 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:28:56 crc kubenswrapper[4896]: I0126 17:28:56.759480 4896 scope.go:117] "RemoveContainer" containerID="94dd37e0db0e325f10b0641524cfd61f5025a0c0cafea55935f4eb3516c93bee" Jan 26 17:28:56 crc kubenswrapper[4896]: E0126 17:28:56.760274 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:29:08 crc kubenswrapper[4896]: I0126 17:29:08.759899 4896 scope.go:117] "RemoveContainer" containerID="94dd37e0db0e325f10b0641524cfd61f5025a0c0cafea55935f4eb3516c93bee" Jan 26 17:29:08 crc kubenswrapper[4896]: E0126 17:29:08.760792 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:29:23 crc kubenswrapper[4896]: I0126 17:29:23.760990 4896 scope.go:117] "RemoveContainer" containerID="94dd37e0db0e325f10b0641524cfd61f5025a0c0cafea55935f4eb3516c93bee" Jan 26 17:29:23 crc kubenswrapper[4896]: E0126 17:29:23.762334 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:29:30 crc kubenswrapper[4896]: I0126 17:29:30.966514 4896 scope.go:117] "RemoveContainer" containerID="c6d2f8518a45a721942b7fb6a0857c98d74f5a2ef350371e4c1c6858f6572d41" Jan 26 17:29:31 crc kubenswrapper[4896]: I0126 17:29:31.016399 4896 scope.go:117] "RemoveContainer" 
containerID="89e75e369b5e0e0f89539c90f8d600c6991d12a4b03eb941c96a5ca57549dc87" Jan 26 17:29:31 crc kubenswrapper[4896]: I0126 17:29:31.078270 4896 scope.go:117] "RemoveContainer" containerID="69898ca66aaf9180313592b0746a8cd893e46ed3f4cb8dc2a5589829b0a5e9e1" Jan 26 17:29:36 crc kubenswrapper[4896]: I0126 17:29:36.760304 4896 scope.go:117] "RemoveContainer" containerID="94dd37e0db0e325f10b0641524cfd61f5025a0c0cafea55935f4eb3516c93bee" Jan 26 17:29:36 crc kubenswrapper[4896]: E0126 17:29:36.762012 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:29:49 crc kubenswrapper[4896]: I0126 17:29:49.760360 4896 scope.go:117] "RemoveContainer" containerID="94dd37e0db0e325f10b0641524cfd61f5025a0c0cafea55935f4eb3516c93bee" Jan 26 17:29:49 crc kubenswrapper[4896]: E0126 17:29:49.761458 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:30:00 crc kubenswrapper[4896]: I0126 17:30:00.221131 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490810-m6hh6"] Jan 26 17:30:00 crc kubenswrapper[4896]: E0126 17:30:00.222258 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6" containerName="extract-utilities" 
Jan 26 17:30:00 crc kubenswrapper[4896]: I0126 17:30:00.222275 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6" containerName="extract-utilities"
Jan 26 17:30:00 crc kubenswrapper[4896]: E0126 17:30:00.222310 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6" containerName="extract-content"
Jan 26 17:30:00 crc kubenswrapper[4896]: I0126 17:30:00.222316 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6" containerName="extract-content"
Jan 26 17:30:00 crc kubenswrapper[4896]: E0126 17:30:00.222323 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6" containerName="registry-server"
Jan 26 17:30:00 crc kubenswrapper[4896]: I0126 17:30:00.222329 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6" containerName="registry-server"
Jan 26 17:30:00 crc kubenswrapper[4896]: I0126 17:30:00.222631 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c0925a6-a58f-4bf2-9d14-dbdc04e4d6a6" containerName="registry-server"
Jan 26 17:30:00 crc kubenswrapper[4896]: I0126 17:30:00.223759 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-m6hh6"
Jan 26 17:30:00 crc kubenswrapper[4896]: I0126 17:30:00.233439 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490810-m6hh6"]
Jan 26 17:30:00 crc kubenswrapper[4896]: I0126 17:30:00.249354 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 26 17:30:00 crc kubenswrapper[4896]: I0126 17:30:00.250114 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 26 17:30:00 crc kubenswrapper[4896]: I0126 17:30:00.326666 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74d12f68-be12-48ca-b465-bec942483a5b-secret-volume\") pod \"collect-profiles-29490810-m6hh6\" (UID: \"74d12f68-be12-48ca-b465-bec942483a5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-m6hh6"
Jan 26 17:30:00 crc kubenswrapper[4896]: I0126 17:30:00.326926 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74d12f68-be12-48ca-b465-bec942483a5b-config-volume\") pod \"collect-profiles-29490810-m6hh6\" (UID: \"74d12f68-be12-48ca-b465-bec942483a5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-m6hh6"
Jan 26 17:30:00 crc kubenswrapper[4896]: I0126 17:30:00.327192 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd4h6\" (UniqueName: \"kubernetes.io/projected/74d12f68-be12-48ca-b465-bec942483a5b-kube-api-access-gd4h6\") pod \"collect-profiles-29490810-m6hh6\" (UID: \"74d12f68-be12-48ca-b465-bec942483a5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-m6hh6"
Jan 26 17:30:00 crc kubenswrapper[4896]: I0126 17:30:00.428814 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd4h6\" (UniqueName: \"kubernetes.io/projected/74d12f68-be12-48ca-b465-bec942483a5b-kube-api-access-gd4h6\") pod \"collect-profiles-29490810-m6hh6\" (UID: \"74d12f68-be12-48ca-b465-bec942483a5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-m6hh6"
Jan 26 17:30:00 crc kubenswrapper[4896]: I0126 17:30:00.429012 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74d12f68-be12-48ca-b465-bec942483a5b-secret-volume\") pod \"collect-profiles-29490810-m6hh6\" (UID: \"74d12f68-be12-48ca-b465-bec942483a5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-m6hh6"
Jan 26 17:30:00 crc kubenswrapper[4896]: I0126 17:30:00.429061 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74d12f68-be12-48ca-b465-bec942483a5b-config-volume\") pod \"collect-profiles-29490810-m6hh6\" (UID: \"74d12f68-be12-48ca-b465-bec942483a5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-m6hh6"
Jan 26 17:30:00 crc kubenswrapper[4896]: I0126 17:30:00.430082 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74d12f68-be12-48ca-b465-bec942483a5b-config-volume\") pod \"collect-profiles-29490810-m6hh6\" (UID: \"74d12f68-be12-48ca-b465-bec942483a5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-m6hh6"
Jan 26 17:30:00 crc kubenswrapper[4896]: I0126 17:30:00.438972 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74d12f68-be12-48ca-b465-bec942483a5b-secret-volume\") pod \"collect-profiles-29490810-m6hh6\" (UID: \"74d12f68-be12-48ca-b465-bec942483a5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-m6hh6"
Jan 26 17:30:00 crc kubenswrapper[4896]: I0126 17:30:00.457717 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd4h6\" (UniqueName: \"kubernetes.io/projected/74d12f68-be12-48ca-b465-bec942483a5b-kube-api-access-gd4h6\") pod \"collect-profiles-29490810-m6hh6\" (UID: \"74d12f68-be12-48ca-b465-bec942483a5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-m6hh6"
Jan 26 17:30:00 crc kubenswrapper[4896]: I0126 17:30:00.549503 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-m6hh6"
Jan 26 17:30:00 crc kubenswrapper[4896]: I0126 17:30:00.763126 4896 scope.go:117] "RemoveContainer" containerID="94dd37e0db0e325f10b0641524cfd61f5025a0c0cafea55935f4eb3516c93bee"
Jan 26 17:30:00 crc kubenswrapper[4896]: E0126 17:30:00.764081 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:30:01 crc kubenswrapper[4896]: I0126 17:30:01.113125 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490810-m6hh6"]
Jan 26 17:30:01 crc kubenswrapper[4896]: I0126 17:30:01.951240 4896 generic.go:334] "Generic (PLEG): container finished" podID="74d12f68-be12-48ca-b465-bec942483a5b" containerID="588b81679eb73f8f6ef9a2ff2977f46f343275868868448d998cf8f593d13232" exitCode=0
Jan 26 17:30:01 crc kubenswrapper[4896]: I0126 17:30:01.951302 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-m6hh6" event={"ID":"74d12f68-be12-48ca-b465-bec942483a5b","Type":"ContainerDied","Data":"588b81679eb73f8f6ef9a2ff2977f46f343275868868448d998cf8f593d13232"}
Jan 26 17:30:01 crc kubenswrapper[4896]: I0126 17:30:01.951558 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-m6hh6" event={"ID":"74d12f68-be12-48ca-b465-bec942483a5b","Type":"ContainerStarted","Data":"80ede5fba55cd829f425e335543bcb9de1e39f57e683ac18ac9c852460423aca"}
Jan 26 17:30:03 crc kubenswrapper[4896]: I0126 17:30:03.442429 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-m6hh6"
Jan 26 17:30:03 crc kubenswrapper[4896]: I0126 17:30:03.605458 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74d12f68-be12-48ca-b465-bec942483a5b-secret-volume\") pod \"74d12f68-be12-48ca-b465-bec942483a5b\" (UID: \"74d12f68-be12-48ca-b465-bec942483a5b\") "
Jan 26 17:30:03 crc kubenswrapper[4896]: I0126 17:30:03.605560 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd4h6\" (UniqueName: \"kubernetes.io/projected/74d12f68-be12-48ca-b465-bec942483a5b-kube-api-access-gd4h6\") pod \"74d12f68-be12-48ca-b465-bec942483a5b\" (UID: \"74d12f68-be12-48ca-b465-bec942483a5b\") "
Jan 26 17:30:03 crc kubenswrapper[4896]: I0126 17:30:03.605746 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74d12f68-be12-48ca-b465-bec942483a5b-config-volume\") pod \"74d12f68-be12-48ca-b465-bec942483a5b\" (UID: \"74d12f68-be12-48ca-b465-bec942483a5b\") "
Jan 26 17:30:03 crc kubenswrapper[4896]: I0126 17:30:03.607167 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74d12f68-be12-48ca-b465-bec942483a5b-config-volume" (OuterVolumeSpecName: "config-volume") pod "74d12f68-be12-48ca-b465-bec942483a5b" (UID: "74d12f68-be12-48ca-b465-bec942483a5b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 17:30:03 crc kubenswrapper[4896]: I0126 17:30:03.615406 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74d12f68-be12-48ca-b465-bec942483a5b-kube-api-access-gd4h6" (OuterVolumeSpecName: "kube-api-access-gd4h6") pod "74d12f68-be12-48ca-b465-bec942483a5b" (UID: "74d12f68-be12-48ca-b465-bec942483a5b"). InnerVolumeSpecName "kube-api-access-gd4h6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 17:30:03 crc kubenswrapper[4896]: I0126 17:30:03.619206 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74d12f68-be12-48ca-b465-bec942483a5b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "74d12f68-be12-48ca-b465-bec942483a5b" (UID: "74d12f68-be12-48ca-b465-bec942483a5b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 17:30:03 crc kubenswrapper[4896]: I0126 17:30:03.709942 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd4h6\" (UniqueName: \"kubernetes.io/projected/74d12f68-be12-48ca-b465-bec942483a5b-kube-api-access-gd4h6\") on node \"crc\" DevicePath \"\""
Jan 26 17:30:03 crc kubenswrapper[4896]: I0126 17:30:03.709983 4896 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74d12f68-be12-48ca-b465-bec942483a5b-config-volume\") on node \"crc\" DevicePath \"\""
Jan 26 17:30:03 crc kubenswrapper[4896]: I0126 17:30:03.709992 4896 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74d12f68-be12-48ca-b465-bec942483a5b-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 26 17:30:03 crc kubenswrapper[4896]: I0126 17:30:03.977921 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-m6hh6" event={"ID":"74d12f68-be12-48ca-b465-bec942483a5b","Type":"ContainerDied","Data":"80ede5fba55cd829f425e335543bcb9de1e39f57e683ac18ac9c852460423aca"}
Jan 26 17:30:03 crc kubenswrapper[4896]: I0126 17:30:03.977973 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80ede5fba55cd829f425e335543bcb9de1e39f57e683ac18ac9c852460423aca"
Jan 26 17:30:03 crc kubenswrapper[4896]: I0126 17:30:03.977996 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-m6hh6"
Jan 26 17:30:04 crc kubenswrapper[4896]: I0126 17:30:04.540775 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490765-bwv4v"]
Jan 26 17:30:04 crc kubenswrapper[4896]: I0126 17:30:04.552302 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490765-bwv4v"]
Jan 26 17:30:04 crc kubenswrapper[4896]: I0126 17:30:04.773443 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bec655b4-80c3-4368-8077-13b6c2a5294b" path="/var/lib/kubelet/pods/bec655b4-80c3-4368-8077-13b6c2a5294b/volumes"
Jan 26 17:30:11 crc kubenswrapper[4896]: I0126 17:30:11.759176 4896 scope.go:117] "RemoveContainer" containerID="94dd37e0db0e325f10b0641524cfd61f5025a0c0cafea55935f4eb3516c93bee"
Jan 26 17:30:11 crc kubenswrapper[4896]: E0126 17:30:11.759919 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:30:23 crc kubenswrapper[4896]: I0126 17:30:23.759865 4896 scope.go:117] "RemoveContainer" containerID="94dd37e0db0e325f10b0641524cfd61f5025a0c0cafea55935f4eb3516c93bee"
Jan 26 17:30:23 crc kubenswrapper[4896]: E0126 17:30:23.760787 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:30:31 crc kubenswrapper[4896]: I0126 17:30:31.164306 4896 scope.go:117] "RemoveContainer" containerID="33f8f4dba38734ff04ec28eaedb778f781cef94a58b5bb582a5697f61f718cdf"
Jan 26 17:30:36 crc kubenswrapper[4896]: I0126 17:30:36.759623 4896 scope.go:117] "RemoveContainer" containerID="94dd37e0db0e325f10b0641524cfd61f5025a0c0cafea55935f4eb3516c93bee"
Jan 26 17:30:36 crc kubenswrapper[4896]: E0126 17:30:36.760493 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:30:49 crc kubenswrapper[4896]: I0126 17:30:49.760729 4896 scope.go:117] "RemoveContainer" containerID="94dd37e0db0e325f10b0641524cfd61f5025a0c0cafea55935f4eb3516c93bee"
Jan 26 17:30:49 crc kubenswrapper[4896]: E0126 17:30:49.762013 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:31:03 crc kubenswrapper[4896]: I0126 17:31:03.759327 4896 scope.go:117] "RemoveContainer" containerID="94dd37e0db0e325f10b0641524cfd61f5025a0c0cafea55935f4eb3516c93bee"
Jan 26 17:31:03 crc kubenswrapper[4896]: E0126 17:31:03.760326 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:31:17 crc kubenswrapper[4896]: I0126 17:31:17.760175 4896 scope.go:117] "RemoveContainer" containerID="94dd37e0db0e325f10b0641524cfd61f5025a0c0cafea55935f4eb3516c93bee"
Jan 26 17:31:17 crc kubenswrapper[4896]: E0126 17:31:17.761191 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:31:31 crc kubenswrapper[4896]: I0126 17:31:31.665698 4896 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-686bd9bf85-wbdcn" podUID="c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502"
Jan 26 17:31:31 crc kubenswrapper[4896]: I0126 17:31:31.760055 4896 scope.go:117] "RemoveContainer" containerID="94dd37e0db0e325f10b0641524cfd61f5025a0c0cafea55935f4eb3516c93bee"
Jan 26 17:31:31 crc kubenswrapper[4896]: E0126 17:31:31.760526 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:31:45 crc kubenswrapper[4896]: I0126 17:31:45.824491 4896 scope.go:117] "RemoveContainer" containerID="94dd37e0db0e325f10b0641524cfd61f5025a0c0cafea55935f4eb3516c93bee"
Jan 26 17:31:45 crc kubenswrapper[4896]: E0126 17:31:45.825320 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:31:57 crc kubenswrapper[4896]: I0126 17:31:57.761100 4896 scope.go:117] "RemoveContainer" containerID="94dd37e0db0e325f10b0641524cfd61f5025a0c0cafea55935f4eb3516c93bee"
Jan 26 17:31:57 crc kubenswrapper[4896]: E0126 17:31:57.762303 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:32:12 crc kubenswrapper[4896]: I0126 17:32:12.837076 4896 scope.go:117] "RemoveContainer" containerID="94dd37e0db0e325f10b0641524cfd61f5025a0c0cafea55935f4eb3516c93bee"
Jan 26 17:32:12 crc kubenswrapper[4896]: E0126 17:32:12.837797 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:32:27 crc kubenswrapper[4896]: I0126 17:32:27.760039 4896 scope.go:117] "RemoveContainer" containerID="94dd37e0db0e325f10b0641524cfd61f5025a0c0cafea55935f4eb3516c93bee"
Jan 26 17:32:28 crc kubenswrapper[4896]: I0126 17:32:28.783513 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerStarted","Data":"a93d8c42b06112bbc3b78792eeb9a9c95dac4e5ffd343806b2e624b121a3dc9f"}
Jan 26 17:33:30 crc kubenswrapper[4896]: I0126 17:33:30.851728 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vgsjn"]
Jan 26 17:33:30 crc kubenswrapper[4896]: E0126 17:33:30.853428 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74d12f68-be12-48ca-b465-bec942483a5b" containerName="collect-profiles"
Jan 26 17:33:30 crc kubenswrapper[4896]: I0126 17:33:30.853453 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="74d12f68-be12-48ca-b465-bec942483a5b" containerName="collect-profiles"
Jan 26 17:33:30 crc kubenswrapper[4896]: I0126 17:33:30.853824 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="74d12f68-be12-48ca-b465-bec942483a5b" containerName="collect-profiles"
Jan 26 17:33:30 crc kubenswrapper[4896]: I0126 17:33:30.858685 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vgsjn"
Jan 26 17:33:30 crc kubenswrapper[4896]: I0126 17:33:30.872995 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vgsjn"]
Jan 26 17:33:30 crc kubenswrapper[4896]: I0126 17:33:30.935779 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/950ff69a-439e-4144-8717-1ee33270a4c3-catalog-content\") pod \"redhat-marketplace-vgsjn\" (UID: \"950ff69a-439e-4144-8717-1ee33270a4c3\") " pod="openshift-marketplace/redhat-marketplace-vgsjn"
Jan 26 17:33:30 crc kubenswrapper[4896]: I0126 17:33:30.935883 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/950ff69a-439e-4144-8717-1ee33270a4c3-utilities\") pod \"redhat-marketplace-vgsjn\" (UID: \"950ff69a-439e-4144-8717-1ee33270a4c3\") " pod="openshift-marketplace/redhat-marketplace-vgsjn"
Jan 26 17:33:30 crc kubenswrapper[4896]: I0126 17:33:30.936075 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp8ll\" (UniqueName: \"kubernetes.io/projected/950ff69a-439e-4144-8717-1ee33270a4c3-kube-api-access-sp8ll\") pod \"redhat-marketplace-vgsjn\" (UID: \"950ff69a-439e-4144-8717-1ee33270a4c3\") " pod="openshift-marketplace/redhat-marketplace-vgsjn"
Jan 26 17:33:31 crc kubenswrapper[4896]: I0126 17:33:31.038183 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp8ll\" (UniqueName: \"kubernetes.io/projected/950ff69a-439e-4144-8717-1ee33270a4c3-kube-api-access-sp8ll\") pod \"redhat-marketplace-vgsjn\" (UID: \"950ff69a-439e-4144-8717-1ee33270a4c3\") " pod="openshift-marketplace/redhat-marketplace-vgsjn"
Jan 26 17:33:31 crc kubenswrapper[4896]: I0126 17:33:31.038354 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/950ff69a-439e-4144-8717-1ee33270a4c3-catalog-content\") pod \"redhat-marketplace-vgsjn\" (UID: \"950ff69a-439e-4144-8717-1ee33270a4c3\") " pod="openshift-marketplace/redhat-marketplace-vgsjn"
Jan 26 17:33:31 crc kubenswrapper[4896]: I0126 17:33:31.038418 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/950ff69a-439e-4144-8717-1ee33270a4c3-utilities\") pod \"redhat-marketplace-vgsjn\" (UID: \"950ff69a-439e-4144-8717-1ee33270a4c3\") " pod="openshift-marketplace/redhat-marketplace-vgsjn"
Jan 26 17:33:31 crc kubenswrapper[4896]: I0126 17:33:31.040108 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/950ff69a-439e-4144-8717-1ee33270a4c3-catalog-content\") pod \"redhat-marketplace-vgsjn\" (UID: \"950ff69a-439e-4144-8717-1ee33270a4c3\") " pod="openshift-marketplace/redhat-marketplace-vgsjn"
Jan 26 17:33:31 crc kubenswrapper[4896]: I0126 17:33:31.040372 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/950ff69a-439e-4144-8717-1ee33270a4c3-utilities\") pod \"redhat-marketplace-vgsjn\" (UID: \"950ff69a-439e-4144-8717-1ee33270a4c3\") " pod="openshift-marketplace/redhat-marketplace-vgsjn"
Jan 26 17:33:31 crc kubenswrapper[4896]: I0126 17:33:31.073998 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp8ll\" (UniqueName: \"kubernetes.io/projected/950ff69a-439e-4144-8717-1ee33270a4c3-kube-api-access-sp8ll\") pod \"redhat-marketplace-vgsjn\" (UID: \"950ff69a-439e-4144-8717-1ee33270a4c3\") " pod="openshift-marketplace/redhat-marketplace-vgsjn"
Jan 26 17:33:31 crc kubenswrapper[4896]: I0126 17:33:31.193417 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vgsjn"
Jan 26 17:33:31 crc kubenswrapper[4896]: I0126 17:33:31.960074 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vgsjn"]
Jan 26 17:33:31 crc kubenswrapper[4896]: I0126 17:33:31.991736 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vgsjn" event={"ID":"950ff69a-439e-4144-8717-1ee33270a4c3","Type":"ContainerStarted","Data":"b7bcf19e62aa7ac9d4a06c187c844a74db5e0a71bb153a636fd201a00805bc77"}
Jan 26 17:33:33 crc kubenswrapper[4896]: I0126 17:33:33.018190 4896 generic.go:334] "Generic (PLEG): container finished" podID="950ff69a-439e-4144-8717-1ee33270a4c3" containerID="edec0be96bd80e4ce497a58f05a4acbbe2bcce81f1b48f65ca6364a94d75e037" exitCode=0
Jan 26 17:33:33 crc kubenswrapper[4896]: I0126 17:33:33.018556 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vgsjn" event={"ID":"950ff69a-439e-4144-8717-1ee33270a4c3","Type":"ContainerDied","Data":"edec0be96bd80e4ce497a58f05a4acbbe2bcce81f1b48f65ca6364a94d75e037"}
Jan 26 17:33:33 crc kubenswrapper[4896]: I0126 17:33:33.023129 4896 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 17:33:35 crc kubenswrapper[4896]: I0126 17:33:35.050902 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vgsjn" event={"ID":"950ff69a-439e-4144-8717-1ee33270a4c3","Type":"ContainerStarted","Data":"f2ac48ae6bea74456267cdd787d4c390e28d74cf712af74015dea4592fe22853"}
Jan 26 17:33:36 crc kubenswrapper[4896]: I0126 17:33:36.065554 4896 generic.go:334] "Generic (PLEG): container finished" podID="950ff69a-439e-4144-8717-1ee33270a4c3" containerID="f2ac48ae6bea74456267cdd787d4c390e28d74cf712af74015dea4592fe22853" exitCode=0
Jan 26 17:33:36 crc kubenswrapper[4896]: I0126 17:33:36.065622 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vgsjn" event={"ID":"950ff69a-439e-4144-8717-1ee33270a4c3","Type":"ContainerDied","Data":"f2ac48ae6bea74456267cdd787d4c390e28d74cf712af74015dea4592fe22853"}
Jan 26 17:33:37 crc kubenswrapper[4896]: I0126 17:33:37.080405 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vgsjn" event={"ID":"950ff69a-439e-4144-8717-1ee33270a4c3","Type":"ContainerStarted","Data":"bd056c7601e9578d1471217dd254cda1080dc9a54570255fe400de0a817a84c4"}
Jan 26 17:33:37 crc kubenswrapper[4896]: I0126 17:33:37.114378 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vgsjn" podStartSLOduration=3.695825712 podStartE2EDuration="7.114149226s" podCreationTimestamp="2026-01-26 17:33:30 +0000 UTC" firstStartedPulling="2026-01-26 17:33:33.021811416 +0000 UTC m=+7170.803691819" lastFinishedPulling="2026-01-26 17:33:36.44013495 +0000 UTC m=+7174.222015333" observedRunningTime="2026-01-26 17:33:37.102660285 +0000 UTC m=+7174.884540678" watchObservedRunningTime="2026-01-26 17:33:37.114149226 +0000 UTC m=+7174.896029619"
Jan 26 17:33:41 crc kubenswrapper[4896]: I0126 17:33:41.194332 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vgsjn"
Jan 26 17:33:41 crc kubenswrapper[4896]: I0126 17:33:41.194820 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vgsjn"
Jan 26 17:33:41 crc kubenswrapper[4896]: I0126 17:33:41.246569 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vgsjn"
Jan 26 17:33:42 crc kubenswrapper[4896]: I0126 17:33:42.246328 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vgsjn"
Jan 26 17:33:42 crc kubenswrapper[4896]: I0126 17:33:42.309557 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vgsjn"]
Jan 26 17:33:44 crc kubenswrapper[4896]: I0126 17:33:44.184884 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vgsjn" podUID="950ff69a-439e-4144-8717-1ee33270a4c3" containerName="registry-server" containerID="cri-o://bd056c7601e9578d1471217dd254cda1080dc9a54570255fe400de0a817a84c4" gracePeriod=2
Jan 26 17:33:44 crc kubenswrapper[4896]: I0126 17:33:44.759864 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vgsjn"
Jan 26 17:33:44 crc kubenswrapper[4896]: I0126 17:33:44.863792 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/950ff69a-439e-4144-8717-1ee33270a4c3-utilities\") pod \"950ff69a-439e-4144-8717-1ee33270a4c3\" (UID: \"950ff69a-439e-4144-8717-1ee33270a4c3\") "
Jan 26 17:33:44 crc kubenswrapper[4896]: I0126 17:33:44.865222 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/950ff69a-439e-4144-8717-1ee33270a4c3-utilities" (OuterVolumeSpecName: "utilities") pod "950ff69a-439e-4144-8717-1ee33270a4c3" (UID: "950ff69a-439e-4144-8717-1ee33270a4c3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 17:33:44 crc kubenswrapper[4896]: I0126 17:33:44.866958 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/950ff69a-439e-4144-8717-1ee33270a4c3-catalog-content\") pod \"950ff69a-439e-4144-8717-1ee33270a4c3\" (UID: \"950ff69a-439e-4144-8717-1ee33270a4c3\") "
Jan 26 17:33:44 crc kubenswrapper[4896]: I0126 17:33:44.867103 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sp8ll\" (UniqueName: \"kubernetes.io/projected/950ff69a-439e-4144-8717-1ee33270a4c3-kube-api-access-sp8ll\") pod \"950ff69a-439e-4144-8717-1ee33270a4c3\" (UID: \"950ff69a-439e-4144-8717-1ee33270a4c3\") "
Jan 26 17:33:44 crc kubenswrapper[4896]: I0126 17:33:44.868602 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/950ff69a-439e-4144-8717-1ee33270a4c3-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 17:33:44 crc kubenswrapper[4896]: I0126 17:33:44.882534 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/950ff69a-439e-4144-8717-1ee33270a4c3-kube-api-access-sp8ll" (OuterVolumeSpecName: "kube-api-access-sp8ll") pod "950ff69a-439e-4144-8717-1ee33270a4c3" (UID: "950ff69a-439e-4144-8717-1ee33270a4c3"). InnerVolumeSpecName "kube-api-access-sp8ll". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 17:33:44 crc kubenswrapper[4896]: I0126 17:33:44.893948 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/950ff69a-439e-4144-8717-1ee33270a4c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "950ff69a-439e-4144-8717-1ee33270a4c3" (UID: "950ff69a-439e-4144-8717-1ee33270a4c3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 17:33:44 crc kubenswrapper[4896]: I0126 17:33:44.970682 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/950ff69a-439e-4144-8717-1ee33270a4c3-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 17:33:44 crc kubenswrapper[4896]: I0126 17:33:44.970726 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sp8ll\" (UniqueName: \"kubernetes.io/projected/950ff69a-439e-4144-8717-1ee33270a4c3-kube-api-access-sp8ll\") on node \"crc\" DevicePath \"\""
Jan 26 17:33:45 crc kubenswrapper[4896]: I0126 17:33:45.197767 4896 generic.go:334] "Generic (PLEG): container finished" podID="950ff69a-439e-4144-8717-1ee33270a4c3" containerID="bd056c7601e9578d1471217dd254cda1080dc9a54570255fe400de0a817a84c4" exitCode=0
Jan 26 17:33:45 crc kubenswrapper[4896]: I0126 17:33:45.197824 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vgsjn" event={"ID":"950ff69a-439e-4144-8717-1ee33270a4c3","Type":"ContainerDied","Data":"bd056c7601e9578d1471217dd254cda1080dc9a54570255fe400de0a817a84c4"}
Jan 26 17:33:45 crc kubenswrapper[4896]: I0126 17:33:45.197840 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vgsjn"
Jan 26 17:33:45 crc kubenswrapper[4896]: I0126 17:33:45.197857 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vgsjn" event={"ID":"950ff69a-439e-4144-8717-1ee33270a4c3","Type":"ContainerDied","Data":"b7bcf19e62aa7ac9d4a06c187c844a74db5e0a71bb153a636fd201a00805bc77"}
Jan 26 17:33:45 crc kubenswrapper[4896]: I0126 17:33:45.197894 4896 scope.go:117] "RemoveContainer" containerID="bd056c7601e9578d1471217dd254cda1080dc9a54570255fe400de0a817a84c4"
Jan 26 17:33:45 crc kubenswrapper[4896]: I0126 17:33:45.233572 4896 scope.go:117] "RemoveContainer" containerID="f2ac48ae6bea74456267cdd787d4c390e28d74cf712af74015dea4592fe22853"
Jan 26 17:33:45 crc kubenswrapper[4896]: I0126 17:33:45.246434 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vgsjn"]
Jan 26 17:33:45 crc kubenswrapper[4896]: I0126 17:33:45.260695 4896 scope.go:117] "RemoveContainer" containerID="edec0be96bd80e4ce497a58f05a4acbbe2bcce81f1b48f65ca6364a94d75e037"
Jan 26 17:33:45 crc kubenswrapper[4896]: I0126 17:33:45.261815 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vgsjn"]
Jan 26 17:33:45 crc kubenswrapper[4896]: I0126 17:33:45.337220 4896 scope.go:117] "RemoveContainer" containerID="bd056c7601e9578d1471217dd254cda1080dc9a54570255fe400de0a817a84c4"
Jan 26 17:33:45 crc kubenswrapper[4896]: E0126 17:33:45.338249 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd056c7601e9578d1471217dd254cda1080dc9a54570255fe400de0a817a84c4\": container with ID starting with bd056c7601e9578d1471217dd254cda1080dc9a54570255fe400de0a817a84c4 not found: ID does not exist" containerID="bd056c7601e9578d1471217dd254cda1080dc9a54570255fe400de0a817a84c4"
Jan 26 17:33:45 crc kubenswrapper[4896]: I0126 17:33:45.338418 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd056c7601e9578d1471217dd254cda1080dc9a54570255fe400de0a817a84c4"} err="failed to get container status \"bd056c7601e9578d1471217dd254cda1080dc9a54570255fe400de0a817a84c4\": rpc error: code = NotFound desc = could not find container \"bd056c7601e9578d1471217dd254cda1080dc9a54570255fe400de0a817a84c4\": container with ID starting with bd056c7601e9578d1471217dd254cda1080dc9a54570255fe400de0a817a84c4 not found: ID does not exist"
Jan 26 17:33:45 crc kubenswrapper[4896]: I0126 17:33:45.338453 4896 scope.go:117] "RemoveContainer" containerID="f2ac48ae6bea74456267cdd787d4c390e28d74cf712af74015dea4592fe22853"
Jan 26 17:33:45 crc kubenswrapper[4896]: E0126 17:33:45.338796 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2ac48ae6bea74456267cdd787d4c390e28d74cf712af74015dea4592fe22853\": container with ID starting with f2ac48ae6bea74456267cdd787d4c390e28d74cf712af74015dea4592fe22853 not found: ID does not exist" containerID="f2ac48ae6bea74456267cdd787d4c390e28d74cf712af74015dea4592fe22853"
Jan 26 17:33:45 crc kubenswrapper[4896]: I0126 17:33:45.338836 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2ac48ae6bea74456267cdd787d4c390e28d74cf712af74015dea4592fe22853"} err="failed to get container status \"f2ac48ae6bea74456267cdd787d4c390e28d74cf712af74015dea4592fe22853\": rpc error: code = NotFound desc = could not find container \"f2ac48ae6bea74456267cdd787d4c390e28d74cf712af74015dea4592fe22853\": container with ID starting with f2ac48ae6bea74456267cdd787d4c390e28d74cf712af74015dea4592fe22853 not found: ID does not exist"
Jan 26 17:33:45 crc kubenswrapper[4896]: I0126 17:33:45.338857 4896 scope.go:117] "RemoveContainer" containerID="edec0be96bd80e4ce497a58f05a4acbbe2bcce81f1b48f65ca6364a94d75e037"
Jan 26 17:33:45 crc kubenswrapper[4896]: E0126 17:33:45.339126 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edec0be96bd80e4ce497a58f05a4acbbe2bcce81f1b48f65ca6364a94d75e037\": container with ID starting with edec0be96bd80e4ce497a58f05a4acbbe2bcce81f1b48f65ca6364a94d75e037 not found: ID does not exist" containerID="edec0be96bd80e4ce497a58f05a4acbbe2bcce81f1b48f65ca6364a94d75e037"
Jan 26 17:33:45 crc kubenswrapper[4896]: I0126 17:33:45.339156 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edec0be96bd80e4ce497a58f05a4acbbe2bcce81f1b48f65ca6364a94d75e037"} err="failed to get container status \"edec0be96bd80e4ce497a58f05a4acbbe2bcce81f1b48f65ca6364a94d75e037\": rpc error: code = NotFound desc = could not find container \"edec0be96bd80e4ce497a58f05a4acbbe2bcce81f1b48f65ca6364a94d75e037\": container with ID starting with edec0be96bd80e4ce497a58f05a4acbbe2bcce81f1b48f65ca6364a94d75e037 not found: ID does not exist"
Jan 26 17:33:46 crc kubenswrapper[4896]: I0126 17:33:46.777562 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="950ff69a-439e-4144-8717-1ee33270a4c3" path="/var/lib/kubelet/pods/950ff69a-439e-4144-8717-1ee33270a4c3/volumes"
Jan 26 17:34:48 crc kubenswrapper[4896]: I0126 17:34:48.813975 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 17:34:48 crc kubenswrapper[4896]: I0126 17:34:48.814752 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection
refused" Jan 26 17:35:18 crc kubenswrapper[4896]: I0126 17:35:18.813994 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:35:18 crc kubenswrapper[4896]: I0126 17:35:18.814986 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:35:48 crc kubenswrapper[4896]: I0126 17:35:48.814257 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:35:48 crc kubenswrapper[4896]: I0126 17:35:48.814845 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:35:48 crc kubenswrapper[4896]: I0126 17:35:48.814890 4896 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" Jan 26 17:35:48 crc kubenswrapper[4896]: I0126 17:35:48.816122 4896 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a93d8c42b06112bbc3b78792eeb9a9c95dac4e5ffd343806b2e624b121a3dc9f"} 
pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 17:35:48 crc kubenswrapper[4896]: I0126 17:35:48.816190 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" containerID="cri-o://a93d8c42b06112bbc3b78792eeb9a9c95dac4e5ffd343806b2e624b121a3dc9f" gracePeriod=600 Jan 26 17:35:49 crc kubenswrapper[4896]: I0126 17:35:49.703523 4896 generic.go:334] "Generic (PLEG): container finished" podID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerID="a93d8c42b06112bbc3b78792eeb9a9c95dac4e5ffd343806b2e624b121a3dc9f" exitCode=0 Jan 26 17:35:49 crc kubenswrapper[4896]: I0126 17:35:49.704260 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerDied","Data":"a93d8c42b06112bbc3b78792eeb9a9c95dac4e5ffd343806b2e624b121a3dc9f"} Jan 26 17:35:49 crc kubenswrapper[4896]: I0126 17:35:49.704305 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerStarted","Data":"2b2d87da80f85568d27958abdc15695c3a62bbace342b5d1dfaf284f7e6a5bca"} Jan 26 17:35:49 crc kubenswrapper[4896]: I0126 17:35:49.704328 4896 scope.go:117] "RemoveContainer" containerID="94dd37e0db0e325f10b0641524cfd61f5025a0c0cafea55935f4eb3516c93bee" Jan 26 17:35:50 crc kubenswrapper[4896]: I0126 17:35:50.069934 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kkm6l"] Jan 26 17:35:50 crc kubenswrapper[4896]: E0126 17:35:50.072314 4896 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="950ff69a-439e-4144-8717-1ee33270a4c3" containerName="extract-utilities" Jan 26 17:35:50 crc kubenswrapper[4896]: I0126 17:35:50.072385 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="950ff69a-439e-4144-8717-1ee33270a4c3" containerName="extract-utilities" Jan 26 17:35:50 crc kubenswrapper[4896]: E0126 17:35:50.072457 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="950ff69a-439e-4144-8717-1ee33270a4c3" containerName="extract-content" Jan 26 17:35:50 crc kubenswrapper[4896]: I0126 17:35:50.072465 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="950ff69a-439e-4144-8717-1ee33270a4c3" containerName="extract-content" Jan 26 17:35:50 crc kubenswrapper[4896]: E0126 17:35:50.072501 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="950ff69a-439e-4144-8717-1ee33270a4c3" containerName="registry-server" Jan 26 17:35:50 crc kubenswrapper[4896]: I0126 17:35:50.072507 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="950ff69a-439e-4144-8717-1ee33270a4c3" containerName="registry-server" Jan 26 17:35:50 crc kubenswrapper[4896]: I0126 17:35:50.072911 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="950ff69a-439e-4144-8717-1ee33270a4c3" containerName="registry-server" Jan 26 17:35:50 crc kubenswrapper[4896]: I0126 17:35:50.081458 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kkm6l" Jan 26 17:35:50 crc kubenswrapper[4896]: I0126 17:35:50.087902 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kkm6l"] Jan 26 17:35:50 crc kubenswrapper[4896]: I0126 17:35:50.168260 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zwdw6"] Jan 26 17:35:50 crc kubenswrapper[4896]: I0126 17:35:50.171976 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zwdw6" Jan 26 17:35:50 crc kubenswrapper[4896]: I0126 17:35:50.183271 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zwdw6"] Jan 26 17:35:50 crc kubenswrapper[4896]: I0126 17:35:50.228343 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d975cf1-37a2-4979-8c1e-af0140eb92f2-catalog-content\") pod \"redhat-operators-kkm6l\" (UID: \"2d975cf1-37a2-4979-8c1e-af0140eb92f2\") " pod="openshift-marketplace/redhat-operators-kkm6l" Jan 26 17:35:50 crc kubenswrapper[4896]: I0126 17:35:50.228801 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d975cf1-37a2-4979-8c1e-af0140eb92f2-utilities\") pod \"redhat-operators-kkm6l\" (UID: \"2d975cf1-37a2-4979-8c1e-af0140eb92f2\") " pod="openshift-marketplace/redhat-operators-kkm6l" Jan 26 17:35:50 crc kubenswrapper[4896]: I0126 17:35:50.229117 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shfss\" (UniqueName: \"kubernetes.io/projected/2d975cf1-37a2-4979-8c1e-af0140eb92f2-kube-api-access-shfss\") pod \"redhat-operators-kkm6l\" (UID: \"2d975cf1-37a2-4979-8c1e-af0140eb92f2\") " pod="openshift-marketplace/redhat-operators-kkm6l" Jan 26 17:35:50 crc kubenswrapper[4896]: I0126 17:35:50.331687 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/033f09ef-c4da-4cff-9ea5-df3664ace7ef-utilities\") pod \"community-operators-zwdw6\" (UID: \"033f09ef-c4da-4cff-9ea5-df3664ace7ef\") " pod="openshift-marketplace/community-operators-zwdw6" Jan 26 17:35:50 crc kubenswrapper[4896]: I0126 17:35:50.331844 4896 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d975cf1-37a2-4979-8c1e-af0140eb92f2-catalog-content\") pod \"redhat-operators-kkm6l\" (UID: \"2d975cf1-37a2-4979-8c1e-af0140eb92f2\") " pod="openshift-marketplace/redhat-operators-kkm6l" Jan 26 17:35:50 crc kubenswrapper[4896]: I0126 17:35:50.331897 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/033f09ef-c4da-4cff-9ea5-df3664ace7ef-catalog-content\") pod \"community-operators-zwdw6\" (UID: \"033f09ef-c4da-4cff-9ea5-df3664ace7ef\") " pod="openshift-marketplace/community-operators-zwdw6" Jan 26 17:35:50 crc kubenswrapper[4896]: I0126 17:35:50.331921 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d975cf1-37a2-4979-8c1e-af0140eb92f2-utilities\") pod \"redhat-operators-kkm6l\" (UID: \"2d975cf1-37a2-4979-8c1e-af0140eb92f2\") " pod="openshift-marketplace/redhat-operators-kkm6l" Jan 26 17:35:50 crc kubenswrapper[4896]: I0126 17:35:50.331946 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh859\" (UniqueName: \"kubernetes.io/projected/033f09ef-c4da-4cff-9ea5-df3664ace7ef-kube-api-access-hh859\") pod \"community-operators-zwdw6\" (UID: \"033f09ef-c4da-4cff-9ea5-df3664ace7ef\") " pod="openshift-marketplace/community-operators-zwdw6" Jan 26 17:35:50 crc kubenswrapper[4896]: I0126 17:35:50.332042 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shfss\" (UniqueName: \"kubernetes.io/projected/2d975cf1-37a2-4979-8c1e-af0140eb92f2-kube-api-access-shfss\") pod \"redhat-operators-kkm6l\" (UID: \"2d975cf1-37a2-4979-8c1e-af0140eb92f2\") " pod="openshift-marketplace/redhat-operators-kkm6l" Jan 26 17:35:50 crc kubenswrapper[4896]: I0126 17:35:50.332943 4896 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d975cf1-37a2-4979-8c1e-af0140eb92f2-catalog-content\") pod \"redhat-operators-kkm6l\" (UID: \"2d975cf1-37a2-4979-8c1e-af0140eb92f2\") " pod="openshift-marketplace/redhat-operators-kkm6l" Jan 26 17:35:50 crc kubenswrapper[4896]: I0126 17:35:50.333233 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d975cf1-37a2-4979-8c1e-af0140eb92f2-utilities\") pod \"redhat-operators-kkm6l\" (UID: \"2d975cf1-37a2-4979-8c1e-af0140eb92f2\") " pod="openshift-marketplace/redhat-operators-kkm6l" Jan 26 17:35:50 crc kubenswrapper[4896]: I0126 17:35:50.361888 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shfss\" (UniqueName: \"kubernetes.io/projected/2d975cf1-37a2-4979-8c1e-af0140eb92f2-kube-api-access-shfss\") pod \"redhat-operators-kkm6l\" (UID: \"2d975cf1-37a2-4979-8c1e-af0140eb92f2\") " pod="openshift-marketplace/redhat-operators-kkm6l" Jan 26 17:35:50 crc kubenswrapper[4896]: I0126 17:35:50.419486 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kkm6l" Jan 26 17:35:50 crc kubenswrapper[4896]: I0126 17:35:50.434893 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh859\" (UniqueName: \"kubernetes.io/projected/033f09ef-c4da-4cff-9ea5-df3664ace7ef-kube-api-access-hh859\") pod \"community-operators-zwdw6\" (UID: \"033f09ef-c4da-4cff-9ea5-df3664ace7ef\") " pod="openshift-marketplace/community-operators-zwdw6" Jan 26 17:35:50 crc kubenswrapper[4896]: I0126 17:35:50.435110 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/033f09ef-c4da-4cff-9ea5-df3664ace7ef-utilities\") pod \"community-operators-zwdw6\" (UID: \"033f09ef-c4da-4cff-9ea5-df3664ace7ef\") " pod="openshift-marketplace/community-operators-zwdw6" Jan 26 17:35:50 crc kubenswrapper[4896]: I0126 17:35:50.435229 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/033f09ef-c4da-4cff-9ea5-df3664ace7ef-catalog-content\") pod \"community-operators-zwdw6\" (UID: \"033f09ef-c4da-4cff-9ea5-df3664ace7ef\") " pod="openshift-marketplace/community-operators-zwdw6" Jan 26 17:35:50 crc kubenswrapper[4896]: I0126 17:35:50.435856 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/033f09ef-c4da-4cff-9ea5-df3664ace7ef-catalog-content\") pod \"community-operators-zwdw6\" (UID: \"033f09ef-c4da-4cff-9ea5-df3664ace7ef\") " pod="openshift-marketplace/community-operators-zwdw6" Jan 26 17:35:50 crc kubenswrapper[4896]: I0126 17:35:50.443443 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/033f09ef-c4da-4cff-9ea5-df3664ace7ef-utilities\") pod \"community-operators-zwdw6\" (UID: \"033f09ef-c4da-4cff-9ea5-df3664ace7ef\") " 
pod="openshift-marketplace/community-operators-zwdw6" Jan 26 17:35:50 crc kubenswrapper[4896]: I0126 17:35:50.480241 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh859\" (UniqueName: \"kubernetes.io/projected/033f09ef-c4da-4cff-9ea5-df3664ace7ef-kube-api-access-hh859\") pod \"community-operators-zwdw6\" (UID: \"033f09ef-c4da-4cff-9ea5-df3664ace7ef\") " pod="openshift-marketplace/community-operators-zwdw6" Jan 26 17:35:50 crc kubenswrapper[4896]: I0126 17:35:50.504727 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zwdw6" Jan 26 17:35:51 crc kubenswrapper[4896]: I0126 17:35:51.108616 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kkm6l"] Jan 26 17:35:51 crc kubenswrapper[4896]: I0126 17:35:51.435095 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zwdw6"] Jan 26 17:35:51 crc kubenswrapper[4896]: W0126 17:35:51.501308 4896 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod033f09ef_c4da_4cff_9ea5_df3664ace7ef.slice/crio-5a228ec7fc3b6437160dcb203f8c3c5eccc82c2556cdb82f9e6c6fb4250ec216 WatchSource:0}: Error finding container 5a228ec7fc3b6437160dcb203f8c3c5eccc82c2556cdb82f9e6c6fb4250ec216: Status 404 returned error can't find the container with id 5a228ec7fc3b6437160dcb203f8c3c5eccc82c2556cdb82f9e6c6fb4250ec216 Jan 26 17:35:51 crc kubenswrapper[4896]: I0126 17:35:51.830690 4896 generic.go:334] "Generic (PLEG): container finished" podID="2d975cf1-37a2-4979-8c1e-af0140eb92f2" containerID="0012fe412afd46f8b0d20c9595e469ee33389494c2b4065576ebd39335a692ad" exitCode=0 Jan 26 17:35:51 crc kubenswrapper[4896]: I0126 17:35:51.830785 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kkm6l" 
event={"ID":"2d975cf1-37a2-4979-8c1e-af0140eb92f2","Type":"ContainerDied","Data":"0012fe412afd46f8b0d20c9595e469ee33389494c2b4065576ebd39335a692ad"} Jan 26 17:35:51 crc kubenswrapper[4896]: I0126 17:35:51.830987 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kkm6l" event={"ID":"2d975cf1-37a2-4979-8c1e-af0140eb92f2","Type":"ContainerStarted","Data":"b0cbb090f7e558e9a31bb8a996dc25acd73e3f93a3eaf033bed801ad2522a21a"} Jan 26 17:35:51 crc kubenswrapper[4896]: I0126 17:35:51.834867 4896 generic.go:334] "Generic (PLEG): container finished" podID="033f09ef-c4da-4cff-9ea5-df3664ace7ef" containerID="5a4255528495954630a2b8133e6393638fbd177ea3a50653dc6de2f3143862ec" exitCode=0 Jan 26 17:35:51 crc kubenswrapper[4896]: I0126 17:35:51.834922 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zwdw6" event={"ID":"033f09ef-c4da-4cff-9ea5-df3664ace7ef","Type":"ContainerDied","Data":"5a4255528495954630a2b8133e6393638fbd177ea3a50653dc6de2f3143862ec"} Jan 26 17:35:51 crc kubenswrapper[4896]: I0126 17:35:51.834953 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zwdw6" event={"ID":"033f09ef-c4da-4cff-9ea5-df3664ace7ef","Type":"ContainerStarted","Data":"5a228ec7fc3b6437160dcb203f8c3c5eccc82c2556cdb82f9e6c6fb4250ec216"} Jan 26 17:35:52 crc kubenswrapper[4896]: I0126 17:35:52.855489 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kkm6l" event={"ID":"2d975cf1-37a2-4979-8c1e-af0140eb92f2","Type":"ContainerStarted","Data":"8ba07c0006808a48bd3ab04024de0f64c51639d2074c8b82cd0a5beeaa18be7a"} Jan 26 17:35:52 crc kubenswrapper[4896]: I0126 17:35:52.859711 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zwdw6" 
event={"ID":"033f09ef-c4da-4cff-9ea5-df3664ace7ef","Type":"ContainerStarted","Data":"ae0ce16d145d23c1ff18358f34c04610f0aca07d56e1375e18212b85ab2cf375"} Jan 26 17:35:55 crc kubenswrapper[4896]: I0126 17:35:55.899108 4896 generic.go:334] "Generic (PLEG): container finished" podID="033f09ef-c4da-4cff-9ea5-df3664ace7ef" containerID="ae0ce16d145d23c1ff18358f34c04610f0aca07d56e1375e18212b85ab2cf375" exitCode=0 Jan 26 17:35:55 crc kubenswrapper[4896]: I0126 17:35:55.899230 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zwdw6" event={"ID":"033f09ef-c4da-4cff-9ea5-df3664ace7ef","Type":"ContainerDied","Data":"ae0ce16d145d23c1ff18358f34c04610f0aca07d56e1375e18212b85ab2cf375"} Jan 26 17:35:59 crc kubenswrapper[4896]: I0126 17:35:59.008866 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zwdw6" event={"ID":"033f09ef-c4da-4cff-9ea5-df3664ace7ef","Type":"ContainerStarted","Data":"049ffc897ccc53904d682979a71fa927081a2c6ad81daf873ea4cfe2d70941fd"} Jan 26 17:36:00 crc kubenswrapper[4896]: I0126 17:36:00.021142 4896 generic.go:334] "Generic (PLEG): container finished" podID="2d975cf1-37a2-4979-8c1e-af0140eb92f2" containerID="8ba07c0006808a48bd3ab04024de0f64c51639d2074c8b82cd0a5beeaa18be7a" exitCode=0 Jan 26 17:36:00 crc kubenswrapper[4896]: I0126 17:36:00.021208 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kkm6l" event={"ID":"2d975cf1-37a2-4979-8c1e-af0140eb92f2","Type":"ContainerDied","Data":"8ba07c0006808a48bd3ab04024de0f64c51639d2074c8b82cd0a5beeaa18be7a"} Jan 26 17:36:00 crc kubenswrapper[4896]: I0126 17:36:00.081857 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zwdw6" podStartSLOduration=3.381510506 podStartE2EDuration="10.081835833s" podCreationTimestamp="2026-01-26 17:35:50 +0000 UTC" firstStartedPulling="2026-01-26 17:35:51.837778793 +0000 UTC 
m=+7309.619659186" lastFinishedPulling="2026-01-26 17:35:58.5381041 +0000 UTC m=+7316.319984513" observedRunningTime="2026-01-26 17:36:00.065654238 +0000 UTC m=+7317.847534641" watchObservedRunningTime="2026-01-26 17:36:00.081835833 +0000 UTC m=+7317.863716226" Jan 26 17:36:00 crc kubenswrapper[4896]: I0126 17:36:00.505215 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zwdw6" Jan 26 17:36:00 crc kubenswrapper[4896]: I0126 17:36:00.505259 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zwdw6" Jan 26 17:36:01 crc kubenswrapper[4896]: I0126 17:36:01.035372 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kkm6l" event={"ID":"2d975cf1-37a2-4979-8c1e-af0140eb92f2","Type":"ContainerStarted","Data":"c63a4945c4bfcb2cf80fecf75d2111c5423190eab28fee9b7a684ea781825731"} Jan 26 17:36:01 crc kubenswrapper[4896]: I0126 17:36:01.064192 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kkm6l" podStartSLOduration=3.451735461 podStartE2EDuration="12.064159825s" podCreationTimestamp="2026-01-26 17:35:49 +0000 UTC" firstStartedPulling="2026-01-26 17:35:51.832764091 +0000 UTC m=+7309.614644484" lastFinishedPulling="2026-01-26 17:36:00.445188455 +0000 UTC m=+7318.227068848" observedRunningTime="2026-01-26 17:36:01.057675446 +0000 UTC m=+7318.839555869" watchObservedRunningTime="2026-01-26 17:36:01.064159825 +0000 UTC m=+7318.846040238" Jan 26 17:36:01 crc kubenswrapper[4896]: I0126 17:36:01.559947 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-zwdw6" podUID="033f09ef-c4da-4cff-9ea5-df3664ace7ef" containerName="registry-server" probeResult="failure" output=< Jan 26 17:36:01 crc kubenswrapper[4896]: timeout: failed to connect service ":50051" within 1s Jan 26 17:36:01 crc 
kubenswrapper[4896]: > Jan 26 17:36:10 crc kubenswrapper[4896]: I0126 17:36:10.420567 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kkm6l" Jan 26 17:36:10 crc kubenswrapper[4896]: I0126 17:36:10.421570 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kkm6l" Jan 26 17:36:10 crc kubenswrapper[4896]: I0126 17:36:10.556950 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zwdw6" Jan 26 17:36:10 crc kubenswrapper[4896]: I0126 17:36:10.613038 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zwdw6" Jan 26 17:36:10 crc kubenswrapper[4896]: I0126 17:36:10.809321 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zwdw6"] Jan 26 17:36:11 crc kubenswrapper[4896]: I0126 17:36:11.480169 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kkm6l" podUID="2d975cf1-37a2-4979-8c1e-af0140eb92f2" containerName="registry-server" probeResult="failure" output=< Jan 26 17:36:11 crc kubenswrapper[4896]: timeout: failed to connect service ":50051" within 1s Jan 26 17:36:11 crc kubenswrapper[4896]: > Jan 26 17:36:12 crc kubenswrapper[4896]: I0126 17:36:12.201755 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zwdw6" podUID="033f09ef-c4da-4cff-9ea5-df3664ace7ef" containerName="registry-server" containerID="cri-o://049ffc897ccc53904d682979a71fa927081a2c6ad81daf873ea4cfe2d70941fd" gracePeriod=2 Jan 26 17:36:12 crc kubenswrapper[4896]: I0126 17:36:12.795857 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zwdw6" Jan 26 17:36:12 crc kubenswrapper[4896]: I0126 17:36:12.929827 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/033f09ef-c4da-4cff-9ea5-df3664ace7ef-catalog-content\") pod \"033f09ef-c4da-4cff-9ea5-df3664ace7ef\" (UID: \"033f09ef-c4da-4cff-9ea5-df3664ace7ef\") " Jan 26 17:36:12 crc kubenswrapper[4896]: I0126 17:36:12.930114 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hh859\" (UniqueName: \"kubernetes.io/projected/033f09ef-c4da-4cff-9ea5-df3664ace7ef-kube-api-access-hh859\") pod \"033f09ef-c4da-4cff-9ea5-df3664ace7ef\" (UID: \"033f09ef-c4da-4cff-9ea5-df3664ace7ef\") " Jan 26 17:36:12 crc kubenswrapper[4896]: I0126 17:36:12.930248 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/033f09ef-c4da-4cff-9ea5-df3664ace7ef-utilities\") pod \"033f09ef-c4da-4cff-9ea5-df3664ace7ef\" (UID: \"033f09ef-c4da-4cff-9ea5-df3664ace7ef\") " Jan 26 17:36:12 crc kubenswrapper[4896]: I0126 17:36:12.931040 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/033f09ef-c4da-4cff-9ea5-df3664ace7ef-utilities" (OuterVolumeSpecName: "utilities") pod "033f09ef-c4da-4cff-9ea5-df3664ace7ef" (UID: "033f09ef-c4da-4cff-9ea5-df3664ace7ef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:36:12 crc kubenswrapper[4896]: I0126 17:36:12.938520 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/033f09ef-c4da-4cff-9ea5-df3664ace7ef-kube-api-access-hh859" (OuterVolumeSpecName: "kube-api-access-hh859") pod "033f09ef-c4da-4cff-9ea5-df3664ace7ef" (UID: "033f09ef-c4da-4cff-9ea5-df3664ace7ef"). InnerVolumeSpecName "kube-api-access-hh859". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:36:12 crc kubenswrapper[4896]: I0126 17:36:12.998325 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/033f09ef-c4da-4cff-9ea5-df3664ace7ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "033f09ef-c4da-4cff-9ea5-df3664ace7ef" (UID: "033f09ef-c4da-4cff-9ea5-df3664ace7ef"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:36:13 crc kubenswrapper[4896]: I0126 17:36:13.034597 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/033f09ef-c4da-4cff-9ea5-df3664ace7ef-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:36:13 crc kubenswrapper[4896]: I0126 17:36:13.034663 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hh859\" (UniqueName: \"kubernetes.io/projected/033f09ef-c4da-4cff-9ea5-df3664ace7ef-kube-api-access-hh859\") on node \"crc\" DevicePath \"\"" Jan 26 17:36:13 crc kubenswrapper[4896]: I0126 17:36:13.034676 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/033f09ef-c4da-4cff-9ea5-df3664ace7ef-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:36:13 crc kubenswrapper[4896]: I0126 17:36:13.216620 4896 generic.go:334] "Generic (PLEG): container finished" podID="033f09ef-c4da-4cff-9ea5-df3664ace7ef" containerID="049ffc897ccc53904d682979a71fa927081a2c6ad81daf873ea4cfe2d70941fd" exitCode=0 Jan 26 17:36:13 crc kubenswrapper[4896]: I0126 17:36:13.216668 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zwdw6" event={"ID":"033f09ef-c4da-4cff-9ea5-df3664ace7ef","Type":"ContainerDied","Data":"049ffc897ccc53904d682979a71fa927081a2c6ad81daf873ea4cfe2d70941fd"} Jan 26 17:36:13 crc kubenswrapper[4896]: I0126 17:36:13.216705 4896 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-zwdw6" event={"ID":"033f09ef-c4da-4cff-9ea5-df3664ace7ef","Type":"ContainerDied","Data":"5a228ec7fc3b6437160dcb203f8c3c5eccc82c2556cdb82f9e6c6fb4250ec216"} Jan 26 17:36:13 crc kubenswrapper[4896]: I0126 17:36:13.216731 4896 scope.go:117] "RemoveContainer" containerID="049ffc897ccc53904d682979a71fa927081a2c6ad81daf873ea4cfe2d70941fd" Jan 26 17:36:13 crc kubenswrapper[4896]: I0126 17:36:13.217090 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zwdw6" Jan 26 17:36:13 crc kubenswrapper[4896]: I0126 17:36:13.254925 4896 scope.go:117] "RemoveContainer" containerID="ae0ce16d145d23c1ff18358f34c04610f0aca07d56e1375e18212b85ab2cf375" Jan 26 17:36:13 crc kubenswrapper[4896]: I0126 17:36:13.262204 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zwdw6"] Jan 26 17:36:13 crc kubenswrapper[4896]: I0126 17:36:13.275270 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zwdw6"] Jan 26 17:36:13 crc kubenswrapper[4896]: I0126 17:36:13.285056 4896 scope.go:117] "RemoveContainer" containerID="5a4255528495954630a2b8133e6393638fbd177ea3a50653dc6de2f3143862ec" Jan 26 17:36:13 crc kubenswrapper[4896]: I0126 17:36:13.340668 4896 scope.go:117] "RemoveContainer" containerID="049ffc897ccc53904d682979a71fa927081a2c6ad81daf873ea4cfe2d70941fd" Jan 26 17:36:13 crc kubenswrapper[4896]: E0126 17:36:13.341222 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"049ffc897ccc53904d682979a71fa927081a2c6ad81daf873ea4cfe2d70941fd\": container with ID starting with 049ffc897ccc53904d682979a71fa927081a2c6ad81daf873ea4cfe2d70941fd not found: ID does not exist" containerID="049ffc897ccc53904d682979a71fa927081a2c6ad81daf873ea4cfe2d70941fd" Jan 26 17:36:13 crc kubenswrapper[4896]: I0126 
17:36:13.341252 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"049ffc897ccc53904d682979a71fa927081a2c6ad81daf873ea4cfe2d70941fd"} err="failed to get container status \"049ffc897ccc53904d682979a71fa927081a2c6ad81daf873ea4cfe2d70941fd\": rpc error: code = NotFound desc = could not find container \"049ffc897ccc53904d682979a71fa927081a2c6ad81daf873ea4cfe2d70941fd\": container with ID starting with 049ffc897ccc53904d682979a71fa927081a2c6ad81daf873ea4cfe2d70941fd not found: ID does not exist" Jan 26 17:36:13 crc kubenswrapper[4896]: I0126 17:36:13.341278 4896 scope.go:117] "RemoveContainer" containerID="ae0ce16d145d23c1ff18358f34c04610f0aca07d56e1375e18212b85ab2cf375" Jan 26 17:36:13 crc kubenswrapper[4896]: E0126 17:36:13.341816 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae0ce16d145d23c1ff18358f34c04610f0aca07d56e1375e18212b85ab2cf375\": container with ID starting with ae0ce16d145d23c1ff18358f34c04610f0aca07d56e1375e18212b85ab2cf375 not found: ID does not exist" containerID="ae0ce16d145d23c1ff18358f34c04610f0aca07d56e1375e18212b85ab2cf375" Jan 26 17:36:13 crc kubenswrapper[4896]: I0126 17:36:13.341864 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae0ce16d145d23c1ff18358f34c04610f0aca07d56e1375e18212b85ab2cf375"} err="failed to get container status \"ae0ce16d145d23c1ff18358f34c04610f0aca07d56e1375e18212b85ab2cf375\": rpc error: code = NotFound desc = could not find container \"ae0ce16d145d23c1ff18358f34c04610f0aca07d56e1375e18212b85ab2cf375\": container with ID starting with ae0ce16d145d23c1ff18358f34c04610f0aca07d56e1375e18212b85ab2cf375 not found: ID does not exist" Jan 26 17:36:13 crc kubenswrapper[4896]: I0126 17:36:13.341930 4896 scope.go:117] "RemoveContainer" containerID="5a4255528495954630a2b8133e6393638fbd177ea3a50653dc6de2f3143862ec" Jan 26 17:36:13 crc 
kubenswrapper[4896]: E0126 17:36:13.342254 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a4255528495954630a2b8133e6393638fbd177ea3a50653dc6de2f3143862ec\": container with ID starting with 5a4255528495954630a2b8133e6393638fbd177ea3a50653dc6de2f3143862ec not found: ID does not exist" containerID="5a4255528495954630a2b8133e6393638fbd177ea3a50653dc6de2f3143862ec" Jan 26 17:36:13 crc kubenswrapper[4896]: I0126 17:36:13.342288 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a4255528495954630a2b8133e6393638fbd177ea3a50653dc6de2f3143862ec"} err="failed to get container status \"5a4255528495954630a2b8133e6393638fbd177ea3a50653dc6de2f3143862ec\": rpc error: code = NotFound desc = could not find container \"5a4255528495954630a2b8133e6393638fbd177ea3a50653dc6de2f3143862ec\": container with ID starting with 5a4255528495954630a2b8133e6393638fbd177ea3a50653dc6de2f3143862ec not found: ID does not exist" Jan 26 17:36:14 crc kubenswrapper[4896]: I0126 17:36:14.787397 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="033f09ef-c4da-4cff-9ea5-df3664ace7ef" path="/var/lib/kubelet/pods/033f09ef-c4da-4cff-9ea5-df3664ace7ef/volumes" Jan 26 17:36:20 crc kubenswrapper[4896]: I0126 17:36:20.501681 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kkm6l" Jan 26 17:36:20 crc kubenswrapper[4896]: I0126 17:36:20.568958 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kkm6l" Jan 26 17:36:21 crc kubenswrapper[4896]: I0126 17:36:21.376690 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kkm6l"] Jan 26 17:36:22 crc kubenswrapper[4896]: I0126 17:36:22.346159 4896 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-kkm6l" podUID="2d975cf1-37a2-4979-8c1e-af0140eb92f2" containerName="registry-server" containerID="cri-o://c63a4945c4bfcb2cf80fecf75d2111c5423190eab28fee9b7a684ea781825731" gracePeriod=2 Jan 26 17:36:22 crc kubenswrapper[4896]: I0126 17:36:22.885006 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kkm6l" Jan 26 17:36:22 crc kubenswrapper[4896]: I0126 17:36:22.935570 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d975cf1-37a2-4979-8c1e-af0140eb92f2-utilities\") pod \"2d975cf1-37a2-4979-8c1e-af0140eb92f2\" (UID: \"2d975cf1-37a2-4979-8c1e-af0140eb92f2\") " Jan 26 17:36:22 crc kubenswrapper[4896]: I0126 17:36:22.935742 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shfss\" (UniqueName: \"kubernetes.io/projected/2d975cf1-37a2-4979-8c1e-af0140eb92f2-kube-api-access-shfss\") pod \"2d975cf1-37a2-4979-8c1e-af0140eb92f2\" (UID: \"2d975cf1-37a2-4979-8c1e-af0140eb92f2\") " Jan 26 17:36:22 crc kubenswrapper[4896]: I0126 17:36:22.935818 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d975cf1-37a2-4979-8c1e-af0140eb92f2-catalog-content\") pod \"2d975cf1-37a2-4979-8c1e-af0140eb92f2\" (UID: \"2d975cf1-37a2-4979-8c1e-af0140eb92f2\") " Jan 26 17:36:22 crc kubenswrapper[4896]: I0126 17:36:22.936856 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d975cf1-37a2-4979-8c1e-af0140eb92f2-utilities" (OuterVolumeSpecName: "utilities") pod "2d975cf1-37a2-4979-8c1e-af0140eb92f2" (UID: "2d975cf1-37a2-4979-8c1e-af0140eb92f2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:36:22 crc kubenswrapper[4896]: I0126 17:36:22.941476 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d975cf1-37a2-4979-8c1e-af0140eb92f2-kube-api-access-shfss" (OuterVolumeSpecName: "kube-api-access-shfss") pod "2d975cf1-37a2-4979-8c1e-af0140eb92f2" (UID: "2d975cf1-37a2-4979-8c1e-af0140eb92f2"). InnerVolumeSpecName "kube-api-access-shfss". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:36:23 crc kubenswrapper[4896]: I0126 17:36:23.039190 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shfss\" (UniqueName: \"kubernetes.io/projected/2d975cf1-37a2-4979-8c1e-af0140eb92f2-kube-api-access-shfss\") on node \"crc\" DevicePath \"\"" Jan 26 17:36:23 crc kubenswrapper[4896]: I0126 17:36:23.039222 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d975cf1-37a2-4979-8c1e-af0140eb92f2-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:36:23 crc kubenswrapper[4896]: I0126 17:36:23.044327 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d975cf1-37a2-4979-8c1e-af0140eb92f2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d975cf1-37a2-4979-8c1e-af0140eb92f2" (UID: "2d975cf1-37a2-4979-8c1e-af0140eb92f2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:36:23 crc kubenswrapper[4896]: I0126 17:36:23.141551 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d975cf1-37a2-4979-8c1e-af0140eb92f2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:36:23 crc kubenswrapper[4896]: I0126 17:36:23.364742 4896 generic.go:334] "Generic (PLEG): container finished" podID="2d975cf1-37a2-4979-8c1e-af0140eb92f2" containerID="c63a4945c4bfcb2cf80fecf75d2111c5423190eab28fee9b7a684ea781825731" exitCode=0 Jan 26 17:36:23 crc kubenswrapper[4896]: I0126 17:36:23.364799 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kkm6l" event={"ID":"2d975cf1-37a2-4979-8c1e-af0140eb92f2","Type":"ContainerDied","Data":"c63a4945c4bfcb2cf80fecf75d2111c5423190eab28fee9b7a684ea781825731"} Jan 26 17:36:23 crc kubenswrapper[4896]: I0126 17:36:23.364832 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kkm6l" event={"ID":"2d975cf1-37a2-4979-8c1e-af0140eb92f2","Type":"ContainerDied","Data":"b0cbb090f7e558e9a31bb8a996dc25acd73e3f93a3eaf033bed801ad2522a21a"} Jan 26 17:36:23 crc kubenswrapper[4896]: I0126 17:36:23.364852 4896 scope.go:117] "RemoveContainer" containerID="c63a4945c4bfcb2cf80fecf75d2111c5423190eab28fee9b7a684ea781825731" Jan 26 17:36:23 crc kubenswrapper[4896]: I0126 17:36:23.365086 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kkm6l" Jan 26 17:36:23 crc kubenswrapper[4896]: I0126 17:36:23.396103 4896 scope.go:117] "RemoveContainer" containerID="8ba07c0006808a48bd3ab04024de0f64c51639d2074c8b82cd0a5beeaa18be7a" Jan 26 17:36:23 crc kubenswrapper[4896]: I0126 17:36:23.411300 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kkm6l"] Jan 26 17:36:23 crc kubenswrapper[4896]: I0126 17:36:23.424937 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kkm6l"] Jan 26 17:36:23 crc kubenswrapper[4896]: I0126 17:36:23.435703 4896 scope.go:117] "RemoveContainer" containerID="0012fe412afd46f8b0d20c9595e469ee33389494c2b4065576ebd39335a692ad" Jan 26 17:36:23 crc kubenswrapper[4896]: I0126 17:36:23.485296 4896 scope.go:117] "RemoveContainer" containerID="c63a4945c4bfcb2cf80fecf75d2111c5423190eab28fee9b7a684ea781825731" Jan 26 17:36:23 crc kubenswrapper[4896]: E0126 17:36:23.485713 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c63a4945c4bfcb2cf80fecf75d2111c5423190eab28fee9b7a684ea781825731\": container with ID starting with c63a4945c4bfcb2cf80fecf75d2111c5423190eab28fee9b7a684ea781825731 not found: ID does not exist" containerID="c63a4945c4bfcb2cf80fecf75d2111c5423190eab28fee9b7a684ea781825731" Jan 26 17:36:23 crc kubenswrapper[4896]: I0126 17:36:23.485748 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c63a4945c4bfcb2cf80fecf75d2111c5423190eab28fee9b7a684ea781825731"} err="failed to get container status \"c63a4945c4bfcb2cf80fecf75d2111c5423190eab28fee9b7a684ea781825731\": rpc error: code = NotFound desc = could not find container \"c63a4945c4bfcb2cf80fecf75d2111c5423190eab28fee9b7a684ea781825731\": container with ID starting with c63a4945c4bfcb2cf80fecf75d2111c5423190eab28fee9b7a684ea781825731 not found: ID does 
not exist" Jan 26 17:36:23 crc kubenswrapper[4896]: I0126 17:36:23.485776 4896 scope.go:117] "RemoveContainer" containerID="8ba07c0006808a48bd3ab04024de0f64c51639d2074c8b82cd0a5beeaa18be7a" Jan 26 17:36:23 crc kubenswrapper[4896]: E0126 17:36:23.486034 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ba07c0006808a48bd3ab04024de0f64c51639d2074c8b82cd0a5beeaa18be7a\": container with ID starting with 8ba07c0006808a48bd3ab04024de0f64c51639d2074c8b82cd0a5beeaa18be7a not found: ID does not exist" containerID="8ba07c0006808a48bd3ab04024de0f64c51639d2074c8b82cd0a5beeaa18be7a" Jan 26 17:36:23 crc kubenswrapper[4896]: I0126 17:36:23.486066 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ba07c0006808a48bd3ab04024de0f64c51639d2074c8b82cd0a5beeaa18be7a"} err="failed to get container status \"8ba07c0006808a48bd3ab04024de0f64c51639d2074c8b82cd0a5beeaa18be7a\": rpc error: code = NotFound desc = could not find container \"8ba07c0006808a48bd3ab04024de0f64c51639d2074c8b82cd0a5beeaa18be7a\": container with ID starting with 8ba07c0006808a48bd3ab04024de0f64c51639d2074c8b82cd0a5beeaa18be7a not found: ID does not exist" Jan 26 17:36:23 crc kubenswrapper[4896]: I0126 17:36:23.486085 4896 scope.go:117] "RemoveContainer" containerID="0012fe412afd46f8b0d20c9595e469ee33389494c2b4065576ebd39335a692ad" Jan 26 17:36:23 crc kubenswrapper[4896]: E0126 17:36:23.486358 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0012fe412afd46f8b0d20c9595e469ee33389494c2b4065576ebd39335a692ad\": container with ID starting with 0012fe412afd46f8b0d20c9595e469ee33389494c2b4065576ebd39335a692ad not found: ID does not exist" containerID="0012fe412afd46f8b0d20c9595e469ee33389494c2b4065576ebd39335a692ad" Jan 26 17:36:23 crc kubenswrapper[4896]: I0126 17:36:23.486386 4896 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0012fe412afd46f8b0d20c9595e469ee33389494c2b4065576ebd39335a692ad"} err="failed to get container status \"0012fe412afd46f8b0d20c9595e469ee33389494c2b4065576ebd39335a692ad\": rpc error: code = NotFound desc = could not find container \"0012fe412afd46f8b0d20c9595e469ee33389494c2b4065576ebd39335a692ad\": container with ID starting with 0012fe412afd46f8b0d20c9595e469ee33389494c2b4065576ebd39335a692ad not found: ID does not exist" Jan 26 17:36:24 crc kubenswrapper[4896]: I0126 17:36:24.780457 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d975cf1-37a2-4979-8c1e-af0140eb92f2" path="/var/lib/kubelet/pods/2d975cf1-37a2-4979-8c1e-af0140eb92f2/volumes" Jan 26 17:36:41 crc kubenswrapper[4896]: I0126 17:36:41.068539 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gscfz"] Jan 26 17:36:41 crc kubenswrapper[4896]: E0126 17:36:41.070022 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="033f09ef-c4da-4cff-9ea5-df3664ace7ef" containerName="extract-utilities" Jan 26 17:36:41 crc kubenswrapper[4896]: I0126 17:36:41.070047 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="033f09ef-c4da-4cff-9ea5-df3664ace7ef" containerName="extract-utilities" Jan 26 17:36:41 crc kubenswrapper[4896]: E0126 17:36:41.070100 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d975cf1-37a2-4979-8c1e-af0140eb92f2" containerName="registry-server" Jan 26 17:36:41 crc kubenswrapper[4896]: I0126 17:36:41.070116 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d975cf1-37a2-4979-8c1e-af0140eb92f2" containerName="registry-server" Jan 26 17:36:41 crc kubenswrapper[4896]: E0126 17:36:41.070163 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d975cf1-37a2-4979-8c1e-af0140eb92f2" containerName="extract-content" Jan 26 17:36:41 crc kubenswrapper[4896]: I0126 17:36:41.070176 4896 
state_mem.go:107] "Deleted CPUSet assignment" podUID="2d975cf1-37a2-4979-8c1e-af0140eb92f2" containerName="extract-content" Jan 26 17:36:41 crc kubenswrapper[4896]: E0126 17:36:41.070211 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="033f09ef-c4da-4cff-9ea5-df3664ace7ef" containerName="extract-content" Jan 26 17:36:41 crc kubenswrapper[4896]: I0126 17:36:41.070228 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="033f09ef-c4da-4cff-9ea5-df3664ace7ef" containerName="extract-content" Jan 26 17:36:41 crc kubenswrapper[4896]: E0126 17:36:41.070271 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d975cf1-37a2-4979-8c1e-af0140eb92f2" containerName="extract-utilities" Jan 26 17:36:41 crc kubenswrapper[4896]: I0126 17:36:41.070282 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d975cf1-37a2-4979-8c1e-af0140eb92f2" containerName="extract-utilities" Jan 26 17:36:41 crc kubenswrapper[4896]: E0126 17:36:41.070305 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="033f09ef-c4da-4cff-9ea5-df3664ace7ef" containerName="registry-server" Jan 26 17:36:41 crc kubenswrapper[4896]: I0126 17:36:41.070317 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="033f09ef-c4da-4cff-9ea5-df3664ace7ef" containerName="registry-server" Jan 26 17:36:41 crc kubenswrapper[4896]: I0126 17:36:41.070742 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="033f09ef-c4da-4cff-9ea5-df3664ace7ef" containerName="registry-server" Jan 26 17:36:41 crc kubenswrapper[4896]: I0126 17:36:41.070826 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d975cf1-37a2-4979-8c1e-af0140eb92f2" containerName="registry-server" Jan 26 17:36:41 crc kubenswrapper[4896]: I0126 17:36:41.074107 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gscfz" Jan 26 17:36:41 crc kubenswrapper[4896]: I0126 17:36:41.080264 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gscfz"] Jan 26 17:36:41 crc kubenswrapper[4896]: I0126 17:36:41.189370 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zj59\" (UniqueName: \"kubernetes.io/projected/6e95c987-dfe2-4a28-9147-9be6d89be0c9-kube-api-access-2zj59\") pod \"certified-operators-gscfz\" (UID: \"6e95c987-dfe2-4a28-9147-9be6d89be0c9\") " pod="openshift-marketplace/certified-operators-gscfz" Jan 26 17:36:41 crc kubenswrapper[4896]: I0126 17:36:41.189494 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e95c987-dfe2-4a28-9147-9be6d89be0c9-utilities\") pod \"certified-operators-gscfz\" (UID: \"6e95c987-dfe2-4a28-9147-9be6d89be0c9\") " pod="openshift-marketplace/certified-operators-gscfz" Jan 26 17:36:41 crc kubenswrapper[4896]: I0126 17:36:41.189655 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e95c987-dfe2-4a28-9147-9be6d89be0c9-catalog-content\") pod \"certified-operators-gscfz\" (UID: \"6e95c987-dfe2-4a28-9147-9be6d89be0c9\") " pod="openshift-marketplace/certified-operators-gscfz" Jan 26 17:36:41 crc kubenswrapper[4896]: I0126 17:36:41.292157 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e95c987-dfe2-4a28-9147-9be6d89be0c9-catalog-content\") pod \"certified-operators-gscfz\" (UID: \"6e95c987-dfe2-4a28-9147-9be6d89be0c9\") " pod="openshift-marketplace/certified-operators-gscfz" Jan 26 17:36:41 crc kubenswrapper[4896]: I0126 17:36:41.292602 4896 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2zj59\" (UniqueName: \"kubernetes.io/projected/6e95c987-dfe2-4a28-9147-9be6d89be0c9-kube-api-access-2zj59\") pod \"certified-operators-gscfz\" (UID: \"6e95c987-dfe2-4a28-9147-9be6d89be0c9\") " pod="openshift-marketplace/certified-operators-gscfz" Jan 26 17:36:41 crc kubenswrapper[4896]: I0126 17:36:41.292769 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e95c987-dfe2-4a28-9147-9be6d89be0c9-utilities\") pod \"certified-operators-gscfz\" (UID: \"6e95c987-dfe2-4a28-9147-9be6d89be0c9\") " pod="openshift-marketplace/certified-operators-gscfz" Jan 26 17:36:41 crc kubenswrapper[4896]: I0126 17:36:41.293006 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e95c987-dfe2-4a28-9147-9be6d89be0c9-catalog-content\") pod \"certified-operators-gscfz\" (UID: \"6e95c987-dfe2-4a28-9147-9be6d89be0c9\") " pod="openshift-marketplace/certified-operators-gscfz" Jan 26 17:36:41 crc kubenswrapper[4896]: I0126 17:36:41.293247 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e95c987-dfe2-4a28-9147-9be6d89be0c9-utilities\") pod \"certified-operators-gscfz\" (UID: \"6e95c987-dfe2-4a28-9147-9be6d89be0c9\") " pod="openshift-marketplace/certified-operators-gscfz" Jan 26 17:36:41 crc kubenswrapper[4896]: I0126 17:36:41.322971 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zj59\" (UniqueName: \"kubernetes.io/projected/6e95c987-dfe2-4a28-9147-9be6d89be0c9-kube-api-access-2zj59\") pod \"certified-operators-gscfz\" (UID: \"6e95c987-dfe2-4a28-9147-9be6d89be0c9\") " pod="openshift-marketplace/certified-operators-gscfz" Jan 26 17:36:41 crc kubenswrapper[4896]: I0126 17:36:41.430547 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gscfz" Jan 26 17:36:41 crc kubenswrapper[4896]: I0126 17:36:41.961624 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gscfz"] Jan 26 17:36:42 crc kubenswrapper[4896]: I0126 17:36:42.666792 4896 generic.go:334] "Generic (PLEG): container finished" podID="6e95c987-dfe2-4a28-9147-9be6d89be0c9" containerID="ca429cfb1a7f25f8835a210755f3d153000dfef37189647db3b9f745d8281b3d" exitCode=0 Jan 26 17:36:42 crc kubenswrapper[4896]: I0126 17:36:42.666894 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gscfz" event={"ID":"6e95c987-dfe2-4a28-9147-9be6d89be0c9","Type":"ContainerDied","Data":"ca429cfb1a7f25f8835a210755f3d153000dfef37189647db3b9f745d8281b3d"} Jan 26 17:36:42 crc kubenswrapper[4896]: I0126 17:36:42.667342 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gscfz" event={"ID":"6e95c987-dfe2-4a28-9147-9be6d89be0c9","Type":"ContainerStarted","Data":"26569e04b47067951fb851716533376f034763a222f2c746a291b0079c58b033"} Jan 26 17:36:45 crc kubenswrapper[4896]: I0126 17:36:45.707083 4896 generic.go:334] "Generic (PLEG): container finished" podID="6e95c987-dfe2-4a28-9147-9be6d89be0c9" containerID="d89a9bd0e1e477377e7fee581cbfac86afd12c841b191b51f1a6735c00fa550c" exitCode=0 Jan 26 17:36:45 crc kubenswrapper[4896]: I0126 17:36:45.707314 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gscfz" event={"ID":"6e95c987-dfe2-4a28-9147-9be6d89be0c9","Type":"ContainerDied","Data":"d89a9bd0e1e477377e7fee581cbfac86afd12c841b191b51f1a6735c00fa550c"} Jan 26 17:36:46 crc kubenswrapper[4896]: I0126 17:36:46.721883 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gscfz" 
event={"ID":"6e95c987-dfe2-4a28-9147-9be6d89be0c9","Type":"ContainerStarted","Data":"166f695954ef282cfc65942cfbdf8553e9bd342a2e9ee04cdd1e52d84b7f5cbf"} Jan 26 17:36:46 crc kubenswrapper[4896]: I0126 17:36:46.756343 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gscfz" podStartSLOduration=2.254129899 podStartE2EDuration="5.756316674s" podCreationTimestamp="2026-01-26 17:36:41 +0000 UTC" firstStartedPulling="2026-01-26 17:36:42.669774596 +0000 UTC m=+7360.451654989" lastFinishedPulling="2026-01-26 17:36:46.171961371 +0000 UTC m=+7363.953841764" observedRunningTime="2026-01-26 17:36:46.745805487 +0000 UTC m=+7364.527685880" watchObservedRunningTime="2026-01-26 17:36:46.756316674 +0000 UTC m=+7364.538197067" Jan 26 17:36:51 crc kubenswrapper[4896]: I0126 17:36:51.431512 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gscfz" Jan 26 17:36:51 crc kubenswrapper[4896]: I0126 17:36:51.432288 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gscfz" Jan 26 17:36:51 crc kubenswrapper[4896]: I0126 17:36:51.507081 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gscfz" Jan 26 17:36:51 crc kubenswrapper[4896]: I0126 17:36:51.844737 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gscfz" Jan 26 17:36:51 crc kubenswrapper[4896]: I0126 17:36:51.911122 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gscfz"] Jan 26 17:36:53 crc kubenswrapper[4896]: I0126 17:36:53.813699 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gscfz" podUID="6e95c987-dfe2-4a28-9147-9be6d89be0c9" containerName="registry-server" 
containerID="cri-o://166f695954ef282cfc65942cfbdf8553e9bd342a2e9ee04cdd1e52d84b7f5cbf" gracePeriod=2 Jan 26 17:36:54 crc kubenswrapper[4896]: I0126 17:36:54.829303 4896 generic.go:334] "Generic (PLEG): container finished" podID="6e95c987-dfe2-4a28-9147-9be6d89be0c9" containerID="166f695954ef282cfc65942cfbdf8553e9bd342a2e9ee04cdd1e52d84b7f5cbf" exitCode=0 Jan 26 17:36:54 crc kubenswrapper[4896]: I0126 17:36:54.829369 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gscfz" event={"ID":"6e95c987-dfe2-4a28-9147-9be6d89be0c9","Type":"ContainerDied","Data":"166f695954ef282cfc65942cfbdf8553e9bd342a2e9ee04cdd1e52d84b7f5cbf"} Jan 26 17:36:55 crc kubenswrapper[4896]: I0126 17:36:55.097589 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gscfz" Jan 26 17:36:55 crc kubenswrapper[4896]: I0126 17:36:55.236471 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e95c987-dfe2-4a28-9147-9be6d89be0c9-catalog-content\") pod \"6e95c987-dfe2-4a28-9147-9be6d89be0c9\" (UID: \"6e95c987-dfe2-4a28-9147-9be6d89be0c9\") " Jan 26 17:36:55 crc kubenswrapper[4896]: I0126 17:36:55.236643 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e95c987-dfe2-4a28-9147-9be6d89be0c9-utilities\") pod \"6e95c987-dfe2-4a28-9147-9be6d89be0c9\" (UID: \"6e95c987-dfe2-4a28-9147-9be6d89be0c9\") " Jan 26 17:36:55 crc kubenswrapper[4896]: I0126 17:36:55.236707 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zj59\" (UniqueName: \"kubernetes.io/projected/6e95c987-dfe2-4a28-9147-9be6d89be0c9-kube-api-access-2zj59\") pod \"6e95c987-dfe2-4a28-9147-9be6d89be0c9\" (UID: \"6e95c987-dfe2-4a28-9147-9be6d89be0c9\") " Jan 26 17:36:55 crc kubenswrapper[4896]: I0126 
17:36:55.238072 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e95c987-dfe2-4a28-9147-9be6d89be0c9-utilities" (OuterVolumeSpecName: "utilities") pod "6e95c987-dfe2-4a28-9147-9be6d89be0c9" (UID: "6e95c987-dfe2-4a28-9147-9be6d89be0c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:36:55 crc kubenswrapper[4896]: I0126 17:36:55.252927 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e95c987-dfe2-4a28-9147-9be6d89be0c9-kube-api-access-2zj59" (OuterVolumeSpecName: "kube-api-access-2zj59") pod "6e95c987-dfe2-4a28-9147-9be6d89be0c9" (UID: "6e95c987-dfe2-4a28-9147-9be6d89be0c9"). InnerVolumeSpecName "kube-api-access-2zj59". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:36:55 crc kubenswrapper[4896]: I0126 17:36:55.296452 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e95c987-dfe2-4a28-9147-9be6d89be0c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6e95c987-dfe2-4a28-9147-9be6d89be0c9" (UID: "6e95c987-dfe2-4a28-9147-9be6d89be0c9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:36:55 crc kubenswrapper[4896]: I0126 17:36:55.340946 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e95c987-dfe2-4a28-9147-9be6d89be0c9-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:36:55 crc kubenswrapper[4896]: I0126 17:36:55.342341 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zj59\" (UniqueName: \"kubernetes.io/projected/6e95c987-dfe2-4a28-9147-9be6d89be0c9-kube-api-access-2zj59\") on node \"crc\" DevicePath \"\"" Jan 26 17:36:55 crc kubenswrapper[4896]: I0126 17:36:55.342452 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e95c987-dfe2-4a28-9147-9be6d89be0c9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:36:55 crc kubenswrapper[4896]: I0126 17:36:55.846330 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gscfz" event={"ID":"6e95c987-dfe2-4a28-9147-9be6d89be0c9","Type":"ContainerDied","Data":"26569e04b47067951fb851716533376f034763a222f2c746a291b0079c58b033"} Jan 26 17:36:55 crc kubenswrapper[4896]: I0126 17:36:55.846395 4896 scope.go:117] "RemoveContainer" containerID="166f695954ef282cfc65942cfbdf8553e9bd342a2e9ee04cdd1e52d84b7f5cbf" Jan 26 17:36:55 crc kubenswrapper[4896]: I0126 17:36:55.846530 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gscfz" Jan 26 17:36:55 crc kubenswrapper[4896]: I0126 17:36:55.883977 4896 scope.go:117] "RemoveContainer" containerID="d89a9bd0e1e477377e7fee581cbfac86afd12c841b191b51f1a6735c00fa550c" Jan 26 17:36:55 crc kubenswrapper[4896]: I0126 17:36:55.899780 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gscfz"] Jan 26 17:36:55 crc kubenswrapper[4896]: I0126 17:36:55.917044 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gscfz"] Jan 26 17:36:55 crc kubenswrapper[4896]: I0126 17:36:55.919377 4896 scope.go:117] "RemoveContainer" containerID="ca429cfb1a7f25f8835a210755f3d153000dfef37189647db3b9f745d8281b3d" Jan 26 17:36:56 crc kubenswrapper[4896]: I0126 17:36:56.772942 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e95c987-dfe2-4a28-9147-9be6d89be0c9" path="/var/lib/kubelet/pods/6e95c987-dfe2-4a28-9147-9be6d89be0c9/volumes" Jan 26 17:38:14 crc kubenswrapper[4896]: I0126 17:38:14.949707 4896 generic.go:334] "Generic (PLEG): container finished" podID="75e8efe4-ddea-48ed-b018-c952f346b635" containerID="4b0ba393ceb3f4845476bd72ef3914428acad5903483a33d49532d3401315baf" exitCode=0 Jan 26 17:38:14 crc kubenswrapper[4896]: I0126 17:38:14.949783 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"75e8efe4-ddea-48ed-b018-c952f346b635","Type":"ContainerDied","Data":"4b0ba393ceb3f4845476bd72ef3914428acad5903483a33d49532d3401315baf"} Jan 26 17:38:16 crc kubenswrapper[4896]: I0126 17:38:16.446984 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 26 17:38:16 crc kubenswrapper[4896]: I0126 17:38:16.600078 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/75e8efe4-ddea-48ed-b018-c952f346b635-config-data\") pod \"75e8efe4-ddea-48ed-b018-c952f346b635\" (UID: \"75e8efe4-ddea-48ed-b018-c952f346b635\") " Jan 26 17:38:16 crc kubenswrapper[4896]: I0126 17:38:16.600175 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cfwn\" (UniqueName: \"kubernetes.io/projected/75e8efe4-ddea-48ed-b018-c952f346b635-kube-api-access-2cfwn\") pod \"75e8efe4-ddea-48ed-b018-c952f346b635\" (UID: \"75e8efe4-ddea-48ed-b018-c952f346b635\") " Jan 26 17:38:16 crc kubenswrapper[4896]: I0126 17:38:16.600246 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/75e8efe4-ddea-48ed-b018-c952f346b635-test-operator-ephemeral-workdir\") pod \"75e8efe4-ddea-48ed-b018-c952f346b635\" (UID: \"75e8efe4-ddea-48ed-b018-c952f346b635\") " Jan 26 17:38:16 crc kubenswrapper[4896]: I0126 17:38:16.600334 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"75e8efe4-ddea-48ed-b018-c952f346b635\" (UID: \"75e8efe4-ddea-48ed-b018-c952f346b635\") " Jan 26 17:38:16 crc kubenswrapper[4896]: I0126 17:38:16.600424 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/75e8efe4-ddea-48ed-b018-c952f346b635-ssh-key\") pod \"75e8efe4-ddea-48ed-b018-c952f346b635\" (UID: \"75e8efe4-ddea-48ed-b018-c952f346b635\") " Jan 26 17:38:16 crc kubenswrapper[4896]: I0126 17:38:16.600454 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/75e8efe4-ddea-48ed-b018-c952f346b635-openstack-config-secret\") pod \"75e8efe4-ddea-48ed-b018-c952f346b635\" (UID: \"75e8efe4-ddea-48ed-b018-c952f346b635\") " Jan 26 17:38:16 crc kubenswrapper[4896]: I0126 17:38:16.600514 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/75e8efe4-ddea-48ed-b018-c952f346b635-ca-certs\") pod \"75e8efe4-ddea-48ed-b018-c952f346b635\" (UID: \"75e8efe4-ddea-48ed-b018-c952f346b635\") " Jan 26 17:38:16 crc kubenswrapper[4896]: I0126 17:38:16.600544 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/75e8efe4-ddea-48ed-b018-c952f346b635-test-operator-ephemeral-temporary\") pod \"75e8efe4-ddea-48ed-b018-c952f346b635\" (UID: \"75e8efe4-ddea-48ed-b018-c952f346b635\") " Jan 26 17:38:16 crc kubenswrapper[4896]: I0126 17:38:16.600572 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/75e8efe4-ddea-48ed-b018-c952f346b635-openstack-config\") pod \"75e8efe4-ddea-48ed-b018-c952f346b635\" (UID: \"75e8efe4-ddea-48ed-b018-c952f346b635\") " Jan 26 17:38:16 crc kubenswrapper[4896]: I0126 17:38:16.604019 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75e8efe4-ddea-48ed-b018-c952f346b635-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "75e8efe4-ddea-48ed-b018-c952f346b635" (UID: "75e8efe4-ddea-48ed-b018-c952f346b635"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:38:16 crc kubenswrapper[4896]: I0126 17:38:16.605980 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75e8efe4-ddea-48ed-b018-c952f346b635-config-data" (OuterVolumeSpecName: "config-data") pod "75e8efe4-ddea-48ed-b018-c952f346b635" (UID: "75e8efe4-ddea-48ed-b018-c952f346b635"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:38:16 crc kubenswrapper[4896]: I0126 17:38:16.610184 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "test-operator-logs") pod "75e8efe4-ddea-48ed-b018-c952f346b635" (UID: "75e8efe4-ddea-48ed-b018-c952f346b635"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 17:38:16 crc kubenswrapper[4896]: I0126 17:38:16.612803 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75e8efe4-ddea-48ed-b018-c952f346b635-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "75e8efe4-ddea-48ed-b018-c952f346b635" (UID: "75e8efe4-ddea-48ed-b018-c952f346b635"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:38:16 crc kubenswrapper[4896]: I0126 17:38:16.626996 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75e8efe4-ddea-48ed-b018-c952f346b635-kube-api-access-2cfwn" (OuterVolumeSpecName: "kube-api-access-2cfwn") pod "75e8efe4-ddea-48ed-b018-c952f346b635" (UID: "75e8efe4-ddea-48ed-b018-c952f346b635"). InnerVolumeSpecName "kube-api-access-2cfwn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:38:16 crc kubenswrapper[4896]: I0126 17:38:16.647853 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75e8efe4-ddea-48ed-b018-c952f346b635-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "75e8efe4-ddea-48ed-b018-c952f346b635" (UID: "75e8efe4-ddea-48ed-b018-c952f346b635"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:38:16 crc kubenswrapper[4896]: I0126 17:38:16.655168 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75e8efe4-ddea-48ed-b018-c952f346b635-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "75e8efe4-ddea-48ed-b018-c952f346b635" (UID: "75e8efe4-ddea-48ed-b018-c952f346b635"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:38:16 crc kubenswrapper[4896]: I0126 17:38:16.669807 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75e8efe4-ddea-48ed-b018-c952f346b635-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "75e8efe4-ddea-48ed-b018-c952f346b635" (UID: "75e8efe4-ddea-48ed-b018-c952f346b635"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:38:16 crc kubenswrapper[4896]: I0126 17:38:16.683743 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75e8efe4-ddea-48ed-b018-c952f346b635-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "75e8efe4-ddea-48ed-b018-c952f346b635" (UID: "75e8efe4-ddea-48ed-b018-c952f346b635"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:38:16 crc kubenswrapper[4896]: I0126 17:38:16.703367 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cfwn\" (UniqueName: \"kubernetes.io/projected/75e8efe4-ddea-48ed-b018-c952f346b635-kube-api-access-2cfwn\") on node \"crc\" DevicePath \"\"" Jan 26 17:38:16 crc kubenswrapper[4896]: I0126 17:38:16.703408 4896 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/75e8efe4-ddea-48ed-b018-c952f346b635-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 26 17:38:16 crc kubenswrapper[4896]: I0126 17:38:16.703852 4896 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 26 17:38:16 crc kubenswrapper[4896]: I0126 17:38:16.703880 4896 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/75e8efe4-ddea-48ed-b018-c952f346b635-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 26 17:38:16 crc kubenswrapper[4896]: I0126 17:38:16.703893 4896 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/75e8efe4-ddea-48ed-b018-c952f346b635-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 26 17:38:16 crc kubenswrapper[4896]: I0126 17:38:16.703905 4896 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/75e8efe4-ddea-48ed-b018-c952f346b635-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 26 17:38:16 crc kubenswrapper[4896]: I0126 17:38:16.703914 4896 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/75e8efe4-ddea-48ed-b018-c952f346b635-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 26 
17:38:16 crc kubenswrapper[4896]: I0126 17:38:16.703925 4896 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/75e8efe4-ddea-48ed-b018-c952f346b635-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 26 17:38:16 crc kubenswrapper[4896]: I0126 17:38:16.703935 4896 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/75e8efe4-ddea-48ed-b018-c952f346b635-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 17:38:16 crc kubenswrapper[4896]: I0126 17:38:16.732950 4896 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 26 17:38:16 crc kubenswrapper[4896]: I0126 17:38:16.806559 4896 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 26 17:38:16 crc kubenswrapper[4896]: I0126 17:38:16.977477 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"75e8efe4-ddea-48ed-b018-c952f346b635","Type":"ContainerDied","Data":"606a1bb56f33476625882d52b46dacf144e2df3a852d88af49feafb0221c8cdf"} Jan 26 17:38:16 crc kubenswrapper[4896]: I0126 17:38:16.977556 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="606a1bb56f33476625882d52b46dacf144e2df3a852d88af49feafb0221c8cdf" Jan 26 17:38:16 crc kubenswrapper[4896]: I0126 17:38:16.977565 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 26 17:38:18 crc kubenswrapper[4896]: I0126 17:38:18.813493 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:38:18 crc kubenswrapper[4896]: I0126 17:38:18.813555 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:38:28 crc kubenswrapper[4896]: I0126 17:38:28.848669 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 26 17:38:28 crc kubenswrapper[4896]: E0126 17:38:28.849790 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e95c987-dfe2-4a28-9147-9be6d89be0c9" containerName="registry-server" Jan 26 17:38:28 crc kubenswrapper[4896]: I0126 17:38:28.849803 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e95c987-dfe2-4a28-9147-9be6d89be0c9" containerName="registry-server" Jan 26 17:38:28 crc kubenswrapper[4896]: E0126 17:38:28.849849 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e95c987-dfe2-4a28-9147-9be6d89be0c9" containerName="extract-content" Jan 26 17:38:28 crc kubenswrapper[4896]: I0126 17:38:28.849859 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e95c987-dfe2-4a28-9147-9be6d89be0c9" containerName="extract-content" Jan 26 17:38:28 crc kubenswrapper[4896]: E0126 17:38:28.849873 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75e8efe4-ddea-48ed-b018-c952f346b635" 
containerName="tempest-tests-tempest-tests-runner" Jan 26 17:38:28 crc kubenswrapper[4896]: I0126 17:38:28.849879 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="75e8efe4-ddea-48ed-b018-c952f346b635" containerName="tempest-tests-tempest-tests-runner" Jan 26 17:38:28 crc kubenswrapper[4896]: E0126 17:38:28.849899 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e95c987-dfe2-4a28-9147-9be6d89be0c9" containerName="extract-utilities" Jan 26 17:38:28 crc kubenswrapper[4896]: I0126 17:38:28.849907 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e95c987-dfe2-4a28-9147-9be6d89be0c9" containerName="extract-utilities" Jan 26 17:38:28 crc kubenswrapper[4896]: I0126 17:38:28.850141 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e95c987-dfe2-4a28-9147-9be6d89be0c9" containerName="registry-server" Jan 26 17:38:28 crc kubenswrapper[4896]: I0126 17:38:28.850167 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="75e8efe4-ddea-48ed-b018-c952f346b635" containerName="tempest-tests-tempest-tests-runner" Jan 26 17:38:28 crc kubenswrapper[4896]: I0126 17:38:28.851278 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 17:38:28 crc kubenswrapper[4896]: I0126 17:38:28.855234 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-fcbdm" Jan 26 17:38:28 crc kubenswrapper[4896]: I0126 17:38:28.864544 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 26 17:38:28 crc kubenswrapper[4896]: I0126 17:38:28.993051 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b89004b1-a612-4e95-838c-8bfe2da0ee79\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 17:38:28 crc kubenswrapper[4896]: I0126 17:38:28.993795 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4xbx\" (UniqueName: \"kubernetes.io/projected/b89004b1-a612-4e95-838c-8bfe2da0ee79-kube-api-access-l4xbx\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b89004b1-a612-4e95-838c-8bfe2da0ee79\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 17:38:29 crc kubenswrapper[4896]: I0126 17:38:29.096943 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4xbx\" (UniqueName: \"kubernetes.io/projected/b89004b1-a612-4e95-838c-8bfe2da0ee79-kube-api-access-l4xbx\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b89004b1-a612-4e95-838c-8bfe2da0ee79\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 17:38:29 crc kubenswrapper[4896]: I0126 17:38:29.097191 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b89004b1-a612-4e95-838c-8bfe2da0ee79\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 17:38:29 crc kubenswrapper[4896]: I0126 17:38:29.098675 4896 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b89004b1-a612-4e95-838c-8bfe2da0ee79\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 17:38:29 crc kubenswrapper[4896]: I0126 17:38:29.128942 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4xbx\" (UniqueName: \"kubernetes.io/projected/b89004b1-a612-4e95-838c-8bfe2da0ee79-kube-api-access-l4xbx\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b89004b1-a612-4e95-838c-8bfe2da0ee79\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 17:38:29 crc kubenswrapper[4896]: I0126 17:38:29.185797 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b89004b1-a612-4e95-838c-8bfe2da0ee79\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 17:38:29 crc kubenswrapper[4896]: I0126 17:38:29.481478 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 17:38:30 crc kubenswrapper[4896]: I0126 17:38:30.013491 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 26 17:38:30 crc kubenswrapper[4896]: I0126 17:38:30.126327 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"b89004b1-a612-4e95-838c-8bfe2da0ee79","Type":"ContainerStarted","Data":"865ac72be861b89d1973069947cef08f1a3b8711b83e8ab37e2a0e468a39169a"} Jan 26 17:38:32 crc kubenswrapper[4896]: I0126 17:38:32.147195 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"b89004b1-a612-4e95-838c-8bfe2da0ee79","Type":"ContainerStarted","Data":"2cc2eac89a7c898519c7931e7c767fc91bab1a337f5d65cdf03c0bd1f027f9e0"} Jan 26 17:38:32 crc kubenswrapper[4896]: I0126 17:38:32.173209 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=3.326314273 podStartE2EDuration="4.173125222s" podCreationTimestamp="2026-01-26 17:38:28 +0000 UTC" firstStartedPulling="2026-01-26 17:38:30.014215201 +0000 UTC m=+7467.796095594" lastFinishedPulling="2026-01-26 17:38:30.86102615 +0000 UTC m=+7468.642906543" observedRunningTime="2026-01-26 17:38:32.160100584 +0000 UTC m=+7469.941980997" watchObservedRunningTime="2026-01-26 17:38:32.173125222 +0000 UTC m=+7469.955005605" Jan 26 17:38:48 crc kubenswrapper[4896]: I0126 17:38:48.813698 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:38:48 crc 
kubenswrapper[4896]: I0126 17:38:48.814462 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:39:18 crc kubenswrapper[4896]: I0126 17:39:18.813341 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:39:18 crc kubenswrapper[4896]: I0126 17:39:18.814169 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:39:18 crc kubenswrapper[4896]: I0126 17:39:18.814230 4896 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" Jan 26 17:39:18 crc kubenswrapper[4896]: I0126 17:39:18.815529 4896 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2b2d87da80f85568d27958abdc15695c3a62bbace342b5d1dfaf284f7e6a5bca"} pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 17:39:18 crc kubenswrapper[4896]: I0126 17:39:18.815620 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" 
podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" containerID="cri-o://2b2d87da80f85568d27958abdc15695c3a62bbace342b5d1dfaf284f7e6a5bca" gracePeriod=600 Jan 26 17:39:18 crc kubenswrapper[4896]: E0126 17:39:18.955068 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:39:19 crc kubenswrapper[4896]: I0126 17:39:19.139690 4896 generic.go:334] "Generic (PLEG): container finished" podID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerID="2b2d87da80f85568d27958abdc15695c3a62bbace342b5d1dfaf284f7e6a5bca" exitCode=0 Jan 26 17:39:19 crc kubenswrapper[4896]: I0126 17:39:19.139761 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerDied","Data":"2b2d87da80f85568d27958abdc15695c3a62bbace342b5d1dfaf284f7e6a5bca"} Jan 26 17:39:19 crc kubenswrapper[4896]: I0126 17:39:19.139826 4896 scope.go:117] "RemoveContainer" containerID="a93d8c42b06112bbc3b78792eeb9a9c95dac4e5ffd343806b2e624b121a3dc9f" Jan 26 17:39:19 crc kubenswrapper[4896]: I0126 17:39:19.140818 4896 scope.go:117] "RemoveContainer" containerID="2b2d87da80f85568d27958abdc15695c3a62bbace342b5d1dfaf284f7e6a5bca" Jan 26 17:39:19 crc kubenswrapper[4896]: E0126 17:39:19.141479 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:39:21 crc kubenswrapper[4896]: I0126 17:39:21.150152 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5rqvl/must-gather-hf4bl"] Jan 26 17:39:21 crc kubenswrapper[4896]: I0126 17:39:21.154571 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5rqvl/must-gather-hf4bl" Jan 26 17:39:21 crc kubenswrapper[4896]: I0126 17:39:21.169676 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5rqvl"/"openshift-service-ca.crt" Jan 26 17:39:21 crc kubenswrapper[4896]: I0126 17:39:21.217146 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5rqvl"/"kube-root-ca.crt" Jan 26 17:39:21 crc kubenswrapper[4896]: I0126 17:39:21.301863 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5rqvl/must-gather-hf4bl"] Jan 26 17:39:21 crc kubenswrapper[4896]: I0126 17:39:21.347451 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd65j\" (UniqueName: \"kubernetes.io/projected/d5fa672b-48fc-4b57-80ce-92eb620c3615-kube-api-access-vd65j\") pod \"must-gather-hf4bl\" (UID: \"d5fa672b-48fc-4b57-80ce-92eb620c3615\") " pod="openshift-must-gather-5rqvl/must-gather-hf4bl" Jan 26 17:39:21 crc kubenswrapper[4896]: I0126 17:39:21.347555 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d5fa672b-48fc-4b57-80ce-92eb620c3615-must-gather-output\") pod \"must-gather-hf4bl\" (UID: \"d5fa672b-48fc-4b57-80ce-92eb620c3615\") " pod="openshift-must-gather-5rqvl/must-gather-hf4bl" Jan 26 17:39:21 crc kubenswrapper[4896]: I0126 
17:39:21.449557 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd65j\" (UniqueName: \"kubernetes.io/projected/d5fa672b-48fc-4b57-80ce-92eb620c3615-kube-api-access-vd65j\") pod \"must-gather-hf4bl\" (UID: \"d5fa672b-48fc-4b57-80ce-92eb620c3615\") " pod="openshift-must-gather-5rqvl/must-gather-hf4bl" Jan 26 17:39:21 crc kubenswrapper[4896]: I0126 17:39:21.449687 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d5fa672b-48fc-4b57-80ce-92eb620c3615-must-gather-output\") pod \"must-gather-hf4bl\" (UID: \"d5fa672b-48fc-4b57-80ce-92eb620c3615\") " pod="openshift-must-gather-5rqvl/must-gather-hf4bl" Jan 26 17:39:21 crc kubenswrapper[4896]: I0126 17:39:21.450123 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d5fa672b-48fc-4b57-80ce-92eb620c3615-must-gather-output\") pod \"must-gather-hf4bl\" (UID: \"d5fa672b-48fc-4b57-80ce-92eb620c3615\") " pod="openshift-must-gather-5rqvl/must-gather-hf4bl" Jan 26 17:39:21 crc kubenswrapper[4896]: I0126 17:39:21.515174 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd65j\" (UniqueName: \"kubernetes.io/projected/d5fa672b-48fc-4b57-80ce-92eb620c3615-kube-api-access-vd65j\") pod \"must-gather-hf4bl\" (UID: \"d5fa672b-48fc-4b57-80ce-92eb620c3615\") " pod="openshift-must-gather-5rqvl/must-gather-hf4bl" Jan 26 17:39:21 crc kubenswrapper[4896]: I0126 17:39:21.781268 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5rqvl/must-gather-hf4bl" Jan 26 17:39:22 crc kubenswrapper[4896]: I0126 17:39:22.349620 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5rqvl/must-gather-hf4bl"] Jan 26 17:39:22 crc kubenswrapper[4896]: I0126 17:39:22.361893 4896 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 17:39:23 crc kubenswrapper[4896]: I0126 17:39:23.191048 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5rqvl/must-gather-hf4bl" event={"ID":"d5fa672b-48fc-4b57-80ce-92eb620c3615","Type":"ContainerStarted","Data":"e858a4126a74313211232b1370af614bdfdbed73fe6d23646603637a02634189"} Jan 26 17:39:31 crc kubenswrapper[4896]: I0126 17:39:31.760814 4896 scope.go:117] "RemoveContainer" containerID="2b2d87da80f85568d27958abdc15695c3a62bbace342b5d1dfaf284f7e6a5bca" Jan 26 17:39:31 crc kubenswrapper[4896]: E0126 17:39:31.761563 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:39:32 crc kubenswrapper[4896]: I0126 17:39:32.352389 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5rqvl/must-gather-hf4bl" event={"ID":"d5fa672b-48fc-4b57-80ce-92eb620c3615","Type":"ContainerStarted","Data":"b505967839e4d9e9f3bb02619a76f87b2b3298fb8f9baa74703235cd9403d3b2"} Jan 26 17:39:32 crc kubenswrapper[4896]: I0126 17:39:32.353079 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5rqvl/must-gather-hf4bl" 
event={"ID":"d5fa672b-48fc-4b57-80ce-92eb620c3615","Type":"ContainerStarted","Data":"bbc92e0ab9f65fdd5508141481c9845888013d808145dc392f8c46928c165b76"} Jan 26 17:39:32 crc kubenswrapper[4896]: I0126 17:39:32.382088 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5rqvl/must-gather-hf4bl" podStartSLOduration=3.09290588 podStartE2EDuration="12.382066727s" podCreationTimestamp="2026-01-26 17:39:20 +0000 UTC" firstStartedPulling="2026-01-26 17:39:22.36181573 +0000 UTC m=+7520.143696123" lastFinishedPulling="2026-01-26 17:39:31.650976587 +0000 UTC m=+7529.432856970" observedRunningTime="2026-01-26 17:39:32.376732796 +0000 UTC m=+7530.158613179" watchObservedRunningTime="2026-01-26 17:39:32.382066727 +0000 UTC m=+7530.163947130" Jan 26 17:39:37 crc kubenswrapper[4896]: E0126 17:39:37.432921 4896 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.154:51082->38.102.83.154:40761: write tcp 38.102.83.154:51082->38.102.83.154:40761: write: broken pipe Jan 26 17:39:41 crc kubenswrapper[4896]: I0126 17:39:41.492986 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5rqvl/crc-debug-cf8gn"] Jan 26 17:39:41 crc kubenswrapper[4896]: I0126 17:39:41.495504 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5rqvl/crc-debug-cf8gn" Jan 26 17:39:41 crc kubenswrapper[4896]: I0126 17:39:41.498499 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-5rqvl"/"default-dockercfg-kjnmf" Jan 26 17:39:41 crc kubenswrapper[4896]: I0126 17:39:41.517541 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmrn7\" (UniqueName: \"kubernetes.io/projected/bb7bce1e-d872-4187-bd4a-0b8439274962-kube-api-access-vmrn7\") pod \"crc-debug-cf8gn\" (UID: \"bb7bce1e-d872-4187-bd4a-0b8439274962\") " pod="openshift-must-gather-5rqvl/crc-debug-cf8gn" Jan 26 17:39:41 crc kubenswrapper[4896]: I0126 17:39:41.518066 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bb7bce1e-d872-4187-bd4a-0b8439274962-host\") pod \"crc-debug-cf8gn\" (UID: \"bb7bce1e-d872-4187-bd4a-0b8439274962\") " pod="openshift-must-gather-5rqvl/crc-debug-cf8gn" Jan 26 17:39:41 crc kubenswrapper[4896]: I0126 17:39:41.711210 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bb7bce1e-d872-4187-bd4a-0b8439274962-host\") pod \"crc-debug-cf8gn\" (UID: \"bb7bce1e-d872-4187-bd4a-0b8439274962\") " pod="openshift-must-gather-5rqvl/crc-debug-cf8gn" Jan 26 17:39:41 crc kubenswrapper[4896]: I0126 17:39:41.711382 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmrn7\" (UniqueName: \"kubernetes.io/projected/bb7bce1e-d872-4187-bd4a-0b8439274962-kube-api-access-vmrn7\") pod \"crc-debug-cf8gn\" (UID: \"bb7bce1e-d872-4187-bd4a-0b8439274962\") " pod="openshift-must-gather-5rqvl/crc-debug-cf8gn" Jan 26 17:39:41 crc kubenswrapper[4896]: I0126 17:39:41.714152 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/bb7bce1e-d872-4187-bd4a-0b8439274962-host\") pod \"crc-debug-cf8gn\" (UID: \"bb7bce1e-d872-4187-bd4a-0b8439274962\") " pod="openshift-must-gather-5rqvl/crc-debug-cf8gn" Jan 26 17:39:41 crc kubenswrapper[4896]: I0126 17:39:41.787077 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmrn7\" (UniqueName: \"kubernetes.io/projected/bb7bce1e-d872-4187-bd4a-0b8439274962-kube-api-access-vmrn7\") pod \"crc-debug-cf8gn\" (UID: \"bb7bce1e-d872-4187-bd4a-0b8439274962\") " pod="openshift-must-gather-5rqvl/crc-debug-cf8gn" Jan 26 17:39:41 crc kubenswrapper[4896]: I0126 17:39:41.816384 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5rqvl/crc-debug-cf8gn" Jan 26 17:39:41 crc kubenswrapper[4896]: I0126 17:39:41.904645 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5rqvl/crc-debug-cf8gn" event={"ID":"bb7bce1e-d872-4187-bd4a-0b8439274962","Type":"ContainerStarted","Data":"ce805bc8cd58926a6acfee249f2ff9b6ad712d4d3323311579b65bea927a0521"} Jan 26 17:39:44 crc kubenswrapper[4896]: I0126 17:39:44.759940 4896 scope.go:117] "RemoveContainer" containerID="2b2d87da80f85568d27958abdc15695c3a62bbace342b5d1dfaf284f7e6a5bca" Jan 26 17:39:44 crc kubenswrapper[4896]: E0126 17:39:44.760854 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:39:58 crc kubenswrapper[4896]: I0126 17:39:58.178986 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5rqvl/crc-debug-cf8gn" 
event={"ID":"bb7bce1e-d872-4187-bd4a-0b8439274962","Type":"ContainerStarted","Data":"5999f4ac3f3b5df95d62f9049dedbba4921bd8d43502f482d053948d1643e8b4"} Jan 26 17:39:58 crc kubenswrapper[4896]: I0126 17:39:58.203154 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5rqvl/crc-debug-cf8gn" podStartSLOduration=2.029169145 podStartE2EDuration="17.203134405s" podCreationTimestamp="2026-01-26 17:39:41 +0000 UTC" firstStartedPulling="2026-01-26 17:39:41.867168782 +0000 UTC m=+7539.649049175" lastFinishedPulling="2026-01-26 17:39:57.041134042 +0000 UTC m=+7554.823014435" observedRunningTime="2026-01-26 17:39:58.196332008 +0000 UTC m=+7555.978212391" watchObservedRunningTime="2026-01-26 17:39:58.203134405 +0000 UTC m=+7555.985014798" Jan 26 17:39:58 crc kubenswrapper[4896]: I0126 17:39:58.761262 4896 scope.go:117] "RemoveContainer" containerID="2b2d87da80f85568d27958abdc15695c3a62bbace342b5d1dfaf284f7e6a5bca" Jan 26 17:39:58 crc kubenswrapper[4896]: E0126 17:39:58.761919 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:40:13 crc kubenswrapper[4896]: I0126 17:40:13.759312 4896 scope.go:117] "RemoveContainer" containerID="2b2d87da80f85568d27958abdc15695c3a62bbace342b5d1dfaf284f7e6a5bca" Jan 26 17:40:13 crc kubenswrapper[4896]: E0126 17:40:13.760063 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:40:26 crc kubenswrapper[4896]: I0126 17:40:26.760394 4896 scope.go:117] "RemoveContainer" containerID="2b2d87da80f85568d27958abdc15695c3a62bbace342b5d1dfaf284f7e6a5bca" Jan 26 17:40:26 crc kubenswrapper[4896]: E0126 17:40:26.761551 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:40:39 crc kubenswrapper[4896]: I0126 17:40:39.759755 4896 scope.go:117] "RemoveContainer" containerID="2b2d87da80f85568d27958abdc15695c3a62bbace342b5d1dfaf284f7e6a5bca" Jan 26 17:40:39 crc kubenswrapper[4896]: E0126 17:40:39.760934 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:40:50 crc kubenswrapper[4896]: I0126 17:40:50.962503 4896 scope.go:117] "RemoveContainer" containerID="2b2d87da80f85568d27958abdc15695c3a62bbace342b5d1dfaf284f7e6a5bca" Jan 26 17:40:50 crc kubenswrapper[4896]: E0126 17:40:50.979570 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:40:58 crc kubenswrapper[4896]: I0126 17:40:58.747736 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5rqvl/crc-debug-cf8gn" event={"ID":"bb7bce1e-d872-4187-bd4a-0b8439274962","Type":"ContainerDied","Data":"5999f4ac3f3b5df95d62f9049dedbba4921bd8d43502f482d053948d1643e8b4"} Jan 26 17:40:58 crc kubenswrapper[4896]: I0126 17:40:58.748592 4896 generic.go:334] "Generic (PLEG): container finished" podID="bb7bce1e-d872-4187-bd4a-0b8439274962" containerID="5999f4ac3f3b5df95d62f9049dedbba4921bd8d43502f482d053948d1643e8b4" exitCode=0 Jan 26 17:40:59 crc kubenswrapper[4896]: I0126 17:40:59.909828 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5rqvl/crc-debug-cf8gn" Jan 26 17:40:59 crc kubenswrapper[4896]: I0126 17:40:59.950459 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5rqvl/crc-debug-cf8gn"] Jan 26 17:40:59 crc kubenswrapper[4896]: I0126 17:40:59.961671 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5rqvl/crc-debug-cf8gn"] Jan 26 17:41:00 crc kubenswrapper[4896]: I0126 17:41:00.064017 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bb7bce1e-d872-4187-bd4a-0b8439274962-host\") pod \"bb7bce1e-d872-4187-bd4a-0b8439274962\" (UID: \"bb7bce1e-d872-4187-bd4a-0b8439274962\") " Jan 26 17:41:00 crc kubenswrapper[4896]: I0126 17:41:00.064194 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmrn7\" (UniqueName: \"kubernetes.io/projected/bb7bce1e-d872-4187-bd4a-0b8439274962-kube-api-access-vmrn7\") pod 
\"bb7bce1e-d872-4187-bd4a-0b8439274962\" (UID: \"bb7bce1e-d872-4187-bd4a-0b8439274962\") " Jan 26 17:41:00 crc kubenswrapper[4896]: I0126 17:41:00.064979 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb7bce1e-d872-4187-bd4a-0b8439274962-host" (OuterVolumeSpecName: "host") pod "bb7bce1e-d872-4187-bd4a-0b8439274962" (UID: "bb7bce1e-d872-4187-bd4a-0b8439274962"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:41:00 crc kubenswrapper[4896]: I0126 17:41:00.075567 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb7bce1e-d872-4187-bd4a-0b8439274962-kube-api-access-vmrn7" (OuterVolumeSpecName: "kube-api-access-vmrn7") pod "bb7bce1e-d872-4187-bd4a-0b8439274962" (UID: "bb7bce1e-d872-4187-bd4a-0b8439274962"). InnerVolumeSpecName "kube-api-access-vmrn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:41:00 crc kubenswrapper[4896]: I0126 17:41:00.166342 4896 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bb7bce1e-d872-4187-bd4a-0b8439274962-host\") on node \"crc\" DevicePath \"\"" Jan 26 17:41:00 crc kubenswrapper[4896]: I0126 17:41:00.166841 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmrn7\" (UniqueName: \"kubernetes.io/projected/bb7bce1e-d872-4187-bd4a-0b8439274962-kube-api-access-vmrn7\") on node \"crc\" DevicePath \"\"" Jan 26 17:41:00 crc kubenswrapper[4896]: I0126 17:41:00.770138 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5rqvl/crc-debug-cf8gn" Jan 26 17:41:00 crc kubenswrapper[4896]: I0126 17:41:00.782342 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb7bce1e-d872-4187-bd4a-0b8439274962" path="/var/lib/kubelet/pods/bb7bce1e-d872-4187-bd4a-0b8439274962/volumes" Jan 26 17:41:00 crc kubenswrapper[4896]: I0126 17:41:00.784540 4896 scope.go:117] "RemoveContainer" containerID="5999f4ac3f3b5df95d62f9049dedbba4921bd8d43502f482d053948d1643e8b4" Jan 26 17:41:01 crc kubenswrapper[4896]: I0126 17:41:01.175089 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5rqvl/crc-debug-s2gfk"] Jan 26 17:41:01 crc kubenswrapper[4896]: E0126 17:41:01.177112 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb7bce1e-d872-4187-bd4a-0b8439274962" containerName="container-00" Jan 26 17:41:01 crc kubenswrapper[4896]: I0126 17:41:01.177453 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb7bce1e-d872-4187-bd4a-0b8439274962" containerName="container-00" Jan 26 17:41:01 crc kubenswrapper[4896]: I0126 17:41:01.211751 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb7bce1e-d872-4187-bd4a-0b8439274962" containerName="container-00" Jan 26 17:41:01 crc kubenswrapper[4896]: I0126 17:41:01.214018 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5rqvl/crc-debug-s2gfk" Jan 26 17:41:01 crc kubenswrapper[4896]: I0126 17:41:01.220638 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-5rqvl"/"default-dockercfg-kjnmf" Jan 26 17:41:01 crc kubenswrapper[4896]: I0126 17:41:01.302507 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/09e7e5dc-0c64-4402-a736-1dce25745c7b-host\") pod \"crc-debug-s2gfk\" (UID: \"09e7e5dc-0c64-4402-a736-1dce25745c7b\") " pod="openshift-must-gather-5rqvl/crc-debug-s2gfk" Jan 26 17:41:01 crc kubenswrapper[4896]: I0126 17:41:01.302569 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67sgk\" (UniqueName: \"kubernetes.io/projected/09e7e5dc-0c64-4402-a736-1dce25745c7b-kube-api-access-67sgk\") pod \"crc-debug-s2gfk\" (UID: \"09e7e5dc-0c64-4402-a736-1dce25745c7b\") " pod="openshift-must-gather-5rqvl/crc-debug-s2gfk" Jan 26 17:41:01 crc kubenswrapper[4896]: I0126 17:41:01.404414 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/09e7e5dc-0c64-4402-a736-1dce25745c7b-host\") pod \"crc-debug-s2gfk\" (UID: \"09e7e5dc-0c64-4402-a736-1dce25745c7b\") " pod="openshift-must-gather-5rqvl/crc-debug-s2gfk" Jan 26 17:41:01 crc kubenswrapper[4896]: I0126 17:41:01.404472 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67sgk\" (UniqueName: \"kubernetes.io/projected/09e7e5dc-0c64-4402-a736-1dce25745c7b-kube-api-access-67sgk\") pod \"crc-debug-s2gfk\" (UID: \"09e7e5dc-0c64-4402-a736-1dce25745c7b\") " pod="openshift-must-gather-5rqvl/crc-debug-s2gfk" Jan 26 17:41:01 crc kubenswrapper[4896]: I0126 17:41:01.405000 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/09e7e5dc-0c64-4402-a736-1dce25745c7b-host\") pod \"crc-debug-s2gfk\" (UID: \"09e7e5dc-0c64-4402-a736-1dce25745c7b\") " pod="openshift-must-gather-5rqvl/crc-debug-s2gfk" Jan 26 17:41:01 crc kubenswrapper[4896]: I0126 17:41:01.424976 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67sgk\" (UniqueName: \"kubernetes.io/projected/09e7e5dc-0c64-4402-a736-1dce25745c7b-kube-api-access-67sgk\") pod \"crc-debug-s2gfk\" (UID: \"09e7e5dc-0c64-4402-a736-1dce25745c7b\") " pod="openshift-must-gather-5rqvl/crc-debug-s2gfk" Jan 26 17:41:01 crc kubenswrapper[4896]: I0126 17:41:01.538951 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5rqvl/crc-debug-s2gfk" Jan 26 17:41:01 crc kubenswrapper[4896]: I0126 17:41:01.784247 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5rqvl/crc-debug-s2gfk" event={"ID":"09e7e5dc-0c64-4402-a736-1dce25745c7b","Type":"ContainerStarted","Data":"969ab87997e8d83f49581b5d7715ccf35b8c451a8b1bfa4e38573b47f0a65c13"} Jan 26 17:41:02 crc kubenswrapper[4896]: I0126 17:41:02.800948 4896 generic.go:334] "Generic (PLEG): container finished" podID="09e7e5dc-0c64-4402-a736-1dce25745c7b" containerID="80f4ebeb240506c2eedce506ff7215c95ca86b963932dd3af57c415e0a784ae5" exitCode=0 Jan 26 17:41:02 crc kubenswrapper[4896]: I0126 17:41:02.801009 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5rqvl/crc-debug-s2gfk" event={"ID":"09e7e5dc-0c64-4402-a736-1dce25745c7b","Type":"ContainerDied","Data":"80f4ebeb240506c2eedce506ff7215c95ca86b963932dd3af57c415e0a784ae5"} Jan 26 17:41:03 crc kubenswrapper[4896]: I0126 17:41:03.997748 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5rqvl/crc-debug-s2gfk" Jan 26 17:41:04 crc kubenswrapper[4896]: I0126 17:41:04.085629 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/09e7e5dc-0c64-4402-a736-1dce25745c7b-host\") pod \"09e7e5dc-0c64-4402-a736-1dce25745c7b\" (UID: \"09e7e5dc-0c64-4402-a736-1dce25745c7b\") " Jan 26 17:41:04 crc kubenswrapper[4896]: I0126 17:41:04.085719 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67sgk\" (UniqueName: \"kubernetes.io/projected/09e7e5dc-0c64-4402-a736-1dce25745c7b-kube-api-access-67sgk\") pod \"09e7e5dc-0c64-4402-a736-1dce25745c7b\" (UID: \"09e7e5dc-0c64-4402-a736-1dce25745c7b\") " Jan 26 17:41:04 crc kubenswrapper[4896]: I0126 17:41:04.086233 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09e7e5dc-0c64-4402-a736-1dce25745c7b-host" (OuterVolumeSpecName: "host") pod "09e7e5dc-0c64-4402-a736-1dce25745c7b" (UID: "09e7e5dc-0c64-4402-a736-1dce25745c7b"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:41:04 crc kubenswrapper[4896]: I0126 17:41:04.086696 4896 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/09e7e5dc-0c64-4402-a736-1dce25745c7b-host\") on node \"crc\" DevicePath \"\"" Jan 26 17:41:04 crc kubenswrapper[4896]: I0126 17:41:04.093256 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09e7e5dc-0c64-4402-a736-1dce25745c7b-kube-api-access-67sgk" (OuterVolumeSpecName: "kube-api-access-67sgk") pod "09e7e5dc-0c64-4402-a736-1dce25745c7b" (UID: "09e7e5dc-0c64-4402-a736-1dce25745c7b"). InnerVolumeSpecName "kube-api-access-67sgk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:41:04 crc kubenswrapper[4896]: I0126 17:41:04.188484 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67sgk\" (UniqueName: \"kubernetes.io/projected/09e7e5dc-0c64-4402-a736-1dce25745c7b-kube-api-access-67sgk\") on node \"crc\" DevicePath \"\"" Jan 26 17:41:04 crc kubenswrapper[4896]: I0126 17:41:04.829808 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5rqvl/crc-debug-s2gfk" event={"ID":"09e7e5dc-0c64-4402-a736-1dce25745c7b","Type":"ContainerDied","Data":"969ab87997e8d83f49581b5d7715ccf35b8c451a8b1bfa4e38573b47f0a65c13"} Jan 26 17:41:04 crc kubenswrapper[4896]: I0126 17:41:04.830161 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5rqvl/crc-debug-s2gfk" Jan 26 17:41:04 crc kubenswrapper[4896]: I0126 17:41:04.830187 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="969ab87997e8d83f49581b5d7715ccf35b8c451a8b1bfa4e38573b47f0a65c13" Jan 26 17:41:05 crc kubenswrapper[4896]: I0126 17:41:05.432081 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5rqvl/crc-debug-s2gfk"] Jan 26 17:41:05 crc kubenswrapper[4896]: I0126 17:41:05.444485 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5rqvl/crc-debug-s2gfk"] Jan 26 17:41:06 crc kubenswrapper[4896]: I0126 17:41:06.590458 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5rqvl/crc-debug-6tzl8"] Jan 26 17:41:06 crc kubenswrapper[4896]: E0126 17:41:06.591306 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09e7e5dc-0c64-4402-a736-1dce25745c7b" containerName="container-00" Jan 26 17:41:06 crc kubenswrapper[4896]: I0126 17:41:06.591323 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="09e7e5dc-0c64-4402-a736-1dce25745c7b" containerName="container-00" Jan 26 17:41:06 crc 
kubenswrapper[4896]: I0126 17:41:06.591713 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="09e7e5dc-0c64-4402-a736-1dce25745c7b" containerName="container-00" Jan 26 17:41:06 crc kubenswrapper[4896]: I0126 17:41:06.592681 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5rqvl/crc-debug-6tzl8" Jan 26 17:41:06 crc kubenswrapper[4896]: I0126 17:41:06.594502 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-5rqvl"/"default-dockercfg-kjnmf" Jan 26 17:41:06 crc kubenswrapper[4896]: I0126 17:41:06.648789 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5c29ee83-ba98-419b-8898-e6062394f5aa-host\") pod \"crc-debug-6tzl8\" (UID: \"5c29ee83-ba98-419b-8898-e6062394f5aa\") " pod="openshift-must-gather-5rqvl/crc-debug-6tzl8" Jan 26 17:41:06 crc kubenswrapper[4896]: I0126 17:41:06.648944 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rn66\" (UniqueName: \"kubernetes.io/projected/5c29ee83-ba98-419b-8898-e6062394f5aa-kube-api-access-7rn66\") pod \"crc-debug-6tzl8\" (UID: \"5c29ee83-ba98-419b-8898-e6062394f5aa\") " pod="openshift-must-gather-5rqvl/crc-debug-6tzl8" Jan 26 17:41:06 crc kubenswrapper[4896]: I0126 17:41:06.751832 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5c29ee83-ba98-419b-8898-e6062394f5aa-host\") pod \"crc-debug-6tzl8\" (UID: \"5c29ee83-ba98-419b-8898-e6062394f5aa\") " pod="openshift-must-gather-5rqvl/crc-debug-6tzl8" Jan 26 17:41:06 crc kubenswrapper[4896]: I0126 17:41:06.751993 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rn66\" (UniqueName: \"kubernetes.io/projected/5c29ee83-ba98-419b-8898-e6062394f5aa-kube-api-access-7rn66\") pod 
\"crc-debug-6tzl8\" (UID: \"5c29ee83-ba98-419b-8898-e6062394f5aa\") " pod="openshift-must-gather-5rqvl/crc-debug-6tzl8" Jan 26 17:41:06 crc kubenswrapper[4896]: I0126 17:41:06.752019 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5c29ee83-ba98-419b-8898-e6062394f5aa-host\") pod \"crc-debug-6tzl8\" (UID: \"5c29ee83-ba98-419b-8898-e6062394f5aa\") " pod="openshift-must-gather-5rqvl/crc-debug-6tzl8" Jan 26 17:41:06 crc kubenswrapper[4896]: I0126 17:41:06.761016 4896 scope.go:117] "RemoveContainer" containerID="2b2d87da80f85568d27958abdc15695c3a62bbace342b5d1dfaf284f7e6a5bca" Jan 26 17:41:06 crc kubenswrapper[4896]: E0126 17:41:06.761413 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:41:06 crc kubenswrapper[4896]: I0126 17:41:06.777896 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09e7e5dc-0c64-4402-a736-1dce25745c7b" path="/var/lib/kubelet/pods/09e7e5dc-0c64-4402-a736-1dce25745c7b/volumes" Jan 26 17:41:06 crc kubenswrapper[4896]: I0126 17:41:06.788322 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rn66\" (UniqueName: \"kubernetes.io/projected/5c29ee83-ba98-419b-8898-e6062394f5aa-kube-api-access-7rn66\") pod \"crc-debug-6tzl8\" (UID: \"5c29ee83-ba98-419b-8898-e6062394f5aa\") " pod="openshift-must-gather-5rqvl/crc-debug-6tzl8" Jan 26 17:41:06 crc kubenswrapper[4896]: I0126 17:41:06.911977 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5rqvl/crc-debug-6tzl8" Jan 26 17:41:07 crc kubenswrapper[4896]: I0126 17:41:07.873765 4896 generic.go:334] "Generic (PLEG): container finished" podID="5c29ee83-ba98-419b-8898-e6062394f5aa" containerID="a77363687d7d869b17c491a850baa3e08e440545e01e3312bb9d266aa7caa739" exitCode=0 Jan 26 17:41:07 crc kubenswrapper[4896]: I0126 17:41:07.873919 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5rqvl/crc-debug-6tzl8" event={"ID":"5c29ee83-ba98-419b-8898-e6062394f5aa","Type":"ContainerDied","Data":"a77363687d7d869b17c491a850baa3e08e440545e01e3312bb9d266aa7caa739"} Jan 26 17:41:07 crc kubenswrapper[4896]: I0126 17:41:07.874223 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5rqvl/crc-debug-6tzl8" event={"ID":"5c29ee83-ba98-419b-8898-e6062394f5aa","Type":"ContainerStarted","Data":"52ac90fd531ff828768bb715288a51312ae337a7b104e76ac0ad86ed24cceca2"} Jan 26 17:41:07 crc kubenswrapper[4896]: I0126 17:41:07.960222 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5rqvl/crc-debug-6tzl8"] Jan 26 17:41:07 crc kubenswrapper[4896]: I0126 17:41:07.973586 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5rqvl/crc-debug-6tzl8"] Jan 26 17:41:09 crc kubenswrapper[4896]: I0126 17:41:09.051188 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5rqvl/crc-debug-6tzl8" Jan 26 17:41:09 crc kubenswrapper[4896]: I0126 17:41:09.129138 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5c29ee83-ba98-419b-8898-e6062394f5aa-host\") pod \"5c29ee83-ba98-419b-8898-e6062394f5aa\" (UID: \"5c29ee83-ba98-419b-8898-e6062394f5aa\") " Jan 26 17:41:09 crc kubenswrapper[4896]: I0126 17:41:09.129297 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c29ee83-ba98-419b-8898-e6062394f5aa-host" (OuterVolumeSpecName: "host") pod "5c29ee83-ba98-419b-8898-e6062394f5aa" (UID: "5c29ee83-ba98-419b-8898-e6062394f5aa"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:41:09 crc kubenswrapper[4896]: I0126 17:41:09.129358 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rn66\" (UniqueName: \"kubernetes.io/projected/5c29ee83-ba98-419b-8898-e6062394f5aa-kube-api-access-7rn66\") pod \"5c29ee83-ba98-419b-8898-e6062394f5aa\" (UID: \"5c29ee83-ba98-419b-8898-e6062394f5aa\") " Jan 26 17:41:09 crc kubenswrapper[4896]: I0126 17:41:09.130757 4896 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5c29ee83-ba98-419b-8898-e6062394f5aa-host\") on node \"crc\" DevicePath \"\"" Jan 26 17:41:09 crc kubenswrapper[4896]: I0126 17:41:09.142430 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c29ee83-ba98-419b-8898-e6062394f5aa-kube-api-access-7rn66" (OuterVolumeSpecName: "kube-api-access-7rn66") pod "5c29ee83-ba98-419b-8898-e6062394f5aa" (UID: "5c29ee83-ba98-419b-8898-e6062394f5aa"). InnerVolumeSpecName "kube-api-access-7rn66". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:41:09 crc kubenswrapper[4896]: I0126 17:41:09.232662 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rn66\" (UniqueName: \"kubernetes.io/projected/5c29ee83-ba98-419b-8898-e6062394f5aa-kube-api-access-7rn66\") on node \"crc\" DevicePath \"\"" Jan 26 17:41:09 crc kubenswrapper[4896]: I0126 17:41:09.898235 4896 scope.go:117] "RemoveContainer" containerID="a77363687d7d869b17c491a850baa3e08e440545e01e3312bb9d266aa7caa739" Jan 26 17:41:09 crc kubenswrapper[4896]: I0126 17:41:09.898283 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5rqvl/crc-debug-6tzl8" Jan 26 17:41:10 crc kubenswrapper[4896]: I0126 17:41:10.772739 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c29ee83-ba98-419b-8898-e6062394f5aa" path="/var/lib/kubelet/pods/5c29ee83-ba98-419b-8898-e6062394f5aa/volumes" Jan 26 17:41:21 crc kubenswrapper[4896]: I0126 17:41:21.761464 4896 scope.go:117] "RemoveContainer" containerID="2b2d87da80f85568d27958abdc15695c3a62bbace342b5d1dfaf284f7e6a5bca" Jan 26 17:41:21 crc kubenswrapper[4896]: E0126 17:41:21.762568 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:41:33 crc kubenswrapper[4896]: I0126 17:41:33.760037 4896 scope.go:117] "RemoveContainer" containerID="2b2d87da80f85568d27958abdc15695c3a62bbace342b5d1dfaf284f7e6a5bca" Jan 26 17:41:33 crc kubenswrapper[4896]: E0126 17:41:33.761019 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:41:35 crc kubenswrapper[4896]: I0126 17:41:35.589126 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_8adb8fd6-0c45-4952-9f55-64937ba92998/aodh-api/0.log" Jan 26 17:41:35 crc kubenswrapper[4896]: I0126 17:41:35.889718 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_8adb8fd6-0c45-4952-9f55-64937ba92998/aodh-evaluator/0.log" Jan 26 17:41:35 crc kubenswrapper[4896]: I0126 17:41:35.912334 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_8adb8fd6-0c45-4952-9f55-64937ba92998/aodh-listener/0.log" Jan 26 17:41:35 crc kubenswrapper[4896]: I0126 17:41:35.959387 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_8adb8fd6-0c45-4952-9f55-64937ba92998/aodh-notifier/0.log" Jan 26 17:41:36 crc kubenswrapper[4896]: I0126 17:41:36.142081 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-b66574cb6-d2c7c_4207f20a-c3a4-42fe-a6d2-09314620e63e/barbican-api/0.log" Jan 26 17:41:36 crc kubenswrapper[4896]: I0126 17:41:36.253320 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-b66574cb6-d2c7c_4207f20a-c3a4-42fe-a6d2-09314620e63e/barbican-api-log/0.log" Jan 26 17:41:36 crc kubenswrapper[4896]: I0126 17:41:36.354603 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-566d4946fd-fbmrv_640faf58-91b8-46d1-9956-60383f61abc2/barbican-keystone-listener/0.log" Jan 26 17:41:36 crc kubenswrapper[4896]: I0126 17:41:36.542776 4896 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-keystone-listener-566d4946fd-fbmrv_640faf58-91b8-46d1-9956-60383f61abc2/barbican-keystone-listener-log/0.log" Jan 26 17:41:36 crc kubenswrapper[4896]: I0126 17:41:36.612299 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-bb7d4b5f9-6jmz5_4ab5e517-a751-433e-9503-db39609aa439/barbican-worker/0.log" Jan 26 17:41:36 crc kubenswrapper[4896]: I0126 17:41:36.699809 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-bb7d4b5f9-6jmz5_4ab5e517-a751-433e-9503-db39609aa439/barbican-worker-log/0.log" Jan 26 17:41:36 crc kubenswrapper[4896]: I0126 17:41:36.932712 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-86p4s_53e07773-3354-4826-bcf0-41909ecb1a20/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 17:41:37 crc kubenswrapper[4896]: I0126 17:41:37.056345 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f20afffa-3480-40b7-a7b8-116bccafaffb/ceilometer-central-agent/1.log" Jan 26 17:41:37 crc kubenswrapper[4896]: I0126 17:41:37.530686 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f20afffa-3480-40b7-a7b8-116bccafaffb/proxy-httpd/0.log" Jan 26 17:41:37 crc kubenswrapper[4896]: I0126 17:41:37.584731 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f20afffa-3480-40b7-a7b8-116bccafaffb/sg-core/0.log" Jan 26 17:41:37 crc kubenswrapper[4896]: I0126 17:41:37.588370 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f20afffa-3480-40b7-a7b8-116bccafaffb/ceilometer-notification-agent/0.log" Jan 26 17:41:37 crc kubenswrapper[4896]: I0126 17:41:37.626524 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f20afffa-3480-40b7-a7b8-116bccafaffb/ceilometer-central-agent/0.log" Jan 26 17:41:37 crc 
kubenswrapper[4896]: I0126 17:41:37.853268 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_8217b6eb-3002-43a0-a26e-55003835c995/cinder-api-log/0.log" Jan 26 17:41:37 crc kubenswrapper[4896]: I0126 17:41:37.929378 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_8217b6eb-3002-43a0-a26e-55003835c995/cinder-api/0.log" Jan 26 17:41:38 crc kubenswrapper[4896]: I0126 17:41:38.163295 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_d911d99d-a84d-4aa0-ae95-8a840d2822ce/cinder-scheduler/0.log" Jan 26 17:41:38 crc kubenswrapper[4896]: I0126 17:41:38.238477 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_d911d99d-a84d-4aa0-ae95-8a840d2822ce/probe/0.log" Jan 26 17:41:38 crc kubenswrapper[4896]: I0126 17:41:38.266335 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-rjz7r_b80b385e-edbf-441a-af52-a5a03f29d78c/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 17:41:38 crc kubenswrapper[4896]: I0126 17:41:38.538364 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-gp6cs_b5d99662-063d-4731-8c5d-a805dc69e348/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 17:41:38 crc kubenswrapper[4896]: I0126 17:41:38.568967 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-p48h9_50fc9d14-ddc2-4347-a52c-498b02787bb7/init/0.log" Jan 26 17:41:38 crc kubenswrapper[4896]: I0126 17:41:38.870712 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-n8gq7_12856652-2e85-477a-aea9-3a0c04fd7b52/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 17:41:38 crc kubenswrapper[4896]: I0126 17:41:38.927848 4896 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-p48h9_50fc9d14-ddc2-4347-a52c-498b02787bb7/init/0.log" Jan 26 17:41:39 crc kubenswrapper[4896]: I0126 17:41:39.009851 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-p48h9_50fc9d14-ddc2-4347-a52c-498b02787bb7/dnsmasq-dns/0.log" Jan 26 17:41:39 crc kubenswrapper[4896]: I0126 17:41:39.218463 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_4c1c45d1-a81c-4b0d-b5ba-cac9e8704701/glance-httpd/0.log" Jan 26 17:41:39 crc kubenswrapper[4896]: I0126 17:41:39.241516 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_4c1c45d1-a81c-4b0d-b5ba-cac9e8704701/glance-log/0.log" Jan 26 17:41:39 crc kubenswrapper[4896]: I0126 17:41:39.513828 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_08bc5795-ac12-43ba-9d58-d4a738f0c4ed/glance-log/0.log" Jan 26 17:41:39 crc kubenswrapper[4896]: I0126 17:41:39.557649 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_08bc5795-ac12-43ba-9d58-d4a738f0c4ed/glance-httpd/0.log" Jan 26 17:41:40 crc kubenswrapper[4896]: I0126 17:41:40.348734 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-59tbx_bd3d33aa-67c7-4ba6-93ec-5ba14b9b593a/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 17:41:40 crc kubenswrapper[4896]: I0126 17:41:40.554316 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-677c6d4d55-s7fl2_783f20d6-aa0a-4ecf-9dad-d33991c40591/heat-engine/0.log" Jan 26 17:41:40 crc kubenswrapper[4896]: I0126 17:41:40.676842 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-59b9f8644d-s5wzv_ec3b067e-a59e-43db-b8f6-435fc273b976/heat-api/0.log" Jan 26 17:41:40 crc kubenswrapper[4896]: 
I0126 17:41:40.695901 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-2w2vj_69f568f0-3460-4b8e-8ffa-1f73312e7696/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 17:41:41 crc kubenswrapper[4896]: I0126 17:41:41.008181 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29490721-dbd8t_5a30fdc4-b069-4bdf-b901-8f382050037b/keystone-cron/0.log" Jan 26 17:41:41 crc kubenswrapper[4896]: I0126 17:41:41.310621 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-5dfbd8bb4f-kl472_6184235f-aaa1-47cc-bc3b-a0a30698cc01/heat-cfnapi/0.log" Jan 26 17:41:41 crc kubenswrapper[4896]: I0126 17:41:41.411706 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29490781-5xj4d_7d0c2d03-9a63-45ae-b70f-bca3910ddb9b/keystone-cron/0.log" Jan 26 17:41:41 crc kubenswrapper[4896]: I0126 17:41:41.666691 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-69569b65bc-qdnx9_75348c37-fb63-49c8-95d3-b666eb3d1086/keystone-api/0.log" Jan 26 17:41:41 crc kubenswrapper[4896]: I0126 17:41:41.842127 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_eb645182-c3d8-4201-ab95-a2f26151c99f/kube-state-metrics/0.log" Jan 26 17:41:41 crc kubenswrapper[4896]: I0126 17:41:41.907059 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-5q8st_019cb762-55d7-4c9d-a425-fde89665ac76/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 17:41:42 crc kubenswrapper[4896]: I0126 17:41:42.003163 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-mfqsf_290731fa-7eff-41d9-bba9-b733370ac45b/logging-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 17:41:42 crc kubenswrapper[4896]: I0126 17:41:42.228803 4896 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_mysqld-exporter-0_42644605-6128-40f7-9fd7-84741e8b0ea9/mysqld-exporter/0.log" Jan 26 17:41:42 crc kubenswrapper[4896]: I0126 17:41:42.614076 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-54d4db4449-vlmh7_cf40a9b0-1e7e-43c9-afa9-571170cc8285/neutron-httpd/0.log" Jan 26 17:41:42 crc kubenswrapper[4896]: I0126 17:41:42.726360 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-nmxgm_619ecbfe-4f75-416b-bbb1-01b8470e5115/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 17:41:42 crc kubenswrapper[4896]: I0126 17:41:42.731609 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-54d4db4449-vlmh7_cf40a9b0-1e7e-43c9-afa9-571170cc8285/neutron-api/0.log" Jan 26 17:41:43 crc kubenswrapper[4896]: I0126 17:41:43.276614 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_d2973b9b-99d5-4b8e-890e-8eb577ac52b8/nova-cell0-conductor-conductor/0.log" Jan 26 17:41:43 crc kubenswrapper[4896]: I0126 17:41:43.687721 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_def88cd3-6f9d-4df5-831c-ece4a17801ab/nova-cell1-conductor-conductor/0.log" Jan 26 17:41:43 crc kubenswrapper[4896]: I0126 17:41:43.966928 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_4b74aebe-ad4e-4eca-b3fb-53194ebf847a/nova-api-log/0.log" Jan 26 17:41:44 crc kubenswrapper[4896]: I0126 17:41:44.094017 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_b9b06167-680e-4c53-9611-d0f91a737d9e/nova-cell1-novncproxy-novncproxy/0.log" Jan 26 17:41:44 crc kubenswrapper[4896]: I0126 17:41:44.235133 4896 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-wmb6h_c768c99c-1655-4c81-9eea-6676fc125f3d/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 17:41:44 crc kubenswrapper[4896]: I0126 17:41:44.405226 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d970e90b-294e-47eb-81eb-a5203390a465/nova-metadata-log/0.log" Jan 26 17:41:44 crc kubenswrapper[4896]: I0126 17:41:44.536997 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_4b74aebe-ad4e-4eca-b3fb-53194ebf847a/nova-api-api/0.log" Jan 26 17:41:45 crc kubenswrapper[4896]: I0126 17:41:45.076495 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_eee850ba-ed53-45e1-9ae2-ead8cdf89877/nova-scheduler-scheduler/0.log" Jan 26 17:41:45 crc kubenswrapper[4896]: I0126 17:41:45.095389 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_7a3e4fe3-b61e-4200-acf9-9ba170d68402/mysql-bootstrap/0.log" Jan 26 17:41:45 crc kubenswrapper[4896]: I0126 17:41:45.240191 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_7a3e4fe3-b61e-4200-acf9-9ba170d68402/mysql-bootstrap/0.log" Jan 26 17:41:45 crc kubenswrapper[4896]: I0126 17:41:45.391439 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_7a3e4fe3-b61e-4200-acf9-9ba170d68402/galera/0.log" Jan 26 17:41:45 crc kubenswrapper[4896]: I0126 17:41:45.510435 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_78b988fb-f698-4b52-8771-2599b5441229/mysql-bootstrap/0.log" Jan 26 17:41:45 crc kubenswrapper[4896]: I0126 17:41:45.847339 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_78b988fb-f698-4b52-8771-2599b5441229/mysql-bootstrap/0.log" Jan 26 17:41:46 crc kubenswrapper[4896]: I0126 17:41:46.026030 4896 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_openstack-galera-0_78b988fb-f698-4b52-8771-2599b5441229/galera/0.log" Jan 26 17:41:46 crc kubenswrapper[4896]: I0126 17:41:46.169532 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_5809e3c3-ef95-4db2-a2eb-16ca58b2f3e3/openstackclient/0.log" Jan 26 17:41:46 crc kubenswrapper[4896]: I0126 17:41:46.288079 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-c9bzf_f24a2e9c-671f-48a5-a5f5-55b864b17d19/ovn-controller/0.log" Jan 26 17:41:46 crc kubenswrapper[4896]: I0126 17:41:46.564464 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-5bgbx_f6152e32-f156-409d-99fb-13e07813a47e/openstack-network-exporter/0.log" Jan 26 17:41:46 crc kubenswrapper[4896]: I0126 17:41:46.747000 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-hlm9m_97576f52-b567-4334-844b-bd9ae73a82b7/ovsdb-server-init/0.log" Jan 26 17:41:46 crc kubenswrapper[4896]: I0126 17:41:46.878878 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-hlm9m_97576f52-b567-4334-844b-bd9ae73a82b7/ovsdb-server-init/0.log" Jan 26 17:41:46 crc kubenswrapper[4896]: I0126 17:41:46.989518 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-hlm9m_97576f52-b567-4334-844b-bd9ae73a82b7/ovs-vswitchd/0.log" Jan 26 17:41:47 crc kubenswrapper[4896]: I0126 17:41:47.004629 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-hlm9m_97576f52-b567-4334-844b-bd9ae73a82b7/ovsdb-server/0.log" Jan 26 17:41:47 crc kubenswrapper[4896]: I0126 17:41:47.337145 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-8mjp2_7f0fea8f-566a-45a2-99fd-89c389143121/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 17:41:47 crc kubenswrapper[4896]: I0126 17:41:47.450758 4896 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_7600dca1-3435-4fcf-aab5-54c683d3ac33/openstack-network-exporter/0.log" Jan 26 17:41:47 crc kubenswrapper[4896]: I0126 17:41:47.524544 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_7600dca1-3435-4fcf-aab5-54c683d3ac33/ovn-northd/0.log" Jan 26 17:41:47 crc kubenswrapper[4896]: I0126 17:41:47.834235 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d970e90b-294e-47eb-81eb-a5203390a465/nova-metadata-metadata/0.log" Jan 26 17:41:47 crc kubenswrapper[4896]: I0126 17:41:47.868888 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_56852f16-c116-4ec4-b22f-16952ac363b3/openstack-network-exporter/0.log" Jan 26 17:41:47 crc kubenswrapper[4896]: I0126 17:41:47.932371 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_56852f16-c116-4ec4-b22f-16952ac363b3/ovsdbserver-nb/0.log" Jan 26 17:41:48 crc kubenswrapper[4896]: I0126 17:41:48.082954 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_97b874a0-24c5-4c30-ae9b-33b380c5a99b/openstack-network-exporter/0.log" Jan 26 17:41:48 crc kubenswrapper[4896]: I0126 17:41:48.119702 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_97b874a0-24c5-4c30-ae9b-33b380c5a99b/ovsdbserver-sb/0.log" Jan 26 17:41:48 crc kubenswrapper[4896]: I0126 17:41:48.368600 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-77d5697764-wvv6n_6133f02e-6901-41cd-ac62-9450747a6d98/placement-api/0.log" Jan 26 17:41:48 crc kubenswrapper[4896]: I0126 17:41:48.600204 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-77d5697764-wvv6n_6133f02e-6901-41cd-ac62-9450747a6d98/placement-log/0.log" Jan 26 17:41:48 crc kubenswrapper[4896]: I0126 17:41:48.608912 4896 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_prometheus-metric-storage-0_15b95f90-b75a-43ab-9c54-acd4c3e658ab/init-config-reloader/0.log" Jan 26 17:41:48 crc kubenswrapper[4896]: I0126 17:41:48.753434 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_15b95f90-b75a-43ab-9c54-acd4c3e658ab/init-config-reloader/0.log" Jan 26 17:41:48 crc kubenswrapper[4896]: I0126 17:41:48.759630 4896 scope.go:117] "RemoveContainer" containerID="2b2d87da80f85568d27958abdc15695c3a62bbace342b5d1dfaf284f7e6a5bca" Jan 26 17:41:48 crc kubenswrapper[4896]: E0126 17:41:48.759973 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:41:48 crc kubenswrapper[4896]: I0126 17:41:48.780177 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_15b95f90-b75a-43ab-9c54-acd4c3e658ab/config-reloader/0.log" Jan 26 17:41:48 crc kubenswrapper[4896]: I0126 17:41:48.823995 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_15b95f90-b75a-43ab-9c54-acd4c3e658ab/thanos-sidecar/0.log" Jan 26 17:41:48 crc kubenswrapper[4896]: I0126 17:41:48.832887 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_15b95f90-b75a-43ab-9c54-acd4c3e658ab/prometheus/0.log" Jan 26 17:41:49 crc kubenswrapper[4896]: I0126 17:41:49.046744 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0141a12a-f7a3-47cc-b0ac-7853a684fcf8/setup-container/0.log" Jan 26 17:41:49 crc kubenswrapper[4896]: I0126 17:41:49.347334 4896 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0141a12a-f7a3-47cc-b0ac-7853a684fcf8/setup-container/0.log" Jan 26 17:41:49 crc kubenswrapper[4896]: I0126 17:41:49.359101 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_13e6d746-38b7-4bbe-b01c-33ebe89f4195/setup-container/0.log" Jan 26 17:41:49 crc kubenswrapper[4896]: I0126 17:41:49.398730 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0141a12a-f7a3-47cc-b0ac-7853a684fcf8/rabbitmq/0.log" Jan 26 17:41:49 crc kubenswrapper[4896]: I0126 17:41:49.592489 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_13e6d746-38b7-4bbe-b01c-33ebe89f4195/setup-container/0.log" Jan 26 17:41:49 crc kubenswrapper[4896]: I0126 17:41:49.640717 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_6f7a7630-9c4f-4ff5-94c5-faa1cef560d0/setup-container/0.log" Jan 26 17:41:49 crc kubenswrapper[4896]: I0126 17:41:49.777373 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_13e6d746-38b7-4bbe-b01c-33ebe89f4195/rabbitmq/0.log" Jan 26 17:41:49 crc kubenswrapper[4896]: I0126 17:41:49.854065 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_6f7a7630-9c4f-4ff5-94c5-faa1cef560d0/setup-container/0.log" Jan 26 17:41:50 crc kubenswrapper[4896]: I0126 17:41:50.049421 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_3cb4dd6a-0deb-4730-8b5d-590b8981433b/setup-container/0.log" Jan 26 17:41:50 crc kubenswrapper[4896]: I0126 17:41:50.057096 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_6f7a7630-9c4f-4ff5-94c5-faa1cef560d0/rabbitmq/0.log" Jan 26 17:41:50 crc kubenswrapper[4896]: I0126 17:41:50.170298 4896 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-2_3cb4dd6a-0deb-4730-8b5d-590b8981433b/setup-container/0.log" Jan 26 17:41:50 crc kubenswrapper[4896]: I0126 17:41:50.408896 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-ktls4_92b2a894-7665-4c40-b5b2-94e4387b95c5/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 17:41:50 crc kubenswrapper[4896]: I0126 17:41:50.455685 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_3cb4dd6a-0deb-4730-8b5d-590b8981433b/rabbitmq/0.log" Jan 26 17:41:50 crc kubenswrapper[4896]: I0126 17:41:50.564035 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-vrhqm_8d223a17-39b6-4e7b-b09b-ff398113a048/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 17:41:50 crc kubenswrapper[4896]: I0126 17:41:50.684150 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-zzzkx_131acfa1-5305-42d1-9c00-6f0193f795a8/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 17:41:50 crc kubenswrapper[4896]: I0126 17:41:50.819617 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-f9vsx_f135c2c3-6301-42b2-a4f6-134b93bd65be/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 17:41:50 crc kubenswrapper[4896]: I0126 17:41:50.994491 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-sf5rj_682cfce4-854f-4f10-99fd-a92236ede1fb/ssh-known-hosts-edpm-deployment/0.log" Jan 26 17:41:51 crc kubenswrapper[4896]: I0126 17:41:51.305397 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-686bd9bf85-wbdcn_c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad/proxy-server/0.log" Jan 26 17:41:51 crc kubenswrapper[4896]: I0126 17:41:51.447334 4896 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_swift-proxy-686bd9bf85-wbdcn_c3aa90f7-6de5-4c6d-b33b-df1b237bd0ad/proxy-httpd/0.log" Jan 26 17:41:51 crc kubenswrapper[4896]: I0126 17:41:51.449545 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-bbppj_ff5abeb5-5a6e-48b2-920f-fb1a55c83023/swift-ring-rebalance/0.log" Jan 26 17:41:51 crc kubenswrapper[4896]: I0126 17:41:51.590074 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_56f3d7e7-114a-4790-ac11-1d5d191bdf40/account-auditor/0.log" Jan 26 17:41:51 crc kubenswrapper[4896]: I0126 17:41:51.690499 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_56f3d7e7-114a-4790-ac11-1d5d191bdf40/account-reaper/0.log" Jan 26 17:41:51 crc kubenswrapper[4896]: I0126 17:41:51.744634 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_56f3d7e7-114a-4790-ac11-1d5d191bdf40/account-replicator/0.log" Jan 26 17:41:51 crc kubenswrapper[4896]: I0126 17:41:51.848347 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_56f3d7e7-114a-4790-ac11-1d5d191bdf40/account-server/0.log" Jan 26 17:41:51 crc kubenswrapper[4896]: I0126 17:41:51.850014 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_56f3d7e7-114a-4790-ac11-1d5d191bdf40/container-auditor/0.log" Jan 26 17:41:51 crc kubenswrapper[4896]: I0126 17:41:51.990998 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_56f3d7e7-114a-4790-ac11-1d5d191bdf40/container-replicator/0.log" Jan 26 17:41:52 crc kubenswrapper[4896]: I0126 17:41:52.024057 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_56f3d7e7-114a-4790-ac11-1d5d191bdf40/container-server/0.log" Jan 26 17:41:52 crc kubenswrapper[4896]: I0126 17:41:52.174098 4896 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_56f3d7e7-114a-4790-ac11-1d5d191bdf40/container-updater/0.log" Jan 26 17:41:52 crc kubenswrapper[4896]: I0126 17:41:52.225650 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_56f3d7e7-114a-4790-ac11-1d5d191bdf40/object-auditor/0.log" Jan 26 17:41:52 crc kubenswrapper[4896]: I0126 17:41:52.271124 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_56f3d7e7-114a-4790-ac11-1d5d191bdf40/object-expirer/0.log" Jan 26 17:41:52 crc kubenswrapper[4896]: I0126 17:41:52.341215 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_56f3d7e7-114a-4790-ac11-1d5d191bdf40/object-replicator/0.log" Jan 26 17:41:52 crc kubenswrapper[4896]: I0126 17:41:52.411881 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_56f3d7e7-114a-4790-ac11-1d5d191bdf40/object-server/0.log" Jan 26 17:41:52 crc kubenswrapper[4896]: I0126 17:41:52.473934 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_56f3d7e7-114a-4790-ac11-1d5d191bdf40/rsync/0.log" Jan 26 17:41:52 crc kubenswrapper[4896]: I0126 17:41:52.486187 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_56f3d7e7-114a-4790-ac11-1d5d191bdf40/object-updater/0.log" Jan 26 17:41:52 crc kubenswrapper[4896]: I0126 17:41:52.574355 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_56f3d7e7-114a-4790-ac11-1d5d191bdf40/swift-recon-cron/0.log" Jan 26 17:41:52 crc kubenswrapper[4896]: I0126 17:41:52.817175 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-j54lh_328a79d0-c276-4dcb-812b-b2436c4031dc/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 17:41:52 crc kubenswrapper[4896]: I0126 17:41:52.875373 4896 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-rgcbf_a205991d-d9c9-4d4f-b237-9198ac546ae1/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 17:41:53 crc kubenswrapper[4896]: I0126 17:41:53.135790 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_b89004b1-a612-4e95-838c-8bfe2da0ee79/test-operator-logs-container/0.log" Jan 26 17:41:53 crc kubenswrapper[4896]: I0126 17:41:53.366659 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-rwtj7_a5325724-2408-4ea5-a21b-7208a9d8a1c8/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 17:41:53 crc kubenswrapper[4896]: I0126 17:41:53.611309 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_83b2e80c-4c60-4a3e-a9f3-0ce2af747e4f/memcached/0.log" Jan 26 17:41:53 crc kubenswrapper[4896]: I0126 17:41:53.856277 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_75e8efe4-ddea-48ed-b018-c952f346b635/tempest-tests-tempest-tests-runner/0.log" Jan 26 17:42:00 crc kubenswrapper[4896]: I0126 17:42:00.759656 4896 scope.go:117] "RemoveContainer" containerID="2b2d87da80f85568d27958abdc15695c3a62bbace342b5d1dfaf284f7e6a5bca" Jan 26 17:42:00 crc kubenswrapper[4896]: E0126 17:42:00.761723 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:42:13 crc kubenswrapper[4896]: I0126 17:42:13.760169 4896 scope.go:117] "RemoveContainer" 
containerID="2b2d87da80f85568d27958abdc15695c3a62bbace342b5d1dfaf284f7e6a5bca" Jan 26 17:42:13 crc kubenswrapper[4896]: E0126 17:42:13.761072 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:42:24 crc kubenswrapper[4896]: I0126 17:42:24.709356 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-rp5b4_c44d6ef8-c52f-4a31-8a33-1ee01d7e969a/manager/1.log" Jan 26 17:42:24 crc kubenswrapper[4896]: I0126 17:42:24.764803 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-rp5b4_c44d6ef8-c52f-4a31-8a33-1ee01d7e969a/manager/0.log" Jan 26 17:42:25 crc kubenswrapper[4896]: I0126 17:42:25.010838 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16fszxf8_ad24da91-ff62-4407-91cb-a321d268661e/util/0.log" Jan 26 17:42:25 crc kubenswrapper[4896]: I0126 17:42:25.175561 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16fszxf8_ad24da91-ff62-4407-91cb-a321d268661e/pull/0.log" Jan 26 17:42:25 crc kubenswrapper[4896]: I0126 17:42:25.191676 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16fszxf8_ad24da91-ff62-4407-91cb-a321d268661e/util/0.log" Jan 26 17:42:25 crc kubenswrapper[4896]: I0126 17:42:25.197551 4896 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16fszxf8_ad24da91-ff62-4407-91cb-a321d268661e/pull/0.log" Jan 26 17:42:25 crc kubenswrapper[4896]: I0126 17:42:25.398304 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16fszxf8_ad24da91-ff62-4407-91cb-a321d268661e/util/0.log" Jan 26 17:42:25 crc kubenswrapper[4896]: I0126 17:42:25.407183 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16fszxf8_ad24da91-ff62-4407-91cb-a321d268661e/pull/0.log" Jan 26 17:42:25 crc kubenswrapper[4896]: I0126 17:42:25.430435 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c6c380c61a65ebad077917040ba7792d096c7f4fd0edf29789306bb16fszxf8_ad24da91-ff62-4407-91cb-a321d268661e/extract/0.log" Jan 26 17:42:25 crc kubenswrapper[4896]: I0126 17:42:25.579617 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-7t46g_8c799412-6936-4161-8d4e-244bc94c0d69/manager/1.log" Jan 26 17:42:25 crc kubenswrapper[4896]: I0126 17:42:25.726149 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-6wv5s_16c521f5-6f5f-43e3-a670-9f6ab6312d9c/manager/1.log" Jan 26 17:42:25 crc kubenswrapper[4896]: I0126 17:42:25.755729 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-7t46g_8c799412-6936-4161-8d4e-244bc94c0d69/manager/0.log" Jan 26 17:42:25 crc kubenswrapper[4896]: I0126 17:42:25.883615 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-6wv5s_16c521f5-6f5f-43e3-a670-9f6ab6312d9c/manager/0.log" Jan 26 17:42:25 crc kubenswrapper[4896]: I0126 
17:42:25.951375 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-j92tx_f6fe08af-0b15-4be3-8473-6a983d21ebe3/manager/1.log" Jan 26 17:42:26 crc kubenswrapper[4896]: I0126 17:42:26.060988 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-j92tx_f6fe08af-0b15-4be3-8473-6a983d21ebe3/manager/0.log" Jan 26 17:42:26 crc kubenswrapper[4896]: I0126 17:42:26.187207 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-z7j4w_b3272d78-4dde-4997-9316-24a84c00f4c8/manager/1.log" Jan 26 17:42:26 crc kubenswrapper[4896]: I0126 17:42:26.321246 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-z7j4w_b3272d78-4dde-4997-9316-24a84c00f4c8/manager/0.log" Jan 26 17:42:26 crc kubenswrapper[4896]: I0126 17:42:26.392073 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-jx95g_b0480b36-40e2-426c-a1a8-e02e79fe7a17/manager/1.log" Jan 26 17:42:26 crc kubenswrapper[4896]: I0126 17:42:26.452907 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-jx95g_b0480b36-40e2-426c-a1a8-e02e79fe7a17/manager/0.log" Jan 26 17:42:26 crc kubenswrapper[4896]: I0126 17:42:26.648175 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-49kq4_03cf04a4-606b-44b9-9aee-86e4b0a8a1eb/manager/1.log" Jan 26 17:42:26 crc kubenswrapper[4896]: I0126 17:42:26.763023 4896 scope.go:117] "RemoveContainer" containerID="2b2d87da80f85568d27958abdc15695c3a62bbace342b5d1dfaf284f7e6a5bca" Jan 26 17:42:26 crc kubenswrapper[4896]: E0126 17:42:26.763475 4896 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:42:26 crc kubenswrapper[4896]: I0126 17:42:26.819318 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-948px_1c532b54-34b3-4b51-bbd3-1e3bd39d5958/manager/1.log" Jan 26 17:42:26 crc kubenswrapper[4896]: I0126 17:42:26.936258 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-948px_1c532b54-34b3-4b51-bbd3-1e3bd39d5958/manager/0.log" Jan 26 17:42:27 crc kubenswrapper[4896]: I0126 17:42:27.048668 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-lz2hg_fc8a478d-ccdf-4d2b-b27f-58fde92fd7d4/manager/1.log" Jan 26 17:42:27 crc kubenswrapper[4896]: I0126 17:42:27.098989 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-49kq4_03cf04a4-606b-44b9-9aee-86e4b0a8a1eb/manager/0.log" Jan 26 17:42:27 crc kubenswrapper[4896]: I0126 17:42:27.276590 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-cwcgv_a6a0fae6-65fb-46f8-9b0a-2cbae0665e6d/manager/1.log" Jan 26 17:42:27 crc kubenswrapper[4896]: I0126 17:42:27.281568 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-lz2hg_fc8a478d-ccdf-4d2b-b27f-58fde92fd7d4/manager/0.log" Jan 26 17:42:27 crc kubenswrapper[4896]: I0126 17:42:27.321176 4896 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-cwcgv_a6a0fae6-65fb-46f8-9b0a-2cbae0665e6d/manager/0.log"
Jan 26 17:42:27 crc kubenswrapper[4896]: I0126 17:42:27.447616 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-fz5qh_8ac5298a-429c-47d6-9436-34bd2bd1fdec/manager/1.log"
Jan 26 17:42:27 crc kubenswrapper[4896]: I0126 17:42:27.521112 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-fz5qh_8ac5298a-429c-47d6-9436-34bd2bd1fdec/manager/0.log"
Jan 26 17:42:27 crc kubenswrapper[4896]: I0126 17:42:27.636890 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-lvm6z_3eac11e1-3f7e-467c-b7f7-038d29e23848/manager/1.log"
Jan 26 17:42:27 crc kubenswrapper[4896]: I0126 17:42:27.809857 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-lvm6z_3eac11e1-3f7e-467c-b7f7-038d29e23848/manager/0.log"
Jan 26 17:42:27 crc kubenswrapper[4896]: I0126 17:42:27.879469 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-kvnzb_8e0a37ed-b8af-49ae-9c6c-ed7097f46f3b/manager/1.log"
Jan 26 17:42:27 crc kubenswrapper[4896]: I0126 17:42:27.980914 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-kvnzb_8e0a37ed-b8af-49ae-9c6c-ed7097f46f3b/manager/0.log"
Jan 26 17:42:28 crc kubenswrapper[4896]: I0126 17:42:28.098802 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-s2bwr_61be8fa4-3ad2-4745-88ab-850db16c5707/manager/1.log"
Jan 26 17:42:28 crc kubenswrapper[4896]: I0126 17:42:28.172166 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-s2bwr_61be8fa4-3ad2-4745-88ab-850db16c5707/manager/0.log"
Jan 26 17:42:28 crc kubenswrapper[4896]: I0126 17:42:28.344223 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd_6434b0ee-4d33-4422-a662-3315b2f5499c/manager/1.log"
Jan 26 17:42:28 crc kubenswrapper[4896]: I0126 17:42:28.368082 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854gtfjd_6434b0ee-4d33-4422-a662-3315b2f5499c/manager/0.log"
Jan 26 17:42:28 crc kubenswrapper[4896]: I0126 17:42:28.559806 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-8f6df5568-rvvb8_b54a446e-c064-4867-91fa-55f96ea9d87e/operator/1.log"
Jan 26 17:42:28 crc kubenswrapper[4896]: I0126 17:42:28.841220 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-8f6df5568-rvvb8_b54a446e-c064-4867-91fa-55f96ea9d87e/operator/0.log"
Jan 26 17:42:28 crc kubenswrapper[4896]: I0126 17:42:28.947076 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7d6b58b596-csnd6_f493b2ea-1515-42db-ac1c-ea1a7121e070/manager/1.log"
Jan 26 17:42:29 crc kubenswrapper[4896]: I0126 17:42:29.147070 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-sf56h_2480d4e9-511f-4e9a-9a73-2e10c1fa3da7/registry-server/0.log"
Jan 26 17:42:29 crc kubenswrapper[4896]: I0126 17:42:29.346509 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-mjzqx_29afb8bf-1d53-45a3-b67c-a1ebc26aa4ab/manager/1.log"
Jan 26 17:42:29 crc kubenswrapper[4896]: I0126 17:42:29.536439 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-mjzqx_29afb8bf-1d53-45a3-b67c-a1ebc26aa4ab/manager/0.log"
Jan 26 17:42:29 crc kubenswrapper[4896]: I0126 17:42:29.647706 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-9vwsl_7a813859-31b7-4729-865e-46c6ff663209/manager/1.log"
Jan 26 17:42:29 crc kubenswrapper[4896]: I0126 17:42:29.797641 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-7pgcx_bc769396-13b5-4066-b7fc-93a3f87a50ff/operator/1.log"
Jan 26 17:42:29 crc kubenswrapper[4896]: I0126 17:42:29.889487 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-9vwsl_7a813859-31b7-4729-865e-46c6ff663209/manager/0.log"
Jan 26 17:42:29 crc kubenswrapper[4896]: I0126 17:42:29.927875 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-7pgcx_bc769396-13b5-4066-b7fc-93a3f87a50ff/operator/0.log"
Jan 26 17:42:30 crc kubenswrapper[4896]: I0126 17:42:30.146145 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-4sl4s_1bf7b7e2-7b44-4a9d-aa3d-31ed21b66dc3/manager/1.log"
Jan 26 17:42:30 crc kubenswrapper[4896]: I0126 17:42:30.270759 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-4sl4s_1bf7b7e2-7b44-4a9d-aa3d-31ed21b66dc3/manager/0.log"
Jan 26 17:42:30 crc kubenswrapper[4896]: I0126 17:42:30.400356 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5fd4748d4d-2sttl_2496a24c-43ae-4ce4-8996-60c6e7282bfa/manager/1.log"
Jan 26 17:42:30 crc kubenswrapper[4896]: I0126 17:42:30.578338 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7d6b58b596-csnd6_f493b2ea-1515-42db-ac1c-ea1a7121e070/manager/0.log"
Jan 26 17:42:30 crc kubenswrapper[4896]: I0126 17:42:30.593078 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-p82px_f2c6d7a1-690c-4364-a2ea-25e955a38782/manager/1.log"
Jan 26 17:42:30 crc kubenswrapper[4896]: I0126 17:42:30.631024 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-p82px_f2c6d7a1-690c-4364-a2ea-25e955a38782/manager/0.log"
Jan 26 17:42:30 crc kubenswrapper[4896]: I0126 17:42:30.787502 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5fd4748d4d-2sttl_2496a24c-43ae-4ce4-8996-60c6e7282bfa/manager/0.log"
Jan 26 17:42:30 crc kubenswrapper[4896]: I0126 17:42:30.831535 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-4v8sm_b8f08a13-e22d-4147-91c2-07c51dbfb83d/manager/1.log"
Jan 26 17:42:30 crc kubenswrapper[4896]: I0126 17:42:30.893695 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-4v8sm_b8f08a13-e22d-4147-91c2-07c51dbfb83d/manager/0.log"
Jan 26 17:42:39 crc kubenswrapper[4896]: I0126 17:42:39.760931 4896 scope.go:117] "RemoveContainer" containerID="2b2d87da80f85568d27958abdc15695c3a62bbace342b5d1dfaf284f7e6a5bca"
Jan 26 17:42:39 crc kubenswrapper[4896]: E0126 17:42:39.761857 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:42:54 crc kubenswrapper[4896]: I0126 17:42:54.169714 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-7gnv9_65d79adb-6464-4157-924d-ffadb4ed5d16/control-plane-machine-set-operator/0.log"
Jan 26 17:42:54 crc kubenswrapper[4896]: I0126 17:42:54.693181 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-66hlb_0752f58c-f532-48fb-b192-30c2f8614059/machine-api-operator/0.log"
Jan 26 17:42:54 crc kubenswrapper[4896]: I0126 17:42:54.720012 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-66hlb_0752f58c-f532-48fb-b192-30c2f8614059/kube-rbac-proxy/0.log"
Jan 26 17:42:54 crc kubenswrapper[4896]: I0126 17:42:54.760448 4896 scope.go:117] "RemoveContainer" containerID="2b2d87da80f85568d27958abdc15695c3a62bbace342b5d1dfaf284f7e6a5bca"
Jan 26 17:42:54 crc kubenswrapper[4896]: E0126 17:42:54.760789 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:43:06 crc kubenswrapper[4896]: I0126 17:43:06.759778 4896 scope.go:117] "RemoveContainer" containerID="2b2d87da80f85568d27958abdc15695c3a62bbace342b5d1dfaf284f7e6a5bca"
Jan 26 17:43:06 crc kubenswrapper[4896]: E0126 17:43:06.760799 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:43:08 crc kubenswrapper[4896]: I0126 17:43:08.917081 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-8drkb_d971d7ab-8017-45d5-9802-17b6b699464e/cert-manager-controller/0.log"
Jan 26 17:43:08 crc kubenswrapper[4896]: I0126 17:43:08.990776 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-5rx5s_8e6d49e2-282c-476d-8ce8-8bff3b7fbc6c/cert-manager-cainjector/0.log"
Jan 26 17:43:09 crc kubenswrapper[4896]: I0126 17:43:09.044016 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-k7ctr_6b19c675-ac2e-4855-8368-79f9812f6a86/cert-manager-webhook/0.log"
Jan 26 17:43:21 crc kubenswrapper[4896]: I0126 17:43:21.759561 4896 scope.go:117] "RemoveContainer" containerID="2b2d87da80f85568d27958abdc15695c3a62bbace342b5d1dfaf284f7e6a5bca"
Jan 26 17:43:21 crc kubenswrapper[4896]: E0126 17:43:21.760709 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:43:23 crc kubenswrapper[4896]: I0126 17:43:23.894016 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-jlhg6_0613cf2d-b75b-49d7-b022-7783032c5977/nmstate-console-plugin/0.log"
Jan 26 17:43:24 crc kubenswrapper[4896]: I0126 17:43:24.140308 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-gb4lb_c26d8ef0-b0d7-4095-8dd2-94aa365eb295/nmstate-handler/0.log"
Jan 26 17:43:24 crc kubenswrapper[4896]: I0126 17:43:24.152946 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-zj6jl_f40b3348-54e4-43f5-9036-cbb48e93b039/kube-rbac-proxy/0.log"
Jan 26 17:43:24 crc kubenswrapper[4896]: I0126 17:43:24.301476 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-zj6jl_f40b3348-54e4-43f5-9036-cbb48e93b039/nmstate-metrics/0.log"
Jan 26 17:43:24 crc kubenswrapper[4896]: I0126 17:43:24.356470 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-gw2nc_f086c2d2-aa04-4a67-a6ee-0156173683f9/nmstate-operator/0.log"
Jan 26 17:43:24 crc kubenswrapper[4896]: I0126 17:43:24.568839 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-ktsph_00f1e33c-4322-42be-b120-18e0bbad3318/nmstate-webhook/0.log"
Jan 26 17:43:34 crc kubenswrapper[4896]: I0126 17:43:34.762354 4896 scope.go:117] "RemoveContainer" containerID="2b2d87da80f85568d27958abdc15695c3a62bbace342b5d1dfaf284f7e6a5bca"
Jan 26 17:43:34 crc kubenswrapper[4896]: E0126 17:43:34.763244 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:43:40 crc kubenswrapper[4896]: I0126 17:43:40.187117 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6575bc9f47-rkmnv_dce71be2-915b-4c8e-9a4e-ebe6c278ddcf/manager/1.log"
Jan 26 17:43:40 crc kubenswrapper[4896]: I0126 17:43:40.206217 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6575bc9f47-rkmnv_dce71be2-915b-4c8e-9a4e-ebe6c278ddcf/kube-rbac-proxy/0.log"
Jan 26 17:43:40 crc kubenswrapper[4896]: I0126 17:43:40.454309 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6575bc9f47-rkmnv_dce71be2-915b-4c8e-9a4e-ebe6c278ddcf/manager/0.log"
Jan 26 17:43:46 crc kubenswrapper[4896]: I0126 17:43:46.760611 4896 scope.go:117] "RemoveContainer" containerID="2b2d87da80f85568d27958abdc15695c3a62bbace342b5d1dfaf284f7e6a5bca"
Jan 26 17:43:46 crc kubenswrapper[4896]: E0126 17:43:46.761575 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:43:57 crc kubenswrapper[4896]: I0126 17:43:57.111520 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-sjfrn_d6db188e-bd3c-49e5-800c-2f6706ca8b45/prometheus-operator/0.log"
Jan 26 17:43:57 crc kubenswrapper[4896]: I0126 17:43:57.348678 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-c9d575b6d-hlpnm_482717fd-6a21-44de-a4d1-e08d5324552b/prometheus-operator-admission-webhook/0.log"
Jan 26 17:43:57 crc kubenswrapper[4896]: I0126 17:43:57.760761 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-c9d575b6d-mr4tb_1d1c2692-33ed-45a8-9fed-a2c9eb1b5212/prometheus-operator-admission-webhook/0.log"
Jan 26 17:43:57 crc kubenswrapper[4896]: I0126 17:43:57.929279 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-b58h7_39e44697-1997-402b-939f-641cb2f74176/operator/0.log"
Jan 26 17:43:58 crc kubenswrapper[4896]: I0126 17:43:58.062676 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-4s75s_561e12b6-a1fb-407f-ae57-6a28f00f9093/observability-ui-dashboards/0.log"
Jan 26 17:43:58 crc kubenswrapper[4896]: I0126 17:43:58.154325 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-s6xcx_4d40f6e6-fe99-4a02-b499-83c0b6a61706/perses-operator/0.log"
Jan 26 17:43:58 crc kubenswrapper[4896]: I0126 17:43:58.760023 4896 scope.go:117] "RemoveContainer" containerID="2b2d87da80f85568d27958abdc15695c3a62bbace342b5d1dfaf284f7e6a5bca"
Jan 26 17:43:58 crc kubenswrapper[4896]: E0126 17:43:58.760612 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:44:12 crc kubenswrapper[4896]: I0126 17:44:12.769138 4896 scope.go:117] "RemoveContainer" containerID="2b2d87da80f85568d27958abdc15695c3a62bbace342b5d1dfaf284f7e6a5bca"
Jan 26 17:44:12 crc kubenswrapper[4896]: E0126 17:44:12.770064 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:44:15 crc kubenswrapper[4896]: I0126 17:44:15.517181 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-79cf69ddc8-mllkq_248dd691-612f-4480-8673-4446257df703/cluster-logging-operator/0.log"
Jan 26 17:44:15 crc kubenswrapper[4896]: I0126 17:44:15.713231 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-czl2w_068fdddb-b48c-48f6-ab7a-a9e1d473aaa5/collector/0.log"
Jan 26 17:44:15 crc kubenswrapper[4896]: I0126 17:44:15.803952 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_94b020cc-3ced-46e2-89c9-1294e89989da/loki-compactor/0.log"
Jan 26 17:44:15 crc kubenswrapper[4896]: I0126 17:44:15.949238 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5f678c8dd6-wxx4s_790beb3d-3eed-4fef-849d-84a13c17f4a7/loki-distributor/0.log"
Jan 26 17:44:16 crc kubenswrapper[4896]: I0126 17:44:16.024424 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-785c7cc549-fc8ss_92bc77c6-54c0-4ab0-8abf-71fef00ec66d/gateway/0.log"
Jan 26 17:44:16 crc kubenswrapper[4896]: I0126 17:44:16.093304 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-785c7cc549-fc8ss_92bc77c6-54c0-4ab0-8abf-71fef00ec66d/opa/0.log"
Jan 26 17:44:16 crc kubenswrapper[4896]: I0126 17:44:16.212270 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-785c7cc549-thnm6_9ef5e225-61d8-4ca8-9bc1-43e583ad71be/gateway/0.log"
Jan 26 17:44:16 crc kubenswrapper[4896]: I0126 17:44:16.323180 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-785c7cc549-thnm6_9ef5e225-61d8-4ca8-9bc1-43e583ad71be/opa/0.log"
Jan 26 17:44:16 crc kubenswrapper[4896]: I0126 17:44:16.506779 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_7efe8082-4b9b-49e6-a79c-0ca2e0f5bc24/loki-index-gateway/0.log"
Jan 26 17:44:16 crc kubenswrapper[4896]: I0126 17:44:16.669223 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_5002bb81-4c92-43b5-93a3-e0986702b713/loki-ingester/0.log"
Jan 26 17:44:16 crc kubenswrapper[4896]: I0126 17:44:16.740489 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-76788598db-lxv2v_39d4db55-bf77-4948-a36b-4e0d4bf056e8/loki-querier/0.log"
Jan 26 17:44:16 crc kubenswrapper[4896]: I0126 17:44:16.881763 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-69d9546745-ds2pd_b0989bb6-640e-4e54-9dc7-940798b9847f/loki-query-frontend/0.log"
Jan 26 17:44:26 crc kubenswrapper[4896]: I0126 17:44:26.762354 4896 scope.go:117] "RemoveContainer" containerID="2b2d87da80f85568d27958abdc15695c3a62bbace342b5d1dfaf284f7e6a5bca"
Jan 26 17:44:27 crc kubenswrapper[4896]: I0126 17:44:27.647989 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerStarted","Data":"31a6c6134e392a9f99f978abe5c40d87fa47f2d01f69dac0107ebc9d8027616d"}
Jan 26 17:44:32 crc kubenswrapper[4896]: I0126 17:44:32.845629 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-nqbzj_2502287e-cdc6-4a66-8f39-278e9560c7bf/kube-rbac-proxy/0.log"
Jan 26 17:44:32 crc kubenswrapper[4896]: I0126 17:44:32.897562 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-nqbzj_2502287e-cdc6-4a66-8f39-278e9560c7bf/controller/0.log"
Jan 26 17:44:33 crc kubenswrapper[4896]: I0126 17:44:33.054723 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-klnvj_07b177bf-083a-4714-bf1d-c07315a750d7/cp-frr-files/0.log"
Jan 26 17:44:33 crc kubenswrapper[4896]: I0126 17:44:33.266820 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-klnvj_07b177bf-083a-4714-bf1d-c07315a750d7/cp-frr-files/0.log"
Jan 26 17:44:33 crc kubenswrapper[4896]: I0126 17:44:33.305876 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-klnvj_07b177bf-083a-4714-bf1d-c07315a750d7/cp-metrics/0.log"
Jan 26 17:44:33 crc kubenswrapper[4896]: I0126 17:44:33.346728 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-klnvj_07b177bf-083a-4714-bf1d-c07315a750d7/cp-reloader/0.log"
Jan 26 17:44:33 crc kubenswrapper[4896]: I0126 17:44:33.430352 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-klnvj_07b177bf-083a-4714-bf1d-c07315a750d7/cp-reloader/0.log"
Jan 26 17:44:33 crc kubenswrapper[4896]: I0126 17:44:33.622242 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-klnvj_07b177bf-083a-4714-bf1d-c07315a750d7/cp-frr-files/0.log"
Jan 26 17:44:33 crc kubenswrapper[4896]: I0126 17:44:33.672430 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-klnvj_07b177bf-083a-4714-bf1d-c07315a750d7/cp-reloader/0.log"
Jan 26 17:44:33 crc kubenswrapper[4896]: I0126 17:44:33.679652 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-klnvj_07b177bf-083a-4714-bf1d-c07315a750d7/cp-metrics/0.log"
Jan 26 17:44:33 crc kubenswrapper[4896]: I0126 17:44:33.682740 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-klnvj_07b177bf-083a-4714-bf1d-c07315a750d7/cp-metrics/0.log"
Jan 26 17:44:33 crc kubenswrapper[4896]: I0126 17:44:33.912142 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-klnvj_07b177bf-083a-4714-bf1d-c07315a750d7/cp-reloader/0.log"
Jan 26 17:44:33 crc kubenswrapper[4896]: I0126 17:44:33.935170 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-klnvj_07b177bf-083a-4714-bf1d-c07315a750d7/controller/0.log"
Jan 26 17:44:33 crc kubenswrapper[4896]: I0126 17:44:33.956959 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-klnvj_07b177bf-083a-4714-bf1d-c07315a750d7/cp-metrics/0.log"
Jan 26 17:44:33 crc kubenswrapper[4896]: I0126 17:44:33.971547 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-klnvj_07b177bf-083a-4714-bf1d-c07315a750d7/cp-frr-files/0.log"
Jan 26 17:44:34 crc kubenswrapper[4896]: I0126 17:44:34.166276 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-klnvj_07b177bf-083a-4714-bf1d-c07315a750d7/frr-metrics/0.log"
Jan 26 17:44:34 crc kubenswrapper[4896]: I0126 17:44:34.212510 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-klnvj_07b177bf-083a-4714-bf1d-c07315a750d7/kube-rbac-proxy-frr/0.log"
Jan 26 17:44:34 crc kubenswrapper[4896]: I0126 17:44:34.246057 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-klnvj_07b177bf-083a-4714-bf1d-c07315a750d7/kube-rbac-proxy/0.log"
Jan 26 17:44:34 crc kubenswrapper[4896]: I0126 17:44:34.430889 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-klnvj_07b177bf-083a-4714-bf1d-c07315a750d7/reloader/0.log"
Jan 26 17:44:34 crc kubenswrapper[4896]: I0126 17:44:34.544660 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-86mhf_d18b5dee-5e82-4bf6-baf3-b3bc539da480/frr-k8s-webhook-server/0.log"
Jan 26 17:44:34 crc kubenswrapper[4896]: I0126 17:44:34.792421 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5c8c8d84d-97894_8a59a62f-3748-43b7-baa0-cd121242caea/manager/1.log"
Jan 26 17:44:34 crc kubenswrapper[4896]: I0126 17:44:34.834819 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5c8c8d84d-97894_8a59a62f-3748-43b7-baa0-cd121242caea/manager/0.log"
Jan 26 17:44:35 crc kubenswrapper[4896]: I0126 17:44:35.030901 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7f478b8cc-mpsb2_9054b98a-1821-4a98-881a-37475dea15e9/webhook-server/0.log"
Jan 26 17:44:35 crc kubenswrapper[4896]: I0126 17:44:35.243607 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-tkm4l_6c994fcb-b747-4440-a355-e89ada0aad52/kube-rbac-proxy/0.log"
Jan 26 17:44:36 crc kubenswrapper[4896]: I0126 17:44:36.134447 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-tkm4l_6c994fcb-b747-4440-a355-e89ada0aad52/speaker/0.log"
Jan 26 17:44:36 crc kubenswrapper[4896]: I0126 17:44:36.422568 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-klnvj_07b177bf-083a-4714-bf1d-c07315a750d7/frr/0.log"
Jan 26 17:44:49 crc kubenswrapper[4896]: I0126 17:44:49.989863 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhwf7_78b1f724-4f81-4571-8f81-9170eb54e5d1/util/0.log"
Jan 26 17:44:50 crc kubenswrapper[4896]: I0126 17:44:50.215049 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhwf7_78b1f724-4f81-4571-8f81-9170eb54e5d1/pull/0.log"
Jan 26 17:44:50 crc kubenswrapper[4896]: I0126 17:44:50.218236 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhwf7_78b1f724-4f81-4571-8f81-9170eb54e5d1/util/0.log"
Jan 26 17:44:50 crc kubenswrapper[4896]: I0126 17:44:50.265318 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhwf7_78b1f724-4f81-4571-8f81-9170eb54e5d1/pull/0.log"
Jan 26 17:44:50 crc kubenswrapper[4896]: I0126 17:44:50.451727 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhwf7_78b1f724-4f81-4571-8f81-9170eb54e5d1/util/0.log"
Jan 26 17:44:50 crc kubenswrapper[4896]: I0126 17:44:50.460958 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhwf7_78b1f724-4f81-4571-8f81-9170eb54e5d1/pull/0.log"
Jan 26 17:44:50 crc kubenswrapper[4896]: I0126 17:44:50.529078 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2xhwf7_78b1f724-4f81-4571-8f81-9170eb54e5d1/extract/0.log"
Jan 26 17:44:50 crc kubenswrapper[4896]: I0126 17:44:50.680358 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckd749_67c7dab8-b04f-415d-b859-138fa4c24117/util/0.log"
Jan 26 17:44:50 crc kubenswrapper[4896]: I0126 17:44:50.833048 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckd749_67c7dab8-b04f-415d-b859-138fa4c24117/util/0.log"
Jan 26 17:44:50 crc kubenswrapper[4896]: I0126 17:44:50.845958 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckd749_67c7dab8-b04f-415d-b859-138fa4c24117/pull/0.log"
Jan 26 17:44:50 crc kubenswrapper[4896]: I0126 17:44:50.868384 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckd749_67c7dab8-b04f-415d-b859-138fa4c24117/pull/0.log"
Jan 26 17:44:51 crc kubenswrapper[4896]: I0126 17:44:51.008425 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckd749_67c7dab8-b04f-415d-b859-138fa4c24117/util/0.log"
Jan 26 17:44:51 crc kubenswrapper[4896]: I0126 17:44:51.053967 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckd749_67c7dab8-b04f-415d-b859-138fa4c24117/pull/0.log"
Jan 26 17:44:51 crc kubenswrapper[4896]: I0126 17:44:51.069760 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckd749_67c7dab8-b04f-415d-b859-138fa4c24117/extract/0.log"
Jan 26 17:44:51 crc kubenswrapper[4896]: I0126 17:44:51.228808 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bqwv6q_c5c39f79-1a4d-45f3-96b2-e409562cdf14/util/0.log"
Jan 26 17:44:51 crc kubenswrapper[4896]: I0126 17:44:51.370112 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bqwv6q_c5c39f79-1a4d-45f3-96b2-e409562cdf14/util/0.log"
Jan 26 17:44:51 crc kubenswrapper[4896]: I0126 17:44:51.418681 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bqwv6q_c5c39f79-1a4d-45f3-96b2-e409562cdf14/pull/0.log"
Jan 26 17:44:51 crc kubenswrapper[4896]: I0126 17:44:51.425320 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bqwv6q_c5c39f79-1a4d-45f3-96b2-e409562cdf14/pull/0.log"
Jan 26 17:44:51 crc kubenswrapper[4896]: I0126 17:44:51.629431 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bqwv6q_c5c39f79-1a4d-45f3-96b2-e409562cdf14/util/0.log"
Jan 26 17:44:51 crc kubenswrapper[4896]: I0126 17:44:51.636339 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bqwv6q_c5c39f79-1a4d-45f3-96b2-e409562cdf14/pull/0.log"
Jan 26 17:44:51 crc kubenswrapper[4896]: I0126 17:44:51.663022 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bqwv6q_c5c39f79-1a4d-45f3-96b2-e409562cdf14/extract/0.log"
Jan 26 17:44:51 crc kubenswrapper[4896]: I0126 17:44:51.858433 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x94h2_485d26e3-9bf1-4f92-92be-531b0ce1234e/util/0.log"
Jan 26 17:44:51 crc kubenswrapper[4896]: I0126 17:44:51.989603 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x94h2_485d26e3-9bf1-4f92-92be-531b0ce1234e/pull/0.log"
Jan 26 17:44:52 crc kubenswrapper[4896]: I0126 17:44:52.037682 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x94h2_485d26e3-9bf1-4f92-92be-531b0ce1234e/util/0.log"
Jan 26 17:44:52 crc kubenswrapper[4896]: I0126 17:44:52.039638 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x94h2_485d26e3-9bf1-4f92-92be-531b0ce1234e/pull/0.log"
Jan 26 17:44:52 crc kubenswrapper[4896]: I0126 17:44:52.247564 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x94h2_485d26e3-9bf1-4f92-92be-531b0ce1234e/pull/0.log"
Jan 26 17:44:52 crc kubenswrapper[4896]: I0126 17:44:52.262031 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x94h2_485d26e3-9bf1-4f92-92be-531b0ce1234e/util/0.log"
Jan 26 17:44:52 crc kubenswrapper[4896]: I0126 17:44:52.331945 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x94h2_485d26e3-9bf1-4f92-92be-531b0ce1234e/extract/0.log"
Jan 26 17:44:52 crc kubenswrapper[4896]: I0126 17:44:52.456308 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt8bs_b5a125e8-a2db-49bf-b882-8c26600a229b/util/0.log"
Jan 26 17:44:52 crc kubenswrapper[4896]: I0126 17:44:52.640619 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt8bs_b5a125e8-a2db-49bf-b882-8c26600a229b/pull/0.log"
Jan 26 17:44:52 crc kubenswrapper[4896]: I0126 17:44:52.643572 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt8bs_b5a125e8-a2db-49bf-b882-8c26600a229b/pull/0.log"
Jan 26 17:44:52 crc kubenswrapper[4896]: I0126 17:44:52.693841 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt8bs_b5a125e8-a2db-49bf-b882-8c26600a229b/util/0.log"
Jan 26 17:44:52 crc kubenswrapper[4896]: I0126 17:44:52.862054 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt8bs_b5a125e8-a2db-49bf-b882-8c26600a229b/util/0.log"
Jan 26 17:44:52 crc kubenswrapper[4896]: I0126 17:44:52.872754 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt8bs_b5a125e8-a2db-49bf-b882-8c26600a229b/pull/0.log"
Jan 26 17:44:52 crc kubenswrapper[4896]: I0126 17:44:52.873133 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt8bs_b5a125e8-a2db-49bf-b882-8c26600a229b/extract/0.log"
Jan 26 17:44:53 crc kubenswrapper[4896]: I0126 17:44:53.019422 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6s2j9_caa51786-7a20-4cf3-9d57-bc54eb8ca9e9/extract-utilities/0.log"
Jan 26 17:44:53 crc kubenswrapper[4896]: I0126 17:44:53.200007 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6s2j9_caa51786-7a20-4cf3-9d57-bc54eb8ca9e9/extract-utilities/0.log"
Jan 26 17:44:53 crc kubenswrapper[4896]: I0126 17:44:53.232239 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6s2j9_caa51786-7a20-4cf3-9d57-bc54eb8ca9e9/extract-content/0.log"
Jan 26 17:44:53 crc kubenswrapper[4896]: I0126 17:44:53.259344 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6s2j9_caa51786-7a20-4cf3-9d57-bc54eb8ca9e9/extract-content/0.log"
Jan 26 17:44:53 crc kubenswrapper[4896]: I0126 17:44:53.458131 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6s2j9_caa51786-7a20-4cf3-9d57-bc54eb8ca9e9/extract-content/0.log"
Jan 26 17:44:53 crc kubenswrapper[4896]: I0126 17:44:53.464117 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6s2j9_caa51786-7a20-4cf3-9d57-bc54eb8ca9e9/extract-utilities/0.log"
Jan 26 17:44:53 crc kubenswrapper[4896]: I0126 17:44:53.682984 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6s2j9_caa51786-7a20-4cf3-9d57-bc54eb8ca9e9/registry-server/0.log"
Jan 26 17:44:53 crc kubenswrapper[4896]: I0126 17:44:53.727245 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ddlvl_3890206d-cbb0-4910-ab30-f4f9c66d28f8/extract-utilities/0.log"
Jan 26 17:44:53 crc kubenswrapper[4896]: I0126 17:44:53.879957 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ddlvl_3890206d-cbb0-4910-ab30-f4f9c66d28f8/extract-utilities/0.log"
Jan 26 17:44:53 crc kubenswrapper[4896]: I0126 17:44:53.894254 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ddlvl_3890206d-cbb0-4910-ab30-f4f9c66d28f8/extract-content/0.log"
Jan 26 17:44:53 crc kubenswrapper[4896]: I0126 17:44:53.924334 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ddlvl_3890206d-cbb0-4910-ab30-f4f9c66d28f8/extract-content/0.log"
Jan 26 17:44:54 crc kubenswrapper[4896]: I0126 17:44:54.080677 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ddlvl_3890206d-cbb0-4910-ab30-f4f9c66d28f8/extract-content/0.log"
Jan 26 17:44:54 crc kubenswrapper[4896]: I0126 17:44:54.085802 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ddlvl_3890206d-cbb0-4910-ab30-f4f9c66d28f8/extract-utilities/0.log"
Jan 26 17:44:54 crc kubenswrapper[4896]: I0126 17:44:54.183773 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-gtg7d_22808cdf-7c01-491f-b3f4-d641898edf7b/marketplace-operator/0.log"
Jan 26 17:44:54 crc kubenswrapper[4896]: I0126 17:44:54.321698 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bj7vn_3a23765e-261d-49cc-a21f-5548e62c4b41/extract-utilities/0.log"
Jan 26 17:44:54 crc kubenswrapper[4896]: I0126 17:44:54.561674 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bj7vn_3a23765e-261d-49cc-a21f-5548e62c4b41/extract-content/0.log"
Jan 26 17:44:54 crc kubenswrapper[4896]: I0126 17:44:54.637058 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bj7vn_3a23765e-261d-49cc-a21f-5548e62c4b41/extract-content/0.log"
Jan 26 17:44:54 crc kubenswrapper[4896]: I0126 17:44:54.637866 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bj7vn_3a23765e-261d-49cc-a21f-5548e62c4b41/extract-utilities/0.log"
Jan 26 17:44:54 crc kubenswrapper[4896]: I0126 17:44:54.821552 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bj7vn_3a23765e-261d-49cc-a21f-5548e62c4b41/extract-utilities/0.log"
Jan 26 17:44:54 crc kubenswrapper[4896]: I0126 17:44:54.835691 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bj7vn_3a23765e-261d-49cc-a21f-5548e62c4b41/extract-content/0.log"
Jan 26 17:44:54 crc kubenswrapper[4896]: I0126 17:44:54.996016 4896 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-marketplace_redhat-marketplace-bj7vn_3a23765e-261d-49cc-a21f-5548e62c4b41/registry-server/0.log" Jan 26 17:44:55 crc kubenswrapper[4896]: I0126 17:44:55.055293 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hltsz_f38ce830-9bcb-49de-b024-23cb889289c0/extract-utilities/0.log" Jan 26 17:44:55 crc kubenswrapper[4896]: I0126 17:44:55.220175 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hltsz_f38ce830-9bcb-49de-b024-23cb889289c0/extract-utilities/0.log" Jan 26 17:44:55 crc kubenswrapper[4896]: I0126 17:44:55.286979 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hltsz_f38ce830-9bcb-49de-b024-23cb889289c0/extract-content/0.log" Jan 26 17:44:55 crc kubenswrapper[4896]: I0126 17:44:55.344010 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hltsz_f38ce830-9bcb-49de-b024-23cb889289c0/extract-content/0.log" Jan 26 17:44:55 crc kubenswrapper[4896]: I0126 17:44:55.571151 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hltsz_f38ce830-9bcb-49de-b024-23cb889289c0/extract-content/0.log" Jan 26 17:44:55 crc kubenswrapper[4896]: I0126 17:44:55.593954 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hltsz_f38ce830-9bcb-49de-b024-23cb889289c0/extract-utilities/0.log" Jan 26 17:44:55 crc kubenswrapper[4896]: I0126 17:44:55.818758 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ddlvl_3890206d-cbb0-4910-ab30-f4f9c66d28f8/registry-server/0.log" Jan 26 17:44:56 crc kubenswrapper[4896]: I0126 17:44:56.821798 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hltsz_f38ce830-9bcb-49de-b024-23cb889289c0/registry-server/0.log" Jan 26 
17:45:00 crc kubenswrapper[4896]: I0126 17:45:00.237596 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490825-46xqr"] Jan 26 17:45:00 crc kubenswrapper[4896]: E0126 17:45:00.239742 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c29ee83-ba98-419b-8898-e6062394f5aa" containerName="container-00" Jan 26 17:45:00 crc kubenswrapper[4896]: I0126 17:45:00.239826 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c29ee83-ba98-419b-8898-e6062394f5aa" containerName="container-00" Jan 26 17:45:00 crc kubenswrapper[4896]: I0126 17:45:00.240144 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c29ee83-ba98-419b-8898-e6062394f5aa" containerName="container-00" Jan 26 17:45:00 crc kubenswrapper[4896]: I0126 17:45:00.241105 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-46xqr" Jan 26 17:45:00 crc kubenswrapper[4896]: I0126 17:45:00.244860 4896 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 17:45:00 crc kubenswrapper[4896]: I0126 17:45:00.245761 4896 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 17:45:00 crc kubenswrapper[4896]: I0126 17:45:00.288301 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490825-46xqr"] Jan 26 17:45:00 crc kubenswrapper[4896]: I0126 17:45:00.317524 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx4cq\" (UniqueName: \"kubernetes.io/projected/f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26-kube-api-access-sx4cq\") pod \"collect-profiles-29490825-46xqr\" (UID: \"f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-46xqr" Jan 26 17:45:00 crc kubenswrapper[4896]: I0126 17:45:00.317633 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26-secret-volume\") pod \"collect-profiles-29490825-46xqr\" (UID: \"f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-46xqr" Jan 26 17:45:00 crc kubenswrapper[4896]: I0126 17:45:00.317724 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26-config-volume\") pod \"collect-profiles-29490825-46xqr\" (UID: \"f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-46xqr" Jan 26 17:45:00 crc kubenswrapper[4896]: I0126 17:45:00.420233 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26-config-volume\") pod \"collect-profiles-29490825-46xqr\" (UID: \"f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-46xqr" Jan 26 17:45:00 crc kubenswrapper[4896]: I0126 17:45:00.420421 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx4cq\" (UniqueName: \"kubernetes.io/projected/f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26-kube-api-access-sx4cq\") pod \"collect-profiles-29490825-46xqr\" (UID: \"f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-46xqr" Jan 26 17:45:00 crc kubenswrapper[4896]: I0126 17:45:00.420487 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26-secret-volume\") pod \"collect-profiles-29490825-46xqr\" (UID: \"f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-46xqr" Jan 26 17:45:00 crc kubenswrapper[4896]: I0126 17:45:00.422056 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26-config-volume\") pod \"collect-profiles-29490825-46xqr\" (UID: \"f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-46xqr" Jan 26 17:45:00 crc kubenswrapper[4896]: I0126 17:45:00.428872 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26-secret-volume\") pod \"collect-profiles-29490825-46xqr\" (UID: \"f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-46xqr" Jan 26 17:45:00 crc kubenswrapper[4896]: I0126 17:45:00.438231 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx4cq\" (UniqueName: \"kubernetes.io/projected/f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26-kube-api-access-sx4cq\") pod \"collect-profiles-29490825-46xqr\" (UID: \"f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-46xqr" Jan 26 17:45:00 crc kubenswrapper[4896]: I0126 17:45:00.565968 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-46xqr" Jan 26 17:45:01 crc kubenswrapper[4896]: I0126 17:45:01.202264 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490825-46xqr"] Jan 26 17:45:01 crc kubenswrapper[4896]: I0126 17:45:01.897560 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g6lrz"] Jan 26 17:45:01 crc kubenswrapper[4896]: I0126 17:45:01.903115 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g6lrz" Jan 26 17:45:01 crc kubenswrapper[4896]: I0126 17:45:01.936257 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g6lrz"] Jan 26 17:45:02 crc kubenswrapper[4896]: I0126 17:45:02.043195 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-46xqr" event={"ID":"f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26","Type":"ContainerStarted","Data":"40cfbdfa8fa457dab91386f87780a47b4a2373cca6bf4ec3ef9ce51ce60aa24d"} Jan 26 17:45:02 crc kubenswrapper[4896]: I0126 17:45:02.043459 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-46xqr" event={"ID":"f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26","Type":"ContainerStarted","Data":"3b89a0104decdcaf1c9c309a6e8ce32ca0158d5ab798fcb2b7d4d9f00b7d5801"} Jan 26 17:45:02 crc kubenswrapper[4896]: I0126 17:45:02.059910 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-46xqr" podStartSLOduration=2.059890648 podStartE2EDuration="2.059890648s" podCreationTimestamp="2026-01-26 17:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 
17:45:02.059272423 +0000 UTC m=+7859.841152816" watchObservedRunningTime="2026-01-26 17:45:02.059890648 +0000 UTC m=+7859.841771041" Jan 26 17:45:02 crc kubenswrapper[4896]: I0126 17:45:02.075172 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/602c7bb7-3ac2-4077-93e1-be1a367ac93b-utilities\") pod \"redhat-marketplace-g6lrz\" (UID: \"602c7bb7-3ac2-4077-93e1-be1a367ac93b\") " pod="openshift-marketplace/redhat-marketplace-g6lrz" Jan 26 17:45:02 crc kubenswrapper[4896]: I0126 17:45:02.075261 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/602c7bb7-3ac2-4077-93e1-be1a367ac93b-catalog-content\") pod \"redhat-marketplace-g6lrz\" (UID: \"602c7bb7-3ac2-4077-93e1-be1a367ac93b\") " pod="openshift-marketplace/redhat-marketplace-g6lrz" Jan 26 17:45:02 crc kubenswrapper[4896]: I0126 17:45:02.075303 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksw9s\" (UniqueName: \"kubernetes.io/projected/602c7bb7-3ac2-4077-93e1-be1a367ac93b-kube-api-access-ksw9s\") pod \"redhat-marketplace-g6lrz\" (UID: \"602c7bb7-3ac2-4077-93e1-be1a367ac93b\") " pod="openshift-marketplace/redhat-marketplace-g6lrz" Jan 26 17:45:02 crc kubenswrapper[4896]: I0126 17:45:02.178073 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/602c7bb7-3ac2-4077-93e1-be1a367ac93b-utilities\") pod \"redhat-marketplace-g6lrz\" (UID: \"602c7bb7-3ac2-4077-93e1-be1a367ac93b\") " pod="openshift-marketplace/redhat-marketplace-g6lrz" Jan 26 17:45:02 crc kubenswrapper[4896]: I0126 17:45:02.178185 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/602c7bb7-3ac2-4077-93e1-be1a367ac93b-catalog-content\") pod \"redhat-marketplace-g6lrz\" (UID: \"602c7bb7-3ac2-4077-93e1-be1a367ac93b\") " pod="openshift-marketplace/redhat-marketplace-g6lrz" Jan 26 17:45:02 crc kubenswrapper[4896]: I0126 17:45:02.178214 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksw9s\" (UniqueName: \"kubernetes.io/projected/602c7bb7-3ac2-4077-93e1-be1a367ac93b-kube-api-access-ksw9s\") pod \"redhat-marketplace-g6lrz\" (UID: \"602c7bb7-3ac2-4077-93e1-be1a367ac93b\") " pod="openshift-marketplace/redhat-marketplace-g6lrz" Jan 26 17:45:02 crc kubenswrapper[4896]: I0126 17:45:02.179352 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/602c7bb7-3ac2-4077-93e1-be1a367ac93b-utilities\") pod \"redhat-marketplace-g6lrz\" (UID: \"602c7bb7-3ac2-4077-93e1-be1a367ac93b\") " pod="openshift-marketplace/redhat-marketplace-g6lrz" Jan 26 17:45:02 crc kubenswrapper[4896]: I0126 17:45:02.179423 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/602c7bb7-3ac2-4077-93e1-be1a367ac93b-catalog-content\") pod \"redhat-marketplace-g6lrz\" (UID: \"602c7bb7-3ac2-4077-93e1-be1a367ac93b\") " pod="openshift-marketplace/redhat-marketplace-g6lrz" Jan 26 17:45:02 crc kubenswrapper[4896]: I0126 17:45:02.201213 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksw9s\" (UniqueName: \"kubernetes.io/projected/602c7bb7-3ac2-4077-93e1-be1a367ac93b-kube-api-access-ksw9s\") pod \"redhat-marketplace-g6lrz\" (UID: \"602c7bb7-3ac2-4077-93e1-be1a367ac93b\") " pod="openshift-marketplace/redhat-marketplace-g6lrz" Jan 26 17:45:02 crc kubenswrapper[4896]: I0126 17:45:02.225635 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g6lrz" Jan 26 17:45:02 crc kubenswrapper[4896]: I0126 17:45:02.750922 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g6lrz"] Jan 26 17:45:03 crc kubenswrapper[4896]: I0126 17:45:03.055925 4896 generic.go:334] "Generic (PLEG): container finished" podID="f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26" containerID="40cfbdfa8fa457dab91386f87780a47b4a2373cca6bf4ec3ef9ce51ce60aa24d" exitCode=0 Jan 26 17:45:03 crc kubenswrapper[4896]: I0126 17:45:03.056405 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-46xqr" event={"ID":"f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26","Type":"ContainerDied","Data":"40cfbdfa8fa457dab91386f87780a47b4a2373cca6bf4ec3ef9ce51ce60aa24d"} Jan 26 17:45:03 crc kubenswrapper[4896]: I0126 17:45:03.058176 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g6lrz" event={"ID":"602c7bb7-3ac2-4077-93e1-be1a367ac93b","Type":"ContainerStarted","Data":"0111cca54ed28c7a751b20f71a335668a59005d76bcc26650467a5880e31b131"} Jan 26 17:45:04 crc kubenswrapper[4896]: I0126 17:45:04.075008 4896 generic.go:334] "Generic (PLEG): container finished" podID="602c7bb7-3ac2-4077-93e1-be1a367ac93b" containerID="ea6c7064c007de7aa1b0f56a3cf1fb334b50cdc66a5317224f10ff21c16ed84a" exitCode=0 Jan 26 17:45:04 crc kubenswrapper[4896]: I0126 17:45:04.075141 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g6lrz" event={"ID":"602c7bb7-3ac2-4077-93e1-be1a367ac93b","Type":"ContainerDied","Data":"ea6c7064c007de7aa1b0f56a3cf1fb334b50cdc66a5317224f10ff21c16ed84a"} Jan 26 17:45:04 crc kubenswrapper[4896]: I0126 17:45:04.081928 4896 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 17:45:04 crc kubenswrapper[4896]: I0126 17:45:04.592532 4896 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-46xqr" Jan 26 17:45:04 crc kubenswrapper[4896]: I0126 17:45:04.777805 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26-secret-volume\") pod \"f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26\" (UID: \"f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26\") " Jan 26 17:45:04 crc kubenswrapper[4896]: I0126 17:45:04.778336 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sx4cq\" (UniqueName: \"kubernetes.io/projected/f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26-kube-api-access-sx4cq\") pod \"f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26\" (UID: \"f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26\") " Jan 26 17:45:04 crc kubenswrapper[4896]: I0126 17:45:04.778549 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26-config-volume\") pod \"f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26\" (UID: \"f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26\") " Jan 26 17:45:04 crc kubenswrapper[4896]: I0126 17:45:04.779498 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26-config-volume" (OuterVolumeSpecName: "config-volume") pod "f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26" (UID: "f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:45:04 crc kubenswrapper[4896]: I0126 17:45:04.786783 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26" (UID: "f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26"). 
InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:45:04 crc kubenswrapper[4896]: I0126 17:45:04.786851 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26-kube-api-access-sx4cq" (OuterVolumeSpecName: "kube-api-access-sx4cq") pod "f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26" (UID: "f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26"). InnerVolumeSpecName "kube-api-access-sx4cq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:45:04 crc kubenswrapper[4896]: I0126 17:45:04.884128 4896 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 17:45:04 crc kubenswrapper[4896]: I0126 17:45:04.884163 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sx4cq\" (UniqueName: \"kubernetes.io/projected/f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26-kube-api-access-sx4cq\") on node \"crc\" DevicePath \"\"" Jan 26 17:45:04 crc kubenswrapper[4896]: I0126 17:45:04.884186 4896 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 17:45:05 crc kubenswrapper[4896]: I0126 17:45:05.090804 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-46xqr" event={"ID":"f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26","Type":"ContainerDied","Data":"3b89a0104decdcaf1c9c309a6e8ce32ca0158d5ab798fcb2b7d4d9f00b7d5801"} Jan 26 17:45:05 crc kubenswrapper[4896]: I0126 17:45:05.090860 4896 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b89a0104decdcaf1c9c309a6e8ce32ca0158d5ab798fcb2b7d4d9f00b7d5801" Jan 26 17:45:05 crc kubenswrapper[4896]: I0126 17:45:05.090892 4896 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-46xqr" Jan 26 17:45:05 crc kubenswrapper[4896]: I0126 17:45:05.163609 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490780-96mgz"] Jan 26 17:45:05 crc kubenswrapper[4896]: I0126 17:45:05.177756 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490780-96mgz"] Jan 26 17:45:06 crc kubenswrapper[4896]: I0126 17:45:06.103713 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g6lrz" event={"ID":"602c7bb7-3ac2-4077-93e1-be1a367ac93b","Type":"ContainerStarted","Data":"4b1513f4edc77bb5eedf818de789375752d06e018f73c56a958debae5582b060"} Jan 26 17:45:06 crc kubenswrapper[4896]: I0126 17:45:06.998378 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7cbba2f-5285-46a0-8655-59ebc7ad2f21" path="/var/lib/kubelet/pods/b7cbba2f-5285-46a0-8655-59ebc7ad2f21/volumes" Jan 26 17:45:08 crc kubenswrapper[4896]: I0126 17:45:08.131269 4896 generic.go:334] "Generic (PLEG): container finished" podID="602c7bb7-3ac2-4077-93e1-be1a367ac93b" containerID="4b1513f4edc77bb5eedf818de789375752d06e018f73c56a958debae5582b060" exitCode=0 Jan 26 17:45:08 crc kubenswrapper[4896]: I0126 17:45:08.131324 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g6lrz" event={"ID":"602c7bb7-3ac2-4077-93e1-be1a367ac93b","Type":"ContainerDied","Data":"4b1513f4edc77bb5eedf818de789375752d06e018f73c56a958debae5582b060"} Jan 26 17:45:09 crc kubenswrapper[4896]: I0126 17:45:09.145054 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g6lrz" 
event={"ID":"602c7bb7-3ac2-4077-93e1-be1a367ac93b","Type":"ContainerStarted","Data":"ab84536371e62f14730508934da689000a29d75cf629bdf40b8557166ca1339b"} Jan 26 17:45:09 crc kubenswrapper[4896]: I0126 17:45:09.172218 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g6lrz" podStartSLOduration=3.718214165 podStartE2EDuration="8.172191705s" podCreationTimestamp="2026-01-26 17:45:01 +0000 UTC" firstStartedPulling="2026-01-26 17:45:04.07894901 +0000 UTC m=+7861.860829403" lastFinishedPulling="2026-01-26 17:45:08.53292652 +0000 UTC m=+7866.314806943" observedRunningTime="2026-01-26 17:45:09.167307826 +0000 UTC m=+7866.949188229" watchObservedRunningTime="2026-01-26 17:45:09.172191705 +0000 UTC m=+7866.954072118" Jan 26 17:45:10 crc kubenswrapper[4896]: I0126 17:45:10.614916 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-sjfrn_d6db188e-bd3c-49e5-800c-2f6706ca8b45/prometheus-operator/0.log" Jan 26 17:45:10 crc kubenswrapper[4896]: I0126 17:45:10.659090 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-c9d575b6d-hlpnm_482717fd-6a21-44de-a4d1-e08d5324552b/prometheus-operator-admission-webhook/0.log" Jan 26 17:45:10 crc kubenswrapper[4896]: I0126 17:45:10.704930 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-c9d575b6d-mr4tb_1d1c2692-33ed-45a8-9fed-a2c9eb1b5212/prometheus-operator-admission-webhook/0.log" Jan 26 17:45:10 crc kubenswrapper[4896]: I0126 17:45:10.966433 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-b58h7_39e44697-1997-402b-939f-641cb2f74176/operator/0.log" Jan 26 17:45:10 crc kubenswrapper[4896]: I0126 17:45:10.981483 4896 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-4s75s_561e12b6-a1fb-407f-ae57-6a28f00f9093/observability-ui-dashboards/0.log" Jan 26 17:45:11 crc kubenswrapper[4896]: I0126 17:45:11.014978 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-s6xcx_4d40f6e6-fe99-4a02-b499-83c0b6a61706/perses-operator/0.log" Jan 26 17:45:12 crc kubenswrapper[4896]: I0126 17:45:12.226428 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g6lrz" Jan 26 17:45:12 crc kubenswrapper[4896]: I0126 17:45:12.227891 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g6lrz" Jan 26 17:45:13 crc kubenswrapper[4896]: I0126 17:45:13.298169 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-g6lrz" podUID="602c7bb7-3ac2-4077-93e1-be1a367ac93b" containerName="registry-server" probeResult="failure" output=< Jan 26 17:45:13 crc kubenswrapper[4896]: timeout: failed to connect service ":50051" within 1s Jan 26 17:45:13 crc kubenswrapper[4896]: > Jan 26 17:45:22 crc kubenswrapper[4896]: I0126 17:45:22.293638 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g6lrz" Jan 26 17:45:22 crc kubenswrapper[4896]: I0126 17:45:22.364263 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g6lrz" Jan 26 17:45:23 crc kubenswrapper[4896]: I0126 17:45:23.403800 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g6lrz"] Jan 26 17:45:23 crc kubenswrapper[4896]: I0126 17:45:23.404821 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-g6lrz" podUID="602c7bb7-3ac2-4077-93e1-be1a367ac93b" 
containerName="registry-server" containerID="cri-o://ab84536371e62f14730508934da689000a29d75cf629bdf40b8557166ca1339b" gracePeriod=2 Jan 26 17:45:24 crc kubenswrapper[4896]: I0126 17:45:24.017897 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g6lrz" Jan 26 17:45:24 crc kubenswrapper[4896]: I0126 17:45:24.040446 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksw9s\" (UniqueName: \"kubernetes.io/projected/602c7bb7-3ac2-4077-93e1-be1a367ac93b-kube-api-access-ksw9s\") pod \"602c7bb7-3ac2-4077-93e1-be1a367ac93b\" (UID: \"602c7bb7-3ac2-4077-93e1-be1a367ac93b\") " Jan 26 17:45:24 crc kubenswrapper[4896]: I0126 17:45:24.040603 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/602c7bb7-3ac2-4077-93e1-be1a367ac93b-catalog-content\") pod \"602c7bb7-3ac2-4077-93e1-be1a367ac93b\" (UID: \"602c7bb7-3ac2-4077-93e1-be1a367ac93b\") " Jan 26 17:45:24 crc kubenswrapper[4896]: I0126 17:45:24.040716 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/602c7bb7-3ac2-4077-93e1-be1a367ac93b-utilities\") pod \"602c7bb7-3ac2-4077-93e1-be1a367ac93b\" (UID: \"602c7bb7-3ac2-4077-93e1-be1a367ac93b\") " Jan 26 17:45:24 crc kubenswrapper[4896]: I0126 17:45:24.041563 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/602c7bb7-3ac2-4077-93e1-be1a367ac93b-utilities" (OuterVolumeSpecName: "utilities") pod "602c7bb7-3ac2-4077-93e1-be1a367ac93b" (UID: "602c7bb7-3ac2-4077-93e1-be1a367ac93b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:45:24 crc kubenswrapper[4896]: I0126 17:45:24.067918 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/602c7bb7-3ac2-4077-93e1-be1a367ac93b-kube-api-access-ksw9s" (OuterVolumeSpecName: "kube-api-access-ksw9s") pod "602c7bb7-3ac2-4077-93e1-be1a367ac93b" (UID: "602c7bb7-3ac2-4077-93e1-be1a367ac93b"). InnerVolumeSpecName "kube-api-access-ksw9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:45:24 crc kubenswrapper[4896]: I0126 17:45:24.081980 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/602c7bb7-3ac2-4077-93e1-be1a367ac93b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "602c7bb7-3ac2-4077-93e1-be1a367ac93b" (UID: "602c7bb7-3ac2-4077-93e1-be1a367ac93b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:45:24 crc kubenswrapper[4896]: I0126 17:45:24.144119 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksw9s\" (UniqueName: \"kubernetes.io/projected/602c7bb7-3ac2-4077-93e1-be1a367ac93b-kube-api-access-ksw9s\") on node \"crc\" DevicePath \"\"" Jan 26 17:45:24 crc kubenswrapper[4896]: I0126 17:45:24.144170 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/602c7bb7-3ac2-4077-93e1-be1a367ac93b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:45:24 crc kubenswrapper[4896]: I0126 17:45:24.144184 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/602c7bb7-3ac2-4077-93e1-be1a367ac93b-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:45:24 crc kubenswrapper[4896]: I0126 17:45:24.385276 4896 generic.go:334] "Generic (PLEG): container finished" podID="602c7bb7-3ac2-4077-93e1-be1a367ac93b" 
containerID="ab84536371e62f14730508934da689000a29d75cf629bdf40b8557166ca1339b" exitCode=0 Jan 26 17:45:24 crc kubenswrapper[4896]: I0126 17:45:24.385540 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g6lrz" event={"ID":"602c7bb7-3ac2-4077-93e1-be1a367ac93b","Type":"ContainerDied","Data":"ab84536371e62f14730508934da689000a29d75cf629bdf40b8557166ca1339b"} Jan 26 17:45:24 crc kubenswrapper[4896]: I0126 17:45:24.385679 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g6lrz" Jan 26 17:45:24 crc kubenswrapper[4896]: I0126 17:45:24.385713 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g6lrz" event={"ID":"602c7bb7-3ac2-4077-93e1-be1a367ac93b","Type":"ContainerDied","Data":"0111cca54ed28c7a751b20f71a335668a59005d76bcc26650467a5880e31b131"} Jan 26 17:45:24 crc kubenswrapper[4896]: I0126 17:45:24.385777 4896 scope.go:117] "RemoveContainer" containerID="ab84536371e62f14730508934da689000a29d75cf629bdf40b8557166ca1339b" Jan 26 17:45:24 crc kubenswrapper[4896]: I0126 17:45:24.424877 4896 scope.go:117] "RemoveContainer" containerID="4b1513f4edc77bb5eedf818de789375752d06e018f73c56a958debae5582b060" Jan 26 17:45:24 crc kubenswrapper[4896]: I0126 17:45:24.478304 4896 scope.go:117] "RemoveContainer" containerID="ea6c7064c007de7aa1b0f56a3cf1fb334b50cdc66a5317224f10ff21c16ed84a" Jan 26 17:45:24 crc kubenswrapper[4896]: I0126 17:45:24.490121 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g6lrz"] Jan 26 17:45:24 crc kubenswrapper[4896]: I0126 17:45:24.504910 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-g6lrz"] Jan 26 17:45:24 crc kubenswrapper[4896]: I0126 17:45:24.539802 4896 scope.go:117] "RemoveContainer" containerID="ab84536371e62f14730508934da689000a29d75cf629bdf40b8557166ca1339b" Jan 26 
17:45:24 crc kubenswrapper[4896]: E0126 17:45:24.541282 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab84536371e62f14730508934da689000a29d75cf629bdf40b8557166ca1339b\": container with ID starting with ab84536371e62f14730508934da689000a29d75cf629bdf40b8557166ca1339b not found: ID does not exist" containerID="ab84536371e62f14730508934da689000a29d75cf629bdf40b8557166ca1339b" Jan 26 17:45:24 crc kubenswrapper[4896]: I0126 17:45:24.541324 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab84536371e62f14730508934da689000a29d75cf629bdf40b8557166ca1339b"} err="failed to get container status \"ab84536371e62f14730508934da689000a29d75cf629bdf40b8557166ca1339b\": rpc error: code = NotFound desc = could not find container \"ab84536371e62f14730508934da689000a29d75cf629bdf40b8557166ca1339b\": container with ID starting with ab84536371e62f14730508934da689000a29d75cf629bdf40b8557166ca1339b not found: ID does not exist" Jan 26 17:45:24 crc kubenswrapper[4896]: I0126 17:45:24.541346 4896 scope.go:117] "RemoveContainer" containerID="4b1513f4edc77bb5eedf818de789375752d06e018f73c56a958debae5582b060" Jan 26 17:45:24 crc kubenswrapper[4896]: E0126 17:45:24.542436 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b1513f4edc77bb5eedf818de789375752d06e018f73c56a958debae5582b060\": container with ID starting with 4b1513f4edc77bb5eedf818de789375752d06e018f73c56a958debae5582b060 not found: ID does not exist" containerID="4b1513f4edc77bb5eedf818de789375752d06e018f73c56a958debae5582b060" Jan 26 17:45:24 crc kubenswrapper[4896]: I0126 17:45:24.542462 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b1513f4edc77bb5eedf818de789375752d06e018f73c56a958debae5582b060"} err="failed to get container status 
\"4b1513f4edc77bb5eedf818de789375752d06e018f73c56a958debae5582b060\": rpc error: code = NotFound desc = could not find container \"4b1513f4edc77bb5eedf818de789375752d06e018f73c56a958debae5582b060\": container with ID starting with 4b1513f4edc77bb5eedf818de789375752d06e018f73c56a958debae5582b060 not found: ID does not exist" Jan 26 17:45:24 crc kubenswrapper[4896]: I0126 17:45:24.542478 4896 scope.go:117] "RemoveContainer" containerID="ea6c7064c007de7aa1b0f56a3cf1fb334b50cdc66a5317224f10ff21c16ed84a" Jan 26 17:45:24 crc kubenswrapper[4896]: E0126 17:45:24.542961 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea6c7064c007de7aa1b0f56a3cf1fb334b50cdc66a5317224f10ff21c16ed84a\": container with ID starting with ea6c7064c007de7aa1b0f56a3cf1fb334b50cdc66a5317224f10ff21c16ed84a not found: ID does not exist" containerID="ea6c7064c007de7aa1b0f56a3cf1fb334b50cdc66a5317224f10ff21c16ed84a" Jan 26 17:45:24 crc kubenswrapper[4896]: I0126 17:45:24.543019 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea6c7064c007de7aa1b0f56a3cf1fb334b50cdc66a5317224f10ff21c16ed84a"} err="failed to get container status \"ea6c7064c007de7aa1b0f56a3cf1fb334b50cdc66a5317224f10ff21c16ed84a\": rpc error: code = NotFound desc = could not find container \"ea6c7064c007de7aa1b0f56a3cf1fb334b50cdc66a5317224f10ff21c16ed84a\": container with ID starting with ea6c7064c007de7aa1b0f56a3cf1fb334b50cdc66a5317224f10ff21c16ed84a not found: ID does not exist" Jan 26 17:45:24 crc kubenswrapper[4896]: I0126 17:45:24.774522 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="602c7bb7-3ac2-4077-93e1-be1a367ac93b" path="/var/lib/kubelet/pods/602c7bb7-3ac2-4077-93e1-be1a367ac93b/volumes" Jan 26 17:45:25 crc kubenswrapper[4896]: I0126 17:45:25.807000 4896 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6575bc9f47-rkmnv_dce71be2-915b-4c8e-9a4e-ebe6c278ddcf/kube-rbac-proxy/0.log" Jan 26 17:45:25 crc kubenswrapper[4896]: I0126 17:45:25.925347 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6575bc9f47-rkmnv_dce71be2-915b-4c8e-9a4e-ebe6c278ddcf/manager/1.log" Jan 26 17:45:25 crc kubenswrapper[4896]: I0126 17:45:25.945800 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6575bc9f47-rkmnv_dce71be2-915b-4c8e-9a4e-ebe6c278ddcf/manager/0.log" Jan 26 17:45:31 crc kubenswrapper[4896]: I0126 17:45:31.961295 4896 scope.go:117] "RemoveContainer" containerID="fef96db1a261eab85aa2f02da72f78f53889f64d0701636e4547197512427954" Jan 26 17:46:21 crc kubenswrapper[4896]: I0126 17:46:21.922520 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x22bb"] Jan 26 17:46:21 crc kubenswrapper[4896]: E0126 17:46:21.923977 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="602c7bb7-3ac2-4077-93e1-be1a367ac93b" containerName="extract-content" Jan 26 17:46:21 crc kubenswrapper[4896]: I0126 17:46:21.924030 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="602c7bb7-3ac2-4077-93e1-be1a367ac93b" containerName="extract-content" Jan 26 17:46:21 crc kubenswrapper[4896]: E0126 17:46:21.924061 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="602c7bb7-3ac2-4077-93e1-be1a367ac93b" containerName="extract-utilities" Jan 26 17:46:21 crc kubenswrapper[4896]: I0126 17:46:21.924070 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="602c7bb7-3ac2-4077-93e1-be1a367ac93b" containerName="extract-utilities" Jan 26 17:46:21 crc kubenswrapper[4896]: E0126 17:46:21.924108 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26" 
containerName="collect-profiles" Jan 26 17:46:21 crc kubenswrapper[4896]: I0126 17:46:21.924116 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26" containerName="collect-profiles" Jan 26 17:46:21 crc kubenswrapper[4896]: E0126 17:46:21.924167 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="602c7bb7-3ac2-4077-93e1-be1a367ac93b" containerName="registry-server" Jan 26 17:46:21 crc kubenswrapper[4896]: I0126 17:46:21.924174 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="602c7bb7-3ac2-4077-93e1-be1a367ac93b" containerName="registry-server" Jan 26 17:46:21 crc kubenswrapper[4896]: I0126 17:46:21.924514 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="602c7bb7-3ac2-4077-93e1-be1a367ac93b" containerName="registry-server" Jan 26 17:46:21 crc kubenswrapper[4896]: I0126 17:46:21.924532 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1d331ef-4c54-4b17-8ed9-a4d56d3e8d26" containerName="collect-profiles" Jan 26 17:46:21 crc kubenswrapper[4896]: I0126 17:46:21.927504 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x22bb" Jan 26 17:46:21 crc kubenswrapper[4896]: I0126 17:46:21.976784 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x22bb"] Jan 26 17:46:22 crc kubenswrapper[4896]: I0126 17:46:22.021347 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2487ff57-953b-44cf-8a7f-954b061f587e-catalog-content\") pod \"community-operators-x22bb\" (UID: \"2487ff57-953b-44cf-8a7f-954b061f587e\") " pod="openshift-marketplace/community-operators-x22bb" Jan 26 17:46:22 crc kubenswrapper[4896]: I0126 17:46:22.021619 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2487ff57-953b-44cf-8a7f-954b061f587e-utilities\") pod \"community-operators-x22bb\" (UID: \"2487ff57-953b-44cf-8a7f-954b061f587e\") " pod="openshift-marketplace/community-operators-x22bb" Jan 26 17:46:22 crc kubenswrapper[4896]: I0126 17:46:22.022073 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvz9c\" (UniqueName: \"kubernetes.io/projected/2487ff57-953b-44cf-8a7f-954b061f587e-kube-api-access-cvz9c\") pod \"community-operators-x22bb\" (UID: \"2487ff57-953b-44cf-8a7f-954b061f587e\") " pod="openshift-marketplace/community-operators-x22bb" Jan 26 17:46:22 crc kubenswrapper[4896]: I0126 17:46:22.126928 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2487ff57-953b-44cf-8a7f-954b061f587e-catalog-content\") pod \"community-operators-x22bb\" (UID: \"2487ff57-953b-44cf-8a7f-954b061f587e\") " pod="openshift-marketplace/community-operators-x22bb" Jan 26 17:46:22 crc kubenswrapper[4896]: I0126 17:46:22.127834 4896 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2487ff57-953b-44cf-8a7f-954b061f587e-utilities\") pod \"community-operators-x22bb\" (UID: \"2487ff57-953b-44cf-8a7f-954b061f587e\") " pod="openshift-marketplace/community-operators-x22bb" Jan 26 17:46:22 crc kubenswrapper[4896]: I0126 17:46:22.128348 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvz9c\" (UniqueName: \"kubernetes.io/projected/2487ff57-953b-44cf-8a7f-954b061f587e-kube-api-access-cvz9c\") pod \"community-operators-x22bb\" (UID: \"2487ff57-953b-44cf-8a7f-954b061f587e\") " pod="openshift-marketplace/community-operators-x22bb" Jan 26 17:46:22 crc kubenswrapper[4896]: I0126 17:46:22.129191 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2487ff57-953b-44cf-8a7f-954b061f587e-catalog-content\") pod \"community-operators-x22bb\" (UID: \"2487ff57-953b-44cf-8a7f-954b061f587e\") " pod="openshift-marketplace/community-operators-x22bb" Jan 26 17:46:22 crc kubenswrapper[4896]: I0126 17:46:22.129198 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2487ff57-953b-44cf-8a7f-954b061f587e-utilities\") pod \"community-operators-x22bb\" (UID: \"2487ff57-953b-44cf-8a7f-954b061f587e\") " pod="openshift-marketplace/community-operators-x22bb" Jan 26 17:46:22 crc kubenswrapper[4896]: I0126 17:46:22.152712 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvz9c\" (UniqueName: \"kubernetes.io/projected/2487ff57-953b-44cf-8a7f-954b061f587e-kube-api-access-cvz9c\") pod \"community-operators-x22bb\" (UID: \"2487ff57-953b-44cf-8a7f-954b061f587e\") " pod="openshift-marketplace/community-operators-x22bb" Jan 26 17:46:22 crc kubenswrapper[4896]: I0126 17:46:22.274514 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x22bb" Jan 26 17:46:22 crc kubenswrapper[4896]: I0126 17:46:22.920024 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x22bb"] Jan 26 17:46:23 crc kubenswrapper[4896]: I0126 17:46:23.185211 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x22bb" event={"ID":"2487ff57-953b-44cf-8a7f-954b061f587e","Type":"ContainerStarted","Data":"ba764f924569fbf39d66944531eb433558a17f0b9d9d7881091b1d04a906529d"} Jan 26 17:46:24 crc kubenswrapper[4896]: I0126 17:46:24.201885 4896 generic.go:334] "Generic (PLEG): container finished" podID="2487ff57-953b-44cf-8a7f-954b061f587e" containerID="9d49f263a225d1c2ac0b9a4b79a95ec2263b3c3b2fa4bd71822e31ed706113f6" exitCode=0 Jan 26 17:46:24 crc kubenswrapper[4896]: I0126 17:46:24.201979 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x22bb" event={"ID":"2487ff57-953b-44cf-8a7f-954b061f587e","Type":"ContainerDied","Data":"9d49f263a225d1c2ac0b9a4b79a95ec2263b3c3b2fa4bd71822e31ed706113f6"} Jan 26 17:46:25 crc kubenswrapper[4896]: I0126 17:46:25.223782 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x22bb" event={"ID":"2487ff57-953b-44cf-8a7f-954b061f587e","Type":"ContainerStarted","Data":"8614a884bc6d6f7cbf4d6e318bcfca042df54eb435aa3c289203c90d75494770"} Jan 26 17:46:26 crc kubenswrapper[4896]: I0126 17:46:26.237880 4896 generic.go:334] "Generic (PLEG): container finished" podID="2487ff57-953b-44cf-8a7f-954b061f587e" containerID="8614a884bc6d6f7cbf4d6e318bcfca042df54eb435aa3c289203c90d75494770" exitCode=0 Jan 26 17:46:26 crc kubenswrapper[4896]: I0126 17:46:26.238095 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x22bb" 
event={"ID":"2487ff57-953b-44cf-8a7f-954b061f587e","Type":"ContainerDied","Data":"8614a884bc6d6f7cbf4d6e318bcfca042df54eb435aa3c289203c90d75494770"} Jan 26 17:46:27 crc kubenswrapper[4896]: I0126 17:46:27.255404 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x22bb" event={"ID":"2487ff57-953b-44cf-8a7f-954b061f587e","Type":"ContainerStarted","Data":"e86c74aadf9c002560ab378151e121234496d28678be77092d8008e5f230bf05"} Jan 26 17:46:27 crc kubenswrapper[4896]: I0126 17:46:27.306236 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x22bb" podStartSLOduration=3.760087199 podStartE2EDuration="6.305896397s" podCreationTimestamp="2026-01-26 17:46:21 +0000 UTC" firstStartedPulling="2026-01-26 17:46:24.206477848 +0000 UTC m=+7941.988358241" lastFinishedPulling="2026-01-26 17:46:26.752287046 +0000 UTC m=+7944.534167439" observedRunningTime="2026-01-26 17:46:27.282743022 +0000 UTC m=+7945.064623415" watchObservedRunningTime="2026-01-26 17:46:27.305896397 +0000 UTC m=+7945.087776790" Jan 26 17:46:32 crc kubenswrapper[4896]: I0126 17:46:32.275551 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x22bb" Jan 26 17:46:32 crc kubenswrapper[4896]: I0126 17:46:32.277791 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x22bb" Jan 26 17:46:32 crc kubenswrapper[4896]: I0126 17:46:32.333367 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x22bb" Jan 26 17:46:33 crc kubenswrapper[4896]: I0126 17:46:33.397426 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x22bb" Jan 26 17:46:33 crc kubenswrapper[4896]: I0126 17:46:33.469949 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-x22bb"] Jan 26 17:46:35 crc kubenswrapper[4896]: I0126 17:46:35.371043 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x22bb" podUID="2487ff57-953b-44cf-8a7f-954b061f587e" containerName="registry-server" containerID="cri-o://e86c74aadf9c002560ab378151e121234496d28678be77092d8008e5f230bf05" gracePeriod=2 Jan 26 17:46:35 crc kubenswrapper[4896]: I0126 17:46:35.965364 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x22bb" Jan 26 17:46:36 crc kubenswrapper[4896]: I0126 17:46:36.072637 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2487ff57-953b-44cf-8a7f-954b061f587e-utilities\") pod \"2487ff57-953b-44cf-8a7f-954b061f587e\" (UID: \"2487ff57-953b-44cf-8a7f-954b061f587e\") " Jan 26 17:46:36 crc kubenswrapper[4896]: I0126 17:46:36.073102 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvz9c\" (UniqueName: \"kubernetes.io/projected/2487ff57-953b-44cf-8a7f-954b061f587e-kube-api-access-cvz9c\") pod \"2487ff57-953b-44cf-8a7f-954b061f587e\" (UID: \"2487ff57-953b-44cf-8a7f-954b061f587e\") " Jan 26 17:46:36 crc kubenswrapper[4896]: I0126 17:46:36.073220 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2487ff57-953b-44cf-8a7f-954b061f587e-catalog-content\") pod \"2487ff57-953b-44cf-8a7f-954b061f587e\" (UID: \"2487ff57-953b-44cf-8a7f-954b061f587e\") " Jan 26 17:46:36 crc kubenswrapper[4896]: I0126 17:46:36.073908 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2487ff57-953b-44cf-8a7f-954b061f587e-utilities" (OuterVolumeSpecName: "utilities") pod "2487ff57-953b-44cf-8a7f-954b061f587e" (UID: 
"2487ff57-953b-44cf-8a7f-954b061f587e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:46:36 crc kubenswrapper[4896]: I0126 17:46:36.076304 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2487ff57-953b-44cf-8a7f-954b061f587e-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:46:36 crc kubenswrapper[4896]: I0126 17:46:36.081013 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2487ff57-953b-44cf-8a7f-954b061f587e-kube-api-access-cvz9c" (OuterVolumeSpecName: "kube-api-access-cvz9c") pod "2487ff57-953b-44cf-8a7f-954b061f587e" (UID: "2487ff57-953b-44cf-8a7f-954b061f587e"). InnerVolumeSpecName "kube-api-access-cvz9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:46:36 crc kubenswrapper[4896]: I0126 17:46:36.145862 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2487ff57-953b-44cf-8a7f-954b061f587e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2487ff57-953b-44cf-8a7f-954b061f587e" (UID: "2487ff57-953b-44cf-8a7f-954b061f587e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:46:36 crc kubenswrapper[4896]: I0126 17:46:36.179661 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvz9c\" (UniqueName: \"kubernetes.io/projected/2487ff57-953b-44cf-8a7f-954b061f587e-kube-api-access-cvz9c\") on node \"crc\" DevicePath \"\"" Jan 26 17:46:36 crc kubenswrapper[4896]: I0126 17:46:36.180038 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2487ff57-953b-44cf-8a7f-954b061f587e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:46:36 crc kubenswrapper[4896]: I0126 17:46:36.386208 4896 generic.go:334] "Generic (PLEG): container finished" podID="2487ff57-953b-44cf-8a7f-954b061f587e" containerID="e86c74aadf9c002560ab378151e121234496d28678be77092d8008e5f230bf05" exitCode=0 Jan 26 17:46:36 crc kubenswrapper[4896]: I0126 17:46:36.386303 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x22bb" Jan 26 17:46:36 crc kubenswrapper[4896]: I0126 17:46:36.387456 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x22bb" event={"ID":"2487ff57-953b-44cf-8a7f-954b061f587e","Type":"ContainerDied","Data":"e86c74aadf9c002560ab378151e121234496d28678be77092d8008e5f230bf05"} Jan 26 17:46:36 crc kubenswrapper[4896]: I0126 17:46:36.387628 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x22bb" event={"ID":"2487ff57-953b-44cf-8a7f-954b061f587e","Type":"ContainerDied","Data":"ba764f924569fbf39d66944531eb433558a17f0b9d9d7881091b1d04a906529d"} Jan 26 17:46:36 crc kubenswrapper[4896]: I0126 17:46:36.387712 4896 scope.go:117] "RemoveContainer" containerID="e86c74aadf9c002560ab378151e121234496d28678be77092d8008e5f230bf05" Jan 26 17:46:36 crc kubenswrapper[4896]: I0126 17:46:36.417406 4896 scope.go:117] "RemoveContainer" 
containerID="8614a884bc6d6f7cbf4d6e318bcfca042df54eb435aa3c289203c90d75494770" Jan 26 17:46:36 crc kubenswrapper[4896]: I0126 17:46:36.435572 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x22bb"] Jan 26 17:46:36 crc kubenswrapper[4896]: I0126 17:46:36.450559 4896 scope.go:117] "RemoveContainer" containerID="9d49f263a225d1c2ac0b9a4b79a95ec2263b3c3b2fa4bd71822e31ed706113f6" Jan 26 17:46:36 crc kubenswrapper[4896]: I0126 17:46:36.453945 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x22bb"] Jan 26 17:46:36 crc kubenswrapper[4896]: I0126 17:46:36.508904 4896 scope.go:117] "RemoveContainer" containerID="e86c74aadf9c002560ab378151e121234496d28678be77092d8008e5f230bf05" Jan 26 17:46:36 crc kubenswrapper[4896]: E0126 17:46:36.509422 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e86c74aadf9c002560ab378151e121234496d28678be77092d8008e5f230bf05\": container with ID starting with e86c74aadf9c002560ab378151e121234496d28678be77092d8008e5f230bf05 not found: ID does not exist" containerID="e86c74aadf9c002560ab378151e121234496d28678be77092d8008e5f230bf05" Jan 26 17:46:36 crc kubenswrapper[4896]: I0126 17:46:36.509464 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e86c74aadf9c002560ab378151e121234496d28678be77092d8008e5f230bf05"} err="failed to get container status \"e86c74aadf9c002560ab378151e121234496d28678be77092d8008e5f230bf05\": rpc error: code = NotFound desc = could not find container \"e86c74aadf9c002560ab378151e121234496d28678be77092d8008e5f230bf05\": container with ID starting with e86c74aadf9c002560ab378151e121234496d28678be77092d8008e5f230bf05 not found: ID does not exist" Jan 26 17:46:36 crc kubenswrapper[4896]: I0126 17:46:36.509494 4896 scope.go:117] "RemoveContainer" 
containerID="8614a884bc6d6f7cbf4d6e318bcfca042df54eb435aa3c289203c90d75494770" Jan 26 17:46:36 crc kubenswrapper[4896]: E0126 17:46:36.510554 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8614a884bc6d6f7cbf4d6e318bcfca042df54eb435aa3c289203c90d75494770\": container with ID starting with 8614a884bc6d6f7cbf4d6e318bcfca042df54eb435aa3c289203c90d75494770 not found: ID does not exist" containerID="8614a884bc6d6f7cbf4d6e318bcfca042df54eb435aa3c289203c90d75494770" Jan 26 17:46:36 crc kubenswrapper[4896]: I0126 17:46:36.510665 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8614a884bc6d6f7cbf4d6e318bcfca042df54eb435aa3c289203c90d75494770"} err="failed to get container status \"8614a884bc6d6f7cbf4d6e318bcfca042df54eb435aa3c289203c90d75494770\": rpc error: code = NotFound desc = could not find container \"8614a884bc6d6f7cbf4d6e318bcfca042df54eb435aa3c289203c90d75494770\": container with ID starting with 8614a884bc6d6f7cbf4d6e318bcfca042df54eb435aa3c289203c90d75494770 not found: ID does not exist" Jan 26 17:46:36 crc kubenswrapper[4896]: I0126 17:46:36.510692 4896 scope.go:117] "RemoveContainer" containerID="9d49f263a225d1c2ac0b9a4b79a95ec2263b3c3b2fa4bd71822e31ed706113f6" Jan 26 17:46:36 crc kubenswrapper[4896]: E0126 17:46:36.510951 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d49f263a225d1c2ac0b9a4b79a95ec2263b3c3b2fa4bd71822e31ed706113f6\": container with ID starting with 9d49f263a225d1c2ac0b9a4b79a95ec2263b3c3b2fa4bd71822e31ed706113f6 not found: ID does not exist" containerID="9d49f263a225d1c2ac0b9a4b79a95ec2263b3c3b2fa4bd71822e31ed706113f6" Jan 26 17:46:36 crc kubenswrapper[4896]: I0126 17:46:36.510976 4896 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9d49f263a225d1c2ac0b9a4b79a95ec2263b3c3b2fa4bd71822e31ed706113f6"} err="failed to get container status \"9d49f263a225d1c2ac0b9a4b79a95ec2263b3c3b2fa4bd71822e31ed706113f6\": rpc error: code = NotFound desc = could not find container \"9d49f263a225d1c2ac0b9a4b79a95ec2263b3c3b2fa4bd71822e31ed706113f6\": container with ID starting with 9d49f263a225d1c2ac0b9a4b79a95ec2263b3c3b2fa4bd71822e31ed706113f6 not found: ID does not exist" Jan 26 17:46:36 crc kubenswrapper[4896]: I0126 17:46:36.790546 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2487ff57-953b-44cf-8a7f-954b061f587e" path="/var/lib/kubelet/pods/2487ff57-953b-44cf-8a7f-954b061f587e/volumes" Jan 26 17:46:48 crc kubenswrapper[4896]: I0126 17:46:48.813909 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:46:48 crc kubenswrapper[4896]: I0126 17:46:48.814422 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:47:18 crc kubenswrapper[4896]: I0126 17:47:18.813178 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:47:18 crc kubenswrapper[4896]: I0126 17:47:18.813796 4896 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:47:23 crc kubenswrapper[4896]: I0126 17:47:23.517438 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d9grr"] Jan 26 17:47:23 crc kubenswrapper[4896]: E0126 17:47:23.518499 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2487ff57-953b-44cf-8a7f-954b061f587e" containerName="registry-server" Jan 26 17:47:23 crc kubenswrapper[4896]: I0126 17:47:23.518514 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="2487ff57-953b-44cf-8a7f-954b061f587e" containerName="registry-server" Jan 26 17:47:23 crc kubenswrapper[4896]: E0126 17:47:23.518540 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2487ff57-953b-44cf-8a7f-954b061f587e" containerName="extract-utilities" Jan 26 17:47:23 crc kubenswrapper[4896]: I0126 17:47:23.518546 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="2487ff57-953b-44cf-8a7f-954b061f587e" containerName="extract-utilities" Jan 26 17:47:23 crc kubenswrapper[4896]: E0126 17:47:23.518613 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2487ff57-953b-44cf-8a7f-954b061f587e" containerName="extract-content" Jan 26 17:47:23 crc kubenswrapper[4896]: I0126 17:47:23.518620 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="2487ff57-953b-44cf-8a7f-954b061f587e" containerName="extract-content" Jan 26 17:47:23 crc kubenswrapper[4896]: I0126 17:47:23.518883 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="2487ff57-953b-44cf-8a7f-954b061f587e" containerName="registry-server" Jan 26 17:47:23 crc kubenswrapper[4896]: I0126 17:47:23.520758 4896 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d9grr"
Jan 26 17:47:23 crc kubenswrapper[4896]: I0126 17:47:23.589376 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d9grr"]
Jan 26 17:47:23 crc kubenswrapper[4896]: I0126 17:47:23.667818 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f44baf4-f9da-4d04-9e6f-6bfedc90df8b-utilities\") pod \"certified-operators-d9grr\" (UID: \"6f44baf4-f9da-4d04-9e6f-6bfedc90df8b\") " pod="openshift-marketplace/certified-operators-d9grr"
Jan 26 17:47:23 crc kubenswrapper[4896]: I0126 17:47:23.668424 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f44baf4-f9da-4d04-9e6f-6bfedc90df8b-catalog-content\") pod \"certified-operators-d9grr\" (UID: \"6f44baf4-f9da-4d04-9e6f-6bfedc90df8b\") " pod="openshift-marketplace/certified-operators-d9grr"
Jan 26 17:47:23 crc kubenswrapper[4896]: I0126 17:47:23.668595 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5xm6\" (UniqueName: \"kubernetes.io/projected/6f44baf4-f9da-4d04-9e6f-6bfedc90df8b-kube-api-access-x5xm6\") pod \"certified-operators-d9grr\" (UID: \"6f44baf4-f9da-4d04-9e6f-6bfedc90df8b\") " pod="openshift-marketplace/certified-operators-d9grr"
Jan 26 17:47:23 crc kubenswrapper[4896]: I0126 17:47:23.772203 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f44baf4-f9da-4d04-9e6f-6bfedc90df8b-utilities\") pod \"certified-operators-d9grr\" (UID: \"6f44baf4-f9da-4d04-9e6f-6bfedc90df8b\") " pod="openshift-marketplace/certified-operators-d9grr"
Jan 26 17:47:23 crc kubenswrapper[4896]: I0126 17:47:23.772275 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f44baf4-f9da-4d04-9e6f-6bfedc90df8b-catalog-content\") pod \"certified-operators-d9grr\" (UID: \"6f44baf4-f9da-4d04-9e6f-6bfedc90df8b\") " pod="openshift-marketplace/certified-operators-d9grr"
Jan 26 17:47:23 crc kubenswrapper[4896]: I0126 17:47:23.772311 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5xm6\" (UniqueName: \"kubernetes.io/projected/6f44baf4-f9da-4d04-9e6f-6bfedc90df8b-kube-api-access-x5xm6\") pod \"certified-operators-d9grr\" (UID: \"6f44baf4-f9da-4d04-9e6f-6bfedc90df8b\") " pod="openshift-marketplace/certified-operators-d9grr"
Jan 26 17:47:23 crc kubenswrapper[4896]: I0126 17:47:23.773242 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f44baf4-f9da-4d04-9e6f-6bfedc90df8b-utilities\") pod \"certified-operators-d9grr\" (UID: \"6f44baf4-f9da-4d04-9e6f-6bfedc90df8b\") " pod="openshift-marketplace/certified-operators-d9grr"
Jan 26 17:47:23 crc kubenswrapper[4896]: I0126 17:47:23.773248 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f44baf4-f9da-4d04-9e6f-6bfedc90df8b-catalog-content\") pod \"certified-operators-d9grr\" (UID: \"6f44baf4-f9da-4d04-9e6f-6bfedc90df8b\") " pod="openshift-marketplace/certified-operators-d9grr"
Jan 26 17:47:23 crc kubenswrapper[4896]: I0126 17:47:23.795456 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5xm6\" (UniqueName: \"kubernetes.io/projected/6f44baf4-f9da-4d04-9e6f-6bfedc90df8b-kube-api-access-x5xm6\") pod \"certified-operators-d9grr\" (UID: \"6f44baf4-f9da-4d04-9e6f-6bfedc90df8b\") " pod="openshift-marketplace/certified-operators-d9grr"
Jan 26 17:47:23 crc kubenswrapper[4896]: I0126 17:47:23.842111 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d9grr"
Jan 26 17:47:24 crc kubenswrapper[4896]: I0126 17:47:24.542564 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d9grr"]
Jan 26 17:47:25 crc kubenswrapper[4896]: I0126 17:47:25.229319 4896 generic.go:334] "Generic (PLEG): container finished" podID="6f44baf4-f9da-4d04-9e6f-6bfedc90df8b" containerID="f2614c94df347b93e1100a67cf2d94b8a5c19fdac20ebb43d8e8974eb7bccfa7" exitCode=0
Jan 26 17:47:25 crc kubenswrapper[4896]: I0126 17:47:25.229661 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9grr" event={"ID":"6f44baf4-f9da-4d04-9e6f-6bfedc90df8b","Type":"ContainerDied","Data":"f2614c94df347b93e1100a67cf2d94b8a5c19fdac20ebb43d8e8974eb7bccfa7"}
Jan 26 17:47:25 crc kubenswrapper[4896]: I0126 17:47:25.229693 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9grr" event={"ID":"6f44baf4-f9da-4d04-9e6f-6bfedc90df8b","Type":"ContainerStarted","Data":"14549c634b93a1db1e3e9219a772ed0f190d2df5ec1a264c8811366da3cc1d82"}
Jan 26 17:47:27 crc kubenswrapper[4896]: I0126 17:47:27.263956 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9grr" event={"ID":"6f44baf4-f9da-4d04-9e6f-6bfedc90df8b","Type":"ContainerStarted","Data":"7827efcd4f942ee00edf525e0423063267cc3c213a3a37c50012855a842bb789"}
Jan 26 17:47:28 crc kubenswrapper[4896]: I0126 17:47:28.281398 4896 generic.go:334] "Generic (PLEG): container finished" podID="6f44baf4-f9da-4d04-9e6f-6bfedc90df8b" containerID="7827efcd4f942ee00edf525e0423063267cc3c213a3a37c50012855a842bb789" exitCode=0
Jan 26 17:47:28 crc kubenswrapper[4896]: I0126 17:47:28.281478 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9grr" event={"ID":"6f44baf4-f9da-4d04-9e6f-6bfedc90df8b","Type":"ContainerDied","Data":"7827efcd4f942ee00edf525e0423063267cc3c213a3a37c50012855a842bb789"}
Jan 26 17:47:29 crc kubenswrapper[4896]: I0126 17:47:29.293881 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9grr" event={"ID":"6f44baf4-f9da-4d04-9e6f-6bfedc90df8b","Type":"ContainerStarted","Data":"94723d19d6f31d0e51e0fb1ed9e0270898b690afdabc3af94108063645f3871d"}
Jan 26 17:47:29 crc kubenswrapper[4896]: I0126 17:47:29.327034 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d9grr" podStartSLOduration=2.829063676 podStartE2EDuration="6.327013066s" podCreationTimestamp="2026-01-26 17:47:23 +0000 UTC" firstStartedPulling="2026-01-26 17:47:25.2314423 +0000 UTC m=+8003.013322693" lastFinishedPulling="2026-01-26 17:47:28.72939169 +0000 UTC m=+8006.511272083" observedRunningTime="2026-01-26 17:47:29.317135895 +0000 UTC m=+8007.099016288" watchObservedRunningTime="2026-01-26 17:47:29.327013066 +0000 UTC m=+8007.108893449"
Jan 26 17:47:32 crc kubenswrapper[4896]: I0126 17:47:32.189744 4896 scope.go:117] "RemoveContainer" containerID="80f4ebeb240506c2eedce506ff7215c95ca86b963932dd3af57c415e0a784ae5"
Jan 26 17:47:33 crc kubenswrapper[4896]: I0126 17:47:33.843117 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d9grr"
Jan 26 17:47:33 crc kubenswrapper[4896]: I0126 17:47:33.844817 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d9grr"
Jan 26 17:47:33 crc kubenswrapper[4896]: I0126 17:47:33.913352 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d9grr"
Jan 26 17:47:34 crc kubenswrapper[4896]: I0126 17:47:34.404964 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d9grr"
Jan 26 17:47:34 crc kubenswrapper[4896]: I0126 17:47:34.475462 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d9grr"]
Jan 26 17:47:36 crc kubenswrapper[4896]: I0126 17:47:36.367978 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d9grr" podUID="6f44baf4-f9da-4d04-9e6f-6bfedc90df8b" containerName="registry-server" containerID="cri-o://94723d19d6f31d0e51e0fb1ed9e0270898b690afdabc3af94108063645f3871d" gracePeriod=2
Jan 26 17:47:37 crc kubenswrapper[4896]: I0126 17:47:37.410400 4896 generic.go:334] "Generic (PLEG): container finished" podID="6f44baf4-f9da-4d04-9e6f-6bfedc90df8b" containerID="94723d19d6f31d0e51e0fb1ed9e0270898b690afdabc3af94108063645f3871d" exitCode=0
Jan 26 17:47:37 crc kubenswrapper[4896]: I0126 17:47:37.410799 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9grr" event={"ID":"6f44baf4-f9da-4d04-9e6f-6bfedc90df8b","Type":"ContainerDied","Data":"94723d19d6f31d0e51e0fb1ed9e0270898b690afdabc3af94108063645f3871d"}
Jan 26 17:47:37 crc kubenswrapper[4896]: I0126 17:47:37.557286 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d9grr"
Jan 26 17:47:37 crc kubenswrapper[4896]: I0126 17:47:37.660360 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5xm6\" (UniqueName: \"kubernetes.io/projected/6f44baf4-f9da-4d04-9e6f-6bfedc90df8b-kube-api-access-x5xm6\") pod \"6f44baf4-f9da-4d04-9e6f-6bfedc90df8b\" (UID: \"6f44baf4-f9da-4d04-9e6f-6bfedc90df8b\") "
Jan 26 17:47:37 crc kubenswrapper[4896]: I0126 17:47:37.661021 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f44baf4-f9da-4d04-9e6f-6bfedc90df8b-catalog-content\") pod \"6f44baf4-f9da-4d04-9e6f-6bfedc90df8b\" (UID: \"6f44baf4-f9da-4d04-9e6f-6bfedc90df8b\") "
Jan 26 17:47:37 crc kubenswrapper[4896]: I0126 17:47:37.661268 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f44baf4-f9da-4d04-9e6f-6bfedc90df8b-utilities\") pod \"6f44baf4-f9da-4d04-9e6f-6bfedc90df8b\" (UID: \"6f44baf4-f9da-4d04-9e6f-6bfedc90df8b\") "
Jan 26 17:47:37 crc kubenswrapper[4896]: I0126 17:47:37.662038 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f44baf4-f9da-4d04-9e6f-6bfedc90df8b-utilities" (OuterVolumeSpecName: "utilities") pod "6f44baf4-f9da-4d04-9e6f-6bfedc90df8b" (UID: "6f44baf4-f9da-4d04-9e6f-6bfedc90df8b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 17:47:37 crc kubenswrapper[4896]: I0126 17:47:37.680461 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f44baf4-f9da-4d04-9e6f-6bfedc90df8b-kube-api-access-x5xm6" (OuterVolumeSpecName: "kube-api-access-x5xm6") pod "6f44baf4-f9da-4d04-9e6f-6bfedc90df8b" (UID: "6f44baf4-f9da-4d04-9e6f-6bfedc90df8b"). InnerVolumeSpecName "kube-api-access-x5xm6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 17:47:37 crc kubenswrapper[4896]: I0126 17:47:37.709153 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f44baf4-f9da-4d04-9e6f-6bfedc90df8b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6f44baf4-f9da-4d04-9e6f-6bfedc90df8b" (UID: "6f44baf4-f9da-4d04-9e6f-6bfedc90df8b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 17:47:37 crc kubenswrapper[4896]: I0126 17:47:37.764111 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f44baf4-f9da-4d04-9e6f-6bfedc90df8b-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 17:47:37 crc kubenswrapper[4896]: I0126 17:47:37.764153 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f44baf4-f9da-4d04-9e6f-6bfedc90df8b-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 17:47:37 crc kubenswrapper[4896]: I0126 17:47:37.764195 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5xm6\" (UniqueName: \"kubernetes.io/projected/6f44baf4-f9da-4d04-9e6f-6bfedc90df8b-kube-api-access-x5xm6\") on node \"crc\" DevicePath \"\""
Jan 26 17:47:38 crc kubenswrapper[4896]: I0126 17:47:38.425898 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9grr" event={"ID":"6f44baf4-f9da-4d04-9e6f-6bfedc90df8b","Type":"ContainerDied","Data":"14549c634b93a1db1e3e9219a772ed0f190d2df5ec1a264c8811366da3cc1d82"}
Jan 26 17:47:38 crc kubenswrapper[4896]: I0126 17:47:38.426150 4896 scope.go:117] "RemoveContainer" containerID="94723d19d6f31d0e51e0fb1ed9e0270898b690afdabc3af94108063645f3871d"
Jan 26 17:47:38 crc kubenswrapper[4896]: I0126 17:47:38.425997 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d9grr"
Jan 26 17:47:38 crc kubenswrapper[4896]: I0126 17:47:38.452347 4896 scope.go:117] "RemoveContainer" containerID="7827efcd4f942ee00edf525e0423063267cc3c213a3a37c50012855a842bb789"
Jan 26 17:47:38 crc kubenswrapper[4896]: I0126 17:47:38.480462 4896 scope.go:117] "RemoveContainer" containerID="f2614c94df347b93e1100a67cf2d94b8a5c19fdac20ebb43d8e8974eb7bccfa7"
Jan 26 17:47:38 crc kubenswrapper[4896]: I0126 17:47:38.483363 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d9grr"]
Jan 26 17:47:38 crc kubenswrapper[4896]: I0126 17:47:38.496728 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d9grr"]
Jan 26 17:47:38 crc kubenswrapper[4896]: I0126 17:47:38.773044 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f44baf4-f9da-4d04-9e6f-6bfedc90df8b" path="/var/lib/kubelet/pods/6f44baf4-f9da-4d04-9e6f-6bfedc90df8b/volumes"
Jan 26 17:47:48 crc kubenswrapper[4896]: I0126 17:47:48.814360 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 17:47:48 crc kubenswrapper[4896]: I0126 17:47:48.814941 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 17:47:48 crc kubenswrapper[4896]: I0126 17:47:48.815003 4896 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw"
Jan 26 17:47:48 crc kubenswrapper[4896]: I0126 17:47:48.816110 4896 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"31a6c6134e392a9f99f978abe5c40d87fa47f2d01f69dac0107ebc9d8027616d"} pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 17:47:48 crc kubenswrapper[4896]: I0126 17:47:48.816183 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" containerID="cri-o://31a6c6134e392a9f99f978abe5c40d87fa47f2d01f69dac0107ebc9d8027616d" gracePeriod=600
Jan 26 17:47:49 crc kubenswrapper[4896]: I0126 17:47:49.629526 4896 generic.go:334] "Generic (PLEG): container finished" podID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerID="31a6c6134e392a9f99f978abe5c40d87fa47f2d01f69dac0107ebc9d8027616d" exitCode=0
Jan 26 17:47:49 crc kubenswrapper[4896]: I0126 17:47:49.629619 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerDied","Data":"31a6c6134e392a9f99f978abe5c40d87fa47f2d01f69dac0107ebc9d8027616d"}
Jan 26 17:47:49 crc kubenswrapper[4896]: I0126 17:47:49.630189 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerStarted","Data":"f0fb4b3415b759757299f9573c5416274f670a3f495b71e315338a007cc6b964"}
Jan 26 17:47:49 crc kubenswrapper[4896]: I0126 17:47:49.630226 4896 scope.go:117] "RemoveContainer" containerID="2b2d87da80f85568d27958abdc15695c3a62bbace342b5d1dfaf284f7e6a5bca"
Jan 26 17:47:59 crc kubenswrapper[4896]: I0126 17:47:59.754422 4896 generic.go:334] "Generic (PLEG): container finished" podID="d5fa672b-48fc-4b57-80ce-92eb620c3615" containerID="bbc92e0ab9f65fdd5508141481c9845888013d808145dc392f8c46928c165b76" exitCode=0
Jan 26 17:47:59 crc kubenswrapper[4896]: I0126 17:47:59.754565 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5rqvl/must-gather-hf4bl" event={"ID":"d5fa672b-48fc-4b57-80ce-92eb620c3615","Type":"ContainerDied","Data":"bbc92e0ab9f65fdd5508141481c9845888013d808145dc392f8c46928c165b76"}
Jan 26 17:47:59 crc kubenswrapper[4896]: I0126 17:47:59.755966 4896 scope.go:117] "RemoveContainer" containerID="bbc92e0ab9f65fdd5508141481c9845888013d808145dc392f8c46928c165b76"
Jan 26 17:48:00 crc kubenswrapper[4896]: I0126 17:48:00.113875 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5rqvl_must-gather-hf4bl_d5fa672b-48fc-4b57-80ce-92eb620c3615/gather/0.log"
Jan 26 17:48:09 crc kubenswrapper[4896]: I0126 17:48:09.836827 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5rqvl/must-gather-hf4bl"]
Jan 26 17:48:09 crc kubenswrapper[4896]: I0126 17:48:09.837725 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-5rqvl/must-gather-hf4bl" podUID="d5fa672b-48fc-4b57-80ce-92eb620c3615" containerName="copy" containerID="cri-o://b505967839e4d9e9f3bb02619a76f87b2b3298fb8f9baa74703235cd9403d3b2" gracePeriod=2
Jan 26 17:48:09 crc kubenswrapper[4896]: I0126 17:48:09.847951 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5rqvl/must-gather-hf4bl"]
Jan 26 17:48:10 crc kubenswrapper[4896]: I0126 17:48:10.125993 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5rqvl_must-gather-hf4bl_d5fa672b-48fc-4b57-80ce-92eb620c3615/copy/0.log"
Jan 26 17:48:10 crc kubenswrapper[4896]: I0126 17:48:10.132337 4896 generic.go:334] "Generic (PLEG): container finished" podID="d5fa672b-48fc-4b57-80ce-92eb620c3615" containerID="b505967839e4d9e9f3bb02619a76f87b2b3298fb8f9baa74703235cd9403d3b2" exitCode=143
Jan 26 17:48:10 crc kubenswrapper[4896]: I0126 17:48:10.560690 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5rqvl_must-gather-hf4bl_d5fa672b-48fc-4b57-80ce-92eb620c3615/copy/0.log"
Jan 26 17:48:10 crc kubenswrapper[4896]: I0126 17:48:10.561230 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5rqvl/must-gather-hf4bl"
Jan 26 17:48:10 crc kubenswrapper[4896]: I0126 17:48:10.665417 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d5fa672b-48fc-4b57-80ce-92eb620c3615-must-gather-output\") pod \"d5fa672b-48fc-4b57-80ce-92eb620c3615\" (UID: \"d5fa672b-48fc-4b57-80ce-92eb620c3615\") "
Jan 26 17:48:10 crc kubenswrapper[4896]: I0126 17:48:10.665924 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vd65j\" (UniqueName: \"kubernetes.io/projected/d5fa672b-48fc-4b57-80ce-92eb620c3615-kube-api-access-vd65j\") pod \"d5fa672b-48fc-4b57-80ce-92eb620c3615\" (UID: \"d5fa672b-48fc-4b57-80ce-92eb620c3615\") "
Jan 26 17:48:10 crc kubenswrapper[4896]: I0126 17:48:10.679461 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5fa672b-48fc-4b57-80ce-92eb620c3615-kube-api-access-vd65j" (OuterVolumeSpecName: "kube-api-access-vd65j") pod "d5fa672b-48fc-4b57-80ce-92eb620c3615" (UID: "d5fa672b-48fc-4b57-80ce-92eb620c3615"). InnerVolumeSpecName "kube-api-access-vd65j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 17:48:10 crc kubenswrapper[4896]: I0126 17:48:10.772561 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vd65j\" (UniqueName: \"kubernetes.io/projected/d5fa672b-48fc-4b57-80ce-92eb620c3615-kube-api-access-vd65j\") on node \"crc\" DevicePath \"\""
Jan 26 17:48:10 crc kubenswrapper[4896]: I0126 17:48:10.858823 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5fa672b-48fc-4b57-80ce-92eb620c3615-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "d5fa672b-48fc-4b57-80ce-92eb620c3615" (UID: "d5fa672b-48fc-4b57-80ce-92eb620c3615"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 17:48:10 crc kubenswrapper[4896]: I0126 17:48:10.875400 4896 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d5fa672b-48fc-4b57-80ce-92eb620c3615-must-gather-output\") on node \"crc\" DevicePath \"\""
Jan 26 17:48:11 crc kubenswrapper[4896]: I0126 17:48:11.145780 4896 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5rqvl_must-gather-hf4bl_d5fa672b-48fc-4b57-80ce-92eb620c3615/copy/0.log"
Jan 26 17:48:11 crc kubenswrapper[4896]: I0126 17:48:11.146404 4896 scope.go:117] "RemoveContainer" containerID="b505967839e4d9e9f3bb02619a76f87b2b3298fb8f9baa74703235cd9403d3b2"
Jan 26 17:48:11 crc kubenswrapper[4896]: I0126 17:48:11.146551 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5rqvl/must-gather-hf4bl"
Jan 26 17:48:11 crc kubenswrapper[4896]: I0126 17:48:11.220242 4896 scope.go:117] "RemoveContainer" containerID="bbc92e0ab9f65fdd5508141481c9845888013d808145dc392f8c46928c165b76"
Jan 26 17:48:12 crc kubenswrapper[4896]: I0126 17:48:12.850862 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5fa672b-48fc-4b57-80ce-92eb620c3615" path="/var/lib/kubelet/pods/d5fa672b-48fc-4b57-80ce-92eb620c3615/volumes"
Jan 26 17:50:18 crc kubenswrapper[4896]: I0126 17:50:18.813924 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 17:50:18 crc kubenswrapper[4896]: I0126 17:50:18.815864 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 17:50:48 crc kubenswrapper[4896]: I0126 17:50:48.814060 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 17:50:48 crc kubenswrapper[4896]: I0126 17:50:48.814717 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 17:51:18 crc kubenswrapper[4896]: I0126 17:51:18.814137 4896 patch_prober.go:28] interesting pod/machine-config-daemon-nrqhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 17:51:18 crc kubenswrapper[4896]: I0126 17:51:18.814572 4896 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 17:51:18 crc kubenswrapper[4896]: I0126 17:51:18.814642 4896 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw"
Jan 26 17:51:18 crc kubenswrapper[4896]: I0126 17:51:18.815536 4896 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f0fb4b3415b759757299f9573c5416274f670a3f495b71e315338a007cc6b964"} pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 17:51:18 crc kubenswrapper[4896]: I0126 17:51:18.815642 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerName="machine-config-daemon" containerID="cri-o://f0fb4b3415b759757299f9573c5416274f670a3f495b71e315338a007cc6b964" gracePeriod=600
Jan 26 17:51:18 crc kubenswrapper[4896]: E0126 17:51:18.975159 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:51:19 crc kubenswrapper[4896]: I0126 17:51:19.929895 4896 generic.go:334] "Generic (PLEG): container finished" podID="0eae0e2b-9d04-4999-b78c-c70aeee09235" containerID="f0fb4b3415b759757299f9573c5416274f670a3f495b71e315338a007cc6b964" exitCode=0
Jan 26 17:51:19 crc kubenswrapper[4896]: I0126 17:51:19.929980 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" event={"ID":"0eae0e2b-9d04-4999-b78c-c70aeee09235","Type":"ContainerDied","Data":"f0fb4b3415b759757299f9573c5416274f670a3f495b71e315338a007cc6b964"}
Jan 26 17:51:19 crc kubenswrapper[4896]: I0126 17:51:19.930263 4896 scope.go:117] "RemoveContainer" containerID="31a6c6134e392a9f99f978abe5c40d87fa47f2d01f69dac0107ebc9d8027616d"
Jan 26 17:51:19 crc kubenswrapper[4896]: I0126 17:51:19.931293 4896 scope.go:117] "RemoveContainer" containerID="f0fb4b3415b759757299f9573c5416274f670a3f495b71e315338a007cc6b964"
Jan 26 17:51:19 crc kubenswrapper[4896]: E0126 17:51:19.931754 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:51:33 crc kubenswrapper[4896]: I0126 17:51:33.761134 4896 scope.go:117] "RemoveContainer" containerID="f0fb4b3415b759757299f9573c5416274f670a3f495b71e315338a007cc6b964"
Jan 26 17:51:33 crc kubenswrapper[4896]: E0126 17:51:33.762307 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:51:45 crc kubenswrapper[4896]: I0126 17:51:45.760321 4896 scope.go:117] "RemoveContainer" containerID="f0fb4b3415b759757299f9573c5416274f670a3f495b71e315338a007cc6b964"
Jan 26 17:51:45 crc kubenswrapper[4896]: E0126 17:51:45.761036 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:51:57 crc kubenswrapper[4896]: I0126 17:51:57.760530 4896 scope.go:117] "RemoveContainer" containerID="f0fb4b3415b759757299f9573c5416274f670a3f495b71e315338a007cc6b964"
Jan 26 17:51:57 crc kubenswrapper[4896]: E0126 17:51:57.761498 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:52:12 crc kubenswrapper[4896]: I0126 17:52:12.775731 4896 scope.go:117] "RemoveContainer" containerID="f0fb4b3415b759757299f9573c5416274f670a3f495b71e315338a007cc6b964"
Jan 26 17:52:12 crc kubenswrapper[4896]: E0126 17:52:12.776656 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:52:22 crc kubenswrapper[4896]: I0126 17:52:22.447394 4896 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6w2z4"]
Jan 26 17:52:22 crc kubenswrapper[4896]: E0126 17:52:22.448745 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f44baf4-f9da-4d04-9e6f-6bfedc90df8b" containerName="extract-utilities"
Jan 26 17:52:22 crc kubenswrapper[4896]: I0126 17:52:22.448778 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f44baf4-f9da-4d04-9e6f-6bfedc90df8b" containerName="extract-utilities"
Jan 26 17:52:22 crc kubenswrapper[4896]: E0126 17:52:22.448809 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5fa672b-48fc-4b57-80ce-92eb620c3615" containerName="copy"
Jan 26 17:52:22 crc kubenswrapper[4896]: I0126 17:52:22.448818 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5fa672b-48fc-4b57-80ce-92eb620c3615" containerName="copy"
Jan 26 17:52:22 crc kubenswrapper[4896]: E0126 17:52:22.448839 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5fa672b-48fc-4b57-80ce-92eb620c3615" containerName="gather"
Jan 26 17:52:22 crc kubenswrapper[4896]: I0126 17:52:22.448847 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5fa672b-48fc-4b57-80ce-92eb620c3615" containerName="gather"
Jan 26 17:52:22 crc kubenswrapper[4896]: E0126 17:52:22.448878 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f44baf4-f9da-4d04-9e6f-6bfedc90df8b" containerName="extract-content"
Jan 26 17:52:22 crc kubenswrapper[4896]: I0126 17:52:22.448887 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f44baf4-f9da-4d04-9e6f-6bfedc90df8b" containerName="extract-content"
Jan 26 17:52:22 crc kubenswrapper[4896]: E0126 17:52:22.448927 4896 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f44baf4-f9da-4d04-9e6f-6bfedc90df8b" containerName="registry-server"
Jan 26 17:52:22 crc kubenswrapper[4896]: I0126 17:52:22.448935 4896 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f44baf4-f9da-4d04-9e6f-6bfedc90df8b" containerName="registry-server"
Jan 26 17:52:22 crc kubenswrapper[4896]: I0126 17:52:22.449332 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f44baf4-f9da-4d04-9e6f-6bfedc90df8b" containerName="registry-server"
Jan 26 17:52:22 crc kubenswrapper[4896]: I0126 17:52:22.449363 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5fa672b-48fc-4b57-80ce-92eb620c3615" containerName="gather"
Jan 26 17:52:22 crc kubenswrapper[4896]: I0126 17:52:22.449390 4896 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5fa672b-48fc-4b57-80ce-92eb620c3615" containerName="copy"
Jan 26 17:52:22 crc kubenswrapper[4896]: I0126 17:52:22.452851 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6w2z4"
Jan 26 17:52:22 crc kubenswrapper[4896]: I0126 17:52:22.471753 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6w2z4"]
Jan 26 17:52:22 crc kubenswrapper[4896]: I0126 17:52:22.582712 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d46e7586-aae6-4125-9af6-aec961bb0dd6-catalog-content\") pod \"redhat-operators-6w2z4\" (UID: \"d46e7586-aae6-4125-9af6-aec961bb0dd6\") " pod="openshift-marketplace/redhat-operators-6w2z4"
Jan 26 17:52:22 crc kubenswrapper[4896]: I0126 17:52:22.583069 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc7zm\" (UniqueName: \"kubernetes.io/projected/d46e7586-aae6-4125-9af6-aec961bb0dd6-kube-api-access-jc7zm\") pod \"redhat-operators-6w2z4\" (UID: \"d46e7586-aae6-4125-9af6-aec961bb0dd6\") " pod="openshift-marketplace/redhat-operators-6w2z4"
Jan 26 17:52:22 crc kubenswrapper[4896]: I0126 17:52:22.583130 4896 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d46e7586-aae6-4125-9af6-aec961bb0dd6-utilities\") pod \"redhat-operators-6w2z4\" (UID: \"d46e7586-aae6-4125-9af6-aec961bb0dd6\") " pod="openshift-marketplace/redhat-operators-6w2z4"
Jan 26 17:52:22 crc kubenswrapper[4896]: I0126 17:52:22.685446 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d46e7586-aae6-4125-9af6-aec961bb0dd6-catalog-content\") pod \"redhat-operators-6w2z4\" (UID: \"d46e7586-aae6-4125-9af6-aec961bb0dd6\") " pod="openshift-marketplace/redhat-operators-6w2z4"
Jan 26 17:52:22 crc kubenswrapper[4896]: I0126 17:52:22.685620 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc7zm\" (UniqueName: \"kubernetes.io/projected/d46e7586-aae6-4125-9af6-aec961bb0dd6-kube-api-access-jc7zm\") pod \"redhat-operators-6w2z4\" (UID: \"d46e7586-aae6-4125-9af6-aec961bb0dd6\") " pod="openshift-marketplace/redhat-operators-6w2z4"
Jan 26 17:52:22 crc kubenswrapper[4896]: I0126 17:52:22.685654 4896 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d46e7586-aae6-4125-9af6-aec961bb0dd6-utilities\") pod \"redhat-operators-6w2z4\" (UID: \"d46e7586-aae6-4125-9af6-aec961bb0dd6\") " pod="openshift-marketplace/redhat-operators-6w2z4"
Jan 26 17:52:22 crc kubenswrapper[4896]: I0126 17:52:22.686767 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d46e7586-aae6-4125-9af6-aec961bb0dd6-catalog-content\") pod \"redhat-operators-6w2z4\" (UID: \"d46e7586-aae6-4125-9af6-aec961bb0dd6\") " pod="openshift-marketplace/redhat-operators-6w2z4"
Jan 26 17:52:22 crc kubenswrapper[4896]: I0126 17:52:22.687181 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d46e7586-aae6-4125-9af6-aec961bb0dd6-utilities\") pod \"redhat-operators-6w2z4\" (UID: \"d46e7586-aae6-4125-9af6-aec961bb0dd6\") " pod="openshift-marketplace/redhat-operators-6w2z4"
Jan 26 17:52:22 crc kubenswrapper[4896]: I0126 17:52:22.715606 4896 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc7zm\" (UniqueName: \"kubernetes.io/projected/d46e7586-aae6-4125-9af6-aec961bb0dd6-kube-api-access-jc7zm\") pod \"redhat-operators-6w2z4\" (UID: \"d46e7586-aae6-4125-9af6-aec961bb0dd6\") " pod="openshift-marketplace/redhat-operators-6w2z4"
Jan 26 17:52:22 crc kubenswrapper[4896]: I0126 17:52:22.789947 4896 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6w2z4"
Jan 26 17:52:23 crc kubenswrapper[4896]: I0126 17:52:23.413432 4896 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6w2z4"]
Jan 26 17:52:23 crc kubenswrapper[4896]: I0126 17:52:23.695327 4896 generic.go:334] "Generic (PLEG): container finished" podID="d46e7586-aae6-4125-9af6-aec961bb0dd6" containerID="10d7e5dbe7fa2bd0eba6ec1381000391351deebd4b203d972b126401087b60a0" exitCode=0
Jan 26 17:52:23 crc kubenswrapper[4896]: I0126 17:52:23.695382 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6w2z4" event={"ID":"d46e7586-aae6-4125-9af6-aec961bb0dd6","Type":"ContainerDied","Data":"10d7e5dbe7fa2bd0eba6ec1381000391351deebd4b203d972b126401087b60a0"}
Jan 26 17:52:23 crc kubenswrapper[4896]: I0126 17:52:23.695653 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6w2z4" event={"ID":"d46e7586-aae6-4125-9af6-aec961bb0dd6","Type":"ContainerStarted","Data":"d0b19bf78b4758e9fcf9e2763fc4ded40f329aedde99ec864bdd135cef9bb552"}
Jan 26 17:52:23 crc kubenswrapper[4896]: I0126 17:52:23.699939 4896 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 17:52:24 crc kubenswrapper[4896]: I0126 17:52:24.707746 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6w2z4" event={"ID":"d46e7586-aae6-4125-9af6-aec961bb0dd6","Type":"ContainerStarted","Data":"1752ccb9e486ad3fd09b6841c8b3364491858d73476e79a81a18e8a8c5b1d39d"}
Jan 26 17:52:24 crc kubenswrapper[4896]: I0126 17:52:24.760386 4896 scope.go:117] "RemoveContainer" containerID="f0fb4b3415b759757299f9573c5416274f670a3f495b71e315338a007cc6b964"
Jan 26 17:52:24 crc kubenswrapper[4896]: E0126 17:52:24.760780 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"
Jan 26 17:52:28 crc kubenswrapper[4896]: I0126 17:52:28.752769 4896 generic.go:334] "Generic (PLEG): container finished" podID="d46e7586-aae6-4125-9af6-aec961bb0dd6" containerID="1752ccb9e486ad3fd09b6841c8b3364491858d73476e79a81a18e8a8c5b1d39d" exitCode=0
Jan 26 17:52:28 crc kubenswrapper[4896]: I0126 17:52:28.752832 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6w2z4" event={"ID":"d46e7586-aae6-4125-9af6-aec961bb0dd6","Type":"ContainerDied","Data":"1752ccb9e486ad3fd09b6841c8b3364491858d73476e79a81a18e8a8c5b1d39d"}
Jan 26 17:52:29 crc kubenswrapper[4896]: I0126 17:52:29.774360 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6w2z4" event={"ID":"d46e7586-aae6-4125-9af6-aec961bb0dd6","Type":"ContainerStarted","Data":"82f2f02eaf0532d5939f8bc857ba9c2a480806e7803f7a6343e21a581f30b65e"}
Jan 26 17:52:29 crc kubenswrapper[4896]: I0126 17:52:29.801944 4896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6w2z4" podStartSLOduration=2.335253335 podStartE2EDuration="7.801914463s" podCreationTimestamp="2026-01-26 17:52:22 +0000 UTC" firstStartedPulling="2026-01-26 17:52:23.697328497 +0000 UTC m=+8301.479208890" lastFinishedPulling="2026-01-26 17:52:29.163989625 +0000 UTC m=+8306.945870018" observedRunningTime="2026-01-26 17:52:29.795199978 +0000 UTC m=+8307.577080371" watchObservedRunningTime="2026-01-26 17:52:29.801914463 +0000 UTC m=+8307.583794856"
Jan 26 17:52:32 crc kubenswrapper[4896]: I0126 17:52:32.791021 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status=""
pod="openshift-marketplace/redhat-operators-6w2z4" Jan 26 17:52:32 crc kubenswrapper[4896]: I0126 17:52:32.791264 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6w2z4" Jan 26 17:52:33 crc kubenswrapper[4896]: I0126 17:52:33.846672 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6w2z4" podUID="d46e7586-aae6-4125-9af6-aec961bb0dd6" containerName="registry-server" probeResult="failure" output=< Jan 26 17:52:33 crc kubenswrapper[4896]: timeout: failed to connect service ":50051" within 1s Jan 26 17:52:33 crc kubenswrapper[4896]: > Jan 26 17:52:36 crc kubenswrapper[4896]: I0126 17:52:36.761462 4896 scope.go:117] "RemoveContainer" containerID="f0fb4b3415b759757299f9573c5416274f670a3f495b71e315338a007cc6b964" Jan 26 17:52:36 crc kubenswrapper[4896]: E0126 17:52:36.762646 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:52:43 crc kubenswrapper[4896]: I0126 17:52:43.854781 4896 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6w2z4" podUID="d46e7586-aae6-4125-9af6-aec961bb0dd6" containerName="registry-server" probeResult="failure" output=< Jan 26 17:52:43 crc kubenswrapper[4896]: timeout: failed to connect service ":50051" within 1s Jan 26 17:52:43 crc kubenswrapper[4896]: > Jan 26 17:52:47 crc kubenswrapper[4896]: I0126 17:52:47.759820 4896 scope.go:117] "RemoveContainer" containerID="f0fb4b3415b759757299f9573c5416274f670a3f495b71e315338a007cc6b964" Jan 26 17:52:47 crc kubenswrapper[4896]: E0126 17:52:47.760796 4896 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:52:52 crc kubenswrapper[4896]: I0126 17:52:52.856167 4896 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6w2z4" Jan 26 17:52:52 crc kubenswrapper[4896]: I0126 17:52:52.925053 4896 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6w2z4" Jan 26 17:52:53 crc kubenswrapper[4896]: I0126 17:52:53.645074 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6w2z4"] Jan 26 17:52:54 crc kubenswrapper[4896]: I0126 17:52:54.062884 4896 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6w2z4" podUID="d46e7586-aae6-4125-9af6-aec961bb0dd6" containerName="registry-server" containerID="cri-o://82f2f02eaf0532d5939f8bc857ba9c2a480806e7803f7a6343e21a581f30b65e" gracePeriod=2 Jan 26 17:52:54 crc kubenswrapper[4896]: I0126 17:52:54.692161 4896 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6w2z4" Jan 26 17:52:54 crc kubenswrapper[4896]: I0126 17:52:54.840198 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d46e7586-aae6-4125-9af6-aec961bb0dd6-utilities\") pod \"d46e7586-aae6-4125-9af6-aec961bb0dd6\" (UID: \"d46e7586-aae6-4125-9af6-aec961bb0dd6\") " Jan 26 17:52:54 crc kubenswrapper[4896]: I0126 17:52:54.840408 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d46e7586-aae6-4125-9af6-aec961bb0dd6-catalog-content\") pod \"d46e7586-aae6-4125-9af6-aec961bb0dd6\" (UID: \"d46e7586-aae6-4125-9af6-aec961bb0dd6\") " Jan 26 17:52:54 crc kubenswrapper[4896]: I0126 17:52:54.840590 4896 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jc7zm\" (UniqueName: \"kubernetes.io/projected/d46e7586-aae6-4125-9af6-aec961bb0dd6-kube-api-access-jc7zm\") pod \"d46e7586-aae6-4125-9af6-aec961bb0dd6\" (UID: \"d46e7586-aae6-4125-9af6-aec961bb0dd6\") " Jan 26 17:52:54 crc kubenswrapper[4896]: I0126 17:52:54.841185 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d46e7586-aae6-4125-9af6-aec961bb0dd6-utilities" (OuterVolumeSpecName: "utilities") pod "d46e7586-aae6-4125-9af6-aec961bb0dd6" (UID: "d46e7586-aae6-4125-9af6-aec961bb0dd6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:52:54 crc kubenswrapper[4896]: I0126 17:52:54.848975 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d46e7586-aae6-4125-9af6-aec961bb0dd6-kube-api-access-jc7zm" (OuterVolumeSpecName: "kube-api-access-jc7zm") pod "d46e7586-aae6-4125-9af6-aec961bb0dd6" (UID: "d46e7586-aae6-4125-9af6-aec961bb0dd6"). InnerVolumeSpecName "kube-api-access-jc7zm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:52:54 crc kubenswrapper[4896]: I0126 17:52:54.944719 4896 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jc7zm\" (UniqueName: \"kubernetes.io/projected/d46e7586-aae6-4125-9af6-aec961bb0dd6-kube-api-access-jc7zm\") on node \"crc\" DevicePath \"\"" Jan 26 17:52:54 crc kubenswrapper[4896]: I0126 17:52:54.944751 4896 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d46e7586-aae6-4125-9af6-aec961bb0dd6-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:52:54 crc kubenswrapper[4896]: I0126 17:52:54.959323 4896 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d46e7586-aae6-4125-9af6-aec961bb0dd6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d46e7586-aae6-4125-9af6-aec961bb0dd6" (UID: "d46e7586-aae6-4125-9af6-aec961bb0dd6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:52:55 crc kubenswrapper[4896]: I0126 17:52:55.052518 4896 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d46e7586-aae6-4125-9af6-aec961bb0dd6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:52:55 crc kubenswrapper[4896]: I0126 17:52:55.080424 4896 generic.go:334] "Generic (PLEG): container finished" podID="d46e7586-aae6-4125-9af6-aec961bb0dd6" containerID="82f2f02eaf0532d5939f8bc857ba9c2a480806e7803f7a6343e21a581f30b65e" exitCode=0 Jan 26 17:52:55 crc kubenswrapper[4896]: I0126 17:52:55.080486 4896 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6w2z4" event={"ID":"d46e7586-aae6-4125-9af6-aec961bb0dd6","Type":"ContainerDied","Data":"82f2f02eaf0532d5939f8bc857ba9c2a480806e7803f7a6343e21a581f30b65e"} Jan 26 17:52:55 crc kubenswrapper[4896]: I0126 17:52:55.080550 4896 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-6w2z4" event={"ID":"d46e7586-aae6-4125-9af6-aec961bb0dd6","Type":"ContainerDied","Data":"d0b19bf78b4758e9fcf9e2763fc4ded40f329aedde99ec864bdd135cef9bb552"} Jan 26 17:52:55 crc kubenswrapper[4896]: I0126 17:52:55.080591 4896 scope.go:117] "RemoveContainer" containerID="82f2f02eaf0532d5939f8bc857ba9c2a480806e7803f7a6343e21a581f30b65e" Jan 26 17:52:55 crc kubenswrapper[4896]: I0126 17:52:55.080489 4896 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6w2z4" Jan 26 17:52:55 crc kubenswrapper[4896]: I0126 17:52:55.119077 4896 scope.go:117] "RemoveContainer" containerID="1752ccb9e486ad3fd09b6841c8b3364491858d73476e79a81a18e8a8c5b1d39d" Jan 26 17:52:55 crc kubenswrapper[4896]: I0126 17:52:55.151049 4896 scope.go:117] "RemoveContainer" containerID="10d7e5dbe7fa2bd0eba6ec1381000391351deebd4b203d972b126401087b60a0" Jan 26 17:52:55 crc kubenswrapper[4896]: I0126 17:52:55.158370 4896 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6w2z4"] Jan 26 17:52:55 crc kubenswrapper[4896]: I0126 17:52:55.172553 4896 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6w2z4"] Jan 26 17:52:55 crc kubenswrapper[4896]: I0126 17:52:55.235008 4896 scope.go:117] "RemoveContainer" containerID="82f2f02eaf0532d5939f8bc857ba9c2a480806e7803f7a6343e21a581f30b65e" Jan 26 17:52:55 crc kubenswrapper[4896]: E0126 17:52:55.238849 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82f2f02eaf0532d5939f8bc857ba9c2a480806e7803f7a6343e21a581f30b65e\": container with ID starting with 82f2f02eaf0532d5939f8bc857ba9c2a480806e7803f7a6343e21a581f30b65e not found: ID does not exist" containerID="82f2f02eaf0532d5939f8bc857ba9c2a480806e7803f7a6343e21a581f30b65e" Jan 26 17:52:55 crc kubenswrapper[4896]: I0126 17:52:55.238911 4896 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82f2f02eaf0532d5939f8bc857ba9c2a480806e7803f7a6343e21a581f30b65e"} err="failed to get container status \"82f2f02eaf0532d5939f8bc857ba9c2a480806e7803f7a6343e21a581f30b65e\": rpc error: code = NotFound desc = could not find container \"82f2f02eaf0532d5939f8bc857ba9c2a480806e7803f7a6343e21a581f30b65e\": container with ID starting with 82f2f02eaf0532d5939f8bc857ba9c2a480806e7803f7a6343e21a581f30b65e not found: ID does not exist" Jan 26 17:52:55 crc kubenswrapper[4896]: I0126 17:52:55.238946 4896 scope.go:117] "RemoveContainer" containerID="1752ccb9e486ad3fd09b6841c8b3364491858d73476e79a81a18e8a8c5b1d39d" Jan 26 17:52:55 crc kubenswrapper[4896]: E0126 17:52:55.239531 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1752ccb9e486ad3fd09b6841c8b3364491858d73476e79a81a18e8a8c5b1d39d\": container with ID starting with 1752ccb9e486ad3fd09b6841c8b3364491858d73476e79a81a18e8a8c5b1d39d not found: ID does not exist" containerID="1752ccb9e486ad3fd09b6841c8b3364491858d73476e79a81a18e8a8c5b1d39d" Jan 26 17:52:55 crc kubenswrapper[4896]: I0126 17:52:55.239569 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1752ccb9e486ad3fd09b6841c8b3364491858d73476e79a81a18e8a8c5b1d39d"} err="failed to get container status \"1752ccb9e486ad3fd09b6841c8b3364491858d73476e79a81a18e8a8c5b1d39d\": rpc error: code = NotFound desc = could not find container \"1752ccb9e486ad3fd09b6841c8b3364491858d73476e79a81a18e8a8c5b1d39d\": container with ID starting with 1752ccb9e486ad3fd09b6841c8b3364491858d73476e79a81a18e8a8c5b1d39d not found: ID does not exist" Jan 26 17:52:55 crc kubenswrapper[4896]: I0126 17:52:55.239634 4896 scope.go:117] "RemoveContainer" containerID="10d7e5dbe7fa2bd0eba6ec1381000391351deebd4b203d972b126401087b60a0" Jan 26 17:52:55 crc kubenswrapper[4896]: E0126 
17:52:55.239891 4896 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10d7e5dbe7fa2bd0eba6ec1381000391351deebd4b203d972b126401087b60a0\": container with ID starting with 10d7e5dbe7fa2bd0eba6ec1381000391351deebd4b203d972b126401087b60a0 not found: ID does not exist" containerID="10d7e5dbe7fa2bd0eba6ec1381000391351deebd4b203d972b126401087b60a0" Jan 26 17:52:55 crc kubenswrapper[4896]: I0126 17:52:55.239919 4896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10d7e5dbe7fa2bd0eba6ec1381000391351deebd4b203d972b126401087b60a0"} err="failed to get container status \"10d7e5dbe7fa2bd0eba6ec1381000391351deebd4b203d972b126401087b60a0\": rpc error: code = NotFound desc = could not find container \"10d7e5dbe7fa2bd0eba6ec1381000391351deebd4b203d972b126401087b60a0\": container with ID starting with 10d7e5dbe7fa2bd0eba6ec1381000391351deebd4b203d972b126401087b60a0 not found: ID does not exist" Jan 26 17:52:55 crc kubenswrapper[4896]: E0126 17:52:55.313871 4896 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd46e7586_aae6_4125_9af6_aec961bb0dd6.slice/crio-d0b19bf78b4758e9fcf9e2763fc4ded40f329aedde99ec864bdd135cef9bb552\": RecentStats: unable to find data in memory cache]" Jan 26 17:52:56 crc kubenswrapper[4896]: I0126 17:52:56.774649 4896 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d46e7586-aae6-4125-9af6-aec961bb0dd6" path="/var/lib/kubelet/pods/d46e7586-aae6-4125-9af6-aec961bb0dd6/volumes" Jan 26 17:53:01 crc kubenswrapper[4896]: I0126 17:53:01.767927 4896 scope.go:117] "RemoveContainer" containerID="f0fb4b3415b759757299f9573c5416274f670a3f495b71e315338a007cc6b964" Jan 26 17:53:01 crc kubenswrapper[4896]: E0126 17:53:01.768842 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:53:16 crc kubenswrapper[4896]: I0126 17:53:16.762455 4896 scope.go:117] "RemoveContainer" containerID="f0fb4b3415b759757299f9573c5416274f670a3f495b71e315338a007cc6b964" Jan 26 17:53:16 crc kubenswrapper[4896]: E0126 17:53:16.763517 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:53:27 crc kubenswrapper[4896]: I0126 17:53:27.759726 4896 scope.go:117] "RemoveContainer" containerID="f0fb4b3415b759757299f9573c5416274f670a3f495b71e315338a007cc6b964" Jan 26 17:53:27 crc kubenswrapper[4896]: E0126 17:53:27.760426 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:53:40 crc kubenswrapper[4896]: I0126 17:53:40.760287 4896 scope.go:117] "RemoveContainer" containerID="f0fb4b3415b759757299f9573c5416274f670a3f495b71e315338a007cc6b964" Jan 26 17:53:40 crc kubenswrapper[4896]: E0126 17:53:40.761442 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:53:52 crc kubenswrapper[4896]: I0126 17:53:52.769363 4896 scope.go:117] "RemoveContainer" containerID="f0fb4b3415b759757299f9573c5416274f670a3f495b71e315338a007cc6b964" Jan 26 17:53:52 crc kubenswrapper[4896]: E0126 17:53:52.770325 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:54:04 crc kubenswrapper[4896]: I0126 17:54:04.761038 4896 scope.go:117] "RemoveContainer" containerID="f0fb4b3415b759757299f9573c5416274f670a3f495b71e315338a007cc6b964" Jan 26 17:54:04 crc kubenswrapper[4896]: E0126 17:54:04.761921 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:54:15 crc kubenswrapper[4896]: I0126 17:54:15.760162 4896 scope.go:117] "RemoveContainer" containerID="f0fb4b3415b759757299f9573c5416274f670a3f495b71e315338a007cc6b964" Jan 26 17:54:15 crc kubenswrapper[4896]: E0126 17:54:15.761942 4896 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:54:30 crc kubenswrapper[4896]: I0126 17:54:30.760478 4896 scope.go:117] "RemoveContainer" containerID="f0fb4b3415b759757299f9573c5416274f670a3f495b71e315338a007cc6b964" Jan 26 17:54:30 crc kubenswrapper[4896]: E0126 17:54:30.761380 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:54:44 crc kubenswrapper[4896]: I0126 17:54:44.759626 4896 scope.go:117] "RemoveContainer" containerID="f0fb4b3415b759757299f9573c5416274f670a3f495b71e315338a007cc6b964" Jan 26 17:54:44 crc kubenswrapper[4896]: E0126 17:54:44.760476 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:54:56 crc kubenswrapper[4896]: I0126 17:54:56.759821 4896 scope.go:117] "RemoveContainer" containerID="f0fb4b3415b759757299f9573c5416274f670a3f495b71e315338a007cc6b964" Jan 26 17:54:56 crc kubenswrapper[4896]: E0126 17:54:56.761737 4896 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:55:07 crc kubenswrapper[4896]: I0126 17:55:07.759330 4896 scope.go:117] "RemoveContainer" containerID="f0fb4b3415b759757299f9573c5416274f670a3f495b71e315338a007cc6b964" Jan 26 17:55:07 crc kubenswrapper[4896]: E0126 17:55:07.761799 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:55:18 crc kubenswrapper[4896]: I0126 17:55:18.759910 4896 scope.go:117] "RemoveContainer" containerID="f0fb4b3415b759757299f9573c5416274f670a3f495b71e315338a007cc6b964" Jan 26 17:55:18 crc kubenswrapper[4896]: E0126 17:55:18.761098 4896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235" Jan 26 17:55:30 crc kubenswrapper[4896]: I0126 17:55:30.759709 4896 scope.go:117] "RemoveContainer" containerID="f0fb4b3415b759757299f9573c5416274f670a3f495b71e315338a007cc6b964" Jan 26 17:55:30 crc kubenswrapper[4896]: E0126 17:55:30.761105 4896 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nrqhw_openshift-machine-config-operator(0eae0e2b-9d04-4999-b78c-c70aeee09235)\"" pod="openshift-machine-config-operator/machine-config-daemon-nrqhw" podUID="0eae0e2b-9d04-4999-b78c-c70aeee09235"